Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed by the expert knowledge of designers, such as Gabor filters, local binary patterns (LBP), local ternary patterns (LTP), and histograms of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training feature extractors that can complement handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either feature type alone. Finally, we use a support vector machine (SVM) to classify the image features as real or presentation attack. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
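The abstract above does not spell out the MLBP implementation. The general idea — LBP histograms computed at several neighbourhood radii and concatenated, later combined with CNN activations — can be sketched in plain NumPy; the radii and histogram layout below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def lbp_codes(img, radius=1):
    """Basic 8-neighbour LBP: threshold each neighbour at the centre value."""
    h, w = img.shape
    c = img[radius:h - radius, radius:w - radius]
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        codes = codes | ((nb >= c).astype(int) << bit)
    return codes

def mlbp_histogram(img, radii=(1, 2, 3)):
    """Concatenate normalised 256-bin LBP histograms over several radii."""
    feats = []
    for r in radii:
        hist = np.bincount(lbp_codes(img, r).ravel(), minlength=256)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
face = rng.random((64, 64))  # stand-in for a cropped face image
feat = mlbp_histogram(face)
print(feat.shape)  # (768,)
```

In the authors' pipeline a vector like this would be concatenated with the CNN features before the SVM classification step.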
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2017-07-01
Mammograms acquired with full-field digital mammography (FFDM) systems are provided in both "for-processing" and "for-presentation" image formats. For-presentation images are traditionally intended for visual assessment by the radiologists. In this study, we investigate the feasibility of using for-presentation images in computerized analysis and diagnosis of microcalcification (MC) lesions. We make use of a set of 188 matched mammogram image pairs of MC lesions from 95 cases (biopsy proven), in which both for-presentation and for-processing images are provided for each lesion. We then analyze and characterize the MC lesions from for-presentation images and compare them with their counterparts in for-processing images. Specifically, we consider three important aspects in computer-aided diagnosis (CAD) of MC lesions. First, we quantify each MC lesion with a set of 10 image features of clustered MCs and 12 textural features of the lesion area. Second, we assess the detectability of individual MCs in each lesion from the for-presentation images by a commonly used difference-of-Gaussians (DoG) detector. Finally, we study the diagnostic accuracy in discriminating between benign and malignant MC lesions from the for-presentation images by a pretrained support vector machine (SVM) classifier. To accommodate the underlying background suppression and image enhancement in for-presentation images, a normalization procedure is applied. The quantitative image features of MC lesions from for-presentation images are highly consistent with those from for-processing images. The values of Pearson's correlation coefficient between features from the two formats range from 0.824 to 0.961 for the 10 MC image features, and from 0.871 to 0.963 for the 12 textural features. In detection of individual MCs, the FROC curve from for-presentation is similar to that from for-processing.
In particular, at a sensitivity level of 80%, the average number of false positives (FPs) per image region is 9.55 for both for-presentation and for-processing images. Finally, for classifying MC lesions as malignant or benign, the area under the ROC curve is 0.769 in for-presentation, compared to 0.761 in for-processing (P = 0.436). The quantitative results demonstrate that MC lesions in for-presentation images are highly consistent with those in for-processing images in terms of image features, detectability of individual MCs, and classification accuracy between malignant and benign lesions. These results indicate that for-presentation images can be compatible with for-processing images for use in CAD algorithms for MC lesions. © 2017 American Association of Physicists in Medicine.
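The difference-of-Gaussians detector referenced in the study is a standard band-pass construction; a minimal sketch follows (SciPy assumed available; the sigmas and threshold are illustrative, not the study's values).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(img, sigma_narrow=1.0, sigma_wide=3.0):
    """Band-pass response: small bright spots (MC-sized) stand out."""
    return gaussian_filter(img, sigma_narrow) - gaussian_filter(img, sigma_wide)

def detect_spots(img, threshold):
    """Candidate microcalcification mask from a thresholded DoG response."""
    return dog_response(img) > threshold

# Synthetic region: flat background with one bright MC-like spot.
img = np.zeros((64, 64))
img[30:33, 40:43] = 1.0
mask = detect_spots(img, threshold=0.2)
print(mask[31, 41], mask[5, 5])  # True False
```

A real detector would follow this with non-maximum suppression and size/shape screening of the candidate blobs.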
System and method for the detection of anomalies in an image
Prasad, Lakshman; Swaminarayan, Sriram
2013-09-03
Preferred aspects of the present invention can include receiving a digital image at a processor; segmenting the digital image into a hierarchy of feature layers comprising one or more fine-scale features defining a foreground object embedded in one or more coarser-scale features defining a background to the one or more fine-scale features in the segmentation hierarchy; detecting a first fine-scale foreground feature as an anomaly with respect to a first background feature within which it is embedded; and constructing an anomalous feature layer by synthesizing spatially contiguous anomalous fine-scale features. Additional preferred aspects of the present invention can include detecting non-pervasive changes between sets of images in response at least in part to one or more difference images between the sets of images.
Container Surface Evaluation by Function Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
Container images are analyzed for specific surface features such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of determining the kind and size of the features present.
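The report does not name the background estimator, so the following is only a plausible minimal version of the background-estimation / residual pipeline it describes, using a median filter for the smooth surface and a sigma-rule threshold on the residual (both assumptions).

```python
import numpy as np
from scipy.ndimage import median_filter

def surface_residual(img, bg_size=9, k=4.0):
    """Estimate the smooth container-surface background with a median filter,
    subtract it, and flag residual outliers as candidate pits/cracks."""
    background = median_filter(img, size=bg_size)
    residual = img - background
    return residual, np.abs(residual) > k * residual.std()

# Smooth curved surface with one pit-like defect.
y, x = np.indices((40, 40))
surface = 0.001 * ((y - 20) ** 2 + (x - 20) ** 2)  # gentle curvature
surface[10, 30] -= 1.0                             # a pit
residual, defects = surface_residual(surface)
print(defects[10, 30])  # True
```

Discriminating pits from welds or scratches would then operate on the shape of the flagged residual regions.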
Textural features for radar image analysis
NASA Technical Reports Server (NTRS)
Shanmugan, K. S.; Narayanan, V.; Frost, V. S.; Stiles, J. A.; Holtzman, J. C.
1981-01-01
Texture is seen as an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in analyzing a variety of photographic images, they have not been used in processing radar images. A procedure for extracting a set of textural features for characterizing small areas in radar images is presented, and it is shown that these features can be used in classifying segments of radar images corresponding to different geological formations.
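The abstract does not list the specific textural features used for the radar images; grey-level co-occurrence statistics of the kind introduced by Haralick are the classic choice for this task and can be sketched as follows (the quantisation level count and pixel offset are illustrative).

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast and energy from a horizontal-offset grey-level
    co-occurrence matrix (GLCM)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantise grey levels
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()              # horizontally adjacent pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    return contrast, energy

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))  # slowly varying "terrain"
speckled = rng.random((32, 32))                   # noisy, speckle-like texture
c_smooth, _ = glcm_features(smooth)
c_speckled, _ = glcm_features(speckled)
print(c_smooth < c_speckled)  # True
```

Features like these, computed over small windows, are what a classifier would use to separate geological formations.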
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
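In the simplest reading, the directed labelling described above reduces to two thresholded comparisons of per-pixel feature-strength maps; the threshold values below are illustrative, not the paper's.

```python
import numpy as np

def directed_change_masks(prev_strength, curr_strength, t_strong=0.5, t_weak=0.2):
    """Label image points by comparing local feature strength across time.

    'new object': a strong feature now where there was none (or a weaker one)
    before; 'vanished object': the reverse.
    """
    new_obj = (curr_strength > t_strong) & (prev_strength < t_weak)
    vanished = (prev_strength > t_strong) & (curr_strength < t_weak)
    return new_obj, vanished

prev = np.zeros((8, 8)); prev[2, 2] = 1.0  # vehicle present earlier, gone now
curr = np.zeros((8, 8)); curr[5, 5] = 1.0  # vehicle newly parked
new_obj, vanished = directed_change_masks(prev, curr)
print(new_obj[5, 5], vanished[2, 2])  # True True
```

The two directed masks would then be merged with the extended image-differencing mask for inspection by the interpreter.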
Fourier domain image fusion for differential X-ray phase-contrast breast imaging.
Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-04-01
X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method makes it possible to present complementary information from the three acquired signals in one single image, minimizing the noise component while maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
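The abstract does not specify the fusion rule. One minimal Fourier-domain fusion in this spirit keeps low spatial frequencies from the attenuation image (so the result resembles a conventional mammogram) and takes each high frequency from whichever of the three channels responds most strongly; the cutoff and the per-bin selection rule are assumptions.

```python
import numpy as np

def fourier_fusion(attenuation, phase, dark_field, cutoff=0.15):
    """Fuse three co-registered signals in the Fourier domain."""
    F = [np.fft.fft2(x) for x in (attenuation, phase, dark_field)]
    fy = np.fft.fftfreq(attenuation.shape[0])[:, None]
    fx = np.fft.fftfreq(attenuation.shape[1])[None, :]
    low = np.hypot(fy, fx) < cutoff          # low-frequency band mask
    stacked = np.stack(F)
    best = np.abs(stacked).argmax(axis=0)    # strongest channel per frequency bin
    fused = np.choose(best, stacked)
    fused[low] = F[0][low]                   # keep attenuation's low frequencies
    return np.real(np.fft.ifft2(fused))

# Sanity check: fusing three identical images must return the image itself.
rng = np.random.default_rng(5)
a = rng.random((32, 32))
fused = fourier_fusion(a, a, a)
print(np.allclose(fused, a))  # True
```

A practical implementation would additionally normalise the three signals' noise levels before the per-bin comparison.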
Classification of yeast cells from image features to evaluate pathogen conditions
NASA Astrophysics Data System (ADS)
van der Putten, Peter; Bertens, Laura; Liu, Jinshuo; Hagen, Ferry; Boekhout, Teun; Verbeek, Fons J.
2007-01-01
Morphometrics from images, i.e. image analysis, may reveal differences between classes of objects present in the images. We have performed an image-feature-based classification for the pathogenic yeast Cryptococcus neoformans. Building and analyzing image collections from the yeast under different environmental or genetic conditions may help to diagnose a new "unseen" situation. Diagnosis here means that retrieval of the relevant information from the image collection is at hand each time a new "sample" is presented. The basidiomycetous yeast Cryptococcus neoformans can cause infections such as meningitis or pneumonia. The presence of an extra-cellular capsule is known to be related to virulence. This paper reports on the approach towards developing classifiers for detecting potentially more or less virulent cells in a sample, i.e. an image, by using a range of features derived from the shape or density distribution. The classifier can then be used for automating screening and annotating existing image collections. In addition, we present our methods for creating samples, collecting images, preprocessing images, identifying "yeast cells", and extracting features from the images. We compare various expertise-based and fully automated methods of feature selection, benchmark a range of classification algorithms, and illustrate their successful application to this particular domain.
Synthetic aperture radar target detection, feature extraction, and image formation techniques
NASA Technical Reports Server (NTRS)
Li, Jian
1994-01-01
This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.
Efficient and robust model-to-image alignment using 3D scale-invariant features.
Toews, Matthew; Wells, William M
2013-04-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
Recognition of blurred images by the method of moments.
Flusser, J; Suk, T; Saic, S
1996-01-01
The article is devoted to the feature-based recognition of blurred images acquired by a linear shift-invariant imaging system against an image database. The proposed approach consists of describing images by features that are invariant with respect to blur and recognizing images in the feature space. The PSF identification and image restoration are not required. A set of symmetric blur invariants based on image moments is introduced. A numerical experiment is presented to illustrate the utilization of the invariants for blurred image recognition. Robustness of the features is also briefly discussed.
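The blur invariants of Flusser et al. are fixed polynomials in the image's central geometric moments. The building block, central moment computation, can be sketched as follows; the invariant polynomials themselves are omitted.

```python
import numpy as np

def central_moments(img, max_order=3):
    """Central geometric moments mu_pq up to a given total order.

    By construction mu_10 = mu_01 = 0, since moments are taken about
    the intensity centroid.
    """
    y, x = np.indices(img.shape, dtype=float)
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu[p, q] = (((x - cx) ** p) * ((y - cy) ** q) * img).sum()
    return mu

rng = np.random.default_rng(2)
img = rng.random((16, 16))
mu = central_moments(img)
print(abs(mu[1, 0]) < 1e-9, abs(mu[0, 1]) < 1e-9)  # True True
```

The invariants are then polynomial combinations of these mu_pq that cancel the effect of convolution with a centrosymmetric PSF, which is what allows recognition without PSF identification or restoration.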
Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim
2016-02-01
Glaucoma is a disease of the retina and one of the most common causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for glaucoma diagnosis from digital fundus images. Wavelet feature extraction is followed by optimized genetic feature selection, combined with several learning algorithms and various parameter settings. Unlike existing research works where features are computed from the complete fundus or a sub-image of the fundus, this work extracts features from the segmented, blood-vessel-removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant than features of the whole or sub fundus image in the detection of glaucoma. The accuracy of glaucoma identification achieved in this work is 94.7%, and a comparison with existing methods of glaucoma detection from fundus images indicates that the proposed approach has improved classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
AFM feature definition for neural cells on nanofibrillar tissue scaffolds.
Tiryaki, Volkan M; Khan, Adeel A; Ayres, Virginia M
2012-01-01
A diagnostic approach is developed and implemented that provides clear feature definition in atomic force microscopy (AFM) images of neural cells on nanofibrillar tissue scaffolds. Because the cellular edges and processes are on the same order as the background nanofibers, this imaging situation presents a feature definition problem. The diagnostic approach is based on analysis of discrete Fourier transforms of standard AFM section measurements. The diagnostic conclusion that the combination of dynamic range enhancement with low-frequency component suppression enhances feature definition is shown to be correct and to lead to clear-featured images that could change previously held assumptions about the cell-cell interactions present. Clear feature definition of cells on scaffolds extends the usefulness of AFM imaging for use in regenerative medicine. © Wiley Periodicals, Inc.
Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm
2005-01-01
The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offer the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image defining features. The features are then used to train classifers for automatically distinguishing CSLT images of normal and glaucoma patient. As a first, in this paper, we present investigations in feature subset selction methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a sub-set of moments that offer high classification accuracy. We demonstratee the efficacy of our methods to discriminate between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.
Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina
2012-05-01
This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attests to the accuracy of the presented technique.
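After the quasi-polar transform maps a closed contour to a roughly horizontal band, the "segment as a layer" step is a shortest-path search solvable by dynamic programming. A minimal sketch of that DP step follows (the transform and the cost-image design are omitted; the movement constraint of one row per column is an illustrative simplification).

```python
import numpy as np

def dp_layer_path(cost):
    """Minimum-cost left-to-right path through a cost image, moving at most
    one row per column -- the 'segment as a layer' step."""
    n_rows, n_cols = cost.shape
    acc = cost.copy()  # accumulated cost table
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(i - 1, 0), min(i + 2, n_rows)
            acc[i, j] += acc[lo:hi, j - 1].min()
    # Backtrack from the cheapest end point.
    path = [int(acc[:, -1].argmin())]
    for j in range(n_cols - 2, -1, -1):
        i = path[-1]
        lo, hi = max(i - 1, 0), min(i + 2, n_rows)
        path.append(lo + int(acc[lo:hi, j].argmin()))
    return path[::-1]

# A bright cell boundary in Cartesian space becomes a low-cost horizontal
# band after unwrapping; the DP path should follow it.
cost = np.ones((7, 10))
cost[3, :] = 0.0
print(dp_layer_path(cost))  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```

Mapping the recovered row indices back through the inverse quasi-polar transform yields the closed contour in the original image.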
A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image
NASA Astrophysics Data System (ADS)
Su, Junying
2011-11-01
A fractal dimension feature analysis method in the spectrum domain is proposed for agricultural crop classification with hyper spectrum images. Firstly, a fractal dimension calculation algorithm in the spectrum domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Secondly, the hyper spectrum image classification algorithm and flowchart are presented, based on fractal dimension feature analysis in the spectrum domain. Finally, experimental results are given for agricultural crop classification on the FCL1 hyper spectrum image set using the proposed method and SAM (spectral angle mapper). The results show that the proposed method obtains better classification results than traditional SAM feature analysis, because it makes full use of the spectrum information of the hyper spectrum image to realize precise agricultural crop classification.
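A step-measurement (divider) estimate of a spectrum's fractal dimension measures the curve's length at several step sizes and reads the dimension off the log-log slope; the step sizes below are illustrative, and the exact estimator in the paper may differ.

```python
import numpy as np

def step_length(spectrum, step):
    """Total length of a 1-D spectrum curve measured with a given step size."""
    idx = np.arange(0, len(spectrum), step)
    dx = np.diff(idx).astype(float)
    dy = np.diff(spectrum[idx])
    return np.hypot(dx, dy).sum()

def spectral_fractal_dimension(spectrum, steps=(1, 2, 4, 8)):
    """Divider estimate: measured length scales as step**(1 - D),
    so D = 1 - slope of log(length) vs log(step)."""
    lengths = [step_length(spectrum, s) for s in steps]
    slope = np.polyfit(np.log(steps), np.log(lengths), 1)[0]
    return 1.0 - slope

# A perfectly smooth "spectrum" should have dimension 1.
line = np.linspace(0.0, 5.0, 257)
print(round(spectral_fractal_dimension(line), 2))  # 1.0
```

A rough, self-similar spectrum yields a value between 1 and 2; those per-pixel values are the classification features.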
Minimizing the semantic gap in biomedical content-based image retrieval
NASA Astrophysics Data System (ADS)
Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
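Ranking SVM learns a linear scoring function from pairwise order constraints on training samples. A toy pairwise perceptron conveys the same constraint structure (it is a deliberate simplification, not the paper's Ranking SVM solver, and the data are synthetic).

```python
import numpy as np

def rank_perceptron(X, y, epochs=50):
    """Learn w such that w @ x_i > w @ x_j whenever rank y_i > y_j,
    by perceptron updates on violated pairs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in range(len(X)):
            for j in range(len(X)):
                if y[i] > y[j] and w @ (X[i] - X[j]) <= 0:
                    w += X[i] - X[j]  # push the pair into the right order
    return w

X = np.array([[3.0, 1.0], [2.0, 1.0], [1.0, 1.0]])  # toy feature vectors
y = np.array([3, 2, 1])                             # ground-truth relevance ranks
w = rank_perceptron(X, y)
scores = X @ w
print(np.all(scores[:-1] > scores[1:]))  # True
```

Ranking SVM replaces the perceptron update with a max-margin objective over the same pairwise constraints, which is what associates the selected low-level features with the query concepts' ground-truth ranking.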
Web Image Retrieval Using Self-Organizing Feature Map.
ERIC Educational Resources Information Center
Wu, Qishi; Iyengar, S. Sitharama; Zhu, Mengxia
2001-01-01
Provides an overview of current image retrieval systems. Describes the architecture of the SOFM (Self Organizing Feature Maps) based image retrieval system, discussing the system architecture and features. Introduces the Kohonen model, and describes the implementation details of SOFM computation and its learning algorithm. Presents a test example…
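The Kohonen model underlying such a SOFM retrieval system can be sketched in a few lines: each training vector pulls the best-matching map unit and its grid neighbours toward itself, with shrinking learning rate and neighbourhood. The grid size, schedules, and data below are illustrative assumptions.

```python
import numpy as np

def train_sofm(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Tiny Kohonen self-organising feature map over a 2-D unit grid."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    W = rng.random((n_units, data.shape[1]))            # unit weight vectors
    gy, gx = np.divmod(np.arange(n_units), grid[1])     # unit grid coordinates
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                           # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3              # shrinking neighbourhood
        d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
        h = np.exp(-d2 / (2 * sigma ** 2))              # neighbourhood weighting
        W += lr * h[:, None] * (x - W)
    return W

# Two well-separated feature clusters; after training, the quantisation
# error (distance to the nearest map unit) should be small.
rng = np.random.default_rng(6)
data = np.vstack([rng.normal(0.2, 0.02, (50, 3)), rng.normal(0.8, 0.02, (50, 3))])
W = train_sofm(data)
err = np.mean([((W - x) ** 2).sum(axis=1).min() for x in data])
print(err < 0.05)  # True
```

For retrieval, images are indexed by their best-matching unit, so visually similar images land in nearby map cells.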
Neural network-based feature point descriptors for registration of optical and SAR images
NASA Astrophysics Data System (ADS)
Abulkhanov, Dmitry; Konovalenko, Ivan; Nikolaev, Dmitry; Savchik, Alexey; Shvets, Evgeny; Sidorchuk, Dmitry
2018-04-01
Registration of images of different nature is an important technique used in image fusion, change detection, efficient information representation and other problems of computer vision. Solving this task using feature-based approaches is usually more complex than registration of several optical images because traditional feature descriptors (SIFT, SURF, etc.) perform poorly when images have different nature. In this paper we consider the problem of registration of SAR and optical images. We train neural network to build feature point descriptors and use RANSAC algorithm to align found matches. Experimental results are presented that confirm the method's effectiveness.
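The RANSAC alignment of descriptor matches can be sketched for the simplest motion model, pure translation (the paper's transformation model between SAR and optical images is richer; points, thresholds, and outlier fraction below are illustrative).

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from noisy point matches. A single match
    proposes a model; the translation with the largest inlier set wins."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(n_iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                       # model from one sampled match
        inliers = (np.linalg.norm(src + t - dst, axis=1) < tol).sum()
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t

rng = np.random.default_rng(3)
src = rng.random((40, 2)) * 100
dst = src + np.array([12.0, -7.0])        # true shift between the two images
dst[:10] = rng.random((10, 2)) * 100      # 25% outlier matches
t = ransac_translation(src, dst)
print(np.allclose(t, [12.0, -7.0]))  # True
```

With learned descriptors replacing SIFT/SURF, the same consensus step filters the cross-modality mismatches.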
[Research Progress of Multi-Model Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the integration of the advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. Then we analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-model medical image fusion.
Image search engine with selective filtering and feature-element-based classification
NASA Astrophysics Data System (ADS)
Li, Qing; Zhang, Yujin; Dai, Shengyang
2001-12-01
With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. We propose in this paper a selective filtering process and a novel approach for image classification based on feature elements in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format. Two parameters that can be obtained easily are used in the filtering process. Our classification approach first extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of an image according to the subjective perception of human beings. Unlike traditional image classification methods, our feature-element-based approach does not calculate the distance between two vectors in the feature space, but instead tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.
Automatic Feature Extraction from Planetary Images
NASA Technical Reports Server (NTRS)
Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.
2010-01-01
With the launch of several planetary missions in the last decade, a large number of planetary images have already been acquired and many more will become available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.
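For the crater-like case, the generalized Hough transform reduces to voting for circle centres; a minimal sketch for a known radius follows (the full method handles arbitrary shapes via an R-table, and the radius, grid, and synthetic rim here are illustrative).

```python
import numpy as np

def hough_circles(edge_points, shape, radius):
    """Accumulate votes for circle centres at a known radius
    (the circular special case of the generalized Hough transform)."""
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        # Each edge point votes for all centres at distance `radius`.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic crater rim: points on a circle of radius 10 centred at (20, 25).
angles = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(20 + 10 * np.sin(a), 25 + 10 * np.cos(a)) for a in angles]
acc = hough_circles(pts, (48, 48), radius=10)
peak = np.unravel_index(acc.argmax(), acc.shape)
print(peak)  # (20, 25)
```

In the proposed pipeline, edge points would come from the watershed/edge stage rather than being given analytically.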
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
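The correlation-based selection over autoencoder weight vectors can be sketched as a greedy filter: keep a hidden unit only if its weight vector is not too correlated with any already-kept unit. The threshold and greedy order below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_uncorrelated(weights, threshold=0.95):
    """Greedily keep weight vectors (rows) whose absolute correlation with
    every already-kept vector stays below `threshold`."""
    corr = np.abs(np.corrcoef(weights))
    keep = []
    for i in range(len(weights)):
        if all(corr[i, j] < threshold for j in keep):
            keep.append(i)
    return keep

rng = np.random.default_rng(4)
w = rng.standard_normal((5, 20))                     # 5 hidden units, 20 weights each
w[3] = w[0] * 2.0 + 1e-3 * rng.standard_normal(20)   # a near-duplicate filter
kept = select_uncorrelated(w)
print(0 in kept, 3 in kept)  # True False
```

Dropping the redundant filters before the convolution/pooling stage is what yields the reported roughly 50% reduction in global feature extraction cost.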
Feature-Based Morphometry: Discovering Group-related Anatomical Patterns
Toews, Matthew; Wells, William; Collins, D. Louis; Arbel, Tal
2015-01-01
This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer's (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1). PMID:19853047
Hepatic CT image query using Gabor features
NASA Astrophysics Data System (ADS)
Zhao, Chenguang; Cheng, Hongyan; Zhuang, Tiange
2004-07-01
A retrieval scheme for liver computerized tomography (CT) images based on Gabor texture is presented. For each hepatic CT image, we manually delineate abnormal regions within the liver area. Then, a continuous Gabor transform is utilized to analyze the texture of the pathology-bearing region and extract the corresponding feature vectors. For a given sample image, we compare its feature vector with those of other images. Similar images with the highest rank are retrieved. In experiments, 45 liver CT images are collected, and the effectiveness of Gabor texture for content-based retrieval is verified.
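The Gabor-texture retrieval idea can be sketched end to end with numpy: build a small bank of oriented Gabor kernels, summarize a region by its mean filter responses, and rank database entries by feature-vector distance. Everything below (kernel parameters, synthetic "regions") is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

def gabor_kernel(theta, lam=4.0, sigma=2.0, gamma=0.5, size=9):
    """Real-valued Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def conv2_same(img, k):
    """Naive 'same' 2-D convolution (kept dependency-free; SciPy would be faster)."""
    half = k.shape[0] // 2
    padded = np.pad(img, half, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def gabor_features(img, n_orient=4):
    """Mean absolute response per orientation -> texture feature vector."""
    return np.array([np.abs(conv2_same(img, gabor_kernel(t))).mean()
                     for t in np.pi * np.arange(n_orient) / n_orient])

# Synthetic 'regions': vertical stripes, horizontal stripes, and a flat patch.
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
db = {'vert': gabor_features(np.sin(2 * np.pi * xx / 4.0)),
      'horiz': gabor_features(np.sin(2 * np.pi * yy / 4.0)),
      'flat': gabor_features(np.zeros((32, 32)))}

query = gabor_features(np.sin(2 * np.pi * xx / 4.0))  # another vertical-stripe region
best = min(db, key=lambda k: np.linalg.norm(db[k] - query))
```

The query region's orientation signature matches the vertical-stripe database entry, so `best` ranks it first.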
Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka
2018-01-01
A novel image processing algorithm based on a modified Bayesian residual transform (MBRT) was developed for the enhancement of morphological and vascular features in optical coherence tomography (OCT) and OCT angiography (OCTA) images. The MBRT algorithm decomposes the original OCT image into multiple residual images, where each image presents information at a unique scale. Scale-selective residual adaptation is used subsequently to enhance morphological features of interest, such as blood vessels and tissue layers, and to suppress irrelevant image features such as noise and motion artefacts. The performance of the proposed MBRT algorithm was tested on a series of cross-sectional and en face OCT and OCTA images of retina and brain tissue that were acquired in vivo. Results show that the MBRT reduces speckle noise and motion-related imaging artefacts locally, thus significantly improving the contrast and visibility of morphological features in the OCT and OCTA images. PMID:29760996
Image segmentation using association rule features.
Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J
2002-01-01
A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.
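The core idea — frequently occurring local structures characterize texture — can be sketched by mining the frequencies of quantized local windows, a simplified stand-in for the paper's rule-mining step (quantization levels and window size below are arbitrary choices):

```python
import numpy as np
from collections import Counter

def pattern_histogram(img, levels=4, win=2):
    """Frequencies of quantized win x win local structures. Frequent
    patterns play the role of mined 'association rules' over local
    pixel neighborhoods (a simplified stand-in for the paper's method)."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize gray levels
    counts = Counter()
    for i in range(q.shape[0] - win + 1):
        for j in range(q.shape[1] - win + 1):
            counts[tuple(q[i:i + win, j:j + win].ravel())] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

rng = np.random.default_rng(1)
checker = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)  # regular structure
noise = rng.random((16, 16))                                    # no repeated structure

h_checker = pattern_histogram(checker)
h_noise = pattern_histogram(noise)
```

A regular texture concentrates its mass on a handful of local patterns (the checkerboard yields exactly two, each with ~0.5 frequency), while an unstructured region spreads it thinly — this contrast is what drives the segmentation.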
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
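The "visual words integration" can be sketched as two bag-of-visual-words pipelines whose histograms are concatenated. The descriptors below are random stand-ins for real SIFT (128-D) and SURF (64-D) output, and the tiny k-means is only for illustration:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny Lloyd's k-means to build a visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def bovw_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    return the image's normalized word histogram."""
    labels = np.argmin(((descriptors[:, None, :] - vocabulary[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
# Hypothetical stand-ins for one image's SIFT and SURF descriptors.
sift = rng.normal(size=(40, 128))
surf = rng.normal(size=(40, 64))
vocab_sift = kmeans(sift, 8)
vocab_surf = kmeans(surf, 8)

# 'Visual words integration': concatenate the per-detector histograms,
# so the image signature carries both detectors' robustness properties.
signature = np.concatenate([bovw_histogram(sift, vocab_sift),
                            bovw_histogram(surf, vocab_surf)])
```

Retrieval would then rank database images by distance between such concatenated signatures.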
A Fourier-based textural feature extraction procedure
NASA Technical Reports Server (NTRS)
Stromberg, W. D.; Farr, T. G.
1986-01-01
A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
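The annular-Fourier feature idea can be demonstrated directly: take the 2-D FFT, bin the power spectrum into concentric rings around the DC term, and use the relative ring energies as the texture vector. The ring count and test images below are illustrative assumptions:

```python
import numpy as np

def annular_fourier_features(img, n_rings=4):
    """Relative spectral energy in concentric annuli of the 2-D Fourier
    transform: inner rings respond to coarse texture, outer rings to
    fine (high-frequency) texture."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    energy = np.array([power[(r >= edges[i]) & (r < edges[i + 1])].sum()
                       for i in range(n_rings)])
    return energy / energy.sum()

x = np.arange(64)
fine = np.tile(np.cos(np.pi * x), (64, 1))               # 2-pixel-period stripes
coarse = np.tile(np.cos(2 * np.pi * x / 32.0), (64, 1))  # 32-pixel-period stripes

f_fine = annular_fourier_features(fine)
f_coarse = annular_fourier_features(coarse)
```

The coarse texture puts its energy in the innermost ring and the fine texture in an outer ring, so the two are trivially separable in this feature space.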
Detection and clustering of features in aerial images by neuron network-based algorithm
NASA Astrophysics Data System (ADS)
Vozenilek, Vit
2015-12-01
The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and the backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic backpropagation method was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main findings include that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
Toews, Matthew; Wells, William M.; Collins, Louis; Arbel, Tal
2013-01-01
This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for identifying group-related differences in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between all subjects, FBM models images as a collage of distinct, localized image features which may not be present in all subjects. FBM thus explicitly accounts for the case where the same anatomical tissue cannot be reliably identified in all subjects due to disease or anatomical variability. A probabilistic model describes features in terms of their appearance, geometry, and relationship to sub-groups of a population, and is automatically learned from a set of subject images and group labels. Features identified indicate group-related anatomical structure that can potentially be used as disease biomarkers or as a basis for computer-aided diagnosis. Scale-invariant image features are used, which reflect generic, salient patterns in the image. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer’s (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and obtains an equal error classification rate of 0.78 on new subjects. PMID:20426102
Content based image retrieval using local binary pattern operator and data mining techniques.
Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan
2015-01-01
Content-based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally describe the visual content present in an image, e.g., texture, colour, shape, and spatial relations between vectors. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
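The basic (non-rotation-invariant) 8-neighbour LBP operator used as the starting point for such feature vectors is easy to state precisely: each interior pixel gets an 8-bit code marking which neighbours are at least as bright as it, and the image's normalized code histogram is the feature vector. A minimal numpy sketch (bit ordering is one of several common conventions):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code
    whose bits mark which neighbours are >= the centre pixel."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # neighbour plane, aligned with c
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes (the feature vector)."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

flat = np.ones((8, 8))
codes = lbp_image(flat)     # every neighbour equals the centre -> code 255
hist = lbp_histogram(flat)  # all mass in bin 255
```

Variants such as uniform or rotation-invariant LBP (as studied in the paper) post-process these codes; retrieval then compares the histograms, e.g. by chi-square distance.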
Hierarchical content-based image retrieval by dynamic indexing and guided search
NASA Astrophysics Data System (ADS)
You, Jane; Cheung, King H.; Liu, James; Guo, Linong
2003-12-01
This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.
Terrain detection and classification using single polarization SAR
Chow, James G.; Koch, Mark W.
2016-01-19
The various technologies presented herein relate to identifying manmade and/or natural features in a radar image. Two radar images (e.g., single-polarization SAR images) can be captured for a common scene. The first image is captured at a first instance and the second image is captured at a second instance, whereby the interval between the captures is long enough that temporal decorrelation occurs for natural surfaces in the scene, and only manmade surfaces, e.g., a road, produce correlated pixels. An LCCD image comprising the correlated and decorrelated pixels can be generated from the two radar images. A median image can be generated from a plurality of radar images, whereby any features in the median image can be identified. A superpixel operation can be performed on the LCCD image and the median image, thereby enabling a feature(s) in the LCCD image to be classified.
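The correlated/decorrelated-pixel idea can be sketched with a windowed correlation coefficient between two co-registered images: temporally stable (manmade) surfaces correlate strongly across acquisitions, while changing natural clutter does not. This is a toy illustration on synthetic data, not the patented LCCD computation:

```python
import numpy as np

def local_correlation(a, b, half=1):
    """Per-pixel correlation of two co-registered images over a
    (2*half+1)^2 neighbourhood; high values mark temporally stable
    (e.g. manmade) surfaces, low values mark decorrelated clutter."""
    h, w = a.shape
    out = np.zeros((h, w))
    for i in range(half, h - half):
        for j in range(half, w - half):
            wa = a[i - half:i + half + 1, j - half:j + half + 1].ravel()
            wb = b[i - half:i + half + 1, j - half:j + half + 1].ravel()
            wa = wa - wa.mean()
            wb = wb - wb.mean()
            denom = np.sqrt((wa @ wa) * (wb @ wb))
            out[i, j] = (wa @ wb) / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(3)
img1 = rng.random((20, 20))
img2 = rng.random((20, 20))      # natural clutter: decorrelated between passes
img2[8:12, :] = img1[8:12, :]    # a 'road' strip stays unchanged

cc = local_correlation(img1, img2)
```

Thresholding `cc` would separate the persistent strip (correlation near 1) from the decorrelated background (correlation near 0), the raw material for the LCCD image.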
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galavis, P; Friedman, K; Chandarana, H
Purpose: Radiomics involves the extraction of texture features from different imaging modalities with the purpose of developing models to predict patient treatment outcomes. The purpose of this study is to investigate texture feature reproducibility across [18F]FDG PET/CT and [18F]FDG PET/MR imaging in patients with primary malignancies. Methods: Twenty-five prospective patients with solid tumors underwent a clinical [18F]FDG PET/CT scan followed by an [18F]FDG PET/MR scan. In all patients the lesions were identified using nuclear medicine reports. The images were co-registered and segmented using an in-house auto-segmentation method. Fifty features, based on the intensity histogram and second- and high-order matrices, were extracted from the segmented regions of both image data sets. A one-way random-effects ANOVA model of the intra-class correlation coefficient (ICC) was used to establish texture feature correlations between both data sets. Results: The fifty features were classified into three categories (high, intermediate, and low) based on their ICC values, which ranged from 0.1 to 0.86. Ten features extracted from second- and high-order matrices showed large ICC ≥ 0.70. Seventeen features presented intermediate 0.5 ≤ ICC ≤ 0.65, and the remaining twenty-three presented low ICC ≤ 0.45. Conclusion: Features with large ICC values could be reliable candidates for quantification as they lead to similar results from both imaging modalities. Features with small ICC indicate a lack of correlation; therefore, using them as a quantitative measure will lead to different assessments of the same lesion depending on the imaging modality from which they are extracted. This study shows the need for further investigation and standardization of features across multiple imaging modalities.
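The one-way random-effects ICC used here has a closed form from the ANOVA mean squares, ICC(1,1) = (MSB − MSW) / (MSB + (k − 1)·MSW). A numpy sketch on synthetic "two-modality" feature measurements (all data below are simulated, not the study's):

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects intra-class correlation ICC(1,1).
    x has shape (n_subjects, k_measurements), e.g. one texture feature
    measured on PET/CT and again on PET/MR for each lesion."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(4)
lesion = rng.normal(0.0, 2.0, size=(200, 1))                  # true per-lesion values
reproducible = lesion + rng.normal(0.0, 0.2, size=(200, 2))   # small modality noise
unreliable = lesion + rng.normal(0.0, 4.0, size=(200, 2))     # noise swamps the signal

icc_hi = icc_1_1(reproducible)  # close to 1: feature reproducible across modalities
icc_lo = icc_1_1(unreliable)    # small: feature depends on the modality
```

Features would then be bucketed into high/intermediate/low reproducibility by thresholding these ICC estimates, as in the study.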
NASA Astrophysics Data System (ADS)
DeForest, Craig; Seaton, Daniel B.; Darnell, John A.
2017-08-01
I present and demonstrate a new, general purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise-gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise. Processing image sequences through 3-D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time.
This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.
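The gating principle — keep coherent Fourier components, suppress the evenly spread noise floor — can be sketched crudely with a hard spectral threshold over an image sequence (a toy stand-in for the actual noise-gating method, on synthetic data):

```python
import numpy as np

def noise_gate(frames, threshold=4.0):
    """Crude Fourier-domain noise gate over an image sequence (t, y, x):
    zero spectral components below a multiple of the estimated flat
    noise floor, keep the coherent (strong) components."""
    F = np.fft.fftn(frames)
    mag = np.abs(F)
    floor = np.median(mag)           # white noise fills Fourier space evenly
    gate = mag > threshold * floor   # coherent features stand far above the floor
    return np.real(np.fft.ifftn(F * gate))

rng = np.random.default_rng(5)
x = np.arange(32, dtype=float)
signal = np.broadcast_to(np.sin(2 * np.pi * x / 8.0), (16, 32, 32)).copy()
noisy = signal + 0.5 * rng.normal(size=signal.shape)

cleaned = noise_gate(noisy)
rmse_before = np.sqrt(np.mean((noisy - signal) ** 2))
rmse_after = np.sqrt(np.mean((cleaned - signal) ** 2))
```

Because the coherent stripe pattern concentrates its energy in a few Fourier bins while the noise spreads over all of them, the gate removes most of the noise without blurring the feature — the mechanism behind the >10x SNR gains described above, though the real method is more careful than a single global threshold.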
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent
2017-03-01
Quantitative imaging biomarkers are used widely in clinical trials for tracking and evaluation of medical interventions. Previously, we have presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features and the GLMM module allows the user to select clinical data and imaging features such as stroke lesion characteristics from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.
Spoof Detection for Finger-Vein Recognition System Using NIR Camera.
Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung
2017-10-01
Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. 
Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
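The PCA step applied to the CNN features can be sketched with plain numpy (SVD on the centered feature matrix). The feature dimensionality and the synthetic "real vs. attack" data below are hypothetical; in the paper, an SVM would then classify the reduced vectors:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA by SVD: mean and principal axes for reducing
    high-dimensional CNN features before the SVM stage."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_transform(X, mu, axes):
    return (X - mu) @ axes.T

rng = np.random.default_rng(6)
# Hypothetical stand-in for 4096-D CNN features of real vs. attack samples:
# the class difference lives in one direction, the rest is noise.
real = rng.normal(0, 1, size=(50, 4096)); real[:, 0] += 5.0
fake = rng.normal(0, 1, size=(50, 4096)); fake[:, 0] -= 5.0
X = np.vstack([real, fake])

mu, axes = pca_fit(X, n_components=10)
Z = pca_transform(X, mu, axes)  # 100 samples reduced from 4096-D to 10-D
```

The dominant principal component captures the class-separating direction, so even the 10-D projection keeps the real and attack samples far apart — which is why the subsequent SVM can operate on a much smaller feature space.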
Feature Matching of Historical Images Based on Geometry of Quadrilaterals
NASA Astrophysics Data System (ADS)
Maiwald, F.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.
2018-05-01
This contribution shows an approach to match historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photography presents diverse factors which make automatic image analysis (feature detection, feature matching and relative orientation of images) difficult. Due to e.g. film grain, dust particles or the digitization process, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space, as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to generally detect quadrilaterals in images. Consequently, the properties of the quadrilaterals as well as the relationship to neighbouring quadrilaterals are used for the description and matching of feature points. The results show that most of the matches are robust and correct but still few in number.
Generating description with multi-feature fusion and saliency maps of image
NASA Astrophysics Data System (ADS)
Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo
2018-04-01
Generating a description for an image can be regarded as visual understanding; it spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural-sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features. However, this cannot adequately capture the content of images, as it may focus only on the object areas of an image. So we add scene information to the image features using a CNN trained on Places205. Experiments show that the model with multiple features extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights to images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.
Variogram-based feature extraction for neural network recognition of logos
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2003-03-01
This paper presents a new approach for extracting spatial features of images based on the theory of regionalized variables. These features can be effectively used for automatic recognition of logo images using neural networks. Experimental results on a public-domain logo database show the effectiveness of the proposed approach.
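The regionalized-variables machinery behind such spatial features is the empirical semivariogram, γ(h) = ½·E[(z(x+h) − z(x))²] as a function of lag h. A minimal numpy sketch along one image axis (the lag range and test fields are illustrative assumptions):

```python
import numpy as np

def semivariogram(img, max_lag=8):
    """Empirical semivariogram along the horizontal axis:
    gamma(h) = 0.5 * mean((z(x+h) - z(x))^2) for lags h = 1..max_lag."""
    return np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(7)
smooth = np.cumsum(rng.normal(size=(32, 64)), axis=1)  # spatially correlated field
white = rng.normal(size=(32, 64))                      # no spatial structure

g_smooth = semivariogram(smooth)  # grows with lag: nearby pixels are similar
g_white = semivariogram(white)    # flat near 1: no dependence on lag
```

The shape of γ(h) over a set of lags (and directions) forms a compact spatial feature vector — logos with different spatial structure produce distinct variogram signatures, which is what the neural network then classifies.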
Robust image features: concentric contrasting circles and their image extraction
NASA Astrophysics Data System (ADS)
Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.
1992-03-01
Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given and then a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle image feature has the advantages of being easily manufactured, easily extracted from the image, robust extraction (true targets are found, while few false targets are found), it is a passive feature, and its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.
Semantics and technologies in modern design of interior stairs
NASA Astrophysics Data System (ADS)
Kukhta, M.; Sokolov, A.; Pelevin, E.
2015-10-01
The use of metal in the design of interior stairs opens new possibilities for shaping and can be implemented using different technologies. The article discusses the design features and production technologies of a forged metal spiral staircase, considering image semantics based on historical and cultural heritage. To achieve this objective, a structural-semantic method was applied (to identify the structural organization and semantic features of the artistic image), along with engineering methods (to justify the construction of the object), anthropometry and ergonomics (to provide usability), and comparative analysis (to reveal how the staircase image has been treated in different periods of culture). The research results are as follows. The influence of semantics, based on the World Tree image, on the design of the interior staircase was revealed. A rational calculation of the steps was suggested to ensure the required strength. Finally, a technology providing the realization of the artistic image was presented. The practical part of the work presents a version of the forged staircase.
Multi-sensor image registration based on algebraic projective invariants.
Li, Bin; Wang, Wei; Ye, Hao
2013-04-22
A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
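The specific five-corner invariant used in the paper is not given in the abstract; the classic cross-ratio of four collinear points illustrates the underlying idea of an algebraic projective invariant (a minimal sketch; the function names are ours, not from the paper):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given as scalars along the line.
    Its value is unchanged by any projective transformation of the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective_map(x, m):
    """Apply a 1D projective (Moebius) map x -> (m00*x + m01) / (m10*x + m11)."""
    return (m[0, 0] * x + m[0, 1]) / (m[1, 0] * x + m[1, 1])

pts = [0.0, 1.0, 2.0, 4.0]
m = np.array([[2.0, 1.0], [0.5, 3.0]])  # an arbitrary non-degenerate map
mapped = [projective_map(x, m) for x in pts]
print(abs(cross_ratio(*pts) - cross_ratio(*mapped)) < 1e-9)  # True
```

Descriptors built from such invariants need no gray-scale information, which is why they tolerate the radiometric discrepancy between multi-sensor image pairs.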
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform a search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present an analysis of the focus characteristics of MHAC.
HoDOr: histogram of differential orientations for rigid landmark tracking in medical images
NASA Astrophysics Data System (ADS)
Tiwari, Abhishek; Patwardhan, Kedar Anil
2018-03-01
Feature extraction plays a pivotal role in pattern recognition and matching. An ideal feature should be invariant to image transformations such as translation, rotation, and scaling. In this work, we present a novel rotation-invariant feature based on the Histogram of Oriented Gradients (HOG). We compare the performance of the proposed approach with the HOG feature on 2D phantom data as well as 3D medical imaging data. We use traditional histogram comparison measures, namely the Bhattacharyya distance and the Normalized Correlation Coefficient (NCC), to assess the efficacy of the proposed approach under image rotation. In our experiments, the proposed feature performs 40%, 20%, and 28% better than the HOG feature on phantom (2D), Computed Tomography (CT-3D), and Ultrasound (US-3D) data, respectively, for image matching and landmark tracking tasks.
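The two histogram comparison measures named above can be sketched as follows (a minimal numpy illustration, not the authors' code):

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance between two histograms (normalized internally)."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    bc = np.sum(np.sqrt(h1 * h2))   # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))  # 0 for identical histograms

def ncc(h1, h2):
    """Normalized correlation coefficient between two histograms."""
    a = h1 - h1.mean()
    b = h2 - h2.mean()
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

h = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
print(bhattacharyya_distance(h, h))  # ~0 for identical histograms
print(ncc(h, h))                     # ~1 for identical histograms
```

Both measures compare descriptor histograms directly, which is what makes them suitable for checking how well a feature survives image rotation.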
Medical Image Analysis by Cognitive Information Systems - a Review.
Ogiela, Lidia; Takizawa, Makoto
2016-10-01
This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information, applied here to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis is proposed to analyze the meaning of the data: meaning is carried by the information itself, for example by medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies; these images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks. This is very important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from the analyzed data sets. These features allow a new way of analysis to be created.
Grid point extraction and coding for structured light system
NASA Astrophysics Data System (ADS)
Song, Zhan; Chung, Ronald
2011-09-01
A structured light system simplifies three-dimensional reconstruction by projecting a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points, plus the pseudorandomness of the projected pattern, can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.
Enhancing facial features by using clear facial features
NASA Astrophysics Data System (ADS)
Rofoo, Fanar Fareed Hanna
2017-09-01
The similarity of features between individuals of the same ethnicity motivated the idea of this project: to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing a blurred facial image. A database of clear images was used, containing 30 individuals equally divided into five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. The first approach was to extract the features of a clear facial image, or of a template built from clear facial images, using the wavelet transform, and to impose them on the blurred image using the inverse wavelet transform. The results of this approach were not good, as the features did not all align together: in most cases the eyes were aligned but the nose or mouth was not. In the next approach the features were therefore treated separately, but in some cases a blocky effect appeared on the features because no sufficiently close matching features were available. In general, the small available database did not allow the goal results to be achieved, because of the limited number of individuals. Colour information and feature similarity could be investigated further to achieve better results, by using a larger database and by improving the enhancement process through the availability of closer matches within each ethnicity.
Morphological Feature Extraction for Automatic Registration of Multispectral Images
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
Shearlet Features for Registration of Remotely Sensed Multitemporal Images
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline
2015-01-01
We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.
Machine vision based quality inspection of flat glass products
NASA Astrophysics Data System (ADS)
Zauner, G.; Schagerl, M.
2014-03-01
This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little 'image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (std. deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. The following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
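For illustration, the three histogram-based features mentioned (standard deviation, skewness, kurtosis) can be computed directly with numpy (a sketch, not the paper's implementation):

```python
import numpy as np

def histogram_features(pixels):
    """Histogram-based defect descriptors: std deviation, skewness, kurtosis."""
    x = np.asarray(pixels, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma                   # standardized grey levels
    return {
        "std": sigma,
        "skewness": np.mean(z ** 3),       # asymmetry of the grey-level histogram
        "kurtosis": np.mean(z ** 4) - 3.0, # excess kurtosis (0 for a Gaussian)
    }

# A symmetric grey-level distribution has near-zero skewness
patch = np.array([10, 20, 30, 40, 50], dtype=float)
feats = histogram_features(patch)
print(feats["skewness"])  # ~0 for a symmetric distribution
```

Such scalar features are cheap to compute on small defect regions, which matters given that the defect images here consist of only a few pixels.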
Geologic interpretation of Seasat SAR imagery near the Rio Lacantum, Mexico
NASA Technical Reports Server (NTRS)
Rebillard, PH.; Dixon, T.
1984-01-01
A mosaic of the Seasat Synthetic Aperture Radar (SAR) optically processed images over Central America is presented. A SAR image of the Rio Lacantum area (southeastern Mexico) has been digitally processed and its interpretation is presented. The region is characterized by low relief and a dense vegetation canopy. The surface is believed to be indicative of subsurface structural features. The Seasat-SAR system had a steep imaging geometry (incidence angle 23 + or - 3 deg off-nadir), which is favorable for the detection of subtle topographic variations. Subtle textural features in the image corresponding to surface topography were enhanced by image processing techniques. A structural and lithologic interpretation of the processed images is presented. Lineaments oriented NE-SW dominate and intersect broad folds trending NW-SE. Distinctive karst topography characterizes one high-relief area.
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they turn out to have better performance than the original single-scale method. Because they use features from five different domains at multiple image scales, and use the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They can also be used for the evaluation of images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
Quantitative imaging features: extension of the oncology medical image database
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.
2015-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important for determining whether a disease is present or a therapy is effective, by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high throughput approach. The ability to calculate multiple imaging features and data from the acquired images is valuable and facilitates further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.
Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms
NASA Astrophysics Data System (ADS)
Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
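The core of Phase-Only Correlation is to normalize the cross-power spectrum to unit magnitude before the inverse transform, so that only Fourier phase contributes to the match; a minimal numpy sketch (not the commercial implementation) is:

```python
import numpy as np

def poc(f, g, eps=1e-12):
    """Phase-Only Correlation surface between two same-sized images.
    The peak location gives the translational offset between them, and a
    peak height near 1 indicates a strong match."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.maximum(np.abs(R), eps)   # keep phase, discard magnitude
    return np.real(np.fft.ifft2(R))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, (3, 5), axis=(0, 1))
surface = poc(shifted, img)
peak = np.unravel_index(np.argmax(surface), surface.shape)
print(peak)  # (3, 5): the circular shift is recovered
```

Discarding spectral magnitude is what gives POC its robustness to weakly impressed, low-quality fingerprint images: intensity variations affect magnitude far more than phase.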
Bayesian network interface for assisting radiology interpretation and education
NASA Astrophysics Data System (ADS)
Duda, Jeffrey; Botzolakis, Emmanuel; Chen, Po-Hao; Mohan, Suyash; Nasrallah, Ilya; Rauschecker, Andreas; Rudie, Jeffrey; Bryan, R. Nick; Gee, James; Cook, Tessa
2018-03-01
In this work, we present the use of Bayesian networks for radiologist decision support during clinical interpretation. This computational approach has the advantage of avoiding incorrect diagnoses that result from known human cognitive biases such as anchoring bias, framing effect, availability bias, and premature closure. To integrate Bayesian networks into clinical practice, we developed an open-source web application that provides diagnostic support for a variety of radiology disease entities (e.g., basal ganglia diseases, bone lesions). The Clinical tool presents the user with a set of buttons representing clinical and imaging features of interest. These buttons are used to set the value for each observed feature. As features are identified, the conditional probabilities for each possible diagnosis are updated in real time. Additionally, using sensitivity analysis, the interface may be set to inform the user which remaining imaging features provide maximum discriminatory information to choose the most likely diagnosis. The Case Submission tools allow the user to submit a validated case and the associated imaging features to a database, which can then be used for future tuning/testing of the Bayesian networks. These submitted cases are then reviewed by an assigned expert using the provided QC tool. The Research tool presents users with cases with previously labeled features and a chosen diagnosis, for the purpose of performance evaluation. Similarly, the Education page presents cases with known features, but provides real time feedback on feature selection.
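The real-time update of conditional probabilities as features are set can be sketched with a plain Bayes-rule step (the diagnoses, feature, and numbers here are hypothetical, not taken from the deployed networks):

```python
def bayes_update(prior, likelihoods):
    """Update P(diagnosis) after observing one feature.
    prior: {diagnosis: P(d)}; likelihoods: {diagnosis: P(feature | d)}."""
    unnorm = {d: prior[d] * likelihoods[d] for d in prior}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

# Hypothetical two-diagnosis example with one imaging feature observed present
prior = {"disease_A": 0.5, "disease_B": 0.5}
lik = {"disease_A": 0.9, "disease_B": 0.3}   # P(feature present | diagnosis)
posterior = bayes_update(prior, lik)
print(round(posterior["disease_A"], 2))  # 0.75
```

In a full Bayesian network each button press conditions on one node and the update propagates through the graph, but every local step reduces to this prior-times-likelihood renormalization.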
Computer vision applications for coronagraphic optical alignment and image processing.
Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A
2013-05-10
Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
NASA Astrophysics Data System (ADS)
Raza, Shan-e.-Ahmed; Marjan, M. Q.; Arif, Muhammad; Butt, Farhana; Sultan, Faisal; Rajpoot, Nasir M.
2015-03-01
One of the main factors for high workload in pulmonary pathology in developing countries is the relatively large proportion of tuberculosis (TB) cases which can be detected with high throughput using automated approaches. TB is caused by Mycobacterium tuberculosis, which appears as thin, rod-shaped acid-fast bacillus (AFB) in Ziehl-Neelsen (ZN) stained sputum smear samples. In this paper, we present an algorithm for automatic detection of AFB in digitized images of ZN stained sputum smear samples under a light microscope. A key component of the proposed algorithm is the enhancement of raw input image using a novel anisotropic tubular filter (ATF) which suppresses the background noise while simultaneously enhancing strong anisotropic features of AFBs present in the image. The resulting image is then segmented using color features and candidate AFBs are identified. Finally, a support vector machine classifier using morphological features from candidate AFBs decides whether a given image is AFB positive or not. We demonstrate the effectiveness of the proposed ATF method with two different feature sets by showing that the proposed image analysis pipeline results in higher accuracy and F1-score than the same pipeline with standard median filtering for image enhancement.
Automated detection of diabetic retinopathy on digital fundus images.
Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D
2002-02-01
The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
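Sensitivity and specificity as reported above are computed from the standard confusion-matrix counts; a small sketch (the counts are hypothetical, chosen only to echo the 21-with-exudates / 9-without split):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a 21-positive / 9-negative image set
sens, spec = sensitivity_specificity(tp=18, fn=3, tn=9, fp=0)
print(round(sens, 3), round(spec, 3))
```

Computing both against the fundoscopist's reading, as done here, treats the expert as ground truth, so the figures measure agreement with the clinician rather than absolute accuracy.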
Hemorrhage detection in MRI brain images using images features
NASA Astrophysics Data System (ADS)
Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela
2013-11-01
Abnormalities appear frequently in Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain, using a selection of the most valuable texture features, in order to discriminate between normal and affected areas of the brain. Owing to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but which can be detected by texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, de-noising is performed using Daubechies wavelets; ii) the original images are transformed into feature images using first-order descriptors; iii) regions of interest (ROIs) are cropped from the feature images following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the feature measurements is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the meaningfulness of the image features is analyzed using the t-test, with p-values applied to pairs of features in order to measure their efficacy.
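The two co-occurrence descriptors used in step iv), energy and homogeneity, can be sketched directly in numpy (a simplified single-offset GLCM, not the paper's code):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[image[r, c], image[r + dr, c + dc]] += 1
    return m / m.sum()

def energy_homogeneity(p):
    """Energy = sum p^2; homogeneity = sum p / (1 + |i - j|)."""
    i, j = np.indices(p.shape)
    return (p ** 2).sum(), (p / (1.0 + np.abs(i - j))).sum()

roi = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
e, h = energy_homogeneity(glcm(roi, levels=4))
print(e, h)  # ~0.167, ~0.833 for this blocky patch
```

High homogeneity and low energy spread are exactly the kind of statistics that shift between normal and affected tissue even when the change is invisible to the eye.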
Fast linear feature detection using multiple directional non-maximum suppression.
Sun, C; Vallotton, P
2009-05-01
The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
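The central operation, non-maximum suppression along a chosen direction, can be sketched as follows (a simplified scalar version of the multi-directional scheme, without the symmetry checking and gap linking):

```python
import numpy as np

def directional_nms(response, direction):
    """Keep a pixel only if it is a strict local maximum along one direction.
    direction: (dr, dc) unit offset; (0, 1) compares across columns, so
    vertical ridges survive."""
    dr, dc = direction
    rows, cols = response.shape
    out = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            v = response[r, c]
            ok = True
            for s in (-1, 1):
                rr, cc = r + s * dr, c + s * dc
                if 0 <= rr < rows and 0 <= cc < cols and response[rr, cc] >= v:
                    ok = False
            out[r, c] = ok
    return out

# A bright vertical line survives NMS taken across the line direction
img = np.zeros((5, 5))
img[:, 2] = 1.0
mask = directional_nms(img, (0, 1))
print(mask[:, 2].all())  # True: only the ridge column is kept
```

Running this for several directions and fusing the results is what lets the full algorithm respond to linear features of any orientation while staying fast.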
Three Dimensional Imaging with Multiple Wavelength Speckle Interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Cannon, Bret D.; Schiffern, John T.
2014-05-28
We present the design, modeling, construction, and results of a three-dimensional imager based upon multiple-wavelength speckle interferometry. A surface under test is illuminated with tunable laser light in a Michelson interferometer configuration while a speckled image is acquired at each laser frequency step. The resulting hypercube is Fourier transformed in the frequency dimension and the beat frequencies that result map the relative offsets of surface features. Synthetic wavelengths resulting from the laser tuning can probe features ranging from 18 microns to hundreds of millimeters. Three dimensional images will be presented along with modeling results.
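The measurement range quoted above is set by the synthetic wavelength produced by tuning the laser; the standard two-wavelength relation, with illustrative wavelength values, is:

```python
def synthetic_wavelength(lam1, lam2):
    """Lambda_s = lam1 * lam2 / |lam1 - lam2|: the effective measurement
    wavelength of a two-wavelength interferometer. The closer the two
    wavelengths, the longer (coarser) the synthetic wavelength."""
    return lam1 * lam2 / abs(lam1 - lam2)

lam1, lam2 = 1550e-9, 1550.1e-9   # metres; 0.1 nm apart (illustrative values)
print(synthetic_wavelength(lam1, lam2))  # ~0.024 m (24 mm)
```

Sweeping the laser over many frequency steps yields a whole ladder of such synthetic wavelengths at once, which is what allows a single acquisition to cover feature depths from tens of microns to hundreds of millimetres.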
Image processing for x-ray inspection of pistachio nuts
NASA Astrophysics Data System (ADS)
Casasent, David P.
2001-03-01
A review is provided of image processing techniques that have been applied to the inspection of pistachio nuts using X-ray images. X-ray sensors provide non-destructive internal product detail not available from other sensors. The primary concern with these data is detecting the presence of worm infestations in nuts, since they have been linked to the presence of aflatoxin. We describe new techniques for segmentation, feature selection, selection of product categories (clusters), classifier design, etc. Specific novel results include: a new segmentation algorithm to produce images of isolated product items; preferable classifier operation (the classifier with the best probability of correct recognition Pc is not necessarily the best choice); higher-order discrimination information is present in standard features (thus, high-order features appear useful); and classifiers that use new cluster categories of samples achieve improved performance. Results are presented for X-ray images of pistachio nuts; however, all techniques have use in other product inspection applications.
Segmentation of prostate boundaries from ultrasound images using statistical shape model.
Shen, Dinggang; Zhan, Yiqiang; Davatzikos, Christos
2003-04-01
This paper presents a statistical shape model for the automatic prostate segmentation in transrectal ultrasound images. A Gabor filter bank is first used to characterize the prostate boundaries in ultrasound images in both multiple scales and multiple orientations. The Gabor features are further reconstructed to be invariant to the rotation of the ultrasound probe and incorporated in the prostate model as image attributes for guiding the deformable segmentation. A hierarchical deformation strategy is then employed, in which the model adaptively focuses on the similarity of different Gabor features at different deformation stages using a multiresolution technique, i.e., coarse features first and fine features later. A number of successful experiments validate the algorithm.
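A Gabor filter bank over multiple scales and orientations, as used here to characterize prostate boundaries, can be sketched in numpy (kernel parameters are illustrative, not the paper's settings):

```python
import numpy as np

def gabor_kernel(ksize, wavelength, theta, sigma):
    """Real part of a 2D Gabor kernel: a Gaussian-windowed sinusoid
    oriented at angle theta, as used in boundary-sensitive filter banks."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A bank over two scales and four orientations, in the spirit of the model
bank = [gabor_kernel(15, w, t, sigma=w / 2)
        for w in (4, 8) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)  # 8 (15, 15)
```

Pooling responses across orientations is one way to obtain the rotation-invariance to the ultrasound probe that the model requires.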
A blur-invariant local feature for motion blurred image matching
NASA Astrophysics Data System (ADS)
Tong, Qiang; Aoki, Terumasa
2017-07-01
Image matching between a blurred image (caused by camera motion, defocus, etc.) and a non-blurred image is a critical task for many image/video applications. However, most of the existing local feature schemes fail at this task. This paper presents a blur-invariant descriptor and a novel local feature scheme, including the descriptor and an interest point detector based on moment symmetry, the authors' previous work. The descriptor is based on a new concept, the center peak moment-like element (CPME), which is robust to blur and boundary effects. By constructing CPMEs, the descriptor is also made distinctive and suitable for image matching. Experimental results show that our scheme outperforms state-of-the-art methods for blurred image matching.
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image, as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image, whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammogram, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
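The HH subband of the first step isolates diagonal high-frequency detail; a minimal sketch using the Haar wavelet (the abstract does not state which wavelet family was used) is:

```python
import numpy as np

def haar_hh(image):
    """HH (diagonal detail) subband of a single-level 2D Haar DWT."""
    a = image[0::2, 0::2]   # top-left of each 2x2 block
    b = image[0::2, 1::2]   # top-right
    c = image[1::2, 0::2]   # bottom-left
    d = image[1::2, 1::2]   # bottom-right
    return (a - b - c + d) / 2.0

flat = np.ones((4, 4))
checker = np.indices((4, 4)).sum(axis=0) % 2
print(haar_hh(flat))     # all zeros: a flat image has no diagonal detail
print(haar_hh(checker))  # constant -1: a checkerboard is pure diagonal detail
```

Filtering this subband with oriented Gabor kernels then probes how the surviving high-frequency texture is distributed in orientation, which is the quantity the entropy and uniformity statistics summarize.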
NASA Astrophysics Data System (ADS)
Davis, Paul B.; Abidi, Mongi A.
1989-05-01
PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, present PET technology does not provide the necessary image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal is to enhance these dominant features to allow automatic computer interpretation and classification of PET images, by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown, both analytically and through computer simulation, that they exhibit consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries is presented. Subsequently, two kinds of quantitative methods, based on geometric features and on morphological features, are proposed. The paper puts forward a retinal-abnormality grading decision-making method, which was used in the actual analysis and evaluation of multiple OCT images, and shows the detailed analysis process on four retinal OCT images with different degrees of abnormality. The final grading results verify that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status: it obtains parameters and features associated with retinal morphology, and quantitative analysis and evaluation of these features, combined with the reference model, realize the abnormality judgment of the target image and provide a reference for disease diagnosis.
Peng, Fei; Li, Jiao-ting; Long, Min
2015-03-01
To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoints of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
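Statistical features of the kind used in the first stage can be sketched in numpy; the four per-channel moments below are a small illustrative stand-in for the paper's 31-dimensional statistical/textural vector, whose exact composition the abstract does not give:

```python
import numpy as np

def stat_features(img):
    """Per-channel mean, variance, skewness, kurtosis -- a small stand-in
    for a statistical feature vector (the paper's actual 31 features are
    not specified in the abstract)."""
    feats = []
    for c in range(img.shape[2]):
        x = img[..., c].astype(float).ravel()
        mu, sigma = x.mean(), x.std()
        if sigma == 0:
            skew = kurt = 0.0          # flat channel: higher moments undefined
        else:
            z = (x - mu) / sigma
            skew, kurt = (z ** 3).mean(), (z ** 4).mean() - 3.0
        feats += [mu, sigma ** 2, skew, kurt]
    return np.array(feats)
```

The resulting vectors would then be fed to an SVM (LIBSVM in the paper) for the natural-vs-computer-generated decision.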
Girelli, Carlos Magno Alves
2016-05-01
Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared among themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Several image-editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal, and tonal reversal. The detection of lateral reversals is discussed in this article, and a suggestion is made to reduce errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in the three levels of detail of the ACE-V methodology, some visual features of fingerprint images can help identify sources of forgeries and modus operandi, such as limits and image contours, failures in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Fingerprints therefore have the potential to reduce linkage blindness, and the present work suggests analyzing fingerprints when profiling false identity documents, as well as including fingerprint features in the profile of the documents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Color image definition evaluation method based on deep learning method
NASA Astrophysics Data System (ADS)
Liu, Di; Li, YingChun
2018-01-01
In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and the image labels are then used to train the BP neural network, which performs the final definition evaluation. The method is evaluated on images from the CSIQ database, blurred at different levels to yield 4,000 images divided into three categories, each representing one blur level. For each category, 300 of the 400 high-dimensional feature samples are used for training the VGG16/BP pipeline and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to most existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent quality classification accuracy on the test set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
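As a rough illustration of the training stage described above, here is a minimal numpy sketch of a one-hidden-layer BP network trained on synthetic stand-ins for the VGG16 features; the feature dimension, class geometry, and layer sizes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the VGG16 features: the real pipeline would push each image
# through a pretrained VGG16 and keep its 4,096-dim fully connected output.
D, N = 64, 300                                   # feature dim, samples/class
centers = rng.normal(0.0, 1.0, size=(3, D))      # three blur levels
X = np.vstack([c + 0.3 * rng.normal(size=(N, D)) for c in centers])
y = np.repeat(np.arange(3), N)

# Minimal one-hidden-layer "BP network" trained by full-batch gradient descent.
W1 = rng.normal(0, 0.1, (D, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 3)); b2 = np.zeros(3)
onehot = np.eye(3)[y]
lr = 0.5
for _ in range(300):
    H = np.tanh(X @ W1 + b1)                     # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)                 # softmax probabilities
    G = (P - onehot) / len(X)                    # cross-entropy gradient
    dH = (G @ W2.T) * (1.0 - H ** 2)             # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

acc = ((np.tanh(X @ W1 + b1) @ W2 + b2).argmax(1) == y).mean()
```

With well-separated synthetic classes the network reaches high training accuracy; the paper's pipeline would classify the three blur levels from real VGG16 features instead.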
NASA Astrophysics Data System (ADS)
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter of the LBP technique. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the combination of binarized statistical image features and adaptive CS-LBP features was found to give high FER rates. Evaluation of the adaptive texture features shows performance competitive with the nonadaptive features and higher than other state-of-the-art approaches.
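The CS-LBP descriptor mentioned above compares the four center-symmetric neighbor pairs around each pixel, giving a 4-bit code per pixel; a minimal numpy sketch with a fixed radius-1 neighborhood (the paper chooses the neighborhood size adaptively from granulometry, and the threshold here is an illustrative default) is:

```python
import numpy as np

def cs_lbp_hist(img, t=0.01):
    """Center-symmetric LBP on a 3x3 neighbourhood: 4-bit codes from the
    four centre-symmetric neighbour pairs, pooled into a 16-bin histogram."""
    img = img.astype(float)
    # neighbour offsets (row, col) in the 3x3 window, paired symmetrically
    pairs = [((0, 1), (2, 1)),   # north vs south
             ((0, 2), (2, 0)),   # north-east vs south-west
             ((1, 2), (1, 0)),   # east vs west
             ((2, 2), (0, 0))]   # south-east vs north-west
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[r1:r1 + h - 2, c1:c1 + w - 2]
        b = img[r2:r2 + h - 2, c2:c2 + w - 2]
        code |= ((a - b) > t).astype(int) << bit
    return np.bincount(code.ravel(), minlength=16)
```

In a two-stage framework, such histograms computed per image region would be concatenated with the other LBP-variant features before classification.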
Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L
2016-07-01
Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features (area under the ROC curve: [Formula: see text]).
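The evaluation metric in this record, the area under the ROC curve, can be computed directly from classifier scores via its rank-statistic (Mann-Whitney) formulation; a minimal numpy sketch (not the authors' code) is:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank formulation: the probability
    that a randomly chosen positive outscores a randomly chosen negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # count wins and give half credit for ties over all pos/neg pairs
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

In the paper's setup this would be applied to SVM decision values within each of the five cross-validation folds.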
Principal curve detection in complicated graph images
NASA Astrophysics Data System (ADS)
Liu, Yuncai; Huang, Thomas S.
2001-09-01
Finding principal curves in an image is an important low-level operation in computer vision and pattern recognition. Principal curves are the curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth, satisfy a certain length constraint, and allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. A given feature image, obtained from edge detection of an intensity image or a thinning operation on a pictorial map image, is first converted to a graph representation. In the graph image domain, principal curve detection is performed to identify useful image features. Our algorithm for principal curve detection uses shortest-path and directional-deviation schemes, and has proven very efficient on real graph images.
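The shortest-path scheme at the heart of the detection step can be illustrated with Dijkstra's algorithm on an adjacency-list graph (a generic sketch, not the authors' implementation; in the paper the graph would come from the thinned feature image):

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra's algorithm on an adjacency dict {node: [(nbr, cost), ...]}.
    Returns the node sequence and total cost from src to dst."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:                # walk predecessors back to src
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

Candidate principal curves would then be scored with the directional-deviation constraint along such paths.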
Early melanoma diagnosis with mobile imaging.
Do, Thanh-Toan; Zhou, Yiren; Zheng, Haitian; Cheung, Ngai-Man; Koh, Dawn
2014-01-01
We research a mobile imaging system for early diagnosis of melanoma. Different from previous work, we focus on smartphone-captured images, and propose a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely-controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to computation and memory constraints. To address these challenges, we propose to localize the skin lesion by combining fast skin detection and fusion of two fast segmentation results. We propose new features to capture color variation and border irregularity which are useful for smartphone-captured images. We also propose a new feature selection criterion to select a small set of good features used in the final lightweight system. Our evaluation confirms the effectiveness of proposed algorithms and features. In addition, we present our system prototype which computes selected visual features from a user-captured skin lesion image, and analyzes them to estimate the likelihood of malignance, all on an off-the-shelf smartphone.
Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki
2015-03-10
This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
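Step (iv), phase correlation matching, can be sketched for the simple case of a pure integer translation (a generic FFT-based sketch; the paper's system applies the idea to the left/right phase-difference feature images):

```python
import numpy as np

def phase_correlation(img, ref):
    """Estimate the integer translation that maps `ref` onto `img` using
    the normalised cross-power spectrum (phase correlation)."""
    A, B = np.fft.fft2(img), np.fft.fft2(ref)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                # keep phase only
    corr = np.fft.ifft2(R).real           # delta peak at the shift
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # map wrap-around peaks to signed shifts
    if dy > img.shape[0] // 2: dy -= img.shape[0]
    if dx > img.shape[1] // 2: dx -= img.shape[1]
    return dy, dx
```

The recovered shift plays the role of the phase-shifting vector from which the in-focus lens position is derived.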
Cluster compression algorithm: A joint clustering/data compression concept
NASA Technical Reports Server (NTRS)
Hilbert, E. E.
1977-01-01
The Cluster Compression Algorithm (CCA), which was developed to reduce the costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to represent the information in the image data more efficiently. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features, reducing user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented using source encoding concepts. Various forms of the CCA are defined, and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
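The clustering-then-source-coding idea can be illustrated with a toy sketch: ordinary k-means stands in for the CCA's spatially local clustering, and the zeroth-order entropy of the resulting feature map bounds the rate a source encoder could achieve (all data here are synthetic):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means -- a simplified stand-in for the CCA's spatially
    local clustering (the real algorithm clusters within local regions)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Toy two-band "multispectral" pixels drawn from two spectral classes.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                    rng.normal(5.0, 0.5, (100, 2))])
centers, feature_map = kmeans(pixels, k=2)

# The feature map (one small integer per pixel) is what gets source-coded;
# its zeroth-order entropy bounds the achievable rate in bits per pixel.
p = np.bincount(feature_map) / feature_map.size
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
```

Decoding needs only the cluster centers (the look-up table) and the encoded feature map, which is what keeps user computation low.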
Comparison of Texture Features Used for Classification of Life Stages of Malaria Parasite.
Bairagi, Vinayak K; Charpe, Kshipra C
2016-01-01
Malaria is a vector-borne disease widely occurring in equatorial regions. Even after decades of malaria-control campaigns, it still causes high mortality today due to improper and late diagnosis. To reduce the number of people affected by malaria, diagnosis should be early and accurate. This paper presents an automatic method for diagnosing the malaria parasite in blood images. Image processing techniques are used to diagnose the malaria parasite and to detect its stages. The diagnosis of parasite stages is done using statistical and textural features of the malaria parasite in blood images. This paper compares the textural features used individually and in groups, considering the accuracy, sensitivity, and specificity of the features on the same image database.
The correlation study of parallel feature extractor and noise reduction approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria
2015-05-15
This paper presents literature reviews covering a variety of techniques for developing a parallel feature extractor and for finding its correlation with noise reduction approaches for low-light-intensity images. Low-light-intensity images typically appear darker and have low contrast. Without proper handling, such images regularly lead to misperception of objects and textures and to an inability to segment them; the resulting visual illusions often cause disorientation, user fatigue, and poor detection and classification performance for both humans and computer algorithms. Noise reduction (NR) is therefore an essential preliminary step for other image processing stages such as edge detection, image segmentation, and image compression. A parallel feature extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract the features. Working on low-light-intensity images poses challenges for the PFE, which depends closely on the quality of its pre-processing steps. Many papers have suggested well-established NR and PFE strategies, but only a few resources have suggested or mentioned the correlation between them. This paper reviews the best NR and PFE approaches with a detailed explanation of the suggested correlation; this finding may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches, and algorithms, we present a correlation study between NR and PFE that can be useful for the development and enhancement of existing PFEs.
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on their fidelity and authenticity. For evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large amounts of IR images. Then, we present the evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments were carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which makes simulation an effective data augmentation method for real IR images.
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Windham, Joe P.; Peck, Donald J.
1997-04-01
This paper presents the development and performance evaluation of an MRI feature space method. The method is useful for identification of tissue types, segmentation of tissues, and quantitative measurements on tissues, to obtain information that can be used in decision making (diagnosis, treatment planning, and evaluation of treatment). The steps of the work accomplished are as follows. (1) Four T2-weighted and two T1-weighted images (before and after injection of gadolinium) were acquired for ten tumor patients. (2) Images were analyzed by two image analysts according to the following algorithm. The intracranial brain tissues were segmented from the scalp and background. The additive noise was suppressed using a multi-dimensional nonlinear edge-preserving filter that preserves partial volume information on average. Image nonuniformities were corrected using a modified lowpass filtering approach. The resulting images were used to generate and visualize an optimal feature space. Cluster centers were identified on the feature space, and the images were then segmented into normal tissues and different zones of the tumor. (3) Biopsy samples were extracted from each patient and subsequently analyzed by the pathology laboratory. (4) Image analysis results were compared to each other and to the biopsy results; pre- and post-surgery feature spaces were also compared. The proposed algorithm made it possible to visualize the MRI feature space and to segment the image. In all cases, the operators were able to find clusters for normal and abnormal tissues, as well as clusters for different zones of the tumor. Based on the clusters marked for each zone, the method successfully segmented the image into normal tissues (white matter, gray matter, and CSF) and different zones of the lesion (tumor, cyst, edema, radiation necrosis, necrotic core, and infiltrated tumor). The results agreed with those obtained from the biopsy samples. Comparison of pre-surgery to post-surgery-and-radiation feature spaces confirmed that the tumor was not present in the second study but that radiation necrosis had developed as a result of radiation.
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First, the Laplacian-Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian-Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively, and then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, signal-to-noise ratio (SNR), and contrast improvement index (CII).
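The pyramid decomposition and exact reconstruction underlying such methods can be sketched as follows; a 2×2 box filter stands in for the Gaussian smoothing step, and in the full algorithm CLAHE would be applied to the detail levels and morphology to the low-pass residual before reconstruction:

```python
import numpy as np

def down(img):
    # 2x2 box filter + decimation (a simple stand-in for the Gaussian step)
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up(img):
    # nearest-neighbour 2x upsampling
    return np.kron(img, np.ones((2, 2)))

def build_pyramid(img, levels=3):
    """Laplacian-style pyramid: each level stores the detail lost by one
    down/up round trip, so reconstruction is exact by construction."""
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))   # high-frequency (detail) sub-image
        img = low
    pyr.append(img)                 # final low-pass residual
    return pyr

def reconstruct(pyr):
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = up(img) + lap
    return img
```

Enhancement then amounts to modifying `pyr` level by level (CLAHE on the detail sub-images, morphology on the residual) before calling `reconstruct`.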
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located, and the location of each point in three-dimensional space is then calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means of applying heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images in the database and highlights the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
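The bag-of-words retrieval model on the server side can be sketched as nearest-word quantization of local descriptors followed by cosine-similarity ranking (a minimal sketch without the paper's state-of-the-art extensions such as inverted files or spatial verification):

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    """Quantise local descriptors to their nearest visual word and build
    an L2-normalised bag-of-words histogram."""
    d2 = ((descriptors[:, None] - vocab[None]) ** 2).sum(-1)
    words = d2.argmin(1)                          # nearest visual word
    h = np.bincount(words, minlength=len(vocab)).astype(float)
    return h / (np.linalg.norm(h) or 1.0)

def rank(query_h, db_hists):
    """Rank database images by cosine similarity (histograms are unit-norm,
    so a dot product suffices); best match first."""
    sims = db_hists @ query_h
    return np.argsort(-sims)
```

A real system would use a vocabulary of tens of thousands of words learned from SIFT-like descriptors and tf-idf weighting on top of these histograms.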
Registration of ophthalmic images using control points
NASA Astrophysics Data System (ADS)
Heneghan, Conor; Maguire, Paul
2003-03-01
A method for registering pairs of digital ophthalmic images of the retina is presented, using anatomical features present in both images as control points. The anatomical features chosen are blood vessel crossings and bifurcations, identified by a combination of local contrast enhancement and morphological processing. In general, the matching between control points is unknown, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows: using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the most consistent set of RGT coefficients is identified. Once control point pairs are established, registration of the two images can be achieved by using linear regression to optimize an RGT, bilinear, or second-order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
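The linear-regression optimization of a global transform from matched control points can be sketched as a least-squares fit of a similarity transform (rotation, scale, and translation; a generic sketch, not the authors' code):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of x' = a*x - b*y + tx, y' = b*x + a*y + ty
    (rotation + uniform scale + translation) from matched control points."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    rhs = dst.reshape(-1)                 # interleaved [x'0, y'0, x'1, ...]
    A[0::2] = np.c_[src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)]
    A[1::2] = np.c_[src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)]
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return a, b, tx, ty
```

Bilinear or second-order polynomial transforms extend the same least-squares setup with more columns in the design matrix.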
2010-01-01
Background: Change blindness refers to a failure to detect changes between consecutively presented images separated by, for example, a brief blank screen. As an explanation of change blindness, it has been suggested that our representations of the environment are sparse outside focal attention, and even that changed features may not be represented at all. In order to find electrophysiological evidence of neural representations of changed features during change blindness, we recorded event-related potentials (ERPs) in adults in an oddball variant of the change blindness flicker paradigm. Methods: ERPs were recorded while subjects performed a change detection task in which modified images were infrequently interspersed (p = .2) among frequently (p = .8) presented unmodified images. Responses to modified and unmodified images were compared in the time window of 60-100 ms after stimulus onset. Results: ERPs to infrequent modified images were found to differ in amplitude from those to frequent unmodified images at the midline electrodes (Fz, Pz, Cz and Oz) at the latency of 60-100 ms even when subjects were unaware of changes (change blindness). Conclusions: The results suggest that the brain registers changes very rapidly, and that changed features in images are neurally represented even without participants' ability to report them. PMID:20181126
Retinal image quality assessment based on image clarity and content
NASA Astrophysics Data System (ADS)
Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim
2016-09-01
Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
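A wavelet-based sharpness cue of the kind such algorithms build on can be illustrated with a one-level Haar transform: the fraction of signal energy in the detail bands rises with image sharpness (a simplified sketch, not the paper's feature set):

```python
import numpy as np

def haar_detail_energy(img):
    """One-level 2-D Haar transform; returns the ratio of detail-band
    energy (LH + HL + HH) to total energy as a simple sharpness cue."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0    # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0    # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    detail = (lh ** 2).sum() + (hl ** 2).sum() + (hh ** 2).sum()
    total = detail + (ll ** 2).sum()
    return detail / total
```

A flat (defocused) image concentrates energy in the LL band and scores near zero, while a sharp, high-contrast image scores higher; thresholds on such ratios feed the quality classifier.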
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Murukeshan, Vadakke Matham
2017-04-01
An optical imaging probe with targeted multispectral and spatiotemporal illumination features has applications in many diagnostic biomedical studies. However, such systems are mostly built into conventional microscopes, limiting their use for in vitro applications. We present a variable-resolution imaging probe using a digital micromirror device (DMD) with an achievable maximum lateral resolution of 2.7 μm and an axial resolution of 5.5 μm, along with a precise shape-selective targeted illumination capability. We demonstrate switching between different wavelengths to image multiple regions in the field of view. Moreover, the targeted illumination feature enhances image contrast through time-averaged imaging of selected regions with different optical exposures. The region-specific multidirectional scanning feature of this probe facilitates high-speed targeted confocal imaging.
Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN
NASA Astrophysics Data System (ADS)
Pradhan, Nandita; Sinha, A. K.
2008-03-01
This paper presents an efficient region-based segmentation technique for detecting pathological tissues (tumor and edema) of the brain using fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work segments FLAIR brain images into normal and pathological tissues based on statistical features and wavelet transform coefficients using the k-means algorithm. The image is divided into small blocks of 4×4 pixels. The k-means algorithm clusters the image based on the feature vectors of the blocks, forming different classes that represent different regions in the whole image. With the knowledge of the feature vectors of the different segmented regions, a supervised technique is used to train an artificial neural network using the fuzzy back-propagation algorithm (FBPA). Segmentation for detecting healthy tissues and tumors has been reported by several researchers using conventional MRI sequences such as T1-, T2- and PD-weighted sequences. This work successfully presents segmentation of healthy and pathological tissues (both tumors and edema) using FLAIR images. Finally, pseudo-coloring of the segmented and classified regions is performed for better human visualization.
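The block-wise clustering step can be sketched as follows. The per-block features here (mean and standard deviation) and the deterministic farthest-point initialization are assumptions for illustration; the paper's feature vector also includes wavelet transform coefficients.

```python
import numpy as np

# Sketch of k-means clustering over 4x4 image blocks, with assumed
# per-block features (mean, std); not the authors' exact pipeline.

def block_features(img, bs=4):
    h, w = img.shape
    blocks = img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, bs * bs)
    return np.stack([blocks.mean(1), blocks.std(1)], axis=1)

def kmeans(X, k, iters=20):
    # deterministic farthest-point initialization avoids empty clusters
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Synthetic image: dark background with a bright "lesion" patch.
img = np.zeros((32, 32)); img[8:16, 8:16] = 1.0
feats = block_features(img)
labels, _ = kmeans(feats, k=2)
print(len(np.unique(labels)))  # two tissue classes found
```

In the paper the resulting cluster labels (and their feature vectors) then supervise the FBPA-trained neural network.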
NASA Astrophysics Data System (ADS)
Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.
2017-03-01
Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high-resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, a benchmark is performed for widely applied machine learning techniques and feature extraction methods. Second, three new features based on the clinical detection model are proposed, having superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine and using our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95, compared to 0.81 for the clinical prediction model. Optimal execution time is achieved using a proposed mean and median feature, which is extracted at least a factor of 2.5 faster than alternative features with comparable performance.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, together with an efficient salient feature extraction method; feature extraction is the main focus of the present work. Based on salient feature extraction, the guided filter is first used to obtain a smoothed image that retains the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
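The mixed focus measure described, combining local intensity variance with the energy of the image gradient, might be sketched as below. The window size, the equal weighting of the two terms, and the synthetic step-edge inputs are all assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch of a mixed focus measure: local variance + energy of gradient.

def local_mean(img, r=1):
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def focus_measure(img, r=1, alpha=0.5):
    var = local_mean(img ** 2, r) - local_mean(img, r) ** 2  # local variance
    gy, gx = np.gradient(img)
    eog = local_mean(gx ** 2 + gy ** 2, r)                   # energy of gradient
    return alpha * var + (1 - alpha) * eog

# Two synthetic inputs: one sharp step edge, one blurred version of it.
sharp = np.zeros((16, 16)); sharp[:, 8:] = 1.0
blurred = local_mean(local_mean(sharp))
fused_mask = focus_measure(sharp) > focus_measure(blurred)
print(fused_mask[8, 8])  # the sharp image wins near the edge
```

A per-pixel mask of this kind is the "initial fusion map" that the morphological filter and guided filter then refine.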
A practical salient region feature based 3D multi-modality registration method for medical images
NASA Astrophysics Data System (ADS)
Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang
2006-03-01
We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We exploit the scale, translation, and rotation invariance of these intrinsic 3D features to estimate a transform between the underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing-based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that were acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points comprising both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information, and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter
2017-02-01
Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, the computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
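The recursive equation that the paper decomposes is the standard one, s(x, y) = i(x, y) + s(x−1, y) + s(x, y−1) − s(x−1, y−1). A serial reference implementation, together with the O(1) box-sum lookup it enables, can be sketched as follows (the row-parallel hardware decomposition itself is not reproduced here):

```python
import numpy as np

# Serial integral-image recursion with a zero-padded border, plus the
# constant-time rectangular sum that SURF-style features rely on.

def integral_image(img):
    h, w = img.shape
    s = np.zeros((h + 1, w + 1), dtype=np.int64)  # zero-padded border
    for y in range(1, h + 1):
        for x in range(1, w + 1):
            s[y, x] = img[y-1, x-1] + s[y-1, x] + s[y, x-1] - s[y-1, x-1]
    return s

def box_sum(s, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) from the integral image."""
    return s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]

img = np.arange(16, dtype=np.int64).reshape(4, 4)
s = integral_image(img)
print(box_sum(s, 1, 1, 3, 3))  # sum of the central 2x2 block: 5+6+9+10 = 30
```

The data dependence on s(x−1, y) and s(x, y−1) is what forces the serial order; the paper's contribution is rearranging this recursion so several values per row can be produced in parallel.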
Enhanced facial recognition for thermal imagery using polarimetric imaging.
Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W
2014-07-01
We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
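The degree-of-linear-polarization image mentioned at the end is computed per pixel from the Stokes images as DoLP = sqrt(S1² + S2²)/S0. A minimal sketch with made-up pixel values:

```python
import numpy as np

# Per-pixel degree of linear polarization from Stokes images S0, S1, S2.
def dolp(S0, S1, S2, eps=1e-12):
    return np.sqrt(S1**2 + S2**2) / np.maximum(S0, eps)

# Hypothetical 2x2 example: polarized and unpolarized pixels.
S0 = np.array([[1.0, 1.0], [2.0, 4.0]])
S1 = np.array([[0.6, 0.0], [0.0, 2.4]])
S2 = np.array([[0.8, 0.0], [2.0, 1.8]])
print(dolp(S0, S1, S2))
# top-left pixel: sqrt(0.36 + 0.64) / 1.0 = 1.0 (fully polarized)
```

Because DoLP normalizes by the intensity S0, it suppresses the thermal "ghosting" that a plain S0 image shows and brings out the subtle surface geometry of the face.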
Comparing experts and novices in Martian surface feature change detection and identification
NASA Astrophysics Data System (ADS)
Wardlaw, Jessica; Sprinks, James; Houghton, Robert; Muller, Jan-Peter; Sidiropoulos, Panagiotis; Bamford, Steven; Marsh, Stuart
2018-02-01
Change detection in satellite images is a key concern of the Earth Observation field for environmental and climate change monitoring. Satellite images also provide important clues to both the past and present surface conditions of other planets, which cannot be validated on the ground. With the volume of satellite imagery continuing to grow, the inadequacy of computerised solutions to manage and process imagery to the required professional standard is of critical concern. Whilst studies find the crowd-sourcing approach suitable for the counting of impact craters in single images, images of higher resolution contain a much wider range of features, and the performance of novices in identifying more complex features and detecting change remains unknown. This paper presents a first step towards understanding whether novices can identify and annotate changes in different geomorphological features. A website was developed to enable visitors to flick between two images of the same location on Mars taken at different times and classify 1) whether a surface feature had changed and, if so, 2) what feature had changed, from a pre-defined list of six. Planetary scientists provided 'expert' data against which classifications made by novices could be compared when the project subsequently went public. Whilst no significant difference was found between experts and novices in identifying images with surface changes, results exhibited differences in consensus within and between experts and novices when asked to classify the type of change. Experts demonstrated higher levels of agreement when classifying changes as dust devil tracks, slope streaks and impact craters than for other features, whilst the consensus of novices was consistent across feature types. These trends are secondary to the low levels of consensus found overall, regardless of feature type or classifier expertise. These findings demand the attention of researchers who want to use crowd-sourcing for similar scientific purposes, particularly for the supervised training of computer algorithms, and inform the scope and design of future projects.
ERIC Educational Resources Information Center
Gerber, Andrew J.; Peterson, Bradley S.
2008-01-01
The article helps readers understand the interpretation of an image by explaining what constitutes an image. A common feature of all images is a basic physical structure that can be described with a common set of terms.
Abd El Aziz, Mohamed; Selim, I M; Xiong, Shengwu
2017-06-30
This paper presents a new approach for the automatic detection of galaxy morphology from datasets based on an image-retrieval approach. Several classification methods have been proposed to detect galaxy types within an image. However, in some situations, the aim is not only to determine the type of galaxy within the queried image, but also to find the images most similar to it. Therefore, this paper proposes an image-retrieval method to detect the type of galaxy within an image and return the most similar images. The proposed method consists of two stages: in the first stage, a set of features is extracted based on shape, color and texture descriptors, and a binary sine cosine algorithm then selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of the other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5,000 galaxy images of different types (edge-on spiral, spiral, elliptical and irregular). We demonstrate that our proposed approach has better performance than the particle swarm optimization (PSO) and genetic algorithm (GA) methods.
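The retrieval stage can be sketched as follows. The binary feature-selection mask is fixed by hand here (the paper obtains it with a binary sine cosine algorithm), and the random descriptors are a stand-in for the shape/color/texture features.

```python
import numpy as np

# Sketch of similarity-based retrieval under a binary feature mask.
def retrieve(query, gallery, mask, top_k=3):
    q = query[mask]
    d = np.linalg.norm(gallery[:, mask] - q, axis=1)
    return np.argsort(d)[:top_k]  # indices of the most similar images

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 8))              # 100 images, 8 descriptor values
query = gallery[42] + 0.01 * rng.normal(size=8)  # near-duplicate of image 42
mask = np.array([1, 1, 0, 1, 1, 0, 1, 1], dtype=bool)  # 6 of 8 features kept
print(retrieve(query, gallery, mask)[0])  # the near-duplicate ranks first
```

Restricting the distance to the selected features is exactly where a good mask pays off: irrelevant descriptor dimensions no longer dilute the similarity ranking.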
Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging
NASA Astrophysics Data System (ADS)
Lin, Bingxiong; Sun, Yu; Qian, Xiaoning
2013-03-01
Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric properties of organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. First, to overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity-based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
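A 2D thin plate spline warp fitted from point correspondences, the deformation model underlying the tracking above, can be sketched as follows. The radial kernel used here is U(r) = r² log(r²) (with U(0) = 0, and proportional to the usual r² log r); the control points are a hypothetical example.

```python
import numpy as np

# Minimal 2D TPS fit/apply sketch from point correspondences.
def tps_fit(src, dst):
    n = len(src)
    d2 = ((src[:, None] - src[None]) ** 2).sum(-1)
    K = np.where(d2 > 0, d2 * np.log(d2 + 1e-300), 0.0)  # U(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)  # spline weights + affine coefficients

def tps_apply(src, params, pts):
    d2 = ((pts[:, None] - src[None]) ** 2).sum(-1)
    U = np.where(d2 > 0, d2 * np.log(d2 + 1e-300), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]

# Hypothetical correspondences: a pure translation by (1, 2).
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
dst = src + np.array([1.0, 2.0])
params = tps_fit(src, dst)
print(np.allclose(tps_apply(src, params, src), dst))  # interpolates exactly
```

The same fitted parameters warp arbitrary surface points, which is what makes TPS attractive for deforming organ-surface templates between views.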
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates by nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately according to the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations of pose and expression.
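The nonlinear-mapping step alone can be sketched as an RBF regression from LR feature vectors to HR feature vectors. The CCA coherent-subspace construction is omitted, and the choice of centers (the training points), the kernel width, and the toy data are all assumptions.

```python
import numpy as np

# Sketch of RBF regression mapping LR features to HR features.
def rbf_design(X, centers, sigma):
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def rbf_train(X_lr, Y_hr, sigma=1.0):
    Phi = rbf_design(X_lr, X_lr, sigma)  # centers = training points
    # small ridge term keeps the solve well-conditioned
    return np.linalg.solve(Phi + 1e-6 * np.eye(len(X_lr)), Y_hr)

def rbf_predict(X, centers, W, sigma=1.0):
    return rbf_design(X, centers, sigma) @ W

# Toy data: "HR" features are a smooth nonlinear function of "LR" features.
rng = np.random.default_rng(0)
X_lr = rng.uniform(-1, 1, size=(80, 3))
Y_hr = np.stack([np.sin(X_lr[:, 0]), X_lr[:, 1] ** 2], axis=1)
W = rbf_train(X_lr, Y_hr)
pred = rbf_predict(X_lr, X_lr, W)
print(np.abs(pred - Y_hr).max() < 0.1)  # low regression error on training data
```

At test time, an input LR feature vector is pushed through the trained mapping and the resulting "super-resolved" features go straight to the NN classifier.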
BCC skin cancer diagnosis based on texture analysis techniques
NASA Astrophysics Data System (ADS)
Chuang, Shao-Hui; Sun, Xiaoyan; Chang, Wen-Yu; Chen, Gwo-Shing; Huang, Adam; Li, Jiang; McKenzie, Frederic D.
2011-03-01
In this paper, we present a texture analysis based method for diagnosing Basal Cell Carcinoma (BCC) skin cancer using optical images taken from suspicious skin regions. We first extracted Run Length Matrix and Haralick texture features from the images and used a feature selection algorithm to identify the most effective feature set for the diagnosis. We then utilized a Multi-Layer Perceptron (MLP) classifier to classify the images as BCC or normal cases. Experiments showed that detecting BCC cancer based on optical images is feasible. The best sensitivity and specificity we achieved on our data set were 94% and 95%, respectively.
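Haralick-style features are statistics of a gray-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset, with two classic statistics (contrast and energy); the run-length features and the MLP classifier of the paper are not reproduced, and the two toy textures are illustrative.

```python
import numpy as np

# GLCM for a single (dy, dx) offset, normalized to joint probabilities,
# plus two Haralick-style statistics.

def glcm(img, levels, dy=0, dx=1):
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def contrast(P):
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

def energy(P):
    return (P ** 2).sum()

smooth = np.zeros((8, 8), dtype=int)       # uniform texture
rough = np.indices((8, 8)).sum(0) % 2      # checkerboard texture
P_s, P_r = glcm(smooth, 2), glcm(rough, 2)
print(contrast(P_s), contrast(P_r))  # rough texture has higher contrast
```

Feature selection then keeps whichever of these statistics (over several offsets) best separates BCC from normal skin.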
Automatic brain MR image denoising based on texture feature-based artificial neural networks.
Chang, Yu-Ning; Chang, Herng-Hua
2015-01-01
Noise is one of the main sources of quality deterioration, not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: (1) basic image statistics; (2) gray-level co-occurrence matrix (GLCM); (3) gray-level run-length matrix (GLRLM); and (4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William
2018-06-04
The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.
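The correspondence step alone can be sketched as nearest-neighbor descriptor matching. The ratio test used below is a common way to keep only unambiguous matches and is an assumption here; the paper's 3D feature extraction and Hough-style probabilistic voting are not reproduced, and the descriptors are synthetic.

```python
import numpy as np

# Nearest-neighbor descriptor matching with a ratio test (assumed variant).
def match_descriptors(desc_a, desc_b, ratio=0.8):
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]        # best and second-best neighbors
        if dist[j] < ratio * dist[k]:      # accept only unambiguous matches
            matches.append((i, j))
    return matches

rng = np.random.default_rng(3)
desc_b = rng.normal(size=(50, 16))                        # post-resection features
perm = rng.permutation(50)
desc_a = desc_b[perm] + 0.01 * rng.normal(size=(50, 16))  # noisy pre-resection copies
matches = match_descriptors(desc_a, desc_b)
print(all(j == perm[i] for i, j in matches))  # recovered the true pairing
```

In the paper, such sparse correspondences then anchor the rigid, affine, and thin-plate spline models that produce the dense deformation field.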
A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.
Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang
2011-07-01
The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
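The NMF step used to cope with the sparsity of the tag-image association matrix can be sketched with the standard multiplicative updates for V ≈ WH (Lee-Seung style); the full collaborative-filtering pipeline and the L1 kernel correlation estimation are not reproduced, and the toy matrix is illustrative.

```python
import numpy as np

# Standard multiplicative-update NMF sketch: V ~ W @ H, all nonnegative.
def nmf(V, rank, iters=200, seed=0, eps=1e-9):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy tag-image matrix with an obvious rank-2 block structure plus zeros.
V = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
W0, H0 = nmf(V, rank=2, iters=0)        # random initialization only
err0 = np.linalg.norm(W0 @ H0 - V)
W, H = nmf(V, rank=2)                   # same seed, 200 update steps
err = np.linalg.norm(W @ H - V)
print(err < err0)  # the multiplicative updates reduce the fit error
```

The low-rank reconstruction W @ H fills in plausible nonzero values for tag-image pairs that were never observed, which is exactly what makes co-occurrence estimates from a sparse TIAM more reliable.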
Unsupervised Feature Learning With Winner-Takes-All Based STDP
Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.
2018-01-01
We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961
Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang
2017-11-16
In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, in particular for the CNNs and CNNs-SVM approaches, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.
Sensor-oriented feature usability evaluation in fingerprint segmentation
NASA Astrophysics Data System (ADS)
Li, Ying; Yin, Yilong; Yang, Gongping
2013-06-01
Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation results in view of an important fact: images from different sensors have different characteristics for segmentation. Feature usability evaluation means evaluating the usability of features in order to find a personalized feature or feature set for each sensor and thereby improve segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address this issue, we present a decision-tree-based feature-usability evaluation method, which utilizes the C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired with four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumed for feature extraction is dramatically reduced with the selected feature(s).
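The criterion behind a C4.5-style feature ranking is information gain. A reduced sketch, scoring each (binarized) candidate feature against the foreground/background segmentation label; the full tree induction, the gain-ratio correction C4.5 actually uses, and the paper's candidate feature set are not reproduced, and the toy features are illustrative.

```python
import numpy as np

# Information gain of a median-binarized feature w.r.t. binary labels.
def entropy(y):
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def info_gain(feature, y):
    t = feature > np.median(feature)   # simple binary split
    h = entropy(y)
    for part in (y[t], y[~t]):
        if len(part):
            h -= len(part) / len(y) * entropy(part)
    return h

# Toy data: feature A tracks the label almost perfectly, feature B is noise.
rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)
feat_A = y + 0.01 * rng.normal(size=100)   # informative (e.g. local variance)
feat_B = rng.normal(size=100)              # uninformative
print(info_gain(feat_A, y) > info_gain(feat_B, y))  # picks the useful feature
```

Ranking candidate features this way, per sensor, is what lets a personalized feature set replace the one-size-fits-all choice the paper criticizes.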
McKnight, Colin D; Kelly, Aine M; Petrou, Myria; Nidecker, Anna E; Lorincz, Matthew T; Altaee, Duaa K; Gebarski, Stephen S; Foerster, Bradley
2017-06-01
Infectious encephalitis is a relatively common cause of morbidity and mortality. Treatment of infectious encephalitis with antiviral medication can be highly effective when administered promptly. Clinical mimics of encephalitis arise from a broad range of pathologic processes, including toxic, metabolic, neoplastic, autoimmune, and cardiovascular etiologies. These mimics need to be rapidly differentiated from infectious encephalitis to appropriately manage the correct etiology; however, the many overlapping signs of these various entities present a challenge to accurate diagnosis. A systematic approach that considers both the clinical manifestations and the imaging findings of infectious encephalitis and its mimics can contribute to more accurate and timely diagnosis. Following institutional review board approval, a Health Insurance Portability and Accountability Act (HIPAA)-compliant search of our institutional imaging database (teaching files) was conducted to generate a list of adult and pediatric patients who presented between January 1, 1995 and October 10, 2013 for imaging to evaluate possible cases of encephalitis. Pertinent medical records, including clinical notes as well as surgical and pathology reports, were reviewed and correlated with imaging findings. Clinical and imaging findings were combined to generate useful flowcharts designed to assist in distinguishing infectious encephalitis from its mimics. Key imaging features were reviewed and placed in the context of the provided flowcharts. Four flowcharts were presented based on the primary anatomic site of imaging abnormality: group 1: temporal lobe; group 2: cerebral cortex; group 3: deep gray matter; and group 4: white matter. An approach that combines features of the clinical presentation was then detailed. Imaging examples were used to demonstrate similarities and key differences.
Early recognition of infectious encephalitis is critical, but can be quite complex due to diverse pathologies and overlapping features. Synthesis of both the clinical and imaging features of infectious encephalitis and its mimics is critical to a timely and accurate diagnosis. The use of the flowcharts presented in this article can further enable both clinicians and radiologists to more confidently differentiate encephalitis from its mimics and improve patient care. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects.
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
Classification of human carcinoma cells using multispectral imagery
NASA Astrophysics Data System (ADS)
Çınar, Umut; Y. Çetin, Yasemin; Çetin-Atalay, Rengül; Çetin, Enis
2016-03-01
In this paper, we present a technique for automatically classifying human carcinoma cell images using textural features. An image dataset containing microscopy biopsy images from different patients for 14 distinct cancer cell line types is studied. The images are captured using an RGB camera attached to an inverted microscopy device. Texture-based Gabor features are extracted from multispectral input images. An SVM classifier is used to generate a descriptive model for the purpose of cell line classification. The experimental results depict satisfactory performance, and the proposed method is versatile for various microscopy magnification options.
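The Gabor-plus-SVM pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: toy striped textures stand in for the microscopy images, and all filter parameters (kernel size, sigma, wavelength) are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.svm import SVC

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=12.0):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of each filter response as a texture descriptor."""
    feats = []
    for theta in thetas:
        resp = fftconvolve(img, gabor_kernel(theta=theta), mode="same")
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# Toy data: two stripe orientations stand in for two cell-line textures.
rng = np.random.default_rng(0)
grating = np.sin(np.arange(32) / 2.0)
X, y = [], []
for _ in range(20):
    a = np.tile(grating, (32, 1)) + 0.1 * rng.standard_normal((32, 32))
    b = np.tile(grating[:, None], (1, 32)) + 0.1 * rng.standard_normal((32, 32))
    X += [gabor_features(a), gabor_features(b)]
    y += [0, 1]

clf = SVC(kernel="rbf", gamma="scale").fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```

Because the filter bank responds differently to each stripe orientation, even these eight summary statistics separate the two toy classes.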
Chan, K C; Pharoah, M; Lee, L; Weinreb, I; Perez-Ordonez, B
2013-01-01
The purpose of this case series is to present the common features of intraosseous mucoepidermoid carcinoma (IMC) of the jaws in plain film and CT imaging. Two oral and maxillofacial radiologists reviewed and characterized the common features of four biopsy-proven cases of IMC in the jaws in plain film and CT imaging obtained from the files of the Department of Oral Radiology, Faculty of Dentistry, University of Toronto, Toronto, Canada. The common features are a well-defined sclerotic periphery, the presence of internal amorphous sclerotic bone and numerous small loculations, lack of septae bordering many of the loculations, and expansion and perforation of the outer cortical plate with extension into surrounding soft tissue. Other characteristics include tooth displacement and root resorption. The four cases of IMC reviewed have common imaging characteristics. All cases share some diagnostic imaging features with other multilocular-appearing entities of the jaws. However, the presence of amorphous sclerotic bone and malignant characteristics can be useful in the differential diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.
Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
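The Yitzhaky-Peli selection scheme the abstract applies can be sketched with a toy threshold "detector": sweep a parameter grid, estimate ground truth by majority vote over the resulting feature maps, score each parameter against that estimate as an ROC point, and keep the parameter closest to the ideal corner. The image, grid, and majority-vote rule below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[20:40, 20:40] += 1.0          # a bright "feature" region

# Step 1: sweep detector parameters to get a stack of binary feature maps.
thresholds = np.linspace(0.5, 1.5, 9)
maps = np.stack([img > t for t in thresholds])

# Step 2: estimated ground truth = pixels flagged by a majority of detectors.
est_gt = maps.sum(axis=0) >= len(thresholds) // 2 + 1

# Step 3: one ROC point per parameter; pick the parameter whose point is
# closest to the ideal corner (FPR = 0, TPR = 1).
def roc_point(det, gt):
    tp = np.logical_and(det, gt).sum()
    fp = np.logical_and(det, ~gt).sum()
    return fp / max((~gt).sum(), 1), tp / max(gt.sum(), 1)

dists = [np.hypot(f, 1 - t) for f, t in (roc_point(m, est_gt) for m in maps)]
best = thresholds[int(np.argmin(dists))]
print("selected threshold:", best)
```

With this toy image the majority vote coincides with the middle threshold, so the sweep recovers it exactly; on real detectors the estimated ground truth is only approximate.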
Spinal infections: clinical and imaging features.
Arbelaez, Andres; Restrepo, Feliza; Castillo, Mauricio
2014-10-01
Spinal infections represent a group of rare conditions affecting vertebral bodies, intervertebral discs, paraspinal soft tissues, epidural space, meninges, and spinal cord. The causal factors, clinical presentations, and imaging features are a challenge because of the difficulty of differentiating them from other conditions, such as degenerative and inflammatory disorders and spinal neoplasm. They require early recognition because delayed diagnosis, imaging, and intervention may have devastating consequences, especially in children and the elderly. This article reviews the most common spinal infections, their pathophysiology, clinical manifestations, and imaging findings.
Lele, Ramachandra Dattatraya; Joshi, Mukund; Chowdhary, Abhay
2014-01-01
The preliminary study presented within this paper shows a comparative study of various texture features extracted from liver ultrasonic images by employing a Multilayer Perceptron (MLP), a type of artificial neural network, to study the presence of disease conditions. An ultrasound (US) image shows echo-texture patterns, which define the organ characteristics. Ultrasound images of liver disease conditions such as “fatty liver,” “cirrhosis,” and “hepatomegaly” produce distinctive echo patterns. However, various ultrasound imaging artifacts and speckle noise make these echo-texture patterns difficult to identify and often hard to distinguish visually. Here, based on the features extracted from the ultrasonic images, we employed an artificial neural network for the diagnosis of disease conditions in the liver and to find the best classifier that distinguishes between abnormal and normal conditions of the liver. Comparison of the overall performance of all the feature classifiers concluded that the “mixed feature set” is the best feature set. It showed an excellent rate of accuracy for the training data set. The gray-level run length matrix (GLRLM) feature shows better results when the network was tested against unknown data. PMID:25332717
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed aiming at efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. At present, the implemented system works on features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features, based on texture and shape of the main objects in the image.
Lender, Anja; Meule, Adrian; Rinck, Mike; Brockmeyer, Timo; Blechert, Jens
2018-06-01
Strong implicit responses to food have evolved to avoid energy depletion but contribute to overeating in today's affluent environments. The Approach-Avoidance Task (AAT) supposedly assesses implicit biases in response to food stimuli: Participants push pictures on a monitor "away" or pull them "near" with a joystick that controls a corresponding image zoom. One version of the task couples movement direction with image content-independent features, for example, pulling blue-framed images and pushing green-framed images regardless of content ('irrelevant feature version'). However, participants might selectively attend to this feature and ignore image content and, thus, such a task setup might underestimate existing biases. The present study tested this attention account by comparing two irrelevant feature versions of the task with either a more peripheral (image frame color: green vs. blue) or central (small circle vs. cross overlaid over the image content) image feature as response instruction to a 'relevant feature version', in which participants responded to the image content, thus making it impossible to ignore that content. Images of chocolate-containing foods and of objects were used, and several trait and state measures were acquired to validate the obtained biases. Results revealed a robust approach bias towards food only in the relevant feature condition. Interestingly, a positive correlation with state chocolate craving during the task was found when all three conditions were combined, indicative of criterion validity of all three versions. However, no correlations were found with trait chocolate craving. Results provide a strong case for the relevant feature version of the AAT for bias measurement. They also point to several methodological avenues for future research around selective attention in the irrelevant versions and task validity regarding trait vs. state variables. Copyright © 2018 Elsevier Ltd. All rights reserved.
Clinics in diagnostic imaging (179). Severe rhabdomyolysis complicated by myonecrosis.
Kok, Shi Xian Shawn; Tan, Tien Jin
2017-08-01
A 32-year-old man presented to the emergency department with severe right lower limb pain and swelling of three days' duration. He had multiple prior admissions for recurrent seizures and suicide attempts. Markedly elevated serum creatine kinase levels and urine myoglobinuria were consistent with a diagnosis of rhabdomyolysis. Initial magnetic resonance imaging of the right lower limb revealed diffuse muscle oedema and features of myositis in the gluteal muscles and the adductor, anterior and posterior compartments of the thigh. Follow-up magnetic resonance imaging performed 11 days later showed interval development of areas of myonecrosis and haemorrhage. The causes, clinical presentation and imaging features of rhabdomyolysis are discussed. Copyright: © Singapore Medical Association.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
Classification of large-scale fundus image data sets: a cloud-computing framework.
Roychowdhury, Sohini
2016-08-01
Large medical image data sets with high dimensionality require substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performances in automated screening systems.
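The rank-then-classify idea can be sketched with scikit-learn. Synthetic data replaces the fundus features, and the ANOVA F-score ranking plus gradient-boosted trees are stand-ins for the paper's exact ranking strategies and Azure classifiers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Toy stand-in for fundus-image features: 100 features, few of them informative.
X, y = make_classification(n_samples=400, n_features=100, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Rank features by ANOVA F-score and keep the 40 highest-ranked ones.
selector = SelectKBest(f_classif, k=40).fit(Xtr, ytr)
clf = GradientBoostingClassifier(random_state=0).fit(selector.transform(Xtr), ytr)
acc = clf.score(selector.transform(Xte), yte)
print(f"accuracy with top-40 features: {acc:.3f}")
```

Discarding the low-ranked features cuts both training and scoring cost, which is the time-complexity argument the abstract makes for large cloud-hosted data sets.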
Multi-Sensor Registration of Earth Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)
2001-01-01
Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
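Of the matching criteria listed, mutual information is the one suited to multi-sensor data, because it rewards statistical dependence rather than linear intensity agreement. A minimal sketch, with an exhaustive shift search and a nonlinear intensity remap as toy assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """MI from the joint gray-level histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / np.outer(px, py)[nz])).sum()

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
moving = np.roll(ref, (0, 3), axis=(0, 1)) ** 2  # shifted + nonlinear remap

# Exhaustive search over horizontal shifts: MI peaks at the true offset even
# though the two "sensors" are not linearly related in intensity.
shifts = range(-5, 6)
scores = [mutual_information(ref, np.roll(moving, (0, -s), axis=(0, 1)))
          for s in shifts]
best_shift = list(shifts)[int(np.argmax(scores))]
print("recovered shift:", best_shift)
```

A cross-correlation criterion could miss this alignment because the squaring destroys linear correlation; MI recovers it from the joint histogram alone.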
Segmentation of prostate biopsy needles in transrectal ultrasound images
NASA Astrophysics Data System (ADS)
Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt
2007-03-01
Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). Exact localization of the extracted tissue within the gland is desired for more specific diagnosis and provides better therapy planning. While the orientation and position of the needle within clinical TRUS images are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient- or edge-detection-based segmentation methods fail. Therefore, a multivariate statistical classifier is implemented. The independent feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features such as size and eccentricity as well as imaging-system-dependent features like distance and orientation relative to the marker line. The object extraction is done by multi-step binarization of the region of interest. The ROI is automatically determined at the beginning of the segmentation and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum likelihood estimation and the Mahalanobis distance as discriminator. The technique presented here could be successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy needle localization in clinical prostate biopsy TRUS images.
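The Mahalanobis-distance discriminator at the core of the classifier can be sketched as follows. The three object features (size, eccentricity, distance to the marker line) and all their statistics are invented for illustration; the point is that the learned covariance makes the distance scale-invariant across differently scaled features.

```python
import numpy as np

rng = np.random.default_rng(4)

# Training: object features (size, eccentricity, distance to marker line)
# from manually segmented needle examples.
needle_feats = rng.multivariate_normal([50.0, 0.95, 5.0],
                                       np.diag([25.0, 0.001, 4.0]), size=200)
mu = needle_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(needle_feats, rowvar=False))

def mahalanobis(x):
    """Distance to the needle class; covariance scaling makes it unit-free."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Candidates from one image: one needle-like object, one artefact.
needle_like = np.array([52.0, 0.94, 6.0])
artefact = np.array([15.0, 0.40, 40.0])
print(mahalanobis(needle_like), mahalanobis(artefact))
```

Thresholding this distance (or, equivalently, the Gaussian likelihood) accepts the needle-like candidate and rejects the artefact regardless of the raw units of each feature.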
Machine learning to analyze images of shocked materials for precise and accurate measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dresselhaus-Cooper, Leora; Howard, Marylesa; Hock, Margaret C.
A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.
Ensemble methods with simple features for document zone classification
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing
2012-01-01
Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via features extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classifications of these blocks. We present a comparative evaluation of three ensemble based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
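The boosting-versus-bagging comparison can be sketched with scikit-learn. Synthetic data stands in for the zone features, and gradient boosting replaces whatever boosting variant the paper used; the five classes mirror the text/handwriting/graphics/images/noise split.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for zone features; five classes mirror text, handwriting,
# graphics, images, and noise.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)

ensembles = {
    "boosting": GradientBoostingClassifier(random_state=0),
    "bagging": BaggingClassifier(random_state=0),
}
scores = {name: cross_val_score(clf, X, y, cv=3).mean()
          for name, clf in ensembles.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

Cross-validated accuracy is the right comparison here because ensemble methods differ mainly in variance reduction, which a single train/test split can obscure.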
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Windham, Joe P.
1992-04-01
Maximizing the minimum absolute contrast-to-noise ratios (CNRs) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its applications to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement for the smallest absolute CNR is obtained.
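The Gram-Schmidt construction can be illustrated directly: orthonormalize the interfering signature vectors, then subtract their components from the desired-feature signature; the residual defines combination weights that annihilate every interfering feature while preserving a response to the desired one. The signature values below are invented for the example, not taken from the paper.

```python
import numpy as np

# Signature vectors: per-tissue intensities across a 4-image MRI scene sequence.
desired = np.array([1.0, 0.8, 0.2, 0.5])   # e.g. the infarct
interf1 = np.array([0.9, 0.3, 0.7, 0.1])   # interfering tissue 1
interf2 = np.array([0.2, 0.6, 0.9, 0.4])   # interfering tissue 2

# Gram-Schmidt: build an orthonormal basis of the interfering subspace.
basis = []
for v in (interf1, interf2):
    for b in basis:
        v = v - (v @ b) * b
    basis.append(v / np.linalg.norm(v))

# Weights = desired signature minus its projection onto that subspace.
w = desired.copy()
for b in basis:
    w = w - (w @ b) * b

# Both interfering signatures are annihilated; the desired one is not.
print(w @ interf1, w @ interf2, w @ desired)
```

Scaling `w` then trades off between the surviving desired-feature contrast and the noise amplification, which is where the CNR optimization of the paper enters.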
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, A; Veeraraghavan, H; Oh, J
Purpose: To present an open source and free platform to facilitate radiomics research — the “Radiomics toolbox” in CERR. Method: There is a scarcity of open-source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various kinds of image modalities like CT, PET, MR, SPECT, US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features like 1st-order statistics, gray-level co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and data management are implemented in Matlab for ease of development and code readability for a wide audience. Open-source software developed with other programming languages is integrated to enhance various components of this toolbox, for example, Java-based DCM4CHE for import of DICOM and R for statistical analysis. Results: The Radiomics toolbox will be distributed as an open source, GNU copyrighted software. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. The analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
NASA Astrophysics Data System (ADS)
Zhongqin, G.; Chen, Y.
2017-12-01
Quickly and automatically identifying the spatial distribution of landslides is essential for the prevention, mitigation and assessment of landslide hazard. It remains challenging owing to the complicated characteristics and vague boundaries of landslide areas in the image. High-resolution remote sensing images have multiple scales, complex spatial distributions and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after a hazard has occurred. In this research we present a new semi-supervised workflow that takes advantage of recent object-oriented image analysis and machine learning algorithms to quickly locate landslides of different origins in areas of southwest China. Besides a sequence of image segmentation, feature selection, object classification and error testing, the workflow ensembles both feature selection and classifier selection. The features utilized in this study were normalized difference vegetation index (NDVI) change, textural features derived from gray-level co-occurrence matrices (GLCM), spectral features, and others. The improvements in this study show that the algorithm removes redundant features and makes full use of the classifiers. All these improvements lead to higher accuracy in determining the shape of landslides in high-resolution remote sensing images, and in particular to greater flexibility across different kinds of landslides.
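Two of the named features can be sketched in a few lines. The reflectance values and quantization settings are illustrative assumptions, and the co-occurrence matrix here is a simplified horizontal-offset version of the full GLCM used in practice.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index; drops where vegetation is stripped."""
    return (nir - red) / (nir + red + 1e-9)

def glcm_contrast(img, levels=8):
    """Contrast feature from a horizontal gray-level co-occurrence matrix."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()

rng = np.random.default_rng(5)
# Toy patches: a smooth vegetated slope vs a rough, bare landslide scar.
veg_nir = 0.60 + 0.05 * rng.random((32, 32))
veg_red = 0.10 + 0.05 * rng.random((32, 32))
slide_nir = 0.30 + 0.30 * rng.random((32, 32))
slide_red = 0.30 + 0.30 * rng.random((32, 32))

print("NDVI vegetated:", ndvi(veg_nir, veg_red).mean())
print("NDVI landslide scar:", ndvi(slide_nir, slide_red).mean())
print("GLCM contrast, scar:", glcm_contrast(slide_nir))
print("GLCM contrast, vegetated:", glcm_contrast(veg_nir))
```

The NDVI drop marks where vegetation was removed, while the higher GLCM contrast flags the rougher scar texture; combining both is what lets the classifier separate scars from other bare surfaces.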
FEX: A Knowledge-Based System For Planimetric Feature Extraction
NASA Astrophysics Data System (ADS)
Zelek, John S.
1988-10-01
Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.
Understanding Deep Representations Learned in Modeling Users Likes.
Guntuku, Sharath Chandra; Zhou, Joey Tianyi; Roy, Sujoy; Lin, Weisi; Tsang, Ivor W
2016-08-01
Automatically understanding and discriminating different users' liking for an image is a challenging problem. This is because the relationship between image features (even semantic ones extracted by existing tools, viz., faces, objects, and so on) and users' likes is non-linear, influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows for the transfer of semantic knowledge between the two modalities. Feature selection is applied before learning deep representation to identify the important features for a user to like an image. The proposed representation is shown to be effective in discriminating users based on images they like and also in recommending images that a given user likes, outperforming the state-of-the-art feature representations by ∼15%-20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.
Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.
2011-01-01
Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features associated with 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improving on a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
Knowledge Driven Image Mining with Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels; which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
Radiological and endoscopic imaging methods in the management of cystic pancreatic neoplasms.
Aslan, Ahmet; Inan, Ibrahim; Orman, Süleyman; Aslan, Mine; Acar, Murat
2017-01-01
The management of cystic pancreatic neoplasm (CPN) is a clinical dilemma because of its clinical presentations and malignant potential. Surgery is the best treatment choice; however, pancreatic surgery still has high complication rates, even in experienced centers. Imaging methods have a definitive role in the management of CPN, and computed tomography, magnetic resonance imaging, and endoscopic ultrasonography are the preferred methods since they can reveal features suspicious for malignancy. Therefore, radiologists, gastroenterologists, endoscopists, and surgeons should be aware of the common features of CPN, its discrete presentations on imaging methods, and the limitations of these modalities in the management of the disease. This study aims to review the radiological and endoscopic imaging methods used for the management of CPN. © Acta Gastro-Enterologica Belgica.
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
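The composite quality score in the paper is derived from redundant discrete wavelet transform subbands; as a rough, hypothetical stand-in for one of its ingredients, sharpness can be scored as the variance of a discrete Laplacian response (the filter choice and test images below are our own illustration, not the paper's method):

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a 4-neighbour discrete Laplacian response."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))              # high-frequency content everywhere
# Blur with a 3x3 mean filter (wrap-around borders, via rolls) to degrade quality.
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

A fused quality score would combine several such per-subband statistics (smoothness, noise, etc.) before entering the 2nu-SVM.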
NASA Astrophysics Data System (ADS)
Paino, A.; Keller, J.; Popescu, M.; Stone, K.
2014-06-01
In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
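One of the human-engineered baselines, LBP, is simple enough to sketch directly; a minimal NumPy version of the basic 8-neighbour code (the clockwise bit ordering is an arbitrary illustrative choice):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]                                   # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]          # clockwise neighbours
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)       # set bit if neighbour >= center
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
codes = lbp_image(img)   # single interior pixel: neighbours 6, 9, 8, 7 exceed 5
```

Histograms of these codes over image cells form the texture descriptor that the GP-evolved features are compared against.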
Automated image segmentation-assisted flattening of atomic force microscopy images.
Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin
2018-01-01
Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally manually excluded using rectangular masks in image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to get the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme was presented, followed by an investigation of the influence of the sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
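The second step, masked polynomial background fitting, can be sketched per scan line with NumPy (a simplified illustration of the idea; the paper also fits surfaces and uses sliding windows, which are not shown):

```python
import numpy as np

def flatten_rows(img, mask, deg=2):
    """Fit a background polynomial per scan line, excluding masked foreground
    points from the fit, then subtract the fitted trend from the whole line."""
    x = np.arange(img.shape[1])
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        bg = ~mask[r]                                   # background points only
        coef = np.polyfit(x[bg], img[r, bg], deg)
        out[r] = img[r] - np.polyval(coef, x)
    return out

# Synthetic scan: quadratic background trend plus a protruding foreground feature.
x = np.arange(50, dtype=float)
trend = 0.01 * x**2 - 0.3 * x + 2.0
img = np.tile(trend, (4, 1))
img[:, 20:25] += 5.0                                    # foreground feature
mask = np.zeros(img.shape, dtype=bool)
mask[:, 20:25] = True                                   # exclusion mask from step 1
flat = flatten_rows(img, mask)
```

After flattening, the background sits at zero while the feature keeps its true height, which is exactly why the exclusion mask matters: fitting through the feature would bias the background estimate.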
A change detection method for remote sensing image based on LBP and SURF feature
NASA Astrophysics Data System (ADS)
Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun
2018-04-01
Detecting change in multi-temporal remote sensing images is important in many image applications. Because of the influence of climate and illumination, the texture of ground objects is more stable than the gray level in high-resolution remote sensing images, and the texture features of Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) offer fast extraction and illumination invariance. A change detection method for matched remote sensing image pairs is presented: after blocking the image, it compares the similarity of blocks by LBP and SURF to label each block as changed or unchanged. Region growing is adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground objects.
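A simplified sketch of the block-wise similarity test, using plain gray-level histograms in place of the LBP/SURF descriptors (block size, bin count, and threshold are invented for illustration):

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-9):
    """Chi-squared distance between two normalised histograms."""
    return 0.5 * np.sum((h1 - h2)**2 / (h1 + h2 + eps))

def changed_blocks(a, b, block=8, bins=8, thresh=0.2):
    """Mark a block as changed when its histograms in the two epochs differ."""
    H, W = a.shape
    out = np.zeros((H // block, W // block), dtype=bool)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            sa = a[i*block:(i+1)*block, j*block:(j+1)*block]
            sb = b[i*block:(i+1)*block, j*block:(j+1)*block]
            ha, _ = np.histogram(sa, bins=bins, range=(0, 1), density=True)
            hb, _ = np.histogram(sb, bins=bins, range=(0, 1), density=True)
            out[i, j] = chi2_distance(ha, hb) > thresh
    return out

rng = np.random.default_rng(2)
t0 = rng.random((16, 16))
t1 = t0.copy()
t1[0:8, 0:8] = 0.0          # simulate a changed ground object in one block
change = changed_blocks(t0, t1)
```

In the paper, the descriptor per block would be an LBP histogram or SURF matching score rather than a raw gray histogram.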
Bilinear Convolutional Neural Networks for Fine-grained Visual Recognition.
Lin, Tsung-Yu; RoyChowdhury, Aruni; Maji, Subhransu
2017-07-04
We present a simple and effective architecture for fine-grained recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs are related to orderless texture representations built on deep features but can be trained in an end-to-end manner. Our most accurate model obtains 84.1%, 79.4%, 84.5% and 91.3% per-image accuracy on the Caltech-UCSD birds [66], NABirds [63], FGVC aircraft [42], and Stanford cars [33] datasets, respectively, and runs at 30 frames-per-second on an NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) are also effective for other image classification tasks such as texture and scene recognition, and (3) can be trained from scratch on the ImageNet dataset offering consistent improvements over the baseline architecture. Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at http://vis-www.cs.umass.edu/bcnn.
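The pooled outer-product representation can be sketched in a few lines of NumPy, including the signed square root and L2 normalisation used by B-CNNs; summing outer products over spatial locations is what makes the descriptor orderless (the feature-map sizes below are invented for illustration):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling: sum of outer products of two feature maps over all
    spatial locations, then signed square root and L2 normalisation."""
    # fa: (H*W, Ca), fb: (H*W, Cb) -> pooled vector of length Ca*Cb.
    B = fa.T @ fb                        # sum over locations of outer products
    v = B.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # signed square root
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(3)
fa = rng.normal(size=(49, 8))   # e.g. a 7x7 spatial grid, 8 channels from stream A
fb = rng.normal(size=(49, 5))   # 5 channels from stream B
v = bilinear_pool(fa, fb)
```

Permuting the spatial locations (the same way in both streams) leaves the pooled vector unchanged, which is the orderless property the abstract refers to.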
Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert
2010-10-01
Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data were reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicate that these features presented large variations and therefore cannot be considered good candidates for tumor segmentation. PMID:20831489
Histopathological Image Classification using Discriminative Feature-oriented Dictionary Learning
Vu, Tiep Huu; Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, UK Arvind
2016-01-01
In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that under a sparsity constraint, the learned dictionaries allow representing a new image sample parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, the dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy against the number of training images which is highly desirable in practice where generous training is often not available. PMID:26513781
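The classification rule, assigning a sample to the class whose dictionary represents it best, can be sketched with plain least squares (the sparsity constraint that DFDL enforces is omitted here for brevity, and the dictionaries and data are synthetic):

```python
import numpy as np

def classify_by_residual(y, dicts):
    """Assign y to the class whose dictionary reconstructs it with the
    smallest least-squares residual (DFDL's sparsity constraint omitted)."""
    errs = []
    for D in dicts:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        errs.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(errs))

rng = np.random.default_rng(4)
D0 = rng.normal(size=(20, 5))      # dictionary learned for class 0
D1 = rng.normal(size=(20, 5))      # dictionary learned for class 1
y = D1 @ rng.normal(size=5)        # a sample lying in class 1's span
label = classify_by_residual(y, [D0, D1])
```

The learning step of DFDL shapes each dictionary so that it represents its own class parsimoniously while representing other classes poorly, which is what makes this residual rule discriminative.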
Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters
NASA Astrophysics Data System (ADS)
Zuniga Zamalloa, C. C.; Landry, B. J.
2017-12-01
The present work is concerned with the surface velocity retrieval of flows using a stereoscopic setup and finding the correspondence in the images via feature tracking (FT). The feature tracking provides a key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input which allowed for rapid, automated processing of imagery.
Highway 3D model from image and lidar data
NASA Astrophysics Data System (ADS)
Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan
2014-05-01
We present a new method of highway 3-D model construction developed based on feature extraction in highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant objects (such as signs and building fronts) in the roadside for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to the advancement in sensor technology, the growing large medical image data have the ability to visualize the anatomical changes in biological tissues. As a consequence, the medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend the functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
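As a rough analogue of the dimension-reduction step (plain PCA over vectorised images via SVD, rather than the paper's two-dimensional FPCA; the synthetic rank-2 image stack is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
# Stack of 30 "images" (12x10) lying near a rank-2 pattern plus small noise.
U = rng.normal(size=(12, 2))
V = rng.normal(size=(10, 2))
scores = rng.normal(size=(30, 2))
imgs = (np.einsum('nk,ik,jk->nij', scores, U, V)
        + 0.01 * rng.normal(size=(30, 12, 10)))

# Vectorise, mean-center, and extract principal components via SVD.
X = imgs.reshape(30, -1)
X = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = (s**2) / np.sum(s**2)     # variance explained per component
```

Because the signal is rank-2, almost all variance concentrates in the first two components; the paper's 2D FPCA exploits the row/column structure directly instead of vectorising.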
Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang
2017-01-01
In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images without the application of eye staining were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly with CNNs and CNNs-SVM, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability. PMID:29144388
The 3-D image recognition based on fuzzy neural network technology
NASA Technical Reports Server (NTRS)
Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei
1993-01-01
A three-dimensional stereoscopic image recognition system based on fuzzy-neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Two CCD color camera images are fed to the preprocessing part, where several operations including the RGB-HSV transformation are done. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation and a special image input hardware system. An experimental result on bottle images is also presented.
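The RGB-HSV transformation used in the preprocessing part is available in the Python standard library; for example:

```python
import colorsys

# Pure red maps to hue 0, full saturation and full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# A mid-intensity green: hue 1/3, saturation (max-min)/max, value max.
h2, s2, v2 = colorsys.rgb_to_hsv(0.2, 0.5, 0.2)
```

Separating hue from saturation and value is what makes HSV attractive for the subsequent line detection: chromatic information becomes less entangled with illumination.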
Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R
2015-10-01
This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature. PMID:26730398
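The visualness measure, the Shannon entropy of pixel values inside a patch, is straightforward to sketch (the bin count and example patches are our own illustration):

```python
import numpy as np

def patch_entropy(patch, bins=16):
    """Shannon entropy (bits) of the pixel-value distribution inside a patch."""
    counts, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                          # drop empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(6)
flat_patch = np.full((16, 16), 0.5)       # visually uninformative: one value
busy_patch = rng.random((16, 16))         # richly varying values
```

A constant patch carries zero entropy while a varied patch approaches log2(bins) bits, so entropy can weight concepts by their visual significance.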
Granulomatous mastitis: changing clinical and imaging features with image-guided biopsy correlation.
Handa, Priyanka; Leibman, A Jill; Sun, Derek; Abadi, Maria; Goldberg, Aryeh
2014-10-01
To review clinical presentation, revisit patient demographics and imaging findings in granulomatous mastitis and determine the optimal biopsy method for diagnosis. A retrospective study was performed to review the clinical presentation, imaging findings and biopsy methods in patients with granulomatous mastitis. Twenty-seven patients with pathology-proven granulomatous mastitis were included. The average age at presentation was 38.0 years (range, 21-73 years). Seven patients were between 48 and 73 years old. Twenty-four patients presented with symptoms and three patients were asymptomatic. Nineteen patients were imaged with mammography demonstrating mammographically occult lesions as the predominant finding. Twenty-six patients were imaged with ultrasound and the most common finding was a mass lesion. Pathological diagnosis was made by image-guided biopsy in 44% of patients. The imaging features of granulomatous mastitis on mammography are infrequently described. Our study demonstrates that granulomatous mastitis can occur in postmenopausal or asymptomatic patients, although previously reported exclusively in young women with palpable findings. Presentation on mammography as calcifications requiring mammographically guided vacuum-assisted biopsy has not been previously described. The diagnosis of granulomatous mastitis can easily be made by image-guided biopsy and surgical excision should be reserved for definitive treatment. • Characterizes radiographic appearance of granulomatous mastitis in postmenopausal or asymptomatic patients. • Granulomatous mastitis can present exclusively as calcifications on mammography. • The diagnosis of granulomatous mastitis is made by image-guided biopsy techniques.
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method by combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap, maximum, and minimum quantizers. Meanwhile, the high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
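As background for the low-level stream, plain block truncation coding (without DDBTC's dot diffusion) reduces each block to a bitmap plus two quantiser levels; a minimal sketch with a toy 2x2 block (this is the classic mean-split BTC, not the paper's DDBTC variant):

```python
import numpy as np

def btc_block(block):
    """Basic block truncation coding: threshold at the block mean to get a
    bitmap, then quantise to the means of the high and low pixel groups."""
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    recon = np.where(bitmap, hi, lo)
    return bitmap, lo, hi, recon

block = np.array([[10., 200.],
                  [30., 220.]])
bitmap, lo, hi, recon = btc_block(block)
```

The bitmap and the min/max quantisers are exactly the three per-block artifacts from which the paper builds its VQ-indexed histogram features.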
Land use classification using texture information in ERTS-A MSS imagery
NASA Technical Reports Server (NTRS)
Haralick, R. M. (Principal Investigator); Shanmugam, K. S.; Bosley, R.
1973-01-01
The author has identified the following significant results. Preliminary digital analysis of ERTS-1 MSS imagery reveals that the textural features of the imagery are very useful for land use classification. A procedure for extracting the textural features of ERTS-1 imagery is presented and the results of a land use classification scheme based on the textural features are also presented. The land use classification algorithm using textural features was tested on a 5100 square mile area covered by part of an ERTS-1 MSS band 5 image over the California coastline. The image covering this area was blocked into 648 subimages of size 8.9 square miles each. Based on a color composite of the image set, a total of 7 land use categories were identified. These land use categories are: coastal forest, woodlands, annual grasslands, urban areas, large irrigated fields, small irrigated fields, and water. The automatic classifier was trained to identify the land use categories using only the textural characteristics of the subimages; 75 percent of the subimages were assigned correct identifications. Since texture and spectral features provide completely different kinds of information, a significant increase in identification accuracy will take place when both features are used together.
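The textural features referred to here are Haralick-style statistics of a gray-level co-occurrence matrix (GLCM); a minimal sketch with a toy 4-level image and one horizontal displacement (the statistics shown, contrast and energy, are two common examples):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one displacement, normalised
    so that entries are joint probabilities of level pairs."""
    P = np.zeros((levels, levels))
    H, W = img.shape
    for y in range(H - dy):
        for x in range(W - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
energy = np.sum(P ** 2)
```

In practice, several displacements and many such statistics are computed per subimage and fed to the classifier.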
Categorizing biomedicine images using novel image features and sparse coding representation
2013-01-01
Background Images embedded in biomedical publications carry rich information that often concisely summarize key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding main content in a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we thus propose some novel image features for image categorization purpose, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images in JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. 
Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results is reported. First, the per-category results are presented; next, performance indexes such as precision, recall, and F-score are listed. The categorization performance obtained with different features, including conventional image features and our proposed novel features, is then compared. Third, we compare the accuracy of the support vector machine classification method with our proposed sparse representation classification method. Finally, our approach is compared with three peer classification methods, and the experimental results confirm its improved performance. Conclusions Compared with conventional image features that do not exploit characteristics regarding text positions and distributions inside images embedded in biomedical publications, our proposed image features coupled with the SCR-based representation model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study. PMID:24565470
Al-Shaikhli, Saif Dawood Salman; Yang, Michael Ying; Rosenhahn, Bodo
2016-12-01
This paper presents a novel method for Alzheimer's disease classification via an automatic 3D caudate nucleus segmentation. The proposed method consists of segmentation and classification steps. In the segmentation step, we propose a novel level set cost function. The proposed cost function is constrained by a sparse representation of local image features using a dictionary learning method. We present coupled dictionaries: a feature dictionary of a grayscale brain image and a label dictionary of a caudate nucleus label image. Using online dictionary learning, the coupled dictionaries are learned from the training data. The learned coupled dictionaries are embedded into a level set function. In the classification step, a region-based feature dictionary is built. The region-based feature dictionary is learned from shape features of the caudate nucleus in the training data. The classification is based on the measure of the similarity between the sparse representation of region-based shape features of the segmented caudate in the test image and the region-based feature dictionary. The experimental results demonstrate the superiority of our method over the state-of-the-art methods by achieving a high segmentation (91.5%) and classification (92.5%) accuracy. In this paper, we find that the study of caudate nucleus atrophy gives an advantage over the study of whole-brain structural atrophy for detecting Alzheimer's disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.
Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe
2018-06-02
This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability for different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results of the practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.
2016-10-18
Pluto's present, hazy atmosphere is almost entirely free of clouds, though scientists from NASA's New Horizons mission have identified some cloud candidates after examining images taken by the New Horizons Long Range Reconnaissance Imager and Multispectral Visible Imaging Camera, during the spacecraft's July 2015 flight through the Pluto system. All are low-lying, isolated small features -- no broad cloud decks or fields -- and while none of the features can be confirmed with stereo imaging, scientists say they are suggestive of possible, rare condensation clouds. http://photojournal.jpl.nasa.gov/catalog/PIA21127
Sonographic Findings in Necrotizing Fasciitis: Two Ends of the Spectrum.
Shyy, William; Knight, Roneesha S; Goldstein, Ruth; Isaacs, Eric D; Teismann, Nathan A
2016-10-01
Necrotizing fasciitis is a rare but serious disease, and early diagnosis is essential to reducing its substantial morbidity and mortality. The 2 cases presented show that the key clinical and radiographic features of necrotizing fasciitis exist along a continuum of severity at initial presentation; thus, this diagnosis should not be prematurely ruled out in cases that do not show the dramatic features familiar to most clinicians. Although computed tomography and magnetic resonance imaging are considered the most effective imaging modalities, the cases described here illustrate how sonography should be recommended as an initial imaging test to make a rapid diagnosis and initiate therapy.
Axelrod, David E.; Miller, Naomi A.; Lickley, H. Lavina; Qian, Jin; Christens-Barry, William A.; Yuan, Yan; Fu, Yuejiao; Chapman, Judith-Anne W.
2008-01-01
Background Nuclear grade has been associated with breast DCIS recurrence and progression to invasive carcinoma; however, our previous study of a cohort of patients with breast DCIS did not find such an association with outcome. Fifty percent of patients had heterogeneous DCIS with more than one nuclear grade. The aim of the current study was to investigate the effect of quantitative nuclear features assessed with digital image analysis on ipsilateral DCIS recurrence. Methods Hematoxylin and eosin stained slides for a cohort of 80 patients with primary breast DCIS were reviewed, and two fields with representative grade (or grades) were identified by a pathologist and simultaneously used for acquisition of digital images for each field. Van Nuys worst nuclear grade was assigned, as was predominant grade, and heterogeneous grading when present. Patients were grouped by heterogeneity of their nuclear grade: Group A: nuclear grade 1 only, nuclear grades 1 and 2, or nuclear grade 2 only (32 patients); Group B: nuclear grades 1, 2 and 3, or nuclear grades 2 and 3 (31 patients); Group C: nuclear grade 3 only (17 patients). Nuclear fine structure was assessed by software which captured thirty-nine nuclear feature values describing nuclear morphometry, densitometry, and texture. Step-wise forward Cox regressions were performed with previous clinical and pathologic factors, and the new image analysis features. Results Duplicate measurements were similar for 89.7% to 97.4% of assessed image features. The rate of correct classification of nuclear grading with digital image analysis features was similar in the two fields and in the pooled assessment across both fields. In the pooled assessment, a discriminant function with one nuclear morphometric and one texture feature was significantly (p = 0.001) associated with nuclear grading, and provided correct jackknifed classification of a patient's nuclear grade for Group A (78.1%), Group B (48.4%), and Group C (70.6%).
The factors significantly associated with DCIS recurrence were those previously found, type of initial presentation (p = 0.03) and amount of parenchymal involvement (p = 0.05), along with the morphometry image feature of ellipticity (p = 0.04). Conclusion Analysis of nuclear features measured by image cytometry may contribute to the classification and prognosis of breast DCIS patients with more than one nuclear grade. PMID:18779878
Voxel classification based airway tree segmentation
NASA Astrophysics Data System (ADS)
Lo, Pechin; de Bruijne, Marleen
2008-03-01
This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
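The KNN appearance classification described above can be sketched as follows; this is a toy illustration with hypothetical two-dimensional voxel features (an intensity plus a local-structure score), not the paper's actual multi-scale feature set:

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=3):
    """Classify each test voxel by majority vote of its K nearest
    training voxels in feature space (Euclidean distance)."""
    train_feats = np.asarray(train_feats, float)
    train_labels = np.asarray(train_labels)
    out = []
    for x in np.asarray(test_feats, float):
        d = np.linalg.norm(train_feats - x, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        # majority vote among the k nearest neighbours
        vals, counts = np.unique(nearest, return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)

# hypothetical training voxels: (HU intensity, local-structure score)
train = [[-900, 0.9], [-880, 0.8], [-300, 0.2], [-320, 0.1]]
labels = [1, 1, 0, 0]            # 1 = airway voxel, 0 = background
pred = knn_classify(train, labels, [[-890, 0.85], [-310, 0.15]], k=3)
```

In the actual method the training labels come from imperfect region-growing segmentations, and the feature set is selected automatically from many multi-scale structure descriptors.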
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, B; Tan, Y; Tsai, W
2014-06-15
Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to imaging acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measuring image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25mm with contiguous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for 3 confounding factors (size, density and shape of phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low dose CT scans (25mAs and 50mAs). About 50% of texture features were found statistically different between low doses and high doses (100 and 200mAs). Significant differences were found for almost all features when calculated on 1.25mm, 2.5mm, and 5mm slice thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features.
Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to various degrees, as our study has shown.
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and then each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated by plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets have shown that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and time elapse.
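The idea of extracting Fourier features from different frequency bandwidths can be illustrated with a simple radial-band summary of the 2-D FFT magnitude; the band limits and summary statistics here are illustrative assumptions, and in the actual system each band feature would be classified by linear discriminant analysis:

```python
import numpy as np

def fourier_band_features(face, bands=((0, 8), (8, 32))):
    """Summarize FFT magnitudes inside several radial frequency
    bands, mimicking the multi-band hybrid Fourier feature idea."""
    F = np.fft.fftshift(np.fft.fft2(face))
    mag = np.abs(F)
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    yy, xx = np.ogrid[:mag.shape[0], :mag.shape[1]]
    radius = np.hypot(yy - cy, xx - cx)
    feats = []
    for lo, hi in bands:                     # one ring per bandwidth
        ring = mag[(radius >= lo) & (radius < hi)]
        feats.append(ring.mean())
        feats.append(ring.std())
    return np.array(feats)

face = np.random.default_rng(0).random((64, 64))
f = fourier_band_features(face)
```

The real system concatenates far richer Fourier-domain features and fuses classifier scores; this sketch only shows the band-splitting principle.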
Clinical imaging of the pancreas
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, G.; Gardiner, R.
1987-01-01
Featuring more than 300 high-quality radiographs and scan images, Clinical Imaging of the Pancreas systematically reviews all appropriate imaging modalities for diagnosing and evaluating a variety of commonly encountered pancreatic disorders. After presenting a succinct overview of pancreatic embryology, anatomy, and physiology, the authors establish the clinical indications (including postoperative patient evaluation) for radiologic examination of the pancreas. The diagnostic capabilities and limitations of currently available imaging techniques for the pancreas are thoroughly assessed, with carefully selected illustrations depicting the types of images and data obtained using these different techniques. The review of acute and chronic pancreatitis considers the clinical features and possible complications of their variant forms and offers guidance in selecting appropriate imaging studies.
Microscopic medical image classification framework via deep learning and shearlet transform.
Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H
2016-10-01
Cancer is the second leading cause of death in the US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians in efficiently diagnosing cancers at early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and recently, histograms of shearlet coefficients for the classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and handle the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using a CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. In particular, we apply the shearlet transform to images and extract the magnitude and phase of the shearlet coefficients. Then we feed the shearlet features along with the original images to our CNN, which consists of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information to the network can improve detection accuracy and generalize better compared with state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, which is a difficult domain considering the limited medical data available for such analysis.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the regions of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
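A bare-bones version of the HOG part of the HOG-SGF descriptor (cell-wise orientation histograms of gradient magnitudes) might look like this; the cell size, bin count and normalisation are generic choices, not the paper's exact configuration:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Orientation histograms per cell: a minimal HOG descriptor."""
    img = np.asarray(img, float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180     # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            norm = np.linalg.norm(hist) + 1e-6     # L2 normalisation
            feats.append(hist / norm)
    return np.concatenate(feats)

img = np.random.default_rng(1).random((16, 16))
desc = hog_features(img)
```

In HOG-SGF this descriptor would be computed on steerable-Gaussian-filtered responses to gain rotation robustness before the auto-encoder and RELM stages.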
Single molecule image formation, reconstruction and processing: introduction.
Ashok, Amit; Piestun, Rafael; Stallinga, Sjoerd
2016-07-01
The ability to image at the single molecule scale has revolutionized research in molecular biology. This feature issue presents a collection of articles that provides new insights into the fundamental limits of single molecule imaging and reports novel techniques for image formation and analysis.
Crystallization mosaic effect generation by superpixels
NASA Astrophysics Data System (ADS)
Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan
2015-03-01
Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating pleasant mosaic effect are studied: grouping pixels into mosaic tiles and arrangement of mosaic tiles adapting to image features. To give visually pleasant mosaic effect, we propose to create mosaic tiles by pixel clustering in feature space of color information, taking compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images which gives guidance for arranging mosaic tiles near image features. This method gives nearly uniform shape of mosaic tiles, adapting to feature lines in an esthetic way. The new approach considers both color distance and Euclidean distance of pixels, and thus is capable of giving mosaic tiles in a more pleasing manner. Some experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
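Pixel clustering in a joint (color, position) feature space, the core of the tile-grouping step above, can be sketched with a plain k-means loop; the spatial weight w that trades color similarity against tile compactness is a hypothetical parameter:

```python
import numpy as np

def superpixels(img, k=4, w=0.5, iters=10, seed=0):
    """Group pixels into tiles by k-means clustering in a joint
    (color, position) feature space."""
    h, wd, _ = img.shape
    yy, xx = np.mgrid[:h, :wd]
    # spatial weight w trades color similarity against tile compactness
    feats = np.hstack([img.reshape(-1, 3).astype(float),
                       w * np.column_stack([yy.ravel(), xx.ravel()])])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip emptied clusters
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(h, wd)

img = np.random.default_rng(2).random((12, 12, 3))
tiles = superpixels(img, k=4)
```

The actual method additionally adapts tile shapes near detected feature boundaries, which this sketch omits.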
NASA Astrophysics Data System (ADS)
Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.
2010-03-01
Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that instead approaches the problem without segmenting lesions. The algorithm distinguishes non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.
Khotanlou, Hassan; Afrasiabi, Mahlagha
2012-10-01
This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. The presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are first preprocessed. In the next phase, effective features for extracting MS lesions are selected using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. The algorithm is evaluated on 15 real 3D MR images using several measures. As a result, the SI between MS regions determined by the proposed method and by radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and efficiency of the proposed approach.
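The Similarity Index used as the GA fitness above is the Dice overlap between an automatic segmentation and a manual reference mask; a minimal sketch:

```python
import numpy as np

def similarity_index(seg, ref):
    """Dice Similarity Index between a binary segmentation and a
    reference (manual) lesion mask; usable as a GA fitness value."""
    seg = np.asarray(seg, bool)
    ref = np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy masks, not real lesion data
seg = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
si = similarity_index(seg, ref)
```

In the GA, each candidate feature subset would be scored by training the SVM and evaluating this index against the manual segmentations.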
NASA Astrophysics Data System (ADS)
Villano, Michelangelo; Papathanassiou, Konstantinos P.
2011-03-01
The estimation of the local differential shift between synthetic aperture radar (SAR) images has proven to be an effective technique for monitoring glacier surface motion. As images acquired over glaciers by short wavelength SAR systems, such as TerraSAR-X, often suffer from a lack of coherence, image features have to be exploited for the shift estimation (feature-tracking). The present paper addresses feature-tracking with special attention to the feasibility requirements and the achievable accuracy of the shift estimation. In particular, the dependence of the performance on image characteristics, such as texture parameters, signal-to-noise ratio (SNR) and resolution, as well as on processing techniques (despeckling, normalised cross-correlation versus maximum likelihood estimation) is analysed by means of Monte-Carlo simulations. TerraSAR-X data acquired over the Helheim glacier, Greenland, and the Aletsch glacier, Switzerland, have been processed to validate the simulation results. Feature-tracking can benefit from the availability of fully-polarimetric data. As some image characteristics are, in fact, polarisation-dependent, the selection of an optimum polarisation leads to improved performance. Furthermore, fully-polarimetric SAR images can be despeckled without degrading the resolution, so that additional (smaller-scale) features can be exploited.
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
Image-based overlay measurement using subsurface ultrasonic resonance force microscopy
NASA Astrophysics Data System (ADS)
Tamer, M. S.; van der Lans, M. J.; Sadeghian, H.
2018-03-01
Image Based Overlay (IBO) measurement is one of the most common techniques used in Integrated Circuit (IC) manufacturing to extract the overlay error values. The overlay error is measured using dedicated overlay targets which are optimized to increase the accuracy and the resolution, but these features are much larger than the IC feature size. IBO measurements are realized on the dedicated targets instead of product features, because the current overlay metrology solutions, mainly based on optics, cannot provide sufficient resolution on product features. However, considering the fact that the overlay error tolerance is approaching 2 nm, the overlay error measurement on product features becomes a need for the industry. For sub-nanometer resolution metrology, Scanning Probe Microscopy (SPM) is widely used, though at the cost of very low throughput. The semiconductor industry is interested in non-destructive imaging of buried structures under one or more layers for the application of overlay and wafer alignment, specifically through optically opaque media. Recently an SPM technique has been developed for imaging subsurface features which can be potentially considered as a solution for overlay metrology. In this paper we present the use of Subsurface Ultrasonic Resonance Force Microscopy (SSURFM) used for IBO measurement. We used SSURFM for imaging the most commonly used overlay targets on a silicon substrate and photoresist. As a proof of concept we have imaged surface and subsurface structures simultaneously. The surface and subsurface features of the overlay targets are fabricated with programmed overlay errors of +/-40 nm, +/-20 nm, and 0 nm. The top layer thickness changes between 30 nm and 80 nm. Using SSURFM the surface and subsurface features were successfully imaged and the overlay errors were extracted, via a rudimentary image processing algorithm. The measurement results are in agreement with the nominal values of the programmed overlay errors.
An effective detection algorithm for region duplication forgery in digital images
NASA Astrophysics Data System (ADS)
Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin
2016-04-01
Powerful image editing tools are very common and easy to use these days. This situation may lead to forgeries created by adding or removing information in digital images. In order to detect these types of forgeries, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circular regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm achieves computational efficiency owing to its fixed-size circular block architecture.
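The block-matching pipeline (fixed-size blocks, wavelet reduction, lexicographic sorting of feature vectors, neighbour comparison) can be sketched as follows; the one-level Haar LL reduction and exact-match comparison are simplifications of the described four-feature scheme:

```python
import numpy as np

def find_duplicated_blocks(img, block=8):
    """Detect candidate duplicated regions: split the image into
    fixed-size blocks, reduce each with a one-level Haar LL band,
    sort the feature vectors lexicographically, then compare
    adjacent entries in the sorted order."""
    img = np.asarray(img, float)
    h, w = img.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block]
            # Haar LL band: average over 2x2 neighbourhoods
            ll = b.reshape(block // 2, 2, block // 2, 2).mean(axis=(1, 3))
            feats.append(np.round(ll.ravel(), 3))
            coords.append((y, x))
    order = sorted(range(len(feats)), key=lambda i: tuple(feats[i]))
    pairs = []
    for a, b in zip(order, order[1:]):
        if np.array_equal(feats[a], feats[b]):
            pairs.append((coords[a], coords[b]))
    return pairs

# toy forgery: the top-left block is copied to the bottom-right
img = np.zeros((16, 16))
img[0:8, 0:8] = img[8:16, 8:16] = np.arange(64).reshape(8, 8)
dups = find_duplicated_blocks(img)
```

A practical detector would use overlapping blocks and a distance threshold rather than exact equality, but the sort-and-compare structure is the same.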
What's color got to do with it? The influence of color on visual attention in different categories.
Frey, Hans-Peter; Honey, Christian; König, Peter
2008-10-23
Certain locations attract human gaze in natural visual scenes. Are there measurable features, which distinguish these locations from others? While there has been extensive research on luminance-defined features, only few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention.
Prostate cancer detection: Fusion of cytological and textural features.
Nguyen, Kien; Jain, Anil K; Sabata, Bikash
2011-01-01
A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.
PMID:22811959
A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks
Wang, Changjian; Liu, Xiaohui; Jin, Shiyao
2018-01-01
Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use handcrafted image features to complete the task without large amounts of labeled data. Meanwhile, the methods based on deep neural networks can extract image features effectively without manual design, but large amounts of training data are required. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm we designed in this paper to highlight image features. Then, the preprocessed images are segmented by deep neural networks, and semantic corrections are applied to the segmentation results at last. The model shows good performance in our experiment. PMID:29955227
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise and high variation in colony features in experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
GAFFE: a gaze-attentive fixation finding engine.
Rajashekar, U; van der Linde, I; Bovik, A C; Cormack, L K
2008-04-01
The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast, and discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
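Two of the four low-level patch features the study measures, mean luminance and RMS contrast, can be sketched in a few lines. The helper names below are hypothetical, and a real implementation would operate on foveated image patches rather than plain nested lists:

```python
def mean_luminance(patch):
    """Mean pixel value of a patch (given as a list of rows)."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

def rms_contrast(patch):
    """Root-mean-square contrast: standard deviation of pixel values
    around the patch mean."""
    vals = [v for row in patch for v in row]
    mu = sum(vals) / len(vals)
    return (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5

patch = [[10, 10], [20, 20]]  # toy 2x2 patch: mean 15, RMS contrast 5
```

Ranking candidate patches by such statistics, highest first, is one plausible reading of how the fixation-finding engine selects candidate regions.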
Comparative study on the performance of textural image features for active contour segmentation.
Moraru, Luminita; Moldovanu, Simona
2012-07-01
We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function relating to parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested for synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
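The first-order standard-deviation texture feature with a 5×5 kernel can be sketched as a sliding-window computation. `local_std_map` is a hypothetical name, and truncating the kernel at image borders is an assumption the abstract does not specify:

```python
def local_std_map(img, k=5):
    """For each pixel, the standard deviation of the k x k neighbourhood
    (kernel truncated at image borders); img is a list of rows."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1))]
            mu = sum(vals) / len(vals)
            out[i][j] = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
    return out
```

A map like this is flat (zero) in homogeneous regions and high near texture boundaries, which is what makes it usable as an image function for active contour evolution.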
Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A
2017-07-01
Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease positive (D+) populations (MCF7 cells) and pure disease negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and Cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristics (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
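The Bayesian point the abstract makes, that sensitivity and specificity bound the rarity of cell a test can usefully detect, can be illustrated with a small sketch. Taking "detectable rarity" as the prevalence at which half of the test positives are real cells is an assumption for illustration; the paper's exact criterion may differ:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from Bayes' rule: the fraction of
    test positives that are truly disease positive."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

def breakeven_prevalence(sensitivity, specificity):
    """Prevalence at which PPV = 0.5 (test positives are half real),
    one hypothetical way to quantify the rarest detectable cell."""
    fpr = 1 - specificity
    return fpr / (sensitivity + fpr)
```

Under this assumed criterion, a test with sensitivity 0.99 and specificity 0.995 (illustrative numbers, not taken from the paper) breaks even near a prevalence of 1 in 200, the same order of rarity as the abstract's best single feature.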
SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY.
Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang
2009-08-07
This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application. PMID:21197416
Manichon, Anne-Frédérique; Bancel, Brigitte; Durieux-Millon, Marion; Ducerf, Christian; Mabrut, Jean-Yves; Lepogam, Marie-Annick; Rode, Agnès
2012-01-01
Purpose. To review the contrast-enhanced ultrasonographic (CEUS) and magnetic resonance (MR) imaging findings in 25 patients with 26 hepatocellular adenomas (HCAs) and to compare imaging features with histopathologic results from resected specimens, considering the new immunophenotypical classification. Material and Methods. Two abdominal radiologists retrospectively reviewed CEUS cineloops and MR images of 26 HCAs. All pathological specimens were reviewed and classified into four subgroups (steatotic or HNF1α-mutated, inflammatory, atypical or β-catenin-mutated, and unspecified). Inflammatory infiltrates were scored, and steatosis and telangiectasia were semiquantitatively evaluated. Results. CEUS and MRI features were well correlated: among the 16 inflammatory HCAs, 7/16 presented typical imaging features: T2 hyperintensity, strong arterial enhancement with centripetal filling, persistent on the delayed phase. Six HCAs were classified as steatotic with typical imaging features: a signal drop-out, slight arterial enhancement vanishing on the late phase. Four HCAs were classified as atypical, with an HCC developed in one. Five lesions displayed important steatosis (>50%) without belonging to the HNF1α group. Conclusion. In half of the cases, inflammatory HCAs have specific imaging features well correlated with the amount of telangiectasia and inflammatory infiltrates. An HCA with an important amount of steatosis noticed on chemical-shift images does not always belong to the HNF1α group. PMID:22811588
McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne
2018-04-01
Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work, we applied pretrained ResNet-152 and GoogLeNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from four food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an SVM with RBF kernel can accurately detect food items with 99.4% accuracy on the Food-5K validation dataset and 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN can achieve 91.34% and 99.28% accuracy on the Food-11 and RawFooT-DB datasets respectively, and an SVM with RBF kernel can achieve 64.98% on the Food-101 dataset. This research makes clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented here shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
NASA Astrophysics Data System (ADS)
He, Honghui; Dong, Yang; Zhou, Jialing; Ma, Hui
2017-03-01
As one of the salient features of light, polarization contains abundant structural and optical information about media. Recently, as a comprehensive description of polarization properties, Mueller matrix polarimetry has been applied to various biomedical studies such as cancerous tissue detection. In previous works, it has been found that the structural information encoded in 2D Mueller matrix images can be presented by other transformed parameters with a more explicit relationship to certain microstructural features. In this paper, we present a statistical analysis method to transform the 2D Mueller matrix images into frequency distribution histograms (FDHs) and their central moments to reveal the dominant structural features of samples quantitatively. The experimental results of porcine heart, intestine, stomach, and liver tissues demonstrate that the transformation parameters and central moments based on the statistical analysis of Mueller matrix elements have simple relationships to the dominant microstructural properties of biomedical samples, including the density and orientation of fibrous structures, the depolarization power, and the diattenuation and absorption abilities. It is shown in this paper that the statistical analysis of 2D images of Mueller matrix elements may provide quantitative or semi-quantitative criteria for biomedical diagnosis.
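The transformation of a Mueller matrix element image into a frequency distribution histogram plus central moments can be sketched as follows; function names are hypothetical and the binning convention is an assumption:

```python
def fdh(values, nbins, lo, hi):
    """Frequency distribution histogram of pixel values over [lo, hi),
    normalized so the bin frequencies sum to 1."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in values:
        idx = min(int((v - lo) / width), nbins - 1)  # clamp v == hi
        counts[idx] += 1
    n = len(values)
    return [c / n for c in counts]

def central_moment(values, k):
    """k-th central moment of a sample of Mueller matrix element values
    (k=2 gives the variance, k=3 relates to skewness, etc.)."""
    mu = sum(values) / len(values)
    return sum((v - mu) ** k for v in values) / len(values)
```

In the paper's framing, the shape of the FDH and the low-order central moments summarize the 2D element image into a handful of quantitative parameters.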
NASA Astrophysics Data System (ADS)
Orton, Glenn S.; Hansen, Candice; Caplinger, Michael; Ravine, Michael; Atreya, Sushil; Ingersoll, Andrew P.; Jensen, Elsa; Momary, Thomas; Lipkaman, Leslie; Krysak, Daniel; Zimdar, Robert; Bolton, Scott
2017-05-01
During Juno's first perijove encounter, the JunoCam instrument acquired the first images of Jupiter's polar regions at 50-70 km spatial scale at low emission angles. Poleward of 64-68° planetocentric latitude, where Jupiter's east-west banded structure breaks down, several types of discrete features appear on a darker background. Cyclonic oval features are clustered near both poles. Other oval-shaped features are also present, ranging in size from 2000 km down to JunoCam's resolution limits. The largest and brightest features often have chaotic shapes. Two narrow linear features in the north, associated with an overlying haze feature, traverse tens of degrees of longitude. JunoCam also detected an optically thin cloud or haze layer past the northern nightside terminator estimated to be 58 ± 21 km (approximately three scale heights) above the main cloud deck. JunoCam will acquire polar images on every perijove, allowing us to track the state and evolution of longer-lived features.
Nonlinear features for classification and pose estimation of machined parts from single views
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-10-01
A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.
Final Cassini RADAR Observation of Titan's Magic Island Region and Ligeia Mare
NASA Astrophysics Data System (ADS)
Hofgartner, J. D.; Hayes, A.; Lunine, J. I.; Stiles, B. W.; Malaska, M. J.; Wall, S. D.
2017-12-01
Cassini arrived in the Saturn system shortly after the Oct. 2002 northern winter solstice and the mission will end shortly after the May 2017 northern summer solstice. A main objective of the Cassini Solstice mission is to study seasonal and temporal changes; at Titan this includes changes of the hydrocarbon lakes/seas. Titan's Magic Islands are transient bright features in the north polar sea Ligeia Mare that were observed as temporal changes in Cassini RADAR images. The Magic Islands were discovered in a July 2013 image as anomalously bright features that were not present in four previous observations from Feb. 2007 to May 2013. The region of the Magic Islands was again anomalously bright in an Aug. 2014 image, and the total areal extent of the anomalously bright region had increased by more than a factor of three. The transient features were not, however, observed in a Jan. 2015 image. Thus, in seven observations spanning much of the Cassini mission, the bright features were observed to appear, increase in extent, and then disappear. They are referred to as Titan's Magic Islands because of their appearing/disappearing behavior and resemblance to islands. These transient bright features are not actually islands. The transients were concluded to be most consistent with waves, floating solids, suspended solids, and bubbles. Tides, sea level changes, and seafloor changes are unlikely to be the primary cause of these temporal changes. Whether these temporal changes are also seasonal changes was unclear. The final Cassini RADAR imaging observation of Titan in Apr. 2017 included the region of the Magic Islands. The transient bright features were not present during this observation. The geometry of the observation was such that, had the transients been present, their brightness may have ruled out some of the remaining hypotheses. Their absence, however, is less constraining but consistent with their transient nature. Waves, floating solids, suspended solids, and bubbles remain the most likely hypotheses. Other regions of Ligeia Mare were also imaged in the Apr. 2017 observation and no transient features were observed elsewhere in the sea. The specific process responsible for these transient features and the role of seasonal changes in their appearance and disappearance remains an open research question.
Developing a radiomics framework for classifying non-small cell lung carcinoma subtypes
NASA Astrophysics Data System (ADS)
Yu, Dongdong; Zang, Yali; Dong, Di; Zhou, Mu; Gevaert, Olivier; Fang, Mengjie; Shi, Jingyun; Tian, Jie
2017-03-01
Patient-targeted treatment of non-small cell lung carcinoma (NSCLC) according to histologic subtype has been well documented over the past decade. In parallel, the recent development of quantitative image biomarkers has been highlighted as an important diagnostic tool to facilitate histological subtype classification. In this study, we present a radiomics analysis that classifies adenocarcinoma (ADC) and squamous cell carcinoma (SqCC). We extract 52-dimensional, CT-based features (7 statistical features and 45 image texture features) to represent each nodule. We evaluate our approach on a clinical dataset including 324 ADC and 110 SqCC patients with CT image scans. Classification of these features is performed with four different machine-learning classifiers: Support Vector Machines with Radial Basis Function kernel (RBF-SVM), Random Forest (RF), K-nearest neighbor (KNN), and RUSBoost. To improve classifier performance, an optimal feature subset is selected from the original feature set using an iterative forward inclusion and backward elimination algorithm. Extensive experimental results demonstrate that radiomics features achieve encouraging classification results on both the complete feature set (AUC=0.89) and the optimal feature subset (AUC=0.91).
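The forward-inclusion half of the feature selection can be sketched as a greedy loop over a black-box scoring function (for instance, cross-validated AUC); this omits the backward-elimination pass and all names are hypothetical:

```python
def forward_select(features, score, max_size=None):
    """Greedy forward inclusion: repeatedly add the feature that most
    improves the score of the selected subset; stop when no addition
    helps (or when max_size is reached)."""
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining:
        cand = max(remaining, key=lambda f: score(selected + [f]))
        s = score(selected + [cand])
        if s <= best:
            break  # no candidate improves the subset score
        selected.append(cand)
        remaining.remove(cand)
        best = s
        if max_size and len(selected) >= max_size:
            break
    return selected
```

A real radiomics pipeline would pass a scorer that trains the classifier on the candidate subset and returns a validation metric; the toy scorer in the test below just rewards small high-value subsets.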
Automatic Sea Bird Detection from High Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
Mader, S.; Grenzdörffer, G. J.
2016-06-01
Great efforts are presently being taken in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle those image sizes and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined usage of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors for subsequent elimination of false candidates and for classification tasks.
Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina
2016-12-01
Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and it presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246-286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
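The Iterative Closest Point idea behind the registration step can be sketched in a toy form restricted to pure translation; the paper itself drives a full affine transform, runs multithreaded, and operates on contour point clouds extracted from trunk slices:

```python
def icp_translation(src, dst, iters=20):
    """Toy ICP restricted to translation: repeatedly match each source
    point to its nearest destination point (brute force), then shift all
    source points by the mean residual of the matches."""
    src = [list(p) for p in src]
    dims = len(src[0])
    for _ in range(iters):
        # nearest-neighbour correspondence step
        pairs = [(p, min(dst, key=lambda d: sum((a - b) ** 2
                                                for a, b in zip(p, d))))
                 for p in src]
        # mean residual of the matched pairs = translation update
        shift = [sum(q[i] - p[i] for p, q in pairs) / len(pairs)
                 for i in range(dims)]
        for p in src:
            for i in range(dims):
                p[i] += shift[i]
        if all(abs(s) < 1e-12 for s in shift):
            break  # converged
    return src
```

Extending this to the affine case replaces the mean-residual update with a least-squares fit of an affine matrix to the matched pairs, which is the part the paper parallelizes across threads.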
Analyzing Sub-Classifications of Glaucoma via SOM Based Clustering of Optic Nerve Images.
Yan, Sanjun; Abidi, Syed Sibte Raza; Artes, Paul Habib
2005-01-01
We present a data mining framework to cluster optic nerve images obtained by Confocal Scanning Laser Tomography (CSLT) in normal subjects and patients with glaucoma. We use self-organizing maps and expectation maximization methods to partition the data into clusters that provide insights into potential sub-classification of glaucoma based on morphological features. We conclude that our approach provides a first step towards a better understanding of morphological features in optic nerve images obtained from glaucoma patients and healthy controls.
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta
2013-01-01
Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologist's gaze behavior and image content.
Laparoscopic optical coherence tomography imaging of human ovarian cancer
Hariri, Lida P.; Bonnema, Garret T.; Schmidt, Kathy; Winkler, Amy M.; Korde, Vrushali; Hatch, Kenneth D.; Davis, John R.; Brewer, Molly A.; Barton, Jennifer K.
2011-01-01
Objectives Ovarian cancer is the fourth leading cause of cancer-related death among women in the US largely due to late detection secondary to unreliable symptomology and screening tools without adequate resolution. Optical coherence tomography (OCT) is a recently emerging imaging modality with promise in ovarian cancer diagnostics, providing non-destructive subsurface imaging at imaging depths up to 2 mm with near-histological grade resolution (10–20 μm). In this study, we developed the first ever laparoscopic OCT (LOCT) device, evaluated the safety and feasibility of LOCT, and characterized the microstructural features of human ovaries in vivo. Methods A custom LOCT device was fabricated specifically for laparoscopic imaging of the ovaries in patients undergoing oophorectomy. OCT images were compared with histopathology to identify preliminary architectural imaging features of normal and pathologic ovarian tissue. Results Thirty ovaries in 17 primarily peri or post-menopausal women were successfully imaged with LOCT: 16 normal, 5 endometriosis, 3 serous cystadenoma, and 4 adenocarcinoma. Preliminary imaging features developed for each category reveal qualitative differences in the homogeneous character of normal post-menopausal ovary, the ability to image small subsurface inclusion cysts, and distinguishable features for endometriosis, cystadenoma, and adenocarcinoma. Conclusions We present the development and successful implementation of the first laparoscopic OCT probe. Comparison of OCT images and corresponding histopathology allowed for the description of preliminary microstructural features for normal ovary, endometriosis, and benign and malignant surface epithelial neoplasms. These results support the potential of OCT both as a diagnostic tool and imaging modality for further evaluation of ovarian cancer pathogenesis. PMID:19481241
Measurement of glucose concentration by image processing of thin film slides
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David
2012-02-01
Measurement of glucose concentration is important for diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose, present in the sample, reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with glucose at different concentration levels. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough based technique and then an intensity based feature is calculated from the segmented region. Subsequently, a mathematical model that describes a relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented followed by feature extraction. These two initial steps are similar to those done in training. However, in the final step, the algorithm uses the model (feature vs. concentration) obtained from the training and feature generated from test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
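The training phase, fitting a model from the dye-intensity feature to known glucose concentrations, can be sketched as a one-variable least-squares fit. The linear model form and the function names here are assumptions for illustration; the abstract does not specify the mathematical model actually used:

```python
def fit_line(xs, ys):
    """Least-squares line y = a*x + b mapping a dye-intensity feature x
    to a known concentration y over the training slides."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict(model, x):
    """Testing phase: apply the trained model to a feature extracted
    from the segmented dye region of a new slide image."""
    a, b = model
    return a * x + b
```

With the model in hand, the testing phase reduces to segmenting the dye region, extracting the same intensity feature, and evaluating the model at that feature value.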
Iris recognition using image moments and k-means algorithm.
Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed
2014-01-01
This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%. PMID:24977221
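The clustering-and-assignment step can be sketched with plain NumPy; the toy feature vectors below stand in for the paper's invariant moment features (illustrative values, not from the paper):

```python
import numpy as np

# Toy invariant-moment feature vectors forming two iris clusters (hypothetical).
features = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])

def kmeans(X, k, init, iters=10):
    """Plain Lloyd's k-means with a fixed initialisation for determinism."""
    centroids = X[init].astype(float)
    for _ in range(iters):
        # Assign each vector to the nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

centroids, labels = kmeans(features, k=2, init=[0, 2])

# An unseen iris is identified with the cluster whose centroid is nearest.
query = np.array([0.95, 1.05])
assigned = int(np.linalg.norm(centroids - query, axis=1).argmin())
```

The query vector lands in the second cluster, mirroring the paper's rule that an arbitrary image belongs to the cluster with the nearest centroid.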
The value of nodal information in predicting lung cancer relapse using 4DPET/4DCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Heyse, E-mail: heyse.li@mail.utoronto.ca; Becker, Nathan; Raman, Srinivas
2015-08-15
Purpose: There is evidence that computed tomography (CT) and positron emission tomography (PET) imaging metrics are prognostic and predictive of non-small cell lung cancer (NSCLC) treatment outcomes. However, few studies have explored the use of standardized uptake value (SUV)-based image features of nodal regions as predictive features. The authors investigated and compared the use of tumor and node image features extracted from the radiotherapy target volumes to predict relapse in a cohort of NSCLC patients undergoing chemoradiation treatment. Methods: A prospective cohort of 25 patients with locally advanced NSCLC underwent 4DPET/4DCT imaging for radiation planning. Thirty-seven image features were derived from the CT-defined volumes and SUVs of the PET image from both the tumor and nodal target regions. The machine learning methods of logistic regression and repeated stratified five-fold cross-validation (CV) were used to predict local and overall relapses within 2 yr. The authors used well-known feature selection methods (Spearman’s rank correlation, recursive feature elimination) within each fold of CV. Classifiers were ranked on their Matthews correlation coefficient (MCC) after CV. Area under the curve, sensitivity, and specificity values are also presented. Results: For predicting local relapse, the best classifier found had a mean MCC of 0.07 and was composed of eight tumor features. For predicting overall relapse, the best classifier found had a mean MCC of 0.29 and was composed of a single feature: the volume greater than 0.5 times the maximum SUV (N). Conclusions: The best classifier for predicting local relapse had only tumor features. In contrast, the best classifier for predicting overall relapse included a node feature. Overall, the methods showed that nodes add value in predicting overall relapse but not local relapse.
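The ranking metric used here, the Matthews correlation coefficient, balances all four confusion-matrix counts, which makes it robust for small, imbalanced cohorts like this one. A self-contained sketch (the labels are illustrative, not study data):

```python
import math

def matthews_cc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally 0 when any marginal count is zero.
    return (tp * tn - fp * fn) / denom if denom else 0.0

mcc = matthews_cc([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

MCC ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect prediction), so the study's best mean MCC of 0.29 is a modest but non-trivial signal.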
Coherent Image Layout using an Adaptive Visual Vocabulary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.
When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.
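The traversal-to-curve idea can be sketched as follows; the serpentine scan below is a simplified stand-in for the paper's space-filling curve, and the nested-tuple tree stands in for a real hierarchical clustering (both hypothetical):

```python
def leaf_order(tree):
    """In-order traversal of a nested-tuple hierarchy; leaves are image ids."""
    if isinstance(tree, tuple):
        return leaf_order(tree[0]) + leaf_order(tree[1])
    return [tree]

def serpentine_layout(items, width):
    """Assign grid coordinates row by row, reversing every other row so that
    consecutive items in the traversal stay spatially adjacent."""
    layout = {}
    for i, item in enumerate(items):
        row, col = divmod(i, width)
        if row % 2 == 1:
            col = width - 1 - col
        layout[item] = (row, col)
    return layout

# Hypothetical clustering hierarchy over five images "a".."e".
order = leaf_order((("a", "b"), ("c", ("d", "e"))))
grid = serpentine_layout(order, width=2)
```

Because the traversal keeps cluster members contiguous and the curve keeps contiguous items adjacent, similar images end up as grid neighbours, which is the locality guarantee the abstract relies on.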
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented in the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
Sousa, Daniel; Small, Christopher
2018-02-14
Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system. PMID:29443900
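The "partition of variance" comparison reduces to eigenanalysis of the band covariance matrix. A minimal sketch on synthetic pixel-by-band data (real AVIRIS/Landsat spectra are assumed, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "pixels x bands" matrix standing in for a multispectral composite:
# correlated bands produced by mixing independent sources.
pixels = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.3, 0.1],
                                               [0.0, 1.0, 0.2],
                                               [0.0, 0.0, 0.4]])

def variance_partition(X):
    """Fraction of total variance captured by each principal dimension,
    sorted from largest to smallest."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
    return eigvals / eigvals.sum()

frac = variance_partition(pixels)
```

Comparing such variance fractions (and the low-order eigenvector topology) between the hyperspectral and multispectral composites is what supports the paper's claim of nearly identical partitions of variance.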
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of lung nodule classification. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to select the 3D image features of margin sharpness and texture that are relevant to the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision than all 48 extracted features on similar nodule retrieval. The 83% reduction in feature space dimensionality yielded higher retrieval performance and proved to be a computationally low-cost method for retrieving similar nodules for the diagnosis of lung cancer.
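One common form of correlation-based selection keeps a feature only if it is not redundant with features already kept. A generic NumPy sketch (the paper's exact criterion may differ; the synthetic data are illustrative):

```python
import numpy as np

def correlation_filter(X, threshold=0.9):
    """Greedy correlation-based selection: keep a feature only if its absolute
    correlation with every already-kept feature is below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = rng.normal(size=200)
# Feature 1 is almost a copy of feature 0; feature 2 is independent.
X = np.column_stack([a, a * 2.0 + 0.01 * rng.normal(size=200), b])
selected = correlation_filter(X)
```

The redundant near-duplicate is dropped while the independent feature survives, which is the kind of dimensionality reduction (48 attributes down to 8) the abstract reports.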
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and the multiresolution image analysis technique. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
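The fuzzy part of rough-fuzzy clustering is standard fuzzy c-means; a compact NumPy sketch on 1-D toy data (the rough lower/upper approximations the paper adds on top are omitted):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: soft memberships U and cluster centers.
    The rough-fuzzy variant in the paper extends this with rough sets."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# Two well-separated 1-D "feature vectors" standing in for pixel features.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fuzzy_cmeans(X, c=2)
```

The soft memberships in U are what let the method express the tissue-boundary uncertainty that a hard clustering cannot.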
Image standards in tissue-based diagnosis (diagnostic surgical pathology).
Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian
2008-04-18
Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human-diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. 
Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus, brightness, and quality evaluation procedures), display resolution data, implemented image formats, storage, cycle frequency, backup procedures, operation system, and external system accessibility. The lowest third level describes the permitted limits and threshold in detail. At present, an applicable standard including all mentioned features does not exist to our knowledge; some aspects can be taken from radiological standards (PACS, DICOM 3); others require specific solutions or are not covered yet. The progress in virtual microscopy and application of artificial intelligence (AI) in tissue-based diagnosis demands fast preparation and implementation of an internationally acceptable standard. The described hierarchic order as well as analytic investigation in all potentially necessary aspects and details offers an appropriate tool to specifically determine standardized requirements.
Kherfi, Mohammed Lamine; Ziou, Djemel
2006-04-01
In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating him in the process of retrieval. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that nicely combines the advantages of a probabilistic formulation with those of using both the positive example (PE) and the negative example (NE). Through interaction with the user, our algorithm learns the importance he assigns to image features, and then applies the results obtained to define similarity measures that correspond better to his judgement. The use of the NE allows images undesired by the user to be discarded, thereby improving retrieval accuracy. As for the probabilistic formulation of the problem, it presents a multitude of advantages and opens the door to more modeling possibilities that achieve a good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and/or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and those which distinguish well between PE classes and NE classes. The proposed algorithm was validated separately and in image retrieval context, and the experiments show that it performs a good feature selection and contributes to improving retrieval effectiveness.
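A simplified, deterministic version of such feature weighting is a Fisher-style separation ratio: features that separate the positive examples from the negative ones get more weight. This stands in for the paper's fuller probabilistic formulation; the example vectors are hypothetical:

```python
import numpy as np

def feature_weights(positive, negative):
    """Weight each feature by between-class separation over within-class
    spread, then normalise the weights to sum to one."""
    pos, neg = np.asarray(positive, float), np.asarray(negative, float)
    between = (pos.mean(axis=0) - neg.mean(axis=0)) ** 2
    within = pos.var(axis=0) + neg.var(axis=0) + 1e-12
    w = between / within
    return w / w.sum()

# Hypothetical 2-feature vectors: feature 0 discriminates, feature 1 does not.
pos = [[1.0, 0.2], [1.1, 0.8], [0.9, 0.5]]
neg = [[3.0, 0.4], [3.2, 0.6], [2.8, 0.5]]
w = feature_weights(pos, neg)
```

The resulting weights would then rescale the per-feature distances in the similarity measure, so the retrieval reflects the importance the user implicitly assigns to each feature.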
Classification of skin cancer images using local binary pattern and SVM classifier
NASA Astrophysics Data System (ADS)
Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra
2016-11-01
In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP computes the local texture information from the skin cancer images, which is later used to compute some statistical features that have the capability to discriminate melanoma and non-melanoma skin tissues. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a good classification accuracy of 76.1% with sensitivity of 75.6% and specificity of 76.7%.
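A minimal 3x3 LBP operator can be written in a few lines of NumPy; in practice one would use a library implementation (e.g. scikit-image's `local_binary_pattern`) plus an SVM, so this sketch shows only the encoding step:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value, one bit each."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    centre = g[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= centre) * (1 << bit)
    return codes

# Dark centre pixel surrounded by brighter neighbours: all 8 bits set.
img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]])
code = int(lbp_image(img)[0, 0])
```

The histogram of these codes over a lesion region yields the statistical texture features that would be fed to the SVM.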
In vivo multimodal nonlinear optical imaging of mucosal tissue
NASA Astrophysics Data System (ADS)
Sun, Ju; Shilagard, Tuya; Bell, Brent; Motamedi, Massoud; Vargas, Gracie
2004-05-01
We present a multimodal nonlinear imaging approach to elucidate microstructures and spectroscopic features of oral mucosa and submucosa in vivo. The hamster buccal pouch was imaged using 3-D high resolution multiphoton and second harmonic generation microscopy. The multimodal imaging approach enables colocalization and differentiation of prominent known spectroscopic and structural features such as keratin, epithelial cells, and submucosal collagen at various depths in tissue. Visualization of cellular morphology and epithelial thickness are in excellent agreement with histological observations. These results suggest that multimodal nonlinear optical microscopy can be an effective tool for studying the physiology and pathology of mucosal tissue.
AMOEBA clustering revisited. [cluster analysis, classification, and image display program]
NASA Technical Reports Server (NTRS)
Bryant, Jack
1990-01-01
A description of the clustering, classification, and image display program AMOEBA is presented. Using a difficult high resolution aircraft-acquired MSS image, the steps the program takes in forming clusters are traced. A number of new features are described here for the first time. Usage of the program is discussed. The theoretical foundation (the underlying mathematical model) is briefly presented. The program can handle images of any size and dimensionality.
Imaging diagnosis--temporomandibular joint dysplasia in a Basset Hound.
Lerer, Assaf; Chalmers, Heather J; Moens, Noel M M; Mackenzie, Shawn D; Kry, Kristin
2014-01-01
A 5-month-old intact male Basset Hound presented for evaluation of pain and crepitation during manipulation of the temporomandibular joint, worse on the right side. A computed tomography (CT) scan of the head was performed. The CT images demonstrated the osseous features of temporomandibular joint dysplasia and facilitated a 3D reconstruction, which allowed better visualization of the dysplastic features. The patient responded to conservative management with a tape muzzle with no recurrence reported by the owner 6 months after presentation.
Random forest feature selection approach for image segmentation
NASA Astrophysics Data System (ADS)
Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin
2017-03-01
In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model by using many features without any selection criteria. A more reliable model can be built by using a framework that selects the variables that are important from the point of view of the classification and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
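Once a random forest reports variable importances, the elimination step itself is simple thresholding. A sketch with hypothetical importance values (in practice they would come from a trained forest, e.g. mean decrease in impurity):

```python
def select_by_importance(importances, keep_fraction=0.1):
    """Keep features whose importance is at least keep_fraction of the
    maximum importance; everything below is treated as irrelevant."""
    top = max(importances)
    return [i for i, v in enumerate(importances) if v >= keep_fraction * top]

# Hypothetical RF importances for six image features.
kept = select_by_importance([0.40, 0.02, 0.25, 0.001, 0.18, 0.03])
```

A more careful pipeline would re-train on the reduced feature set and repeat until performance stops improving, which is the iterative flavour of the framework described above.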
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival, and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
Kong, Jun; Wang, Fusheng; Teodoro, George; Cooper, Lee; Moreno, Carlos S; Kurc, Tahsin; Pan, Tony; Saltz, Joel; Brat, Daniel
2013-12-01
In this paper, we present a novel framework for microscopic image analysis of nuclei, data management, and high performance computation to support translational research involving nuclear morphometry features, molecular data, and clinical outcomes. Our image analysis pipeline consists of nuclei segmentation and feature computation facilitated by high performance computing with coordinated execution in multi-core CPUs and Graphical Processor Units (GPUs). All data derived from image analysis are managed in a spatial relational database supporting highly efficient scientific queries. We applied our image analysis workflow to 159 glioblastomas (GBM) from The Cancer Genome Atlas dataset. With integrative studies, we found statistics of four specific nuclear features were significantly associated with patient survival. Additionally, we correlated nuclear features with molecular data and found interesting results that support pathologic domain knowledge. We found that Proneural subtype GBMs had the smallest mean of nuclear Eccentricity and the largest mean of nuclear Extent, and MinorAxisLength. We also found gene expressions of stem cell marker MYC and cell proliferation maker MKI67 were correlated with nuclear features. To complement and inform pathologists of relevant diagnostic features, we queried the most representative nuclear instances from each patient population based on genetic and transcriptional classes. Our results demonstrate that specific nuclear features carry prognostic significance and associations with transcriptional and genetic classes, highlighting the potential of high throughput pathology image analysis as a complementary approach to human-based review and translational research.
Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.
Chi, Jianning; Walia, Ekta; Babyn, Paul; Wang, Jimmy; Groot, Gary; Eramian, Mark
2017-08-01
With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove the artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity, and 93.90% specificity for the images in an open access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity, and 99% specificity for the images in our local health region database.
Application of wavelet techniques for cancer diagnosis using ultrasound images: A Review.
Sudarshan, Vidya K; Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chandran, Vinod; Molinari, Filippo; Fujita, Hamido; Ng, Kwan Hoong
2016-02-01
Ultrasound is an important and low cost imaging modality used to study the internal organs of the human body and blood flow through blood vessels. It uses high frequency sound waves to acquire images of internal organs. It is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different echoes for ultrasound. Hence, it provides useful information about the potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected by speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from the isolated tumor region for diagnosis. This information can be revealed using a scale space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low pass and high pass filters. These filters help to identify the detail or sudden changes in intensity in the image. These changes are reflected in the wavelet coefficients. Various texture, statistical and image based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features to discriminate normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer using ultrasound images.
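One DWT level splits an image into an approximation sub-band and three detail sub-bands via the low-pass and high-pass filtering described above. A pure-NumPy Haar sketch (real pipelines would use a wavelet library such as PyWavelets; the averaging normalisation here is the unnormalised Haar form):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT: approximation (LL) plus horizontal (LH),
    vertical (HL), and diagonal (HH) detail sub-bands."""
    a = np.asarray(img, dtype=float)
    # Low-pass (pairwise average) and high-pass (pairwise difference) on columns…
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # …then the same on rows of each result.
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

# A blockwise-constant image: all intensity changes survive in LL,
# the fine-detail bands come out zero.
img = np.array([[1.0, 1.0, 2.0, 2.0],
                [1.0, 1.0, 2.0, 2.0],
                [3.0, 3.0, 4.0, 4.0],
                [3.0, 3.0, 4.0, 4.0]])
ll, lh, hl, hh = haar2d(img)
```

Texture and statistical features (energy, entropy, moments) computed per sub-band are the typical inputs to the supervised classifiers the review surveys.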
Ultrasonographic evaluation of acute pelvic pain in pregnant and postpartum period.
Park, Sung Bin; Han, Byoung Hee; Lee, Young Ho
2017-04-22
Acute pelvic pain in pregnant and postpartum patients presents diagnostic and therapeutic challenges. The interpretation of imaging findings in these patients is influenced by knowledge of the physiological changes that occur during the pregnant and postpartum period, as well as by the clinical history. Ultrasonography remains the primary imaging investigation in pregnant and postpartum women. This article describes the causes and imaging features of acute pelvic pain in the pregnant and postpartum period and suggests characteristic features of such diseases, focusing on the ultrasonography features that allow one to arrive at the correct diagnosis. Knowledge of the clinical settings and imaging features of acute pelvic pain in the pregnant and postpartum period can lead to accurate diagnosis and appropriate management of the condition.
A phantom design for assessment of detectability in PET imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollenweber, Scott D., E-mail: scott.wollenweber@g
2016-09-15
Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of ¹⁸F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom and with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of the quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages including filling simplicity, wall-less contrast features, the control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.
Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix
NASA Astrophysics Data System (ADS)
Lange, Holger
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
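The final step described above (extracting multiple acetowhite levels by applying multiple thresholds to the detected regions) can be sketched as multi-threshold labeling of a feature image. This is a hypothetical illustration of the general technique, not the paper's algorithm:

```python
import numpy as np

def multilevel_regions(feature_img, thresholds):
    """Label each pixel with the number of thresholds it exceeds,
    yielding nested multi-level regions (illustrative sketch only)."""
    levels = np.zeros(feature_img.shape, dtype=int)
    for t in sorted(thresholds):
        levels += (feature_img >= t).astype(int)
    return levels

# Tiny example feature image with acetowhite-to-background ratios
img = np.array([[0.1, 0.4], [0.6, 0.9]])
print(multilevel_regions(img, [0.3, 0.5, 0.8]))
```

Each pixel's label is the number of thresholds it exceeds, so higher labels mark stronger acetowhite response nested inside the lower-level regions.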
Classification and pose estimation of objects using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.
Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina
2015-04-01
An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
Multispectral Analysis of NMR Imagery
NASA Technical Reports Server (NTRS)
Butterfield, R. L.; Vannier, M. W. And Associates; Jordan, D.
1985-01-01
Conference paper discusses initial efforts to adapt multispectral satellite-image analysis to nuclear magnetic resonance (NMR) scans of human body. Flexibility of these techniques makes it possible to present NMR data in variety of formats, including pseudocolor composite images of pathological internal features. Techniques do not have to be greatly modified from form in which used to produce satellite maps of such Earth features as water, rock, or foliage.
Computer aided diagnosis based on medical image processing and artificial intelligence methods
NASA Astrophysics Data System (ADS)
Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.
2006-12-01
Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, feature extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
Application of volume rendering technique (VRT) for musculoskeletal imaging.
Darecki, Rafał
2002-10-30
A review of the applications of volume rendering technique in musculoskeletal three-dimensional imaging from CT data. General features, potential and indications for applying the method are presented.
Seeing the invisible: The scope and limits of unconscious processing in binocular rivalry
Lin, Zhicheng; He, Sheng
2009-01-01
When an image is presented to one eye and a very different image is presented to the corresponding location of the other eye, they compete for perceptual dominance, such that only one image is visible at a time while the other is suppressed. Called binocular rivalry, this phenomenon and its variants have been extensively exploited to study the mechanism and neural correlates of consciousness. In this paper, we propose a framework, the unconscious binding hypothesis, to distinguish unconscious and conscious processing. According to this framework, the unconscious mind not only encodes individual features but also temporally binds distributed features to give rise to cortical representation, but unlike conscious binding, such unconscious binding is fragile. Under this framework, we review evidence from psychophysical and neuroimaging studies, which suggests that: (1) for invisible low level features, prolonged exposure to visual pattern and simple translational motion can alter the appearance of subsequent visible features (i.e. adaptation); for invisible high level features, although complex spiral motion cannot produce adaptation, nor can objects/words enhance subsequent processing of related stimuli (i.e. priming), images of objects such as tools can nevertheless activate the dorsal pathway; and (2) although invisible central cues cannot orient attention, invisible erotic pictures in the periphery can nevertheless guide attention, likely through emotional arousal; reciprocally, the processing of invisible information can be modulated by attention at perceptual and neural levels. PMID:18824061
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang’s method to segment only the regions of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
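The HOG component of the hybrid HOG-SGF feature can be illustrated by its basic building block, the per-cell gradient-orientation histogram. The numpy sketch below is a simplification (real HOG adds a cell grid, block grouping, and normalization, and the SGF, auto-encoder, and RELM stages are omitted entirely):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Gradient-orientation histogram for one image cell: each pixel
    votes into an unsigned-orientation bin, weighted by its gradient
    magnitude (the core operation behind HOG descriptors)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)  # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

# A vertical intensity ramp has gradients pointing straight down the rows,
# so all votes land in the 90-degree bin (index 4 of 9).
patch = np.outer(np.arange(5.0), np.ones(5))
print(orientation_histogram(patch))
```

Concatenating such histograms over a grid of cells, then normalizing over blocks of cells, yields the full HOG descriptor.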
Significance of the impact of motion compensation on the variability of PET image features
NASA Astrophysics Data System (ADS)
Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.
2018-03-01
In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients.
For our patient cohort, the compensation of tumor motion did not have a significant impact on the quantitative PET parameters. The variability of PET parameters due to voxel size in image reconstruction was more significant than variability due to voxel size in image post-resampling. In conclusion, most of the parameters (apart from the contrast of neighborhood matrix) were robust to the motion compensation implied by 4D-PET/CT. The impact on parameter variability due to the voxel size in image reconstruction and in image post-resampling could not be assumed to be equivalent.
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness.
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
Automatic Detection of Blue-White Veil and Related Structures in Dermoscopy Images
Celebi, M. Emre; Iyatomi, Hitoshi; Stoecker, William V.; Moss, Randy H.; Rabinovitz, Harold S.; Argenziano, Giuseppe; Soyer, H. Peter
2011-01-01
Dermoscopy is a non-invasive skin imaging technique, which permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. One of the most important features for the diagnosis of melanoma in dermoscopy images is the blue-white veil (irregular, structureless areas of confluent blue pigmentation with an overlying white “ground-glass” film). In this article, we present a machine learning approach to the detection of blue-white veil and related structures in dermoscopy images. The method involves contextual pixel classification using a decision tree classifier. The percentage of blue-white areas detected in a lesion combined with a simple shape descriptor yielded a sensitivity of 69.35% and a specificity of 89.97% on a set of 545 dermoscopy images. The sensitivity rises to 78.20% for detection of blue veil in those cases where it is a primary feature for melanoma recognition. PMID:18804955
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using the Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the collaborative reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
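The planar features extracted from the laser points can be illustrated with a least-squares plane fit. This is a minimal sketch that assumes the points have already been segmented into a single facade plane; the paper's full pipeline also covers segmentation, line extraction and texturing:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point array.
    A real facade pipeline would first cluster points into planar
    segments (e.g., by region growing or RANSAC); only the fit is shown."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic "wall" points lying exactly on z = 2x - y + 3
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
pts = np.c_[xs.ravel(), ys.ravel(), (2 * xs - ys + 3).ravel()]
print(fit_plane(pts))
```

The recovered coefficients define the facade plane against which image lines can then be compared.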
Using CNN Features to Better Understand What Makes Visual Artworks Special.
Brachmann, Anselm; Barth, Erhardt; Redies, Christoph
2017-01-01
One of the goals of computational aesthetics is to understand what is special about visual artworks. By analyzing image statistics, contemporary methods in computer vision enable researchers to identify properties that distinguish artworks from other (non-art) types of images. Such knowledge will eventually allow inferences with regard to the possible neural mechanisms that underlie aesthetic perception in the human visual system. In the present study, we define measures that capture variances of features of a well-established Convolutional Neural Network (CNN), which was trained on millions of images to recognize objects. Using an image dataset that represents traditional Western, Islamic and Chinese art, as well as various types of non-art images, we show that we need only two variance measures to distinguish between the artworks and non-art images with a high classification accuracy of 93.0%. Results for the first variance measure imply that, in the artworks, the subregions of an image tend to be filled with pictorial elements, to which many diverse CNN features respond (richness of feature responses). Results for the second measure imply that this diversity is tied to a relatively large variability of the responses of individual CNN features across the subregions of an image. We hypothesize that this combination of richness and variability of CNN feature responses is one of the properties that makes traditional visual artworks special. We discuss the possible neural underpinnings of this perceptual quality of artworks and propose to study the same quality also in other types of aesthetic stimuli, such as music and literature.
PMID:28588537
Automatic orientation and 3D modelling from markerless rock art imagery
NASA Astrophysics Data System (ADS)
Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.
2013-02-01
This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple imagery. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
Feature selection and classification of multiparametric medical images using bagging and SVM
NASA Astrophysics Data System (ADS)
Fan, Yong; Resnick, Susan M.; Davatzikos, Christos
2008-03-01
This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
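The area under the ROC curve used above to optimize the base classifiers can be computed directly from prediction scores via the Mann-Whitney rank statistic: the AUC equals the probability that a randomly chosen positive outscores a randomly chosen negative. A minimal sketch (the bagging and SVM training themselves are not shown):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the fraction of positive/negative score pairs where the
    positive wins; ties count one half. Equivalent to integrating the
    empirical ROC curve."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

print(roc_auc([0.9, 0.8], [0.1, 0.2]))  # perfectly separated scores
```

In a bagging framework, the scores would come from each base classifier's predictions on its left-out bootstrap samples.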
Automated texture-based identification of ovarian cancer in confocal microendoscope images
NASA Astrophysics Data System (ADS)
Srivastava, Saurabh; Rodriguez, Jeffrey J.; Rouse, Andrew R.; Brewer, Molly A.; Gmitro, Arthur F.
2005-03-01
The fluorescence confocal microendoscope provides high-resolution, in-vivo imaging of cellular pathology during optical biopsy. There are indications that the examination of human ovaries with this instrument has diagnostic implications for the early detection of ovarian cancer. The purpose of this study was to develop a computer-aided system to facilitate the identification of ovarian cancer from digital images captured with the confocal microendoscope system. To achieve this goal, we modeled the cellular-level structure present in these images as texture and extracted features based on first-order statistics, spatial gray-level dependence matrices, and spatial-frequency content. Selection of the best features for classification was performed using traditional feature selection techniques including stepwise discriminant analysis, forward sequential search, a non-parametric method, principal component analysis, and a heuristic technique that combines the results of these methods. The best set of features selected was used for classification, and performance of various machine classifiers was compared by analyzing the areas under their receiver operating characteristic curves. The results show that it is possible to automatically identify patients with ovarian cancer based on texture features extracted from confocal microendoscope images and that the machine performance is superior to that of the human observer.
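The spatial gray-level dependence matrices mentioned above are gray-level co-occurrence matrices. A minimal numpy sketch for a single horizontal offset, with two of the classic derived texture features (a real analysis would pool several offsets and directions):

```python
import numpy as np

def glcm_features(img, levels):
    """Normalized gray-level co-occurrence matrix for offset (0, 1),
    plus the contrast and energy texture features derived from it."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    m /= m.sum()
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)   # penalizes differing neighbors
    energy = np.sum(m ** 2)               # high for uniform texture
    return m, contrast, energy

# A perfectly uniform image: zero contrast, maximal energy
print(glcm_features(np.zeros((3, 3), dtype=int), 1)[1:])
```

Features like these, computed over image patches, feed the classifiers compared in the study.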
a Statistical Texture Feature for Building Collapse Information Extraction of SAR Image
NASA Astrophysics Data System (ADS)
Li, L.; Yang, H.; Chen, Q.; Liu, X.
2018-04-01
Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its extreme versatility and almost all-weather, day-and-night working capability. In view of the fact that the inherent statistical distribution of speckle in SAR images has not been used to extract collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution from SAR images is used to reflect the uniformity of the target in order to extract the collapsed building. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but can also be applied to extract collapsed building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is also analysed, which provides decision support for the data selection of collapsed building information extraction.
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed method based on image processing achieves a high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.
PMID:27711246
A Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization
NASA Astrophysics Data System (ADS)
Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.
2017-07-01
Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far, there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in the planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is the least sensitive to changes between images taken at adjacent locations.
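One of the stated evaluation criteria, coverage of detected points, can be sketched as the fraction of image grid cells containing at least one feature point (a minimal illustration; the 8×8 grid size is an assumption, not necessarily the paper's choice):

```python
def grid_coverage(points, width, height, n=8):
    """Fraction of n x n grid cells that contain at least one detected point.
    Higher values indicate detections spread more evenly over the image."""
    cells = set()
    for x, y in points:
        cx = min(int(x * n / width), n - 1)   # clamp points on the right/bottom edge
        cy = min(int(y * n / height), n - 1)
        cells.add((cx, cy))
    return len(cells) / (n * n)
```

A distribution-evenness criterion could be built similarly, e.g. from the variance of per-cell counts.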
Geomorphic evidence for ancient seas on Mars
NASA Technical Reports Server (NTRS)
Parker, Timothy J.; Schneeberger, Dale M.; Pieri, David C.; Saunders, R. Stephen
1987-01-01
Geomorphic evidence is presented for ancient seas on Mars. Several features, similar to terrestrial lacustrine and coastal features, were identified along the northern plains periphery from Viking images. The nature of these features argues for formation in a predominantly liquid, shallow body of standing water. Such a shallow sea would require either relatively rapid development of shoreline morphologies or a warmer than present climate at the time of outflow channel formation.
Feature detection on 3D images of dental imprints
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Laurendeau, Denis
1994-09-01
A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the positions of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
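The idea of tracking local minima across several scales and keeping only those that persist can be illustrated on a 1D profile (a simplified sketch of the scale-tracking principle, not the paper's 3D watershed implementation; the box-filter scales and the ±1-sample tolerance are assumptions):

```python
def smooth(sig, w):
    """Box-filter smoothing with window width w (shrinks at the borders)."""
    half = w // 2
    return [sum(sig[max(0, i - half):i + half + 1])
            / len(sig[max(0, i - half):i + half + 1]) for i in range(len(sig))]

def local_minima(sig):
    """Indices that are strictly lower than both neighbours."""
    return {i for i in range(1, len(sig) - 1)
            if sig[i] < sig[i - 1] and sig[i] < sig[i + 1]}

def persistent_minima(profile, scales=(1, 3, 5, 7)):
    """Keep fine-scale minima that reappear (within one sample) at every coarser scale."""
    sets = [local_minima(smooth(profile, w)) for w in scales]
    return sorted(i for i in sets[0]
                  if all(any(abs(i - j) <= 1 for j in s) for s in sets[1:]))
```

Shallow noise-induced dips disappear after smoothing, so only the dominant minima survive all four scales.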
Automated detection of pulmonary nodules in CT images with support vector machines
NASA Astrophysics Data System (ADS)
Liu, Lu; Liu, Wanyu; Sun, Xiaoming
2008-10-01
Many methods have been proposed to prevent radiologists from failing to diagnose small pulmonary nodules. Recently, support vector machines (SVMs) have received increasing attention for pattern recognition. In this paper, we present a computerized system aimed at pulmonary nodule detection; it identifies the lung field, extracts a set of candidate regions with a high sensitivity ratio, and then classifies the candidates using SVMs. The computer-aided diagnosis (CAD) system presented in this paper supports the diagnosis of pulmonary nodules from computed tomography (CT) images as inflammation, tuberculoma, granuloma, sclerosing hemangioma, or malignant tumor. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of SVM classifiers. The achieved classification performance was 100%, 92.75% and 90.23% in the training, validation and testing sets, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
Candidate cave entrances on Mars
Cushing, Glen E.
2012-01-01
This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.
Thapa, S S; Lakhey, R B; Sharma, P; Pokhrel, R K
2016-05-01
Magnetic resonance imaging is routinely done for the diagnosis of lumbar disc prolapse. Many abnormalities of the disc are observed even in asymptomatic patients. This study was conducted to correlate these abnormalities observed on magnetic resonance imaging with the clinical features of lumbar disc prolapse. This prospective analytical study includes 57 cases of lumbar disc prolapse presenting to the Department of Orthopedics, Tribhuvan University Teaching Hospital from March 2011 to August 2012. All patients had magnetic resonance imaging of the lumbar spine, and the findings regarding the type, level and position of lumbar disc prolapse, and any neural canal or foraminal compromise, were recorded. These imaging findings were then correlated with clinical signs and symptoms. The chi-square test was used to find the p-value for the correlation between clinical features and magnetic resonance imaging findings using SPSS 17.0. This study included 57 patients, with a mean age of 36.8 years. Of them, 41 (71.9%) patients had radicular leg pain along a specific dermatome. Magnetic resonance imaging showed 104 lumbar disc prolapse levels. Disc prolapse at the L4-L5 and L5-S1 levels constituted 85.5%. Magnetic resonance imaging findings of neural foramina compromise and nerve root compression were fairly correlated with clinical findings of radicular pain and neurological deficit. Clinical features and magnetic resonance imaging findings of lumbar disc prolapse had a fair correlation, but not all imaging abnormalities have clinical significance.
A New Paradigm for Matching UAV- and Aerial Images
NASA Astrophysics Data System (ADS)
Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.
2016-06-01
This paper investigates the performance of SIFT-based image matching regarding large differences in image scaling and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, perform poorly or even fail in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations, and the rejection of many correct matches when applying the ratio test afterwards. Therefore, a new feature matching method is provided that overcomes these problems and offers thousands of matches through a novel feature point detection strategy, applying a one-to-many matching scheme and substituting the ratio test with geometric constraints to achieve geometrically correct matches at repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels can be achieved.
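The one-to-many scheme with a geometric constraint in place of the ratio test can be sketched as follows (a toy version under strong assumptions: descriptors compared with plain Euclidean distance, and a single dominant translation standing in for the full geometric model used for nadir imagery):

```python
from collections import Counter

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def one_to_many_matches(desc1, desc2, max_dist):
    """Keep every candidate below a distance threshold instead of applying a ratio test,
    so repetitive structures are allowed to produce multiple tentative matches."""
    return [(i, j) for i, d1 in enumerate(desc1)
                   for j, d2 in enumerate(desc2) if euclid(d1, d2) <= max_dist]

def filter_by_translation(matches, pts1, pts2, tol=1.0):
    """Resolve ambiguities geometrically: keep matches consistent with the
    dominant image-to-image offset (voted by rounded translation vectors)."""
    offs = [(round(pts2[j][0] - pts1[i][0]), round(pts2[j][1] - pts1[i][1]))
            for i, j in matches]
    best = Counter(offs).most_common(1)[0][0]
    return [m for m, o in zip(matches, offs)
            if abs(o[0] - best[0]) <= tol and abs(o[1] - best[1]) <= tol]
```

A repeated descriptor first yields two candidates; the translation vote then discards the geometrically inconsistent one.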
Noise-gating to Clean Astrophysical Image Data
NASA Astrophysics Data System (ADS)
DeForest, C. E.
2017-04-01
I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
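The core of a Fourier-domain noise gate can be sketched in 1D: estimate a noise floor and zero out spectral components that do not rise sufficiently above it (a minimal global gate; the locally adaptive, neighborhood-statistics version described in the paper is more involved, and the factor-of-3 threshold is an assumption):

```python
import numpy as np

def noise_gate(signal, noise_floor, factor=3.0):
    """Zero Fourier components whose magnitude stays below factor * noise_floor,
    passing coherent structure while suppressing incoherent noise."""
    spec = np.fft.rfft(signal)
    mask = np.abs(spec) >= factor * noise_floor
    return np.fft.irfft(spec * mask, n=len(signal))
```

A strong sinusoid survives the gate essentially unchanged, while a weak component near the noise floor is removed.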
NASA Astrophysics Data System (ADS)
Du, Hongbo; Al-Jubouri, Hanan; Sellahewa, Harin
2014-05-01
Content-based image retrieval is an automatic process of retrieving images according to image visual contents instead of textual annotations. It has many areas of application, from automatic image annotation and archiving, image classification and categorization, to homeland security and law enforcement. The key issues affecting the performance of such retrieval systems include sensible image features that can effectively capture the right amount of visual content and suitable similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches, methods and techniques have been developed as a result of very intensive research in the past two decades. Among the many existing approaches is a cluster-based approach, where clustering methods are used to group local feature descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against those of the stored images. This paper serves as a review of works in this area. The paper will first summarize the existing work reported in the literature and then present the authors' own investigations in this field. The paper intends to highlight not only achievements made by recent research but also challenges and difficulties still remaining in this area.
Research on image complexity evaluation method based on color information
NASA Astrophysics Data System (ADS)
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute the complexity of an image based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low, medium, and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from the image features. The results obtained by the proposed method are in good agreement with human visual perception of complexity, so the color image complexity measure has a certain reference value.
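A simple color-based complexity value of the kind discussed could be the Shannon entropy of a quantized RGB histogram (an illustrative stand-in for the paper's color characteristic model; the 4-bins-per-channel quantization is an assumption). Higher entropy corresponds to richer color content, which could then be thresholded into the low/medium/high classes:

```python
import math

def color_entropy(pixels, bins=4):
    """Shannon entropy (bits) of a coarsely quantized RGB histogram,
    used here as a proxy for color-image complexity."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in hist.values())
```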
CognitionMaster: an object-based image analysis framework
2013-01-01
Background Automated image analysis methods are becoming more and more important to extract and quantify image features in microscopy-based biomedical studies and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and if user-interactivity on the object-level is desired. Results In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects as the basic processing unit instead of individual pixels. Our approach enables also users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for the immunohistochemistry-based cell proliferation quantification in breast cancer and two-photon fluorescence microscopy data about bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions We introduce an open source software system that offers object-based image analysis. The object-based concept allows for a straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542
Image-optimized Coronal Magnetic Field Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Imaging findings in craniofacial childhood rhabdomyosarcoma
Merks, Johannes H. M.; Saeed, Peerooz; Balm, Alfons J. M.; Bras, Johannes; Pieters, Bradley R.; Adam, Judit A.; van Rijn, Rick R.
2010-01-01
Rhabdomyosarcoma (RMS) is the commonest paediatric soft-tissue sarcoma constituting 3–5% of all malignancies in childhood. RMS has a predilection for the head and neck area and tumours in this location account for 40% of all childhood RMS cases. In this review we address the clinical and imaging presentations of craniofacial RMS, discuss the most appropriate imaging techniques, present characteristic imaging features and offer an overview of differential diagnostic considerations. Post-treatment changes will be briefly addressed. PMID:20725831
Bruse, Jan L; McLeod, Kristin; Biglino, Giovanni; Ntsinjana, Hopewell N; Capelli, Claudio; Hsia, Tain-Yen; Sermesant, Maxime; Pennec, Xavier; Taylor, Andrew M; Schievano, Silvia
2016-05-31
Medical image analysis in clinical practice is commonly carried out on 2D image data, without fully exploiting the detailed 3D anatomical information that is provided by modern non-invasive medical imaging techniques. In this paper, a statistical shape analysis method is presented, which enables the extraction of 3D anatomical shape features from cardiovascular magnetic resonance (CMR) image data, with no need for manual landmarking. The method was applied to repaired aortic coarctation arches that present complex shapes, with the aim of capturing shape features as biomarkers of potential functional relevance. The method is presented from the user-perspective and is evaluated by comparing results with traditional morphometric measurements. Steps required to set up the statistical shape modelling analyses, from pre-processing of the CMR images to parameter setting and strategies to account for size differences and outliers, are described in detail. The anatomical mean shape of 20 aortic arches post-aortic coarctation repair (CoA) was computed based on surface models reconstructed from CMR data. By analysing transformations that deform the mean shape towards each of the individual patient's anatomy, shape patterns related to differences in body surface area (BSA) and ejection fraction (EF) were extracted. The resulting shape vectors, describing shape features in 3D, were compared with traditionally measured 2D and 3D morphometric parameters. The computed 3D mean shape was close to population mean values of geometric shape descriptors and visually integrated characteristic shape features associated with our population of CoA shapes. After removing size effects due to differences in body surface area (BSA) between patients, distinct 3D shape features of the aortic arch correlated significantly with EF (r = 0.521, p = .022) and were well in agreement with trends as shown by traditional shape descriptors. 
The suggested method has the potential to discover previously unknown 3D shape biomarkers from medical imaging data. Thus, it could contribute to improving diagnosis and risk stratification in complex cardiac disease.
Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor)
2015-01-01
A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.
Visual Pattern Analysis in Histopathology Images Using Bag of Features
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.
This paper presents a framework to analyse visual patterns in a collection of medical images in a two stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, an analysis of the relationships between visual patterns and semantic concepts in the image collection is performed. The most important visual patterns for each semantic concept are identified using correlation analysis. A matrix visualization of the structure and organization of the image collection is generated using a cluster analysis. The experimental evaluation was conducted on a histopathology image collection and results showed clear relationships between visual patterns and semantic concepts, that in addition, are of easy interpretation and understanding.
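The first stage, representing an image under the bag-of-features approach, can be sketched as assigning each local descriptor to its nearest visual word and counting occurrences (a minimal illustration; the toy dictionary and Euclidean assignment are assumptions, and the real dictionary would come from clustering, e.g. k-means):

```python
def bow_histogram(descriptors, dictionary):
    """Histogram of visual-word assignments for one image:
    each local descriptor votes for its nearest dictionary word."""
    hist = [0] * len(dictionary)
    for d in descriptors:
        nearest = min(range(len(dictionary)),
                      key=lambda k: sum((x - y) ** 2 for x, y in zip(d, dictionary[k])))
        hist[nearest] += 1
    return hist
```

The resulting histograms are what the second stage would correlate with semantic concepts.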
Image segmentation via foreground and background semantic descriptors
NASA Astrophysics Data System (ADS)
Yuan, Ding; Qiang, Jingjing; Yin, Jihao
2017-09-01
In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.
Sensor feature fusion for detecting buried objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.
1993-04-01
Given multiple registered images of the earth's surface from dual-band sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. The sensor suite currently includes two infrared sensors (5 micron and 10 micron wavelengths) and one ground penetrating radar (GPR) of the wide-band pulsed synthetic aperture type. We use a supervised learning pattern recognition approach to detect metal and plastic land mines buried in soil. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a sub-image. The first step, referred to as feature selection, determines the features of sub-images which result in the greatest separability among the classes. The second step, image labeling, uses the selected features and the decisions from a pattern classifier to label the regions in the image which are likely to correspond to buried mines. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the sensors add value to the detection system. The most important features from the various sensors are fused using supervised learning pattern classifiers (including neural networks). We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing feature information from multiple sensor types, including dual-band infrared and ground penetrating radar. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved operational problem of detecting buried land mines from an airborne standoff platform.
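The feature-selection step, choosing the sub-image features with the greatest separability among classes, can be illustrated with a Fisher-style ratio of between-class to within-class scatter (an illustrative stand-in for the selection algorithms actually used; the toy feature values are assumptions):

```python
def fisher_score(values, labels):
    """Between-class over within-class scatter for one feature:
    large values mean the feature separates the classes well."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(y, []).append(v)
    mean_all = sum(values) / len(values)
    num = den = 0.0
    for g in groups.values():
        m = sum(g) / len(g)
        num += len(g) * (m - mean_all) ** 2        # between-class scatter
        den += sum((v - m) ** 2 for v in g)        # within-class scatter
    return num / den if den else float('inf')

def select_top(features, labels, k):
    """features: list of per-feature value lists; return indices of the k best."""
    scores = [fisher_score(f, labels) for f in features]
    return sorted(range(len(features)), key=lambda i: -scores[i])[:k]
```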
Multi-test cervical cancer diagnosis with missing data estimation
NASA Astrophysics Data System (ADS)
Xu, Tao; Huang, Xiaolei; Kim, Edward; Long, L. Rodney; Antani, Sameer
2015-03-01
Cervical cancer is one of the most common types of cancer among women worldwide. Existing screening programs for cervical cancer suffer from low sensitivity. Using images of the cervix (cervigrams) as an aid in detecting pre-cancerous changes to the cervix has good potential to improve sensitivity and help reduce the number of cervical cancer cases. In this paper, we present a method that utilizes multi-modality information extracted from multiple tests of a patient's visit to classify the visit as either low-risk or high-risk. Our algorithm integrates image features and text features to make a diagnosis. We also present two strategies to estimate the missing values in text features: Image Classifier Supervised Mean Imputation (ICSMI) and Image Classifier Supervised Linear Interpolation (ICSLI). We evaluate our method on a large medical dataset and compare it with several alternative approaches. The results show that the proposed method with the ICSLI strategy achieves the best result of 83.03% specificity and 76.36% sensitivity. When higher specificity is desired, our method can achieve 90% specificity with 62.12% sensitivity.
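The ICSMI idea, filling a missing text-feature value with the mean of that feature among visits the image classifier assigned to the same class, can be sketched per feature column (a hypothetical simplification; the class labels and values below are illustrative, not from the paper's dataset):

```python
def impute(values, predicted):
    """Class-conditional mean imputation for one feature column.
    values: floats with None for missing entries.
    predicted: image-classifier label for each visit."""
    sums, counts = {}, {}
    for v, c in zip(values, predicted):
        if v is not None:
            sums[c] = sums.get(c, 0.0) + v
            counts[c] = counts.get(c, 0) + 1
    return [v if v is not None else sums[c] / counts[c]
            for v, c in zip(values, predicted)]
```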
Shi, Lei; Wan, Youchuan; Gao, Xianjun
2018-01-01
In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
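A prematurity index of the kind described, used to trigger stronger mutation once the GA population looks converged, might be sketched as a mean-to-best fitness ratio (the paper's exact index, threshold, and mutation rates may differ; the values below are assumptions):

```python
def prematurity_index(fitness):
    """Mean/best fitness ratio; values near 1.0 indicate the population
    has collapsed toward a single solution (premature convergence)."""
    best = max(fitness)
    return (sum(fitness) / len(fitness)) / best if best else 1.0

def adapt_mutation(fitness, base_rate=0.05, boosted_rate=0.3, threshold=0.95):
    """Boost the mutation probability when premature convergence is detected;
    in the GATS scheme, tabu search would also refine the fittest individuals here."""
    return boosted_rate if prematurity_index(fitness) > threshold else base_rate
```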
Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra
2017-05-01
Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them. So, there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked based on the t-value feature selection algorithm. Then, these features are used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross-validation, respectively.
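The t-value ranking step can be sketched as a two-sample t-statistic per feature between the normal and glaucoma groups, with features sorted by decreasing |t| (a generic Welch-style formulation; the paper's exact variant and the toy data are assumptions):

```python
import math

def t_value(a, b):
    """Absolute Welch t-statistic between two groups of one feature's values."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def rank_features(features, labels):
    """features: per-feature value lists; labels: 0 = normal, 1 = glaucoma.
    Returns feature indices ranked by decreasing discriminative power."""
    def split(f):
        return ([v for v, y in zip(f, labels) if y == 0],
                [v for v, y in zip(f, labels) if y == 1])
    return sorted(range(len(features)), key=lambda i: -t_value(*split(features[i])))
```

The top-ranked features would then be fed to the LS-SVM classifier.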
NASA Technical Reports Server (NTRS)
Fitzpatrick, Austin J.; Novati, Alexander; Fisher, Diane K.; Leon, Nancy J.; Netting, Ruth
2013-01-01
Space Place Prime is public engagement and education software for use on the iPad. It targets a multi-generational audience with news, images, videos, and educational articles from the Space Place Web site and other NASA sources. New content is downloaded daily (or whenever the user accesses the app) via the wireless connection. In addition to the Space Place Web site, several NASA RSS feeds are tapped to provide new content. Content is retained for the previous several days, or some number of editions of each feed. All content is controlled on the server side, so features about the latest news, or changes to any content, can be made without updating the app in the Apple Store. It gathers many popular NASA features into one app. The interface is a boundless, slidable-in-any-direction grid of images, unique for each feature, and iconized as image, video, or article. A tap opens the feature. An alternate list mode presents menus of images, videos, and articles separately. Favorites can be tagged for permanent archiving. Facebook, Twitter, and e-mail connections make any feature shareable.
Image retrieval for identifying house plants
NASA Astrophysics Data System (ADS)
Kebapci, Hanife; Yanikoglu, Berrin; Unal, Gozde
2010-02-01
We present a content-based image retrieval system for plant identification which is intended for providing users with a simple method to locate information about their house plants. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We studied the suitability of various well-known color, texture and shape features for this problem, as well as introducing some new ones. The features are extracted from the general plant region that is segmented from the background using the max-flow min-cut technique. Results on a database of 132 different plant images show promise (in about 72% of the queries, the correct plant image is retrieved among the top-15 results).
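The retrieval step, ranking database images by feature similarity and returning the top-15, can be sketched with a plain Euclidean nearest-neighbor search (a minimal illustration; the actual system combines color, texture, and shape features with its own similarity measure):

```python
def retrieve(query, database, k=15):
    """Return indices of the k database feature vectors closest to the query,
    in increasing order of Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(database)), key=lambda i: dist(query, database[i]))
    return ranked[:k]
```

A query succeeds, in the paper's sense, if the correct plant appears among these top-15 indices.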
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data is extracted and all land area is labeled by using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background at the edge of the coastline. Based on this multi-Gaussian sea background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Comparison and analysis of the experimental results by subjective vision show that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
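The background-model classification step can be sketched with a single diagonal Gaussian fitted to sea-pixel feature vectors, labeling pixels whose log-likelihood falls below a threshold as land (a simplification: the paper uses a multi-Gaussian model, and the threshold and toy 3D features here are assumptions):

```python
import math

def fit_gaussian(samples):
    """Per-dimension mean and variance (diagonal covariance) of the sea background."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    var = [sum((s[j] - mean[j]) ** 2 for s in samples) / n or 1e-9 for j in range(d)]
    return mean, var

def log_likelihood(x, model):
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(pixel_features, sea_model, threshold):
    """Pixels that fit the sea model poorly are labeled land."""
    return ['sea' if log_likelihood(f, sea_model) >= threshold else 'land'
            for f in pixel_features]
```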
Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng
2015-12-01
We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod shaped motile bacterial cells. In some imaging-based studies, bacteria cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement from traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.
Online prediction of organoleptic data for snack food using color images
NASA Astrophysics Data System (ADS)
Yu, Honglu; MacGregor, John F.
2004-11-01
In this paper, a study on the real-time prediction of organoleptic properties of snack food from RGB color images is presented. The so-called organoleptic properties, which are based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive: by taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images, and the averaged absolute value of each filtered image is used as a texture feature variable. In order to handle the high correlation among the feature variables, Partial Least Squares (PLS) is used to regress the extracted feature variables against the four response variables.
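The texture features described above (wavelet filtering followed by the averaged absolute value of each filtered image) can be illustrated with a one-level Haar decomposition; the paper's wavelet family and decomposition depth are not given here, so Haar is an assumption:

```python
def haar_texture_features(img):
    """One-level 2-D Haar decomposition of a grayscale image (even
    dimensions assumed), followed by the mean absolute value of each
    subband; a minimal sketch of averaged wavelet texture features."""
    # pairwise averages / differences along rows
    lo = [[(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in img]
    hi = [[(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in img]

    # then along columns, yielding the four subbands LL, LH, HL, HH
    def cols(mat, sign):
        return [[(a + sign * b) / 2 for a, b in zip(mat[j], mat[j + 1])]
                for j in range(0, len(mat), 2)]

    subbands = [cols(lo, 1), cols(lo, -1), cols(hi, 1), cols(hi, -1)]
    return [sum(abs(v) for row in s for v in row) / sum(len(r) for r in s)
            for s in subbands]
```

A flat image produces energy only in the approximation subband, while vertical stripes light up the vertical-detail subband; these per-subband averages are the kind of feature variables the PLS regression consumes.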
Thekkek, Nadhi; Lee, Michelle H.; Polydorides, Alexandros D.; Rosen, Daniel G.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-01-01
Abstract. Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett’s-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early-detection of Barrett’s-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett’s esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC=0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett’s-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance. PMID:25950645
Regional shape-based feature space for segmenting biomedical images using neural networks
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.
1993-07-01
In biomedical images, structures of interest, particularly soft tissue structures such as the heart, airways, and bronchial and arterial trees, often have grey-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only grey-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network, which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
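The RSV computation is simple enough to state directly; a minimal sketch for the 2-D, 8-direction case, assuming the thresholded region is given as a binary mask:

```python
def regional_shape_vector(mask, y, x):
    """8-direction distance from pixel (y, x) to the region boundary,
    in the spirit of the paper's regional shape vectors; distance is
    counted in pixel steps until the ray leaves the region or image."""
    dirs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    h, w = len(mask), len(mask[0])
    rsv = []
    for dy, dx in dirs:
        d, cy, cx = 0, y + dy, x + dx
        while 0 <= cy < h and 0 <= cx < w and mask[cy][cx]:
            d += 1
            cy += dy
            cx += dx
        rsv.append(d)
    return rsv
```

Each pixel's 8-vector is what would be fed to the perceptron as a training example; the 3-D case extends the direction list to 26 neighbors.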
PTBS segmentation scheme for synthetic aperture radar
NASA Astrophysics Data System (ADS)
Friedland, Noah S.; Rothwell, Brian J.
1995-07-01
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a 'most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, variation in facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established on cloud generators. With the forward cloud generator, facial expression images can be re-generated in any quantity to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, concluding remarks are given.
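The forward cloud generator mentioned above is a standard construct of the cloud model; a generic sketch (not the paper's facial-feature pipeline) that draws cloud drops from the three numerical characteristics Ex, En, He:

```python
import math
import random

def forward_cloud(ex, en, he, n, seed=0):
    """Standard forward normal cloud generator: Ex, En, He are the
    expectation, entropy, and hyper-entropy of the cloud model. Draws
    n cloud drops as (value, membership) pairs."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)          # perturbed entropy
        x = rng.gauss(ex, abs(en_i))      # drop value
        # certainty degree of the drop with respect to the concept
        mu = math.exp(-(x - ex) ** 2 / (2 * en_i ** 2)) if en_i else 1.0
        drops.append((x, mu))
    return drops
```

Running the generator many times is what allows the paper to "re-generate" expression-like samples from a learned (Ex, En, He) triple.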
Primary breast leiomyosarcoma and synchronous homolateral lung cancer: a case report
Meroni, Stefano; Voulaz, Emanuele; Alloisio, Marco; De Sanctis, Rita; Bossi, Paola; Cariboni, Umberto; De Simone, Matilde; Cioffi, Ugo
2017-01-01
Radiological and histological features of breast leiomyosarcoma can mimic a wide variety of other breast lesions, such as mesenchymal tumors, breast lymphomas, poorly differentiated carcinomas and metaplastic breast carcinomas. The authors present the case of a 62-year-old woman with a primary breast leiomyosarcoma and synchronous ipsilateral lung adenocarcinoma, the latter an incidental finding during pre-surgical staging examinations. Clinicopathological, immunophenotypic and imaging features of the cancers are described, and a brief review of the literature on imaging findings and management of breast leiomyosarcoma is presented. The authors discuss the differential diagnoses in breast imaging and of the extra-mammary incidental findings. Surgical resection remains the cornerstone of treatment, while the roles of radiation therapy and chemotherapy remain to be defined on a per-patient basis. PMID:29312765
NASA Astrophysics Data System (ADS)
Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan
2017-05-01
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original, with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.
NASA Astrophysics Data System (ADS)
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2016-03-01
We explore the combination of text metadata, such as patients' age and gender, with image-based features for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network, shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which compares the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with the best results obtained using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high-dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and to one another, a characteristic not available in a traditional 1-D ranking.
A Kinect based sign language recognition system using spatio-temporal features
NASA Astrophysics Data System (ADS)
Memiş, Abbas; Albayrak, Songül
2013-12-01
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, which is effective for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is applied. Performance of the proposed system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system achieves promising success rates.
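The spatial-domain feature step, a 2D DCT of the accumulated motion image followed by zigzag scanning of the coefficients, can be sketched naively (a real system would use a fast transform):

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of a square block."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def zigzag(coeffs, k):
    """First k coefficients in JPEG-style zigzag order, as used to
    build the gesture feature vector."""
    n = len(coeffs)
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [coeffs[u][v] for u, v in order[:k]]
```

Zigzag scanning keeps the low-frequency coefficients, which carry most of the accumulated-motion energy, at the front of the feature vector; the number k kept per image is a design parameter not specified here.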
A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V; Zhang, Y; Goldgof, D B
2004-04-02
A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Owing to the development of deep learning networks, salient object detection based on deep networks, which are used to extract features, has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction. In deep networks, however, a dramatic increase in depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while simultaneously mitigating the errors caused by that increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of the images and improve the accuracy of salient target detection. We refine features at the pixel level with a multi-scale feature correction method to avoid the feature errors introduced when the image is simplified at the region level. The final fully connected layer not only integrates multi-scale, multi-level features but also acts as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on the original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Intelligent Image Analysis for Image-Guided Laser Hair Removal and Skin Therapy
NASA Technical Reports Server (NTRS)
Walker, Brian; Lu, Thomas; Chao, Tien-Hsin
2012-01-01
We present the development of advanced automatic target recognition (ATR) algorithms for hair follicle identification in digital skin images, to accurately direct a laser beam to remove the hair. The ATR system first performs wavelet filtering to enhance the contrast of the hair features in the image. It then extracts the unique features of the targets and sends them to an Adaboost-based classifier for training and recognition. The ATR system automatically classifies hair, moles, or other skin lesions and provides accurate coordinates of the intended hair follicle locations. These coordinates can be used to guide a scanning laser to focus energy only on the hair follicles. The intended benefit is to protect the skin from unwanted laser exposure and to provide more effective skin therapy.
Fire detection system using random forest classification for image sequences of complex background
NASA Astrophysics Data System (ADS)
Kim, Onecue; Kang, Dong-Joong
2013-06-01
We present a fire alarm system based on image processing that detects fire accidents in various environments. To reduce false alarms that frequently appeared in earlier systems, we combined image features including color, motion, and blinking information. We specifically define the color conditions of fires in hue, saturation and value, and RGB color space. Fire features are represented as intensity variation, color mean and variance, motion, and image differences. Moreover, blinking fire features are modeled by using crossing patches. We propose an algorithm that classifies patches into fire or nonfire areas by using random forest supervised learning. We design an embedded surveillance device made with acrylonitrile butadiene styrene housing for stable fire detection in outdoor environments. The experimental results show that our algorithm works robustly in complex environments and is able to detect fires in real time.
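As an illustration of the kind of RGB color condition described above, a minimal rule for bright, red-dominant, non-gray pixels; the thresholds below are hypothetical, not the ones used in the paper:

```python
def is_fire_colored(r, g, b, r_min=180, sat_min=40):
    """Illustrative RGB fire-color rule in the spirit of the paper's
    color conditions: a fire pixel is bright (r > r_min), red-dominant
    (r > g > b), and sufficiently saturated (r - b > sat_min) so that
    gray highlights are rejected. Thresholds are hypothetical."""
    return r > r_min and r > g > b and (r - b) > sat_min
```

In the full system such a per-pixel color test is only one cue; it is combined with motion and blinking features over patches before the random forest makes the fire/nonfire decision.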
Development of a customizable software application for medical imaging analysis and visualization.
Martinez-Escobar, Marisol; Peloquin, Catherine; Juhnke, Bethany; Peddicord, Joanna; Jose, Sonia; Noon, Christian; Foo, Jung Leng; Winer, Eliot
2011-01-01
Graphics technology has extended medical imaging tools to the hands of surgeons and doctors beyond the radiology suite. However, a common issue in most medical imaging software is the added complexity for non-radiologists. This paper presents the development of a unique software toolset that is highly customizable and targeted at general physicians as well as medical specialists. The core functionality includes features such as viewing medical images in two- and three-dimensional representations, clipping, tissue windowing, and coloring. Additional features can be loaded in the form of 'plug-ins', such as tumor segmentation, tissue deformation, and surgical planning. This allows the software to be lightweight and easy to use while still giving the user the flexibility of adding the necessary features, thus catering to a wide range of users.
Texture analysis based on the Hermite transform for image classification and segmentation
NASA Astrophysics Data System (ADS)
Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus
2012-06-01
Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas, including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. To extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes local basic texture features in terms of Gaussian derivatives. Multiresolution analysis combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increases the method's robustness against texture orientation. This method has an advantage over classical filter bank design, in which a fixed number of orientations for the analysis must be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability and to reduce the dimensionality of the feature vectors and the computational cost during the classification stage. We exhaustively evaluated the correct classification rate on randomly selected training and testing subsets of real textures using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, show the benefits of our proposal.
Polarized Heliospheric Imaging: Lessons, Benefits, Challenges, and Status (Invited)
NASA Astrophysics Data System (ADS)
DeForest, C. E.; Howard, T. A.
2013-12-01
STEREO has delivered on the promise of continuous, photometric imaging of coronal and heliospheric transients from Sun to Earth. It is time to explore polarized heliospheric imaging. Applications include 3-D location of individual features and improved separation of signal from background. These scientific applications have different advantages and challenges in the heliosphere than the corona. We present analytical and numerical results on 3-D location of features both large and small with polarized heliospheric imaging; describe advantages to polarimetry for both in-ecliptic and out-of-ecliptic missions; and discuss some of the design considerations for PHI-C, our proposed mission to prototype this technology from LEO.
Automated detection of new impact sites on Martian surface from HiRISE images
NASA Astrophysics Data System (ADS)
Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu
2017-10-01
In this study, an automated method for detecting new Martian impact sites from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier that combines local binary pattern features and Haar-like features, trained with an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in flat regions are 93.0% and 91.5%, respectively.
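The local binary pattern features used by the cascade are computed per pixel; a sketch of the basic 8-neighbor LBP code:

```python
def lbp_code(img, y, x):
    """Basic 8-neighbor local binary pattern code at pixel (y, x):
    each neighbor, visited clockwise from the top-left, contributes
    one bit that is set when the neighbor is >= the center value."""
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y][x]
    code = 0
    for i, (dy, dx) in enumerate(nbrs):
        if img[y + dy][x + dx] >= c:
            code |= 1 << i
    return code
```

Histograms of such codes over image windows, together with Haar-like intensity differences, are the kind of weak-feature pool an AdaBoost cascade selects from; the exact windowing used in the paper is not reproduced here.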
Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.
Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin
2016-01-01
Image registration is a key component of computer assistance in image-guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present an image registration method named Homographic Patch Feature Transform (HPFT) for matching gastroscopic images. HPFT can be used for tracking lesions and for augmented reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejecting false matching pairs from the corresponding results. Finally, HPFT is applied to in vivo gastroscopic data. The experimental results show that HPFT performs stably in gastroscopic applications.
Maher, M M; Kennedy, J; Hynes, D; Murray, J G; O'Connell, D
2000-03-01
We describe the imaging features of a giant geode of the distal humerus in a patient with rheumatoid arthritis, which presented initially as a pathological fracture. The value of magnetic resonance imaging in establishing this diagnosis is emphasized.
The sunward continuum feature of Comet 45P/Honda-Mrkos-Pajdušáková
NASA Astrophysics Data System (ADS)
Mueller, Beatrice E. A.; Samarasinha, Nalin H.; Harris, Walter M.; Springmann, Alessondra; Lejoly, Cassandra; Bodnarik, Julia; Howell, Ellen S.; Ryan, Erin L.; Kikwaya Eluo, Jean-Baptiste; Ryleigh Fitzpatrick, M.; Watson, Zachary Tyler; Maciel, Ricardo; Macieira Mitchell, Adriana; Scotti, James Vernon
2017-10-01
We will present results of our investigation of the sunward continuum feature of comet 45P/Honda-Mrkos-Pajdušáková (HMP). HMP was observed in 2017 at the University of Arizona’s Kuiper 61’’ telescope on Mount Bigelow on February 8, 9, 10, 16, and March 7 with the Mont4K camera, and at the Bok 2.3m telescope on Kitt Peak on February 16 and 17 with the 90Prime imager. The heliocentric distance of HMP varied from 0.94 au to 1.32 au, the geocentric distance from 0.08 au to 0.34 au, and the solar phase angle from 15 deg to 119 deg during that time period. The sunward continuum feature is present in all our images. Position angle variations and radial spatial profiles of the feature, as well as deduced physical parameters will be discussed.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2017-01-01
Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest, such as tumors, fractures, and calcified spots, in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and from the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes that allow efficient retrieval in large-scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches. PMID:28771497
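The final hashing step can be illustrated with random-projection LSH, one common locality sensitive hashing scheme for obtaining short binary codes; the paper's exact scheme may differ:

```python
import random

def lsh_code(vec, n_bits=16, seed=0):
    """Random-projection LSH sketch: the descriptor is reduced to
    n_bits binary values by thresholding its projection onto n_bits
    random hyperplanes. A fixed seed makes the hash family
    reproducible across descriptors."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in vec] for _ in range(n_bits)]
    return [1 if sum(p * v for p, v in zip(plane, vec)) > 0 else 0
            for plane in planes]
```

Because nearby descriptors tend to fall on the same side of most hyperplanes, Hamming distance between such codes approximates descriptor similarity, which is what makes retrieval over large collections cheap.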
An explorative childhood pneumonia analysis based on ultrasonic imaging texture features
NASA Astrophysics Data System (ADS)
Zenteno, Omar; Diaz, Kristians; Lavarello, Roberto; Zimic, Mirko; Correa, Malena; Mayta, Holger; Anticona, Cynthia; Pajuelo, Monica; Oberhelman, Richard; Checkley, William; Gilman, Robert H.; Figueroa, Dante; Castañeda, Benjamín.
2015-12-01
According to the World Health Organization, pneumonia is the respiratory disease with the highest pediatric mortality rate, accounting for 15% of all deaths of children under 5 years old worldwide. The diagnosis of pneumonia is commonly made by clinical criteria with support from ancillary studies and laboratory findings. Chest imaging is commonly done with chest X-rays and occasionally with a chest CT scan. Lung ultrasound is a promising alternative for chest imaging; however, interpretation is subjective and requires adequate training. In the present work, a two-class classification algorithm based on four gray-level co-occurrence matrix texture features (i.e., contrast, correlation, energy and homogeneity) extracted from lung ultrasound images of children aged between six months and five years is presented. Ultrasound data were collected using a L14-5/38 linear transducer. The data consisted of 22 positive- and 68 negative-diagnosed B-mode cine-loops selected by a medical expert and captured in the facilities of the Instituto Nacional de Salud del Niño (Lima, Peru), for a total of 90 videos obtained from twelve children diagnosed with pneumonia. The classification capacity of each feature was explored independently, and the optimal threshold was selected by receiver operating characteristic (ROC) curve analysis. In addition, a principal component analysis was performed to evaluate the combined performance of all the features. Contrast and correlation were the two most significant features. The classification performance of these two features combined by principal components was evaluated. The results revealed 82% sensitivity, 76% specificity, 78% accuracy and an area under the ROC curve of 0.85.
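The four GLCM features above are standard Haralick statistics; a sketch for a single horizontal pixel offset, omitting correlation for brevity (the offset, gray-level count, and normalization here are assumptions, not the paper's settings):

```python
def glcm_features(img, levels=4):
    """Gray-level co-occurrence matrix for the horizontal (0, 1)
    offset plus three of the four Haralick features used above
    (correlation omitted for brevity); for small integer images."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    p = [[v / total for v in r] for r in glcm]  # normalized co-occurrences
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(levels) for j in range(levels))
    energy = sum(v * v for r in p for v in r)
    homogeneity = sum(p[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels))
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity}
```

A uniform patch gives zero contrast and maximal energy, while speckle-like alternation drives contrast up, which is the behavior the classifier thresholds on.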
Congenital portosystemic shunts: imaging findings and clinical presentations in 11 patients.
Konstas, Angelos A; Digumarthy, Subba R; Avery, Laura L; Wallace, Karen L; Lisovsky, Mikhail; Misdraji, Joseph; Hahn, Peter F
2011-11-01
To evaluate the clinical anatomy and presentations of congenital portosystemic shunts, and to determine features that promote recognition on imaging. Institutional review board approval was obtained for this HIPAA-compliant study; the requirement for written informed consent was waived. Radiology reports were retrospectively reviewed from non-cirrhotic patients who underwent imaging studies from January 1999 through February 2009. Clinical sources reviewed included electronic medical records, archived images and histopathological material. Eleven patients with congenital portosystemic shunts were identified (six male and five female; age range 20 days to 84 years). Seven patients had extrahepatic and four had intrahepatic shunts. All 11 patients had absent or hypoplastic intrahepatic portal veins, a feature detected by CT and MRI but not by US. Seven patients presented with shunt complications and four with presentations unrelated to shunt pathophysiology. Three adult patients had four splenic artery aneurysms. Prospective radiological evaluation with cross-sectional imaging had failed to recognize the congenital portosystemic shunts of five adult patients on one or more imaging examinations. Congenital portosystemic shunts are associated with splenic artery aneurysms, a previously unrecognized association. Portosystemic shunts were undetected during prospective radiologic evaluation in the majority of adult patients, highlighting the need to alert radiologists to this congenital anomaly. Copyright © 2010. Published by Elsevier Ireland Ltd.
Image-based modeling of tumor shrinkage in head and neck radiation therapy
Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei
2010-01-01
Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique faithfully identified the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the “ground truth” with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
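The bidirectional association step described above amounts to keeping only mutual nearest neighbors in descriptor space. A minimal sketch, with random vectors standing in for SIFT descriptors (the 128-D size and the synthetic data are assumptions for illustration):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Bidirectional association: keep pairs that are each other's nearest
    neighbor in descriptor space (template->target and target->template)."""
    # pairwise squared Euclidean distances between all descriptors
    d = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    a2b = d.argmin(axis=1)   # nearest target feature for each template feature
    b2a = d.argmin(axis=0)   # nearest template feature for each target feature
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]

rng = np.random.default_rng(1)
desc_a = rng.normal(size=(10, 128))                        # template descriptors
desc_b = desc_a[::-1] + 0.01 * rng.normal(size=(10, 128))  # permuted + noise
pairs = mutual_matches(desc_a, desc_b)
```

Matches that survive the two-way check would then anchor the thin plate spline interpolation for the remaining voxels.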
Pareek, Gyan; Acharya, U Rajendra; Sree, S Vinitha; Swapna, G; Yantri, Ratna; Martis, Roshan Joy; Saba, Luca; Krishnamurthi, Ganapathy; Mallarini, Giorgio; El-Baz, Ayman; Al Ekish, Shadi; Beland, Michael; Suri, Jasjit S
2013-12-01
In this work, we have proposed an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image into cancerous or non-cancerous with the help of non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. The UroImage system consists of an on-line system where five significant features (one DWT-based feature and four HOS-based features) are extracted from the test image. These on-line features are transformed by the classifier parameters obtained using the training dataset to determine the class. We trained and tested six classifiers on a dataset of 144 TRUS images split into training and testing sets. Three-fold and ten-fold cross-validation protocols were adopted for training and for estimating the accuracy of the classifiers. The ground truth used for training was obtained from biopsy results. Among the six classifiers, using the ten-fold cross-validation technique, the Support Vector Machine and Fuzzy Sugeno classifiers presented the best classification accuracy of 97.9%, with equally high values for sensitivity, specificity and positive predictive value. Our proposed automated system, which achieved values of more than 95% for all performance measures, can be an adjunct tool to provide an initial diagnosis for the identification of patients with prostate cancer. The technique, however, is limited by the limitations of 2D ultrasound-guided biopsy, and we intend to improve our technique by using 3D TRUS images in the future.
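The k-fold accuracy-estimation protocol used above is easy to sketch. To keep the example dependency-free, a nearest-centroid classifier stands in for the paper's SVM, and the five-feature data are synthetic; everything here is illustrative, not the UroImage pipeline:

```python
import numpy as np

def kfold_accuracy(X, y, k=10, seed=0):
    """k-fold cross-validation; a nearest-centroid classifier stands in
    for the paper's SVM so the sketch needs only NumPy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)                # everything outside the fold
        c0 = X[train][y[train] == 0].mean(axis=0)   # class centroids
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = ((X[f] - c0) ** 2).sum(axis=1)
        d1 = ((X[f] - c1) ** 2).sum(axis=1)
        accs.append(((d1 < d0).astype(int) == y[f]).mean())
    return float(np.mean(accs))

# two well-separated synthetic clouds of 5 "features" each, 144 samples total
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (72, 5)), rng.normal(4, 1, (72, 5))])
y = np.array([0] * 72 + [1] * 72)
acc = kfold_accuracy(X, y, k=10)
```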
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Il-Joong; Seon, Kwang-Il; Han, Wonyong
We present the improved far-ultraviolet (FUV) emission-line images of the entire Vela supernova remnant (SNR) using newly processed Spectroscopy of Plasma Evolution from Astrophysical Radiation/Far-Ultraviolet Imaging Spectrograph (SPEAR/FIMS) data. The incomplete C III λ977 and O VI λλ1032, 1038 images presented in the previous study are updated to cover the whole region. The C IV λλ1548, 1551 image with a higher resolution and new images at Si IV λλ1394, 1403, O IV] λ1404, He II λ1640.5, and O III] λλ1661, 1666 are also shown. Comparison of emission-line ratios for two enhanced FUV regions reveals that the FUV emissions of the east-enhanced FUV region may be affected by nonradiative shocks of another very young SNR, the Vela Jr. SNR (RX J0852.0-4622, G266.6-1.2). This result is the first FUV detection likely associated with the Vela Jr. SNR, supporting previous arguments that the Vela Jr. SNR is close to us. The comparison of the improved FUV images with soft X-ray images shows that an FUV filamentary feature forms the boundary of the northeast-southwest asymmetrical sections of the X-ray shell. The southwest FUV features are characterized as the region where the Vela SNR is interacting with a slightly denser ambient medium within the dim X-ray southwest section. From a comparison with the Hα image, we identify a ring-like Hα feature overlapped with an extended hot X-ray feature of similar size and two local peaks of C IV emission. Their morphologies are expected if the Hα ring is in direct contact with the near or far side of the Vela SNR.
Correlative feature analysis of FFDM images
NASA Astrophysics Data System (ADS)
Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene
2008-03-01
Identifying the corresponding image pair of a lesion is an essential step for combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, was first applied to extract mass lesions from the surrounding tissue. Then, various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture and context of the lesion, as well as its distance to the nipple. We employed a two-step method to select an effective subset of features, and combined it with a Bayesian artificial neural network (BANN) to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. Using an FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes the distance feature, lesion size and lesion contrast, yielded an AUC of 0.86. The improvement from using multiple features was statistically significant compared to single-feature performance (p < 0.001).
NASA Astrophysics Data System (ADS)
Chen, Po-Hao; Botzolakis, Emmanuel; Mohan, Suyash; Bryan, R. N.; Cook, Tessa
2016-03-01
In radiology, diagnostic errors occur either through the failure of detection or incorrect interpretation. Errors are estimated to occur in 30-35% of all exams and contribute to 40-54% of medical malpractice litigations. In this work, we focus on reducing incorrect interpretation of known imaging features. Existing literature categorizes cognitive bias leading a radiologist to an incorrect diagnosis despite having correctly recognized the abnormal imaging features: anchoring bias, framing effect, availability bias, and premature closure. Computational methods make a unique contribution, as they do not exhibit the same cognitive biases as a human. Bayesian networks formalize the diagnostic process. They modify pre-test diagnostic probabilities using clinical and imaging features, arriving at a post-test probability for each possible diagnosis. To translate Bayesian networks to clinical practice, we implemented an entirely web-based open-source software tool. In this tool, the radiologist first selects a network of choice (e.g. basal ganglia). Then, large, clearly labeled buttons displaying salient imaging features are displayed on the screen serving both as a checklist and for input. As the radiologist inputs the value of an extracted imaging feature, the conditional probabilities of each possible diagnosis are updated. The software presents its level of diagnostic discrimination using a Pareto distribution chart, updated with each additional imaging feature. Active collaboration with the clinical radiologist is a feasible approach to software design and leads to design decisions closely coupling the complex mathematics of conditional probability in Bayesian networks with practice.
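The pre-test-to-post-test update described above can be illustrated with a tiny naive-Bayes sketch. The diagnoses, features and probability numbers below are invented for illustration and are not from any clinical network:

```python
# Hypothetical conditional probability table: P(feature present | diagnosis).
# All names and numbers are illustrative, not clinically validated.
priors = {"dx_A": 0.2, "dx_B": 0.1, "dx_C": 0.7}
p_feature = {
    "dx_A": {"calcification": 0.8, "t2_bright": 0.6},
    "dx_B": {"calcification": 0.1, "t2_bright": 0.9},
    "dx_C": {"calcification": 0.2, "t2_bright": 0.3},
}

def update(posterior, feature, present=True):
    """One checklist click: multiply each diagnosis by the likelihood of the
    observed imaging feature, then renormalize (naive-Bayes assumption)."""
    new = {dx: p * (p_feature[dx][feature] if present
                    else 1 - p_feature[dx][feature])
           for dx, p in posterior.items()}
    z = sum(new.values())
    return {dx: v / z for dx, v in new.items()}

post = update(priors, "calcification", present=True)
post = update(post, "t2_bright", present=True)
```

Each button press in the described tool corresponds to one `update` call, so the ranked differential shifts as features are entered.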
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron
2014-03-01
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with a display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presented on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images do look different when visualized on different display systems. The importance of these visual differences is elucidated when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that the perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibration to the DICOM GSDF target performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising, as it suggests that further research could lead to a better-defined, optimized calibration target for digital pathology images, with a positive effect on clinical performance.
Cassini UVIS Auroral Observations in 2016 and 2017
NASA Astrophysics Data System (ADS)
Pryor, Wayne R.; Esposito, Larry W.; Jouchoux, Alain; Radioti, Aikaterini; Grodent, Denis; Gustin, Jacques; Gerard, Jean-Claude; Lamy, Laurent; Badman, Sarah; Dyudina, Ulyana A.; Cassini UVIS Team, Cassini VIMS Team, Cassini ISS Team, HST Saturn Auroral Team
2017-10-01
In 2016 and 2017, the Cassini Saturn orbiter executed a final series of high-inclination, low-periapsis orbits ideal for studies of Saturn's polar regions. The Cassini Ultraviolet Imaging Spectrograph (UVIS) obtained an extensive set of auroral images, some at the highest spatial resolution obtained during Cassini's long orbital mission (2004-2017). In some cases, two or three spacecraft slews at right angles to the long slit of the spectrograph were required to cover the entire auroral region to form auroral images. We will present selected images from this set showing narrow arcs of emission, more diffuse auroral emissions, multiple auroral arcs in a single image, discrete spots of emission, small-scale vortices, large-scale spiral forms, and parallel linear features that appear to cross in places like twisted wires. Some shorter features are transverse to the main auroral arcs, like barbs on a wire. UVIS observations were in some cases simultaneous with auroral observations from the Cassini Imaging Science Subsystem (ISS), the Cassini Visual and Infrared Mapping Spectrometer (VIMS), and the Hubble Space Telescope's Space Telescope Imaging Spectrograph (STIS), which will also be presented.
SEASAT views oceans and sea ice with synthetic aperture radar
NASA Technical Reports Server (NTRS)
Fu, L. L.; Holt, B.
1982-01-01
Fifty-one SEASAT synthetic aperture radar (SAR) images of the oceans and sea ice are presented. Surface and internal waves, the Gulf Stream system and its rings and eddies, the eastern North Pacific, coastal phenomena, bathymetric features, atmospheric phenomena, and ship wakes are represented. Images of arctic pack ice and shore-fast ice are presented. The characteristics of the SEASAT SAR system and its imagery are described. Maps showing the areas covered, tables of key orbital information, and listings of digitally processed images are provided.
NASA Astrophysics Data System (ADS)
Soliz, P.; Davis, B.; Murray, V.; Pattichis, M.; Barriga, S.; Russell, S.
2010-03-01
This paper presents an image processing technique for automatically categorizing age-related macular degeneration (AMD) phenotypes from retinal images. Ultimately, an automated approach will be much more precise and consistent in the phenotyping of retinal diseases such as AMD. We have applied the automated phenotyping to retinal images from a cohort of mono- and dizygotic twins. The application of this technology will allow one to perform more quantitative studies that will lead to a better understanding of the genetic and environmental factors associated with diseases such as AMD. A method for classifying retinal images based on features derived from the application of amplitude-modulation frequency-modulation (AM-FM) methods is presented. Retinal images from identical and fraternal twins who presented with AMD were processed to determine whether AM-FM could be used to differentiate between the two types of twins. Results of the automatic classifier agreed with the findings of other researchers in explaining the variation of the disease between the related twins. AM-FM features classified 72% of the twins correctly. Visual grading found that genetics could explain between 46% and 71% of the variance.
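The core of AM-FM analysis is demodulating a signal into an instantaneous amplitude (AM) and frequency (FM). A minimal 1-D sketch via the analytic signal is shown below; the paper's multidimensional AM-FM filterbank for whole retinal images is considerably more involved, so this only illustrates the principle on a synthetic image row:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (a minimal 1-D Hilbert transform, even length)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2   # double positive frequencies
    h[n // 2] = 1     # Nyquist bin (even n assumed)
    return np.fft.ifft(X * h)

# synthetic image row: carrier at 0.05 cycles/pixel with slow AM envelope
t = np.arange(512)
row = np.cos(2 * np.pi * 0.05 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.002 * t))
z = analytic(row)
amplitude = np.abs(z)                       # AM component (envelope)
phase = np.unwrap(np.angle(z))
frequency = np.diff(phase) / (2 * np.pi)    # instantaneous frequency (FM)
```

Histograms of such amplitude and frequency estimates over image regions are the kind of features an AM-FM classifier can feed to a downstream classifier.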
Segmenting texts from outdoor images taken by mobile phones using color features
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Recognizing text in low-resolution images taken by mobile phones has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank
2013-10-15
Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.
Estimating Missing Features to Improve Multimedia Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagherjeiran, A; Love, N S; Kamath, C
Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
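The abstract does not specify the estimator, so the sketch below shows one plausible reading: impute the missing image features of a text-only query as the mean image features of the k database items whose text features are closest. The feature dimensions and data are invented for illustration:

```python
import numpy as np

def complete_query(text_q, db_text, db_image, k=3):
    """Estimate the missing image features of a text-only query as the mean
    image features of the k database items with the closest text features.
    (One plausible completion scheme; the paper's estimator may differ.)"""
    d = ((db_text - text_q) ** 2).sum(axis=1)   # text-space distances
    nearest = np.argsort(d)[:k]
    return db_image[nearest].mean(axis=0)

rng = np.random.default_rng(3)
db_text = rng.normal(size=(50, 8))             # e.g. caption features
db_image = db_text @ rng.normal(size=(8, 16))  # image features correlated w/ text
text_q = db_text[7] + 0.01 * rng.normal(size=8)
img_q = complete_query(text_q, db_text, db_image)   # completed query vector
```

The completed `(text_q, img_q)` pair can then be run as an ordinary full multimodal query.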
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.
Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiographic features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are used to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and the best performance on all metrics.
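One common way to combine PCA and DWT for fusion, sketched below, is to decompose each image with a wavelet transform and weight the corresponding subbands by the leading eigenvector of their joint covariance. The one-level Haar transform and this particular weighting rule are illustrative assumptions, not necessarily the authors' exact scheme:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: approximation (LL) and detail subbands."""
    a = (img[0::2] + img[1::2]) / 2            # rows: low-pass
    d = (img[0::2] - img[1::2]) / 2            # rows: high-pass
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def pca_weights(x1, x2):
    """Fusion weights from the leading eigenvector of the 2x2 covariance
    of the two subbands (a standard PCA fusion rule)."""
    c = np.cov(np.vstack([x1.ravel(), x2.ravel()]))
    vals, vecs = np.linalg.eigh(c)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

rng = np.random.default_rng(4)
scene = rng.normal(size=(64, 64))                    # shared anatomy
view1 = scene + 0.1 * rng.normal(size=scene.shape)   # two overlapping echoes
view2 = scene + 0.1 * rng.normal(size=scene.shape)
b1, b2 = haar_dwt2(view1), haar_dwt2(view2)
w = pca_weights(b1[0], b2[0])                        # weights from LL subbands
fused_ll = w[0] * b1[0] + w[1] * b2[0]
```

An inverse DWT of the fused subbands would then reconstruct the fused image.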
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment
NASA Technical Reports Server (NTRS)
Ford, J. P.; Cimino, J. B.; Elachi, C.
1983-01-01
Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981, demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.
NASA Astrophysics Data System (ADS)
Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin
2010-10-01
This paper presents a new approach for detecting building footprints from a combination of a registered aerial image with multispectral bands and airborne laser scanning data, obtained synchronously by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying the candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' based on a mathematical morphological filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and an area threshold in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, using features only from the laser scanning data and associated features from both input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of the training points were presented as input data for the SVM classification method, and cross-validation was used to select the best classification parameters. The decision function could be constructed from the classification parameters, and the class of each candidate point was determined by the decision function. The results showed that the associated features from both input data sources were superior to features from the laser scanning data alone. An accuracy of more than 90% was achieved for buildings with the first kind of features.
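The candidate-selection step (nDSM = DSM − DEM, then height and area thresholds) can be sketched on a toy grid. The heights, thresholds and grid size below are invented for illustration:

```python
import numpy as np

# Toy 1 m-grid surface: flat ground at 100 m, a 6 m "building" block,
# and an isolated tree-like spike (all values illustrative).
dsm = np.full((40, 40), 100.0)
dsm[10:20, 10:20] = 106.0        # building roof
dsm[30, 30] = 104.0              # single off-ground point (tree-like)
dem = np.full_like(dsm, 100.0)   # filtered ground surface

ndsm = dsm - dem                 # normalized DSM: heights above ground
mask = ndsm > 2.5                # height threshold -> 'building or tree'

def region_sizes(mask):
    """Sizes of 4-connected regions in a boolean mask (flood fill)."""
    sizes, seen = [], np.zeros_like(mask, bool)
    for s in zip(*np.nonzero(mask)):
        if seen[s]:
            continue
        stack, n = [s], 0
        seen[s] = True
        while stack:
            y, x = stack.pop()
            n += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                p = (y + dy, x + dx)
                if (0 <= p[0] < mask.shape[0] and 0 <= p[1] < mask.shape[1]
                        and mask[p] and not seen[p]):
                    seen[p] = True
                    stack.append(p)
        sizes.append(n)
    return sizes

sizes = region_sizes(mask)
building_regions = [n for n in sizes if n >= 20]   # area threshold in cells
```

The surviving large regions are the candidates that would then go to the SVM with height, height-difference and RGB features.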
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of the subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step, where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set: only 7 and 5 features for the two cases, respectively. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.
Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek
2017-08-24
This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of sonar images such as an unstable acoustic source, heavy speckle noise, low resolution, and a single channel. However, if consecutive sonar images are used and the status (i.e., the existence and identity) of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probabilistic methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar, exploiting the characteristics of acoustic waves, such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.
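The idea of evaluating an object's existence over consecutive frames, rather than trusting one noisy sonar image, can be illustrated with a minimal recursive Bayes filter. This is a deliberate simplification of the paper's particle-filter continuity evaluation, and the detection/false-alarm rates are invented:

```python
def bayes_track(detections, p_det=0.8, p_false=0.1, prior=0.5):
    """Recursively update P(landmark exists) from per-frame detector hits.
    p_det:   P(hit | landmark exists)   -- assumed detector sensitivity
    p_false: P(hit | clutter only)      -- assumed false-alarm rate"""
    p = prior
    history = []
    for hit in detections:
        like_true = p_det if hit else 1 - p_det
        like_false = p_false if hit else 1 - p_false
        p = like_true * p / (like_true * p + like_false * (1 - p))
        history.append(p)
    return history

# landmark seen in most frames, with one missed detection in frame 3
hist = bayes_track([1, 1, 0, 1, 1])
```

A single missed frame lowers the belief only slightly, whereas a single-image method would simply report "not found" for that frame.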
Special feature on imaging systems and techniques
NASA Astrophysics Data System (ADS)
Yang, Wuqiang; Giakos, George
2013-07-01
The IEEE International Conference on Imaging Systems and Techniques (IST'2012) was held in Manchester, UK, on 16-17 July 2012. The participants came from 26 countries or regions: Austria, Brazil, Canada, China, Denmark, France, Germany, Greece, India, Iran, Iraq, Italy, Japan, Korea, Latvia, Malaysia, Norway, Poland, Portugal, Sweden, Switzerland, Taiwan, Tunisia, UAE, UK and USA. The technical program of the conference consisted of a series of scientific and technical sessions exploring the physical principles, engineering and applications of new imaging systems and techniques, as reflected by the diversity of the submitted papers. Following a rigorous review process, a total of 123 papers were accepted and organized into 30 oral presentation sessions and a poster session. In addition, six invited keynotes were arranged. The conference not only provided the participants with a unique opportunity to exchange ideas and disseminate research outcomes but also paved the way for global collaboration. Following IST'2012, a total of 55 papers, substantially extended from their versions in the conference proceedings, were submitted as regular papers to this special feature of Measurement Science and Technology. Following a rigorous review process, 25 papers have been accepted for publication in this special feature, organized into three categories: (1) industrial tomography, (2) imaging systems and techniques and (3) image processing. These papers not only present the latest developments in the field of imaging systems and techniques but also offer potential solutions to existing problems. We hope that this special feature provides a good reference for researchers who are active in the field and will serve as a catalyst to trigger further research. It has been our great pleasure to be the guest editors of this special feature.
We would like to thank the authors for their contributions, without which it would not be possible to have this special feature published. We are grateful to all reviewers, who devoted their time and effort, on a voluntary basis, to ensure that all submissions were reviewed rigorously and fairly. The publishing staff of Measurement Science and Technology are particularly acknowledged for giving us timely advice on guest-editing this special feature.
Probability mapping of scarred myocardium using texture and intensity features in CMR images
2013-01-01
Background The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, the Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using texture and intensity features to describe the heterogeneous nature of the scarred myocardium in CMR images after MI. Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated using a probability function based on Bayes' rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features: one based on the mean pixel intensity and the other on the underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using both feature types are presented. We hypothesize that the probability mapping of the myocardium offers an alternate visualization, possibly revealing physiologically significant details that are difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments and to identify areas of diagnostic importance in the myocardium (such as core and border areas of scarred myocardium). PMID:24053280
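The Bayes-rule probability function above can be sketched for the mean-intensity feature; the Gaussian class likelihoods, equal priors, and specific intensity values below are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def probability_map(image, mu_scar, sd_scar, mu_normal, sd_normal):
    """Per-pixel posterior P(scar | intensity) via Bayes' rule,
    assuming Gaussian class likelihoods and equal priors."""
    def gauss(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    l_scar = gauss(image.astype(float), mu_scar, sd_scar)
    l_norm = gauss(image.astype(float), mu_normal, sd_normal)
    return l_scar / (l_scar + l_norm)

# bright (LG-enhanced) pixels map near 1, dark pixels near 0,
# and intermediate intensities to intermediate probabilities
img = np.array([[200.0, 60.0], [130.0, 210.0]])
pmap = probability_map(img, mu_scar=200, sd_scar=20, mu_normal=60, sd_normal=20)
```

The same function accepts any feature image in place of raw intensity, matching the paper's point that any feature set can drive the probability function.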
Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A
2015-07-01
Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic-algorithm-based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
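The dual-classification step can be illustrated with a minimal decision-fusion sketch; the paper does not specify its combination rule, so the agree-or-trust-the-larger-margin logic below is purely an assumed example:

```python
def dual_decision(margin_a, margin_b):
    """Fuse two independent classifier outputs (signed SVM margins).
    Agreeing signs decide directly; on disagreement, the classifier
    with the larger-magnitude margin wins (an assumed fusion rule)."""
    if (margin_a > 0) == (margin_b > 0):
        return margin_a > 0
    winner = margin_a if abs(margin_a) >= abs(margin_b) else margin_b
    return winner > 0
```

Each margin here would come from one of the two SVMs trained on its own 4-D morphology feature vector.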
NASA Astrophysics Data System (ADS)
Janaki Sathya, D.; Geetha, K.
2017-12-01
Automatic mass or lesion classification systems are developed to aid in distinguishing between malignant and benign lesions present in breast DCE-MR images; to be successful for clinical use, such systems need to improve both the sensitivity and specificity of DCE-MR image interpretation. A new classifier (a set of features together with a classification method) based on artificial neural networks trained using the artificial fish swarm optimization (AFSO) algorithm is proposed in this paper. The basic idea behind the proposed classifier is to use the AFSO algorithm to search for the best combination of synaptic weights for the neural network. An optimal set of features based on statistical textural features is presented. Experimental outcomes of the proposed suspicious lesion classifier confirm that it performs better than other such classifiers reported in the literature, demonstrating that improvements in both sensitivity and specificity are possible through automated image analysis.
Automated phenotype pattern recognition of zebrafish for high-throughput screening.
Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian
2016-07-03
Over recent years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screenings. A growing number of experiments and an expanding interest in zebrafish research make it increasingly essential to automate the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypical features. Until now, such sorting processes have been carried out by manual handling of the larvae and manual feature detection. Here, a prototype platform for image acquisition together with classification software is presented. Zebrafish embryos and larvae and their features, such as pigmentation, are detected automatically from the image. Zebrafish of 4 different phenotypes can be classified through pattern recognition at 72 h post fertilization (hpf), allowing the software to assign an embryo to one of 2 distinct phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.
Textureless Macula Swelling Detection with Multiple Retinal Fundus Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul
2010-01-01
Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists. This capability is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height-map of the macula. Results are presented on three sets of synthetic images and two sets of real-world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.
NASA Technical Reports Server (NTRS)
Haralick, R. H. (Principal Investigator); Bosley, R. J.
1974-01-01
The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
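A minimal sketch of the single-image spatial-dependence (co-occurrence) idea, assuming a right-neighbour offset and the classic contrast feature; the cross-band N-tuple extension and ERTS specifics are not reproduced here:

```python
import numpy as np

def cooccurrence_matrix(img, levels, offset=(0, 1)):
    """Grey-tone spatial-dependence matrix: normalised counts of
    grey-level pairs at a fixed neighbour offset (here: right neighbour)."""
    dy, dx = offset
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    """One classic texture feature derived from the matrix."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

img = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 2]])   # tiny 3-level image
P = cooccurrence_matrix(img, levels=3)
```

Texture features such as `contrast` are then fed to the classifier, alone or alongside spectral features, as the abstract describes.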
ERIC Educational Resources Information Center
Rosencwaig, Allan
1982-01-01
Thermal features of and beneath the surface of a sample can be detected and imaged with a thermal-wave microscope. Various methodologies for the excitation and detection of thermal waves are discussed, and several applications, primarily in microelectronics, are presented. (Author)
Seasat views North America, the Caribbean, and Western Europe with imaging radar
NASA Technical Reports Server (NTRS)
Ford, J. P.; Blom, R. G.; Bryan, M. L.; Daily, M.; Dixon, T. H.; Elachi, C.; Xenos, E. C.
1980-01-01
Forty-one digitally correlated Seasat synthetic-aperture radar images of land areas in North America, the Caribbean, and Western Europe are presented to demonstrate this microwave orbital imagery. The characteristics of the radar images, the types of information that can be extracted from them, and certain of their inherent distortions are briefly described. Each atlas scene covers an area of 90 X 90 kilometers, with the exception of the one covering the Nation's Capital. The scenes are grouped according to salient features of geology, hydrology and water resources, urban landcover, or agriculture. Each radar image is accompanied by a corresponding image in the optical or near-infrared range, or by a simple sketch map to illustrate features of interest. Characteristics of the Seasat radar imaging system are outlined.
Mendoza, Fernando; Valous, Nektarios A; Allen, Paul; Kenny, Tony A; Ward, Paddy; Sun, Da-Wen
2009-02-01
This paper presents a novel and non-destructive approach to the appearance characterization and classification of commercial pork, turkey and chicken ham slices. Ham slice images were modelled using directional fractal (DF(0°;45°;90°;135°)) dimensions and a minimum distance classifier was adopted to perform the classification task. Also, the role of different colour spaces and the resolution level of the images on DF analysis were investigated. This approach was applied to 480 wafer thin ham slices from four types of hams (120 slices per type): i.e., pork (cooked and smoked), turkey (smoked) and chicken (roasted). DF features were extracted from digitized intensity images in greyscale, and R, G, B, L(∗), a(∗), b(∗), H, S, and V colour components for three image resolution levels (100%, 50%, and 25%). Simulation results show that in spite of the complexity and high variability in colour and texture appearance, the modelling of ham slice images with DF dimensions allows the capture of differentiating textural features between the four commercial ham types. Independent DF features entail better discrimination than using the average of the four directions. However, DF dimensions reveal a high sensitivity to colour channel, orientation and image resolution in the fractal analysis. The classification accuracy using six DF dimension features (a(90°)(∗),a(135°)(∗),H(0°),H(45°),S(0°),H(90°)) was 93.9% for training data and 82.2% for testing data.
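As a rough illustration of fractal-dimension estimation (simplified here to non-directional box counting on a binary image, unlike the paper's directional DF analysis of greyscale colour channels):

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8)):
    """Box-counting estimate of fractal dimension: slope of
    log(box count) vs log(1/box size), fitted by least squares."""
    counts = []
    h, w = binary.shape
    for s in sizes:
        # count boxes of side s containing at least one foreground pixel
        n = 0
        for y in range(0, h, s):
            for x in range(0, w, s):
                if binary[y:y + s, x:x + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

filled = np.ones((16, 16), dtype=bool)   # a filled square: dimension 2
line = np.zeros((16, 16), dtype=bool)    # a straight line: dimension 1
line[0, :] = True
```

A directional variant would restrict the measurement to a chosen orientation (0°, 45°, 90°, 135°), which is what produces the paper's DF features.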
NASA Astrophysics Data System (ADS)
Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron
2005-04-01
Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. Success of such algorithms depends, primarily, on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features, yielding excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential of developing an image-based screening tool for cervical cancer.
Improvement in the Accuracy of Matching by Different Feature Subspaces in Traffic Sign Recognition
NASA Astrophysics Data System (ADS)
Ihara, Arihito; Fujiyoshi, Hironobu; Takaki, Masanari; Kumon, Hiroaki; Tamatsu, Yukimasa
A technique for recognizing traffic signs from an image taken with an in-vehicle camera has already been proposed as a driver-assistance function. SIFT features are used for traffic sign recognition because they are robust to changes in scaling and rotation of the traffic sign. However, real-time processing is difficult because the computational cost of SIFT feature extraction and matching is high. This paper presents a method of traffic sign recognition based on a keypoint classifier trained by AdaBoost using PCA-SIFT features in different feature subspaces. Each subspace is constructed from gradients of traffic sign images and general images, respectively. A detected keypoint is projected into both subspaces, and the AdaBoost classifier is then employed to determine whether or not the keypoint lies on a traffic sign. Experimental results show that the computational cost of keypoint matching can be reduced to about half that of the conventional method.
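The projection of a keypoint descriptor into the two PCA subspaces can be sketched as follows; the 4-D vectors stand in for real PCA-SIFT gradient descriptors, and the reconstruction-error comparison is only an illustration (the paper classifies the projections with AdaBoost):

```python
import numpy as np

def pca_subspace(X, k):
    """Top-k principal directions of row-vector samples X (via SVD)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def subspace_distance(v, mean, basis):
    """Residual norm of v after projection onto the affine subspace."""
    d = v - mean
    return float(np.linalg.norm(d - basis.T @ (basis @ d)))

# hypothetical 4-D stand-ins for gradient patches of sign vs general images
sign_patches = np.array([[1., 0, 0, 0], [0, 1, 0, 0],
                         [2., 0, 0, 0], [0, 2, 0, 0]])
general_patches = np.array([[0., 0, 1, 0], [0, 0, 0, 1],
                            [0., 0, 2, 0], [0, 0, 0, 2]])
m_s, B_s = pca_subspace(sign_patches, k=2)
m_g, B_g = pca_subspace(general_patches, k=2)
query = np.array([3., 1, 0, 0])   # a keypoint resembling sign gradients
```

A keypoint whose descriptor is reconstructed much better by the sign subspace than the general one is a candidate sign keypoint.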
Noppeney, Uta; Price, Cathy J
2003-01-01
This paper considers how functional neuro-imaging can be used to investigate the organization of the semantic system and the limitations associated with this technique. The majority of the functional imaging studies of the semantic system have looked for divisions by varying stimulus category. These studies have led to divergent results and no clear anatomical hypotheses have emerged to account for the dissociations seen in behavioral studies. Only a few functional imaging studies have used task as a variable to differentiate the neural correlates of semantic features more directly. We extend these findings by presenting a new study that contrasts tasks that differentially weight sensory (color and taste) and verbally learned (origin) semantic features. Irrespective of the type of semantic feature retrieved, a common semantic system was activated as demonstrated in many previous studies. In addition, the retrieval of verbally learned, but not sensory-experienced, features enhanced activation in medial and lateral posterior parietal areas. We attribute these "verbally learned" effects to differences in retrieval strategy and conclude that evidence for segregation of semantic features at an anatomical level remains weak. We believe that functional imaging has the potential to increase our understanding of the neuronal infrastructure that sustains semantic processing but progress may require multiple experiments until a consistent explanatory framework emerges.
Onboard Image Registration from Invariant Features
NASA Technical Reports Server (NTRS)
Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C
2008-01-01
This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
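A toy version of the robust-estimation step, assuming pre-matched keypoint coordinates and a pure-translation model (the paper estimates a fuller transformation from SIFT matches):

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(src, dst, iters=200, tol=1.0):
    """RANSAC estimate of a 2-D translation relating matched keypoints,
    robust to outlier matches (a stand-in for the SIFT+RANSAC pipeline)."""
    best_t, best_count = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one match
        t = dst[i] - src[i]
        count = (np.linalg.norm(src + t - dst, axis=1) < tol).sum()
        if count > best_count:
            best_count, best_t = count, t
    # refine on the consensus (inlier) set
    mask = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return (dst[mask] - src[mask]).mean(axis=0), mask

src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, -3.0])            # true inter-image shift
dst[:6] = rng.uniform(0, 100, (6, 2))        # 20% outlier matches
t, mask = ransac_translation(src, dst)
```

Even with a fifth of the matches corrupted, the consensus step recovers the true shift, which is the property that makes the approach attractive for unattended onboard use.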
Image-based automatic recognition of larvae
NASA Astrophysics Data System (ADS)
Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai
2010-08-01
To date, imagoes (adult insects) have been the main objects of research in quarantine pest recognition. However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. In this paper, larvae are taken as new research objects and recognized by means of machine vision, image processing and pattern recognition. More visual information is retained and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its characteristics of affine invariance, perspective invariance and brightness invariance, the scale invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
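A crude stand-in for propagating matched-feature displacements to POIs (the paper's FG-GMM/RKHS machinery is replaced here by a nearest-feature lookup, purely for illustration):

```python
import numpy as np

def initial_guess(pois, feat_pos, feat_disp):
    """Assign each point of interest the displacement of its nearest
    matched feature, as a zeroth-order initial guess for DIC."""
    out = np.empty((len(pois), 2))
    for n, p in enumerate(pois):
        nearest = np.argmin(np.linalg.norm(feat_pos - p, axis=1))
        out[n] = feat_disp[nearest]
    return out

feat_pos = np.array([[0.0, 0.0], [10.0, 10.0]])   # matched ORB locations
feat_disp = np.array([[1.0, 0.0], [0.0, 2.0]])    # their displacements
pois = np.array([[1.0, 1.0], [9.0, 9.0]])
guess = initial_guess(pois, feat_pos, feat_disp)
```

The subpixel search stage then refines these guesses; a smooth interpolant such as the RKHS function would replace the hard nearest-feature assignment in practice.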
NASA Astrophysics Data System (ADS)
Arevalo, John; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues, and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, achieving on average an F-score of 92% and a balanced accuracy of 93%.
Low-contrast underwater living fish recognition using PCANet
NASA Astrophysics Data System (ADS)
Sun, Xin; Yang, Jianping; Wang, Changgang; Dong, Junyu; Wang, Xinhua
2018-04-01
Quantitative and statistical analysis of ocean creatures is critical to ecological and environmental studies, and living fish recognition is one of the most essential requirements for the fishery industry. However, light attenuation and scattering phenomena are present in the underwater environment, which makes underwater images low-contrast and blurry. This paper designs a robust framework for accurate fish recognition. The framework introduces a two-stage PCA Network to extract abstract features from fish images. On a real-world fish recognition dataset, we use a linear SVM classifier and set per-class penalty coefficients to address the class-imbalance issue. Feature visualization results show that our method can avoid feature distortion in boundary regions of the underwater image. Experimental results show that the PCA Network can extract discriminative features and achieve promising recognition accuracy. The framework improves the recognition accuracy of underwater living fish and can be easily applied to the marine fishery industry.
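Setting per-class penalty coefficients to counter class imbalance might look like the following balanced-weighting sketch; the class names and base coefficient are invented for illustration:

```python
from collections import Counter

def balanced_penalties(labels, base_c=1.0):
    """Per-class SVM penalty coefficients C_k inversely proportional to
    class frequency -- rare classes get a larger misclassification cost."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: base_c * n / (k * c) for cls, c in counts.items()}

# a hypothetical 90/10 imbalanced fish dataset
penalties = balanced_penalties(["tuna"] * 90 + ["eel"] * 10)
```

The minority class ends up with a penalty nine times that of the majority class, discouraging the classifier from simply predicting the dominant species.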
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printing products that must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data that has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
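The flat-field-correction step mentioned above follows a standard dark-frame/flat-frame formula; the sketch below (in Python rather than FPGA logic) assumes precomputed dark and flat calibration frames:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Classic flat-field correction: subtract the fixed-pattern offset
    (dark frame) and divide out per-pixel gain variation (flat frame),
    rescaling by the mean gain to preserve overall brightness."""
    gain = flat - dark          # per-pixel sensitivity (assumed nonzero)
    return (raw - dark) / gain * gain.mean()

# synthetic sensor: uniform scene of brightness 3 seen through
# varying per-pixel gain plus a constant dark offset
dark = np.full((2, 2), 5.0)
gain = np.array([[1.0, 2.0], [4.0, 8.0]])
flat = dark + gain
raw = dark + gain * 3.0
corrected = flat_field_correct(raw, dark, flat)
```

After correction the uniform scene is uniform again, which is the precondition for the downstream flaw-detection features to respond only to real print defects.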
Diffraction enhanced x-ray imaging for quantitative phase contrast studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, A. K.; Singh, B., E-mail: balwants@rrcat.gov.in; Kashyap, Y. S.
2016-05-23
Conventional X-ray imaging based on absorption contrast permits limited visibility of features having small density and thickness variations. For imaging of weakly absorbing materials or materials possessing similar densities, a novel phase contrast imaging technique called diffraction enhanced imaging has been designed and developed at the imaging beamline of Indus-2, RRCAT, Indore. The technique provides improved visibility of interfaces and shows high contrast in the image for small density or thickness gradients in the bulk. This paper presents the basic principle, instrumentation and analysis methods for this technique. Initial results of quantitative phase retrieval carried out on various samples are also presented.
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Herzog, Eva; Razansky, Daniel; Ntziachristos, Vasilis
2011-03-01
Multispectral Optoacoustic Tomography (MSOT) is an emerging technique for high resolution macroscopic imaging with optical and molecular contrast. We present cardiovascular imaging results from a multi-element real-time MSOT system recently developed for studies on small animals. Anatomical features relevant to cardiovascular disease, such as the carotid arteries, the aorta and the heart, are imaged in mice. The system's fast acquisition time, in tens of microseconds, allows images free of motion artifacts from heartbeat and respiration. Additionally, we present in-vivo detection of optical imaging agents, gold nanorods, at high spatial and temporal resolution, paving the way for molecular imaging applications.
Algorithms for Autonomous Plume Detection on Outer Planet Satellites
NASA Astrophysics Data System (ADS)
Lin, Y.; Bunte, M. K.; Saripalli, S.; Greeley, R.
2011-12-01
We investigate techniques for automated detection of geophysical events (i.e., volcanic plumes) from spacecraft images. The algorithms presented here have not been previously applied to detection of transient events on outer planet satellites. We apply Scale Invariant Feature Transform (SIFT) to raw images of Io and Enceladus from the Voyager, Galileo, Cassini, and New Horizons missions. SIFT produces distinct interest points in every image; feature descriptors are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint. We classified these descriptors as plumes using the k-nearest neighbor (KNN) algorithm. In KNN, an object is classified by its similarity to examples in a training set of images based on user defined thresholds. Using the complete database of Io images and a selection of Enceladus images where 1-3 plumes were manually detected in each image, we successfully detected 74% of plumes in Galileo and New Horizons images, 95% in Voyager images, and 93% in Cassini images. Preliminary tests yielded some false positive detections; further iterations will improve performance. In images where detections fail, plumes are less than 9 pixels in size or are lost in image glare. We compared the appearance of plumes and illuminated mountain slopes to determine the potential for feature classification. We successfully differentiated features. An advantage over other methods is the ability to detect plumes in non-limb views where they appear in the shadowed part of the surface; improvements will enable detection against the illuminated background surface where gradient changes would otherwise preclude detection. This detection method has potential applications to future outer planet missions for sustained plume monitoring campaigns and onboard automated prioritization of all spacecraft data. 
The complementary nature of this method is such that it could be used in conjunction with edge detection algorithms to increase effectiveness. We have demonstrated an ability to detect transient events above the planetary limb and on the surface and to distinguish feature classes in spacecraft images.
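The KNN classification of SIFT descriptors described above can be sketched as a plain nearest-neighbour vote; the 2-D descriptors and class names below are illustrative stand-ins for real 128-D SIFT descriptors from the training set:

```python
import numpy as np

def knn_classify(query, train_X, train_y, k=3):
    """k-nearest-neighbour vote: label a descriptor by the majority
    class among its k closest training descriptors (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# toy training set: descriptors from plume regions vs background
train_X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.0],
                    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
train_y = np.array(["plume"] * 3 + ["background"] * 3)
```

A distance threshold on the neighbours, as the abstract mentions, would additionally let ambiguous descriptors be rejected rather than force-classified.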
The rheology and composition of cryovolcanic flows on icy satellites
NASA Technical Reports Server (NTRS)
Kargel, Jeffrey S.
1993-01-01
The rheologic properties of terrestrial lavas have been related to morphologic features of their flows, such as levees, banked surfaces, multilobate structures, and compressible folds. These features also have been used to determine rheologies and constrain the compositions of extraterrestrial flows. However, with rare exceptions, such features are not resolvable in Voyager images of the satellites of outer planets. Often only flow length and edge thickness of cryovolcanic flows can be measured reasonably accurately from Voyager images. The semiempirical lava-flow model presented here is a renewed effort to extract useful information from such measurements.
Rotation invariant fast features for large-scale recognition
NASA Astrophysics Data System (ADS)
Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd
2012-10-01
We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.
A memory learning framework for effective image retrieval.
Han, Junwei; Ngan, King N; Li, Mingjing; Zhang, Hong-Jiang
2005-04-01
Most current content-based image retrieval systems are still incapable of providing users with their desired results. The major difficulty lies in the gap between low-level image features and high-level image semantics. To address the problem, this study reports a framework for effective image retrieval by employing a novel idea of memory learning. It forms a knowledge memory model to store the semantic information by simply accumulating user-provided interactions. A learning strategy is then applied to predict the semantic relationships among images according to the memorized knowledge. Image queries are finally performed based on a seamless combination of low-level features and learned semantics. One important advantage of our framework is its ability to efficiently annotate images and also propagate the keyword annotation from the labeled images to unlabeled images. The presented algorithm has been integrated into a practical image retrieval system. Experiments on a collection of 10,000 general-purpose images demonstrate the effectiveness of the proposed framework.
A multimodal biometric authentication system based on 2D and 3D palmprint features
NASA Astrophysics Data System (ADS)
Aggithaya, Vivek K.; Zhang, David; Luo, Nan
2008-03-01
This paper presents a new personal authentication system that simultaneously exploits 2D and 3D palmprint features. Here, we aim to improve the accuracy and robustness of existing palmprint authentication systems using 3D palmprint features. The proposed system uses an active stereo technique, structured light, to capture 3D image or range data of the palm and a registered intensity image simultaneously. The surface curvature based method is employed to extract features from 3D palmprint and Gabor feature based competitive coding scheme is used for 2D representation. We individually analyze these representations and attempt to combine them with score level fusion technique. Our experiments on a database of 108 subjects achieve significant improvement in performance (Equal Error Rate) with the integration of 3D features as compared to the case when 2D palmprint features alone are employed.
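Score-level fusion of the 2-D and 3-D matchers can be illustrated with min-max normalisation followed by a weighted sum; the equal weighting is an assumption, not the paper's tuned fusion:

```python
def minmax(scores):
    """Rescale one modality's gallery match scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(scores_2d, scores_3d, w=0.5):
    """Score-level fusion: normalise each modality independently, then
    take a weighted sum per gallery subject (w weights the 2-D matcher)."""
    return [w * a + (1 - w) * b
            for a, b in zip(minmax(scores_2d), minmax(scores_3d))]

# hypothetical match scores for three gallery subjects
fused = fuse([10, 20, 30], [0.5, 0.9, 0.1])
```

Normalising first matters because the Gabor-coding and surface-curvature matchers produce scores on different scales; fusing raw scores would let one modality dominate.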
New Infrared Emission Features and Spectral Variations in Ngc 7023
NASA Technical Reports Server (NTRS)
Werner, M. W.; Uchida, K. I.; Sellgren, K.; Marengo, M.; Gordon, K. D.; Morris, P. W.; Houck, J. R.; Stansberry, J. A.
2004-01-01
We observed the reflection nebula NGC 7023 with the Short-High module and the long-slit Short-Low and Long-Low modules of the Infrared Spectrograph on the Spitzer Space Telescope. We also present Infrared Array Camera (IRAC) and Multiband Imaging Photometer for Spitzer (MIPS) images of NGC 7023 at 3.6, 4.5, 8.0, and 24 μm. We observe the aromatic emission features (AEFs) at 6.2, 7.7, 8.6, 11.3, and 12.7 μm, plus a wealth of weaker features. We find new unidentified interstellar emission features at 6.7, 10.1, 15.8, 17.4, and 19.0 μm. Possible identifications include aromatic hydrocarbons or nanoparticles of unknown mineralogy. We see variations in relative feature strengths, central wavelengths, and feature widths in the AEFs and weaker emission features, depending on both distance from the star and nebular position (southeast vs. northwest).
Superpixel edges for boundary detection
Moya, Mary M.; Koch, Mark W.
2016-07-12
Various embodiments presented herein relate to identifying one or more edges in a synthetic aperture radar (SAR) image comprising a plurality of superpixels. Superpixels sharing an edge (or boundary) can be identified and one or more properties of the shared superpixels can be compared to determine whether the superpixels form the same or two different features. Where the superpixels form the same feature the edge is identified as an internal edge. Where the superpixels form two different features, the edge is identified as an external edge. Based upon classification of the superpixels, the external edge can be further determined to form part of a roof, wall, etc. The superpixels can be formed from a speckle-reduced SAR image product formed from a registered stack of SAR images, which is further segmented into a plurality of superpixels. The edge identification process is applied to the SAR image comprising the superpixels and edges.
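The compare-then-classify logic for a shared superpixel edge can be sketched as follows (the mean-intensity property and tolerance are hypothetical stand-ins for whatever superpixel properties a given embodiment compares):

```python
def mean(values):
    """Arithmetic mean of a list of pixel values."""
    return sum(values) / len(values)

def classify_shared_edge(pixels_a, pixels_b, tol=10.0):
    """Compare a property (here: mean backscatter intensity) of two superpixels
    that share an edge. Similar statistics -> the superpixels belong to the
    same feature (internal edge); dissimilar -> different features (external)."""
    if abs(mean(pixels_a) - mean(pixels_b)) <= tol:
        return "internal"
    return "external"
```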
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, where complex, multiple, and interacting authentic distortions usually appear. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map the image representation to subjective perceptual quality scores. Experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
Chowdhury, Debbrota Paul; Bakshi, Sambit; Guo, Guodong; Sa, Pankaj Kumar
2017-11-27
In this paper, an overall framework is presented for person verification using ear biometrics, with a tunable filter bank as the local feature extractor. The tunable filter bank, based on a half-band polynomial of 14th order, extracts distinct features from ear images while maintaining its frequency selectivity property. To demonstrate the applicability of the tunable filter bank to ear biometrics, recognition tests have been performed on available constrained databases (AMI, WPUT, IITD) and the unconstrained UERC database. Experiments have been conducted applying the tunable-filter-based feature extractor to subparts of the ear, with four and six subdivisions of the ear image. Analyzing the experimental results, it has been found that the tunable filter succeeds moderately well in distinguishing ear features, at par with the state-of-the-art features used for ear recognition. Accuracies of 70.58%, 67.01%, 81.98%, and 57.75% have been achieved on the AMI, WPUT, IITD, and UERC databases, respectively, using the Canberra distance as the underlying measure of separation. These performances indicate that the tunable filter is a candidate for recognizing humans from ear images.
Deep features for efficient multi-biometric recognition with face and ear images
NASA Astrophysics Data System (ADS)
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images to obtain more powerful discriminative features and a more robust representation. First, the deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using both traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
Automated analysis and classification of melanocytic tumor on skin whole slide images.
Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal
2018-06-01
This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and fused together, which can jointly exploit the correlations among the features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for OCT image classification. We tested the proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
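The kernel-fusion idea behind a composite kernel can be sketched in a few lines of Python (the convex weighted sum shown here is a generic multiple-kernel construction, not necessarily the exact PCANet-CK fusion rule):

```python
def composite_kernel(kernels, weights):
    """Fuse several per-feature kernel (Gram) matrices into one composite
    kernel via a convex weighted sum, as in multiple-kernel approaches.
    `kernels` is a list of n x n matrices (lists of lists), one per feature set."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should form a convex combination"
    n = len(kernels[0])
    fused = [[0.0] * n for _ in range(n)]
    for K, w in zip(kernels, weights):
        for i in range(n):
            for j in range(n):
                fused[i][j] += w * K[i][j]
    return fused
```

The fused Gram matrix can then be handed to any kernel-based classifier (an extreme learning machine in the paper).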
Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach
Aerts, Hugo J. W. L.; Velazquez, Emmanuel Rios; Leijenaar, Ralph T. H.; Parmar, Chintan; Grossmann, Patrick; Cavalho, Sara; Bussink, Johan; Monshouwer, René; Haibe-Kains, Benjamin; Rietveld, Derek; Hoebers, Frank; Rietbergen, Michelle M.; Leemans, C. René; Dekker, Andre; Quackenbush, John; Gillies, Robert J.; Lambin, Philippe
2014-01-01
Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost. PMID:24892406
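First-order intensity features of the kind counted among the 440 radiomic features can be sketched as follows (the feature names and exact definitions here are generic illustrations, not the paper's feature set):

```python
import math
from collections import Counter

def intensity_features(voxels):
    """First-order radiomic statistics on a flat list of voxel intensities:
    mean, (population) variance, and Shannon entropy of the gray-level histogram."""
    n = len(voxels)
    mean = sum(voxels) / n
    var = sum((v - mean) ** 2 for v in voxels) / n
    counts = Counter(voxels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": var, "entropy": entropy}
```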
Color Image Segmentation Based on Statistics of Location and Feature Similarity
NASA Astrophysics Data System (ADS)
Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi
The process of “image segmentation and extraction of remarkable regions” is an important research subject in image understanding. However, algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using a multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics such as the mean, standard deviation, maximum, and minimum of pixel location, brightness, and color elements, and these statistics are updated as the regions grow. We also introduce a new concept of conspicuity degree and applied the method to 21 images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.
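The statistics-driven region expansion described above can be sketched as follows (the mean ± k·std acceptance rule on a single brightness channel and 4-connectivity are assumptions for illustration; the paper uses several statistics over location, brightness, and color):

```python
import math

def grow_region(image, seed, k=2.0):
    """Region growing guided by running statistics: a neighboring pixel joins
    the region if its value lies within mean +/- k*std of the region so far,
    and the statistics are updated after each accepted pixel."""
    h, w = len(image), len(image[0])
    region = {seed}
    values = [image[seed[0]][seed[1]]]
    frontier = [seed]
    while frontier:
        r, c = frontier.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                mean = sum(values) / len(values)
                std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
                if abs(image[nr][nc] - mean) <= k * std + 1e-9:
                    region.add((nr, nc))
                    values.append(image[nr][nc])
                    frontier.append((nr, nc))
    return region
```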
Spectral-domain optical coherence tomography for endoscopic imaging
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Li, Qiao; Li, Wanhui; Wang, Yi; Yu, Daoyin
2007-02-01
Optical coherence tomography (OCT) is an emerging cross-sectional imaging technology. It uses broadband light sources to achieve axial image resolutions on the few-micron scale. OCT is widely applied in medical imaging; it can acquire cross-sectional images of biological tissue (transparent and turbid) without invasion or contact. In this paper, the principle of OCT is presented and the crucial system parameters are discussed theoretically. After analyzing different methods and the features of medical endoscopic systems, we put forward a design that combines the spectral-domain OCT (SDOCT) technique with endoscopy. SDOCT provides direct access to the spectrum of the optical signal and is shown to provide higher imaging speed than time-domain OCT. Meanwhile, a novel OCT probe, which uses an advanced micromotor to drive a reflecting prism, is designed for the requirements of alimentary-tract endoscopy. A simple optical coherence tomography system has been developed based on a fiber-based Michelson interferometer and a spectrometer. An experiment using the motor to drive the prism for rotational imaging was performed, and images obtained with this spectral interferometer are presented. The results verify the feasibility of an endoscopic optical coherence tomography system with rotating scan.
Xu, Xinxing; Li, Wen; Xu, Dong
2015-12-01
In this paper, we propose a new approach to improve face verification and person re-identification in RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat the depth features as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike the traditional face verification and person re-identification tasks that only use visual features, we further employ the extra depth features in the training data to improve the learning of the distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging face data sets EUROCOM and CurtinFaces for face verification, as well as the BIWI RGBD-ID data set for person re-identification, demonstrate the effectiveness of our proposed approach.
Classification of CT examinations for COPD visual severity analysis
NASA Astrophysics Data System (ADS)
Tan, Jun; Zheng, Bin; Wang, Xingwei; Pu, Jiantao; Gur, David; Sciurba, Frank C.; Leader, J. Ken
2012-03-01
In this study we present a computational method for classifying CT examinations into visually assessed emphysema severity categories. The visual severity categories, rated by an experienced radiologist, ranged from 0 to 5: none, trace, mild, moderate, severe, and very severe. Lung segmentation was performed on every input image, and all image features were extracted from the segmented lung only. We adopted a two-level feature representation for the classification. Five gray-level distribution statistics, six gray-level co-occurrence matrix (GLCM) features, and eleven gray-level run-length (GLRL) features were computed for the segmented lung in each CT image. We then used wavelet decomposition to obtain the low- and high-frequency components of the input image, and again extracted six GLCM features and eleven GLRL features from the lung region, giving a feature vector of length 56. The CT examinations were classified using a support vector machine (SVM), k-nearest neighbors (KNN), and the traditional threshold (density mask) approach. The SVM classifier had the highest classification performance of all the methods, with an overall sensitivity of 54.4% and a 69.6% sensitivity for discriminating "no" and "trace" visually assessed emphysema. We believe this work may lead to an automated, objective method to categorically classify emphysema severity on CT exams.
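A GLCM and one derived statistic (contrast) can be sketched in Python (the single horizontal offset, 4-level quantization, and feature choice are illustrative; the paper computes six GLCM features over the segmented lung):

```python
def glcm(image, dr=0, dc=1, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (default: the
    horizontal neighbor), normalized so entries are joint probabilities.
    `image` holds integer gray levels in [0, levels)."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    h, w = len(image), len(image[0])
    for r in range(h):
        for c in range(w):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                counts[image[r][c]][image[nr][nc]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def glcm_contrast(P):
    """GLCM contrast: sum over i, j of (i - j)^2 * P(i, j)."""
    return sum((i - j) ** 2 * P[i][j]
               for i in range(len(P)) for j in range(len(P)))
```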
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the stacks is still an open question. We propose a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep-CNN-based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors (texture, shape, different instances within the same class, and different classes of objects), we embed the deep-CNN-based image fusion method within a multilinear framework to obtain an image-fusion-based multilinear classifier. Experimental results on nematode multi-focal image stacks demonstrate that this classifier reaches a higher classification rate (95.7%) than the previous multilinear approach (88.7%), even though we use only the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
SAR image segmentation using skeleton-based fuzzy clustering
NASA Astrophysics Data System (ADS)
Cao, Yun Yi; Chen, Yan Qiu
2003-06-01
SAR image segmentation can be converted to a clustering problem in which pixels or small patches are grouped together based on local feature information. In this paper, we present a novel framework for segmentation. The segmentation goal is achieved by unsupervised clustering of characteristic descriptors extracted from local patches. A mixture model of the characteristic descriptor, which combines intensity and texture features, is investigated. The unsupervised algorithm is derived from the recently proposed Skeleton-Based Data Labeling method. Skeletons are constructed as prototypes of clusters to represent arbitrary latent structures in image data. Segmentation using Skeleton-Based Fuzzy Clustering is able to detect the types of surfaces appearing in SAR images automatically, without any user input.
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-01-01
This paper presents an automatic classification system for segregating pathological brains from normal brains in magnetic resonance imaging scans. The proposed system employs a contrast-limited adaptive histogram equalization scheme to enhance the diseased region in brain MR images. A two-dimensional stationary wavelet transform (SWT) is harnessed to extract features from the preprocessed images. The feature vector is constructed using the energy and entropy values computed from the level-2 SWT coefficients. Then, the relevant and uncorrelated features are selected using a symmetric uncertainty ranking filter. Subsequently, the selected features are given as input to the proposed AdaBoost with support vector machine classifier, where SVM is used as the base classifier of the AdaBoost algorithm. To validate the proposed system, three standard MR image datasets, Dataset-66, Dataset-160, and Dataset-255, have been utilized. Results from 5 runs of stratified k-fold cross-validation indicate that the suggested scheme offers better performance than existing schemes in terms of accuracy and number of features. The proposed system achieves perfect classification on Dataset-66 and Dataset-160, whereas for Dataset-255 an accuracy of 99.45% is achieved. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
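The energy and entropy statistics used to build the feature vector from wavelet coefficients can be sketched as follows (defining entropy over the energy-normalized coefficients is an assumption; the paper does not spell out the exact definition):

```python
import math

def subband_features(coeffs):
    """Energy and Shannon entropy of one wavelet subband, the two statistics
    used to build the feature vector. `coeffs` is a flat list of coefficients."""
    energy = sum(c * c for c in coeffs)
    # treat normalized squared coefficients as a probability distribution
    p = [c * c / energy for c in coeffs if c != 0]
    entropy = -sum(q * math.log2(q) for q in p)
    return energy, entropy
```

Computing this pair for each level-2 subband and concatenating the results yields the raw feature vector before feature selection.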
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Stomach cancer still causes many deaths, and early diagnosis is crucial for reducing the mortality rate. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Pathology Department of the Fırat University Medical Faculty. The Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Locally Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps are then used to reduce the high-dimensional features to lower dimensions. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these lower-dimensional feature sets. New medical systems were developed to measure the effect of dimensionality by obtaining features at different dimensions with the reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
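The basic 3×3 LBP operator underlying the texture features can be sketched as follows (the clockwise neighbor ordering and ≥ comparison are conventional choices, not necessarily the authors' exact variant):

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors at the
    center value and read them off as an 8-bit code."""
    center = patch[1][1]
    # neighbors listed clockwise starting from the top-left corner
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbors):
        if v >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over an image (or image block) forms the LBP feature vector that is then fed to the dimensionality reduction methods.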
Agile Multi-Scale Decompositions for Automatic Image Registration
NASA Technical Reports Server (NTRS)
Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline
2016-01-01
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav
2014-03-01
Automated magnetic resonance imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which of them should be preferred. To help improve the situation, we present the results of a study evaluating the efficiency of different wavelet-transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual-Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers (Support Vector Machine, K-Nearest Neighbor, and a Sparse Representation-Based Classifier) are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM yield the highest classification accuracy, demonstrating that wavelet-transform features are informative in this application.
von Braunmühl, T; Hartmann, D; Tietze, J K; Cekovic, D; Kunte, C; Ruzicka, T; Berking, C; Sattler, E C
2016-11-01
Optical coherence tomography (OCT) has become a valuable non-invasive tool for the in vivo diagnosis of non-melanoma skin cancer, especially basal cell carcinoma (BCC). Thanks to an updated software-supported algorithm, a new en-face mode makes surface-parallel imaging possible (similar to the horizontal en-face mode in high-definition OCT and reflectance confocal microscopy), which, in combination with the established slice mode of frequency-domain (FD-)OCT, may offer additional information in the diagnosis of BCC. Our aim was to define characteristic morphologic features of BCC using the new en-face mode, in addition to the conventional cross-sectional imaging mode, for three-dimensional imaging of BCC in FD-OCT. A total of 33 BCC were examined preoperatively in both the en-face mode and the cross-sectional mode of FD-OCT. Characteristic features were evaluated and correlated with histopathology findings. Features established in the cross-sectional imaging mode as well as additional features were present in the en-face mode of FD-OCT: lobulated structures (100%), dark peritumoral rim (75%), bright peritumoral stroma (96%), branching vessels (90%), compressed fibrous bundles between lobulated nests ('star shaped') (78%), and intranodular small bright dots (51%). These features were also evaluated according to histopathological subtype. In the en-face mode, the lobulated structures with compressed fibrous bundles of the BCC were more distinct than in the slice mode. FD-OCT with the new depiction of horizontal and vertical imaging modes offers additional information in the diagnosis of BCC, especially nodular BCC, and enhances the evaluation of morphologic tumour features. © 2016 European Academy of Dermatology and Venereology.
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for the quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have variable sizes and flexible shapes which preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones-matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four optical-feature dimensions), so that all image pixels are clustered based on their spatial proximity and optical-feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can serve as a local-statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops through feature extraction. However, high-resolution images acquired over built-up areas are affected by noise such as shadows, which reduces the accuracy of feature extraction. Feature extraction relies heavily on the reflectance purity of objects, which is difficult to achieve in complex urban landscapes. We attempted to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index (NDVI) and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability-image generation ensured that the effect of shadows on rooftop extraction, particularly for light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on the MS image. Results show that the presented method achieves higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas than traditional methods.
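The NDVI derivative used alongside the MS and PC images is computed per pixel from co-registered near-infrared and red bands; a minimal sketch:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for co-registered NIR and red
    bands (2-D lists of reflectances). Values near +1 indicate vegetation,
    which helps separate vegetated pixels from rooftops."""
    out = []
    for n_row, r_row in zip(nir, red):
        out.append([(n - r) / (n + r) if (n + r) != 0 else 0.0
                    for n, r in zip(n_row, r_row)])
    return out
```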
Ultrasound and MRI of Pediatric Ocular Masses with Histopathologic Correlation
Brennan, Rachel C.; Wilson, Matthew W.; Kaste, Sue; Helton, Kathleen J.; McCarville, M. Beth
2012-01-01
We review our experience with unusual ocular pathologies mimicking retinoblastoma that were referred to our institution over the past two decades. After presenting the imaging anatomy of the normal eye, we discuss pertinent clinical and pathological features, and illustrate the ultrasound and magnetic resonance imaging appearance of retinoblastoma, medulloepithelioma, uveal melanoma, persistent fetal vasculature, Coats disease, corneal dermoid, retinal dysplasia and toxocara granuloma. Features useful in discriminating between these entities are emphasized. PMID:22466750
DEKF system for crowding estimation by a multiple-model approach
NASA Astrophysics Data System (ADS)
Cravino, F.; Dellucca, M.; Tesei, A.
1994-03-01
A distributed extended Kalman filter (DEKF) network devoted to real-time crowding estimation for surveillance in complex scenes is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Feature values are associated by virtual sensors with the estimated number of people using nonlinear models obtained in an off-line training phase. Different models are used, depending on the positions and dimensions of the crowded subareas detected in each image.
Correlative feature analysis on FFDM
Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene
2008-01-01
Identifying the corresponding images of a lesion in different views is an essential step in improving the diagnostic ability of both radiologists and computer-aided diagnosis (CAD) systems. Because of the nonrigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this pilot study, we present a computerized framework that differentiates between corresponding images of the same lesion in different views and noncorresponding images, i.e., images of different lesions. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, is applied to extract mass lesions from the surrounding parenchyma. Then various lesion features are automatically extracted from each of the two views of each lesion to quantify the characteristics of density, size, texture and the neighborhood of the lesion, as well as its distance to the nipple. A two-step scheme is employed to estimate the probability that the two lesion images from different mammographic views are of the same physical lesion. In the first step, a correspondence metric for each pairwise feature is estimated by a Bayesian artificial neural network (BANN). Then, these pairwise correspondence metrics are combined using another BANN to yield an overall probability of correspondence. Receiver operating characteristic (ROC) analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing corresponding pairs from noncorresponding pairs. Using a FFDM database with 123 corresponding image pairs and 82 noncorresponding pairs, the distance feature yielded an area under the ROC curve (AUC) of 0.81±0.02 with leave-one-out (by physical lesion) evaluation, and the feature metric subset, which included distance, gradient texture, and ROI-based correlation, yielded an AUC of 0.87±0.02. 
The improvement by using multiple feature metrics was statistically significant compared to single feature performance. PMID:19175108
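The two-step combination scheme above can be sketched in miniature: a first stage produces one correspondence metric per feature pair, and a second-stage learner combines them into an overall probability of correspondence. The sketch below substitutes a plain logistic-regression combiner for the paper's Bayesian artificial neural network, and the metric values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage-1 stand-in: one correspondence metric per feature, in [0, 1].
# Corresponding pairs (label 1) tend to score high on every metric.
n = 200
metrics_pos = rng.uniform(0.6, 1.0, size=(n, 3))
metrics_neg = rng.uniform(0.0, 0.5, size=(n, 3))
X = np.vstack([metrics_pos, metrics_neg])
y = np.hstack([np.ones(n), np.zeros(n)])

# Stage-2 stand-in: a logistic combiner trained by gradient descent
# (the paper uses a second Bayesian artificial neural network here).
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # combined probability
    w -= (X.T @ (p - y)) / len(y)            # gradient step on weights
    b -= np.mean(p - y)                      # gradient step on bias

prob = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((prob > 0.5) == y)
```

On this separable toy data the combiner cleanly distinguishes corresponding from noncorresponding pairs; the real system's inputs are the per-feature BANN outputs.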
Feature-aided multiple target tracking in the image plane
NASA Astrophysics Data System (ADS)
Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.
2006-05-01
Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
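The moving-target detection step can be illustrated with a minimal frame-differencing sketch; the paper's actual pipeline first compensates platform motion via image registration and uses a novel change-detection algorithm, neither of which is reproduced here.

```python
import numpy as np

def detect_changes(prev_frame, curr_frame, threshold=30):
    """Flag pixels whose absolute intensity change exceeds a threshold.

    A minimal stand-in for the change-detection step; the real tracker
    first registers frames to remove platform and camera motion.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# A static background with one bright "target" blob appearing.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200          # target appears in the new frame
mask = detect_changes(prev, curr)
```

The resulting mask is the contiguous pixel blob that would be segmented for feature extraction and model learning.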
Natural image classification driven by human brain activity
NASA Astrophysics Data System (ADS)
Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao
2016-03-01
Natural image classification has been a hot topic in computer vision and pattern recognition research. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals collected with fMRI while human subjects viewed natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features with the guidance of fMRI signals from brain regions that respond actively to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select the image features that best predict the fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments on classifying images from four categories, using data from two subjects, demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
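The sparse-regression feature selection step can be sketched with a lasso solved by proximal gradient descent (ISTA): features whose coefficients survive the l1 penalty are the ones that best predict the (here synthetic) fMRI response. The solver and data are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# 40 image features, but only 3 actually drive the synthetic fMRI response.
X = rng.normal(size=(120, 40))
true_w = np.zeros(40)
true_w[[2, 7, 19]] = [1.5, -2.0, 1.0]
fmri = X @ true_w + 0.05 * rng.normal(size=120)

# ISTA: proximal gradient descent for the lasso objective
#   0.5 * ||fmri - X w||^2 / n + lam * ||w||_1
lam = 0.1
step = len(fmri) / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
w = np.zeros(40)
for _ in range(2000):
    grad = X.T @ (X @ w - fmri) / len(fmri)
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold

selected = np.flatnonzero(np.abs(w) > 1e-3)    # surviving feature indices
```

The l1 penalty zeroes out the uninformative features, leaving the three that predict the response.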
Detection of potential mosquito breeding sites based on community sourced geotagged images
NASA Astrophysics Data System (ADS)
Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna
2014-06-01
Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to make better, informed, data-driven decisions. Our work shows how geotagged images collected from the general population can be used to combat malaria and dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions such as blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag-of-visual-words model. We train an SVM classifier on a histogram-based feature vector, obtained after vector quantization of SIFT features, to discriminate images containing a small stagnant water body (such as a puddle), open containers, tires, bushes, etc. from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using either the feature-level fusion or the max approach presented in the paper. The heat map thus generated can be used by the concerned health authorities to take appropriate action and to promote civic awareness.
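The bag-of-visual-words step can be sketched as follows: each local descriptor is assigned to its nearest visual word and the image is summarized by a normalized word histogram, which would then feed the SVM. The vocabulary and descriptors below are synthetic stand-ins (real SIFT descriptors are 128-D).

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    """Vector-quantize local descriptors against a visual vocabulary and
    return a normalized word histogram (the bag-of-visual-words vector)."""
    # squared distance from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                     # nearest visual word
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
vocab = rng.normal(size=(8, 16))                  # 8 visual words, 16-D toy descriptors
# four descriptors near words 0, 0, 3 and 5
desc = vocab[[0, 0, 3, 5]] + 0.01 * rng.normal(size=(4, 16))
h = bow_histogram(desc, vocab)
```

The histogram `h` is the fixed-length image representation on which the SVM classifier is trained.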
CT and MR imaging features in phosphaturic mesenchymal tumor-mixed connective tissue: A case report
Shi, Zhenshan; Deng, Yiqiong; Li, Xiumei; Li, Yueming; Cao, Dairong; Coossa, Vikash Sahadeo
2018-01-01
Phosphaturic mesenchymal tumor-mixed connective tissue (PMT-MCT) is rare and usually benign and slow-growing. The majority of these tumors are associated with sporadic tumor-induced osteomalacia (TIO) or rickets, affect middle-aged individuals, and are located in the extremities. Previous imaging studies often focused on seeking the causative tumors of TIO, not on the radiological features of these tumors, especially magnetic resonance imaging (MRI) features. PMT-MCT remains a largely misdiagnosed, ignored or unknown entity for most radiologists and clinicians. In the present case report, a review of the known literature on PMT-MCT was conducted and the CT and MRI findings from three patient cases were described for diagnosing the small subcutaneous tumor. Typical MRI appearances of PMT-MCT were isointense relative to the muscles on T1-weighted imaging, and markedly hyperintense on T2-weighted imaging with variable flow voids, with markedly heterogeneous/homogeneous enhancement on post-contrast T1-weighted fat-suppression imaging. Short tau inversion recovery was demonstrated to be the optimal sequence for localizing the tumor. PMID:29552133
Flame analysis using image processing techniques
NASA Astrophysics Data System (ADS)
Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng
2018-04-01
This paper presents image processing techniques that use fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect flame stability. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. The power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
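The PSD computation used for vibration analysis can be sketched with a single-sided periodogram: a dominant oscillation in a (synthetic) flame luminosity trace shows up as a spectral peak. This is a generic FFT-based estimate, not the authors' exact processing chain.

```python
import numpy as np

def power_spectral_density(signal, fs):
    """Single-sided periodogram estimate of the PSD of a real signal."""
    n = len(signal)
    spectrum = np.fft.rfft(signal * np.hanning(n))   # windowed FFT
    psd = (np.abs(spectrum) ** 2) / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

fs = 1000.0                                    # frames per second
t = np.arange(0, 1.0, 1.0 / fs)
# synthetic flame luminosity trace: a 120 Hz oscillation plus noise
lum = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
freqs, psd = power_spectral_density(lum, fs)
peak_hz = freqs[np.argmax(psd)]
```

The location of the PSD peak approximates the dominant oscillation frequency, from which flame stability can be assessed.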
A neighboring structure reconstructed matching algorithm based on LARK features
NASA Astrophysics Data System (ADS)
Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-11-01
To address the low contrast and high noise of infrared images, and the randomness and partial occlusion of the objects they contain, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of a local window are modeled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body image set, with a lower false detection rate than conventional methods in complex natural scenes.
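The non-negative linear reconstruction at the heart of the NSRM step can be sketched as a non-negative least-squares problem solved by projected gradient descent; the matrix of neighboring patches and the target patch below are synthetic.

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Non-negative least squares via projected gradient descent:
    minimize ||A x - b||^2 subject to x >= 0."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))     # gradient step
        x = np.maximum(x, 0.0)                 # project onto x >= 0
    return x

rng = np.random.default_rng(4)
# reconstruct a patch (b) from its neighbours (columns of A) with
# non-negative weights, as in the neighboring-structure reconstruction
A = rng.uniform(size=(20, 5))
x_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
b = A @ x_true
x = nnls_pg(A, b)
```

The recovered non-negative weights express which neighbors contribute to the local window, and would populate one row of the neighboring structure relationship matrix.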
Xu, Xiayu; Ding, Wenxiang; Abràmoff, Michael D; Cao, Ruofan
2017-04-01
Retinal artery and vein classification is an important task for the automatic computer-aided diagnosis of various eye diseases and systemic diseases. This paper presents an improved supervised artery and vein classification method for retinal images. Intra-image regularization and inter-subject normalization are applied to reduce the differences in feature space. Novel features, including first-order and second-order texture features, are utilized to capture the discriminating characteristics of arteries and veins. The proposed method was tested on the DRIVE dataset and achieved an overall accuracy of 0.923. This retinal artery and vein classification algorithm serves as a potentially important tool for the early diagnosis of various diseases, including diabetic retinopathy and cardiovascular diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz
2014-03-01
Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
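Leave-one-out cross-validation as used above can be sketched generically: each sample is held out in turn, the classifier is retrained on the rest, and accuracy is tallied. A nearest-centroid classifier stands in for the paper's support vector machine, and the two-class feature data are synthetic.

```python
import numpy as np

def nearest_centroid_predict(Xtr, ytr, x):
    """Predict the class whose training-feature centroid is closest."""
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    return classes[np.argmin(((cents - x) ** 2).sum(axis=1))]

rng = np.random.default_rng(5)
# two well-separated classes of 4-D "voxel features" (lesion vs healthy)
X = np.vstack([rng.normal(0, 1, (8, 4)), rng.normal(3, 1, (8, 4))])
y = np.array([0] * 8 + [1] * 8)

# leave-one-out cross validation: hold each sample out in turn
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    correct += nearest_centroid_predict(X[mask], y[mask], X[i]) == y[i]
loo_accuracy = correct / len(y)
```

The paper leaves out whole datasets rather than single samples, but the retrain-and-score loop has the same shape.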
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
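The LDA-derived metric can be sketched for the two-class case: the Fisher discriminant direction, computed from labeled training data, defines a projection under which task-relevant differences dominate. The two synthetic "mineralogical" classes below differ only along one axis, which the learned direction recovers while suppressing the noisy axis.

```python
import numpy as np

rng = np.random.default_rng(6)

# two labeled classes, discriminable only along axis 0;
# axis 1 is high-variance nuisance variation
A = rng.normal([0, 0], [0.3, 3.0], size=(100, 2))
B = rng.normal([2, 0], [0.3, 3.0], size=(100, 2))

# Fisher discriminant: w = Sw^{-1} (mu_A - mu_B); distances measured
# after projecting onto w act as a learned, task-specific metric
Sw = np.cov(A.T) + np.cov(B.T)                 # within-class scatter
w = np.linalg.solve(Sw, A.mean(0) - B.mean(0))
w /= np.linalg.norm(w)

pa, pb = A @ w, B @ w                          # projected classes
separation = abs(pa.mean() - pb.mean()) / (pa.std() + pb.std())
```

The learned direction weights the informative axis far more heavily than the noisy one, so distances in the projected space track class identity.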
Clinics in diagnostic imaging (163). Transient lateral patellar dislocation with trochlear dysplasia
Zhang, Junwei; Lee, Chin Hwee
2015-01-01
A 14-year-old girl presented with left knee pain and swelling after an injury. Magnetic resonance (MR) imaging showed a transient lateral patellar dislocation with patellar osteochondral fracture, medial patellofemoral ligament tear and underlying femoral trochlear dysplasia. Open reduction and internal fixation of the osteochondral fracture, plication of the medial patellar retinaculum and lateral release were performed. As lateral patellar dislocation is often clinically unsuspected, an understanding of its characteristic imaging features is important in making the diagnosis. Knowledge of the various predisposing factors for patellar instability may also influence the choice of surgical management. We also discuss signs of acute injury and chronic instability observed on MR imaging, and the imaging features of anatomical variants that predispose an individual to lateral patellar dislocation. Treatment options and postsurgical imaging appearances are also briefly described. PMID:26512145
Reduced reference image quality assessment via sub-image similarity based redundancy measurement
NASA Astrophysics Data System (ADS)
Mou, Xuanqin; Xue, Wufeng; Zhang, Lei
2012-03-01
Reduced-reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its consistency with human perception and its flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented whose novelty lies in two aspects. First, it measures image redundancy by calculating the so-called Sub-image Similarity (SIS), and image quality is measured by comparing the SIS between the reference image and the test image. Second, the SIS is computed from the ratios of the non-shift edge (NSE) between pairs of sub-images. Experiments on two IQA databases (the LIVE and CSIQ databases) show that, using only six features, the proposed metric works very well, with high correlations between the subjective and objective scores. In particular, it works consistently well across all distortion types.
A novel image retrieval algorithm based on PHOG and LSH
NASA Astrophysics Data System (ADS)
Wu, Hongliang; Wu, Weimin; Peng, Jiajin; Zhang, Junyuan
2017-08-01
PHOG can describe the local shape of an image and its spatial relationships. Using the PHOG algorithm to extract image features has achieved good results in image recognition, retrieval, and related tasks. In recent years, the locality-sensitive hashing (LSH) algorithm has proven superior to traditional algorithms for near-neighbor search on large-scale data. This paper presents a novel image retrieval algorithm based on PHOG and LSH. First, we use PHOG to extract the feature vector of the image; then we use L different LSH hash tables to reduce the PHOG descriptor to index values and map it to different buckets; finally, we retrieve the corresponding images from the buckets for a second-stage retrieval using Manhattan distance. This algorithm scales to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity of retrieval, which is of practical significance.
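The two-stage retrieval can be sketched as follows: random-projection hashing buckets the database vectors across L tables, and colliding candidates are re-ranked by exact Manhattan distance. Gaussian projections are used here for brevity (Cauchy draws are the standard p-stable choice for L1), and the "PHOG descriptors" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def lsh_keys(X, proj, offsets, width):
    """p-stable-style LSH: one bucket index per hash table, per vector."""
    return np.floor((X @ proj + offsets) / width).astype(int)

# toy descriptor database and a near-duplicate query of item 42
db = rng.uniform(size=(500, 32))
query = db[42] + 0.001 * rng.normal(size=32)

L, width = 6, 1.0                              # L hash tables, bucket width
proj = rng.normal(size=(32, L))                # random projection per table
offsets = rng.uniform(0, width, size=L)

db_keys = lsh_keys(db, proj, offsets, width)           # shape (500, L)
q_keys = lsh_keys(query[None, :], proj, offsets, width)[0]

# candidates collide with the query in at least one table ...
candidates = np.flatnonzero((db_keys == q_keys).any(axis=1))
# ... and are re-ranked by exact Manhattan (L1) distance
best = candidates[np.argmin(np.abs(db[candidates] - query).sum(axis=1))]
```

Only the colliding candidates pay the cost of the exact distance computation, which is what makes the scheme scale.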
A method of plane geometry primitive presentation
NASA Astrophysics Data System (ADS)
Jiao, Anbo; Luo, Haibo; Chang, Zheng; Hui, Bin
2014-11-01
Point features and line features are basic elements of object feature sets, and they play an important role in object matching and recognition. On one hand, point features are sensitive to noise; on the other hand, an image usually contains a huge number of point features, which makes matching complex. Line features include straight line segments and curves. One difficulty in straight line segment matching is the uncertainty of endpoint locations; another is that segments may fracture, or short segments may need to be joined to form long ones. For curves, in addition to the above problems, there is the further difficulty of quantitatively describing the shape difference between curves. Because of these problems with point and line features, the robustness and accuracy of target description are affected; therefore, a method of plane geometry primitive presentation is proposed to describe the significant structure of an object. First, two types of primitives are constructed: intersecting-line primitives and blob primitives. Second, a line segment detector (LSD) is applied to detect line segments, from which intersecting-line primitives are extracted. Finally, the robustness and accuracy of the plane geometry primitive presentation method are studied. The method captures structural information about an object well, even under rotation or scale change in the image. Experimental results verify its robustness and accuracy.
Phenotype detection in morphological mutant mice using deformation features.
Roy, Sharmili; Liang, Xi; Kitamoto, Asanobu; Tamura, Masaru; Shiroishi, Toshihiko; Brown, Michael S
2013-01-01
Large-scale global efforts are underway to knock out each of the approximately 25,000 mouse genes and interpret their roles in shaping the mammalian embryo. Given the tremendous amount of data generated by imaging mutated prenatal mice, high-throughput image analysis systems are indispensable for characterizing mammalian development and diseases. Current state-of-the-art computational systems offer only differential volumetric analysis of pre-defined anatomical structures between various gene-knockout mouse strains. For subtle anatomical phenotypes, embryo phenotyping still relies on laborious histological techniques that are clearly unsuitable in such a big-data environment. This paper presents a system that automatically detects known phenotypes and assists in discovering novel phenotypes in μCT images of mutant mice. Deformation features obtained from non-linear registration of a mutant embryo to a normal consensus average image are extracted and analyzed to compute phenotypic and candidate phenotypic areas. The presented system is evaluated using C57BL/10 embryo images. All cases of ventricular septal defect and polydactyly, well known to be present in this strain, are successfully detected. The system predicts potential phenotypic areas in the liver that are under active histological evaluation for possible phenotypes of this mouse line.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we report an examination of the validity of OCT for identifying tissue changes, using a skin cancer texture analysis built from Haralick texture features, fractal dimension, the Markov random field method, and complex directional features from different tissues. The described features are used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features contrast, correlation, energy and homogeneity were calculated in various directions. A box-counting method was performed to evaluate the fractal dimension of skin probes. Markov random fields were used to improve classification quality. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and of tumors, namely basal cell carcinoma (BCC), malignant melanoma (MM) and nevus. All images were acquired with our laboratory SD-OCT setup based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and nevus.
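The box-counting estimate of fractal dimension can be sketched directly: count occupied boxes at several scales and fit the slope of log(count) against log(1/size). A filled square and a one-pixel line recover dimensions near 2 and 1, respectively; this is a generic implementation, not the authors' exact code.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting:
    the slope of log(box count) versus log(1 / box size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]       # make divisible by s
        blocks = trimmed.reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), dtype=bool)       # a filled square: dimension ~2
dim_filled = box_counting_dimension(filled)

line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                           # a 1-pixel line: dimension ~1
dim_line = box_counting_dimension(line)
```

In the skin analysis, this scalar summarizes the scale-invariant roughness of a segmented probe region.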
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas B.; Jain, Manu; Cordova, Miguel A.; Kose, Kivanc; Rajadhyaksha, Milind; Halpern, Allan C.; Farkas, Daniel L.
2017-02-01
Motivation and background: Melanoma, the fastest growing cancer worldwide, kills more than one person every hour in the United States. Determining the depth and distribution of dermal melanin and hemoglobin adds physio-morphologic information to the current diagnostic standard, cellular morphology, to further develop noninvasive methods to discriminate between melanoma and benign skin conditions. Purpose: To compare the performance of a multimode dermoscopy system (SkinSpect), designed to quantify and map in vivo melanin and hemoglobin in skin in three dimensions, and to validate it against histopathology and three-dimensional reflectance confocal microscopy (RCM) imaging. Methods: SkinSpect and RCM images of suspect lesions and nearby normal skin were captured sequentially and compared with histopathology reports. RCM imaging allows noninvasive observation of nuclear, cellular and structural detail in 1-5 μm-thin optical sections in skin, and detection of pigmented skin lesions with a sensitivity of 90-95% and a specificity of 70-80%. The multimode imaging dermoscope combines polarization (cross and parallel), autofluorescence and hyperspectral imaging to noninvasively map the distribution of melanin, collagen and hemoglobin oxygenation in pigmented skin lesions. Results: We compared in vivo features of ten melanocytic lesions extracted by SkinSpect and RCM imaging, and correlated them with histopathologic results. We present results from two melanoma cases (in situ and invasive), and compare them with in vivo features from eight benign lesions. Melanin distribution at different depths and hemodynamics, including abnormal vascularity, detected by both SkinSpect and RCM are discussed. Conclusion: Diagnostic features such as dermal melanin and hemoglobin concentration provided by SkinSpect skin analysis for melanoma and normal pigmented lesions can be compared and validated using results from RCM and histopathology.
ASPRS Digital Imagery Guideline Image Gallery Discussion
NASA Technical Reports Server (NTRS)
Ryan, Robert
2002-01-01
The objectives of the image gallery are to 1) give users and providers a simple means of identifying appropriate imagery for a given application/feature extraction; and 2) define imagery sufficiently to be described in engineering and acquisition terms. This viewgraph presentation includes a discussion of edge response and aliasing for image processing, and a series of images illustrating the effects of signal to noise ratio (SNR) on images. Another series of images illustrates how images are affected by varying the ground sample distances (GSD).
Vision-sensing image analysis for GTAW process control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, D.D.
1994-11-01
Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.
Yoo, Youngjin; Tang, Lisa Y W; Brosch, Tom; Li, David K B; Kolind, Shannon; Vavasour, Irene; Rauscher, Alexander; MacKay, Alex L; Traboulsee, Anthony; Tam, Roger C
2018-01-01
Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at an early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches is selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then utilized to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier.
Using the myelin and T1w images of 55 relapse-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
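The voxel-wise t-test used to select a common set of patches can be sketched as follows: a Welch t statistic is computed per location between patients and controls, and locations exceeding a threshold are kept for the downstream classifier. The group data below are simulated, with an injected myelin deficit at five locations; the thresholds and group sizes are illustrative.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic per feature (one value per patch location)."""
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

rng = np.random.default_rng(8)
# 30 "patients" vs 30 "controls", 100 candidate patch locations;
# locations 10..14 carry a simulated group difference (myelin deficit)
controls = rng.normal(0.0, 1.0, size=(30, 100))
patients = rng.normal(0.0, 1.0, size=(30, 100))
patients[:, 10:15] -= 2.0

t = welch_t(patients, controls)
selected = np.flatnonzero(np.abs(t) > 3.0)   # patch locations kept for training
```

In the paper, the features at the selected locations (after lesion exclusion and imputation) feed the LASSO step and then the random forest.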
IDH mutation assessment of glioma using texture features of multimodal MR images
NASA Astrophysics Data System (ADS)
Zhang, Xi; Tian, Qiang; Wu, Yu-Xia; Xu, Xiao-Pan; Li, Bao-Juan; Liu, Yi-Xiong; Liu, Yang; Lu, Hong-Bing
2017-03-01
Purpose: To 1) find effective texture features from multimodal MRI that can distinguish IDH-mutant from IDH-wild-type status, and 2) propose a radiomic strategy for preoperatively detecting IDH mutation in patients with glioma. Materials and Methods: 152 patients with glioma were retrospectively included from the Cancer Genome Atlas. The corresponding pre- and post-contrast T1-weighted images, T2-weighted images and fluid-attenuated inversion recovery images from the Cancer Imaging Archive were analyzed. Appropriate statistical tests were applied to analyze the different kinds of baseline information of lower-grade glioma (LrGG) patients. In total, 168 texture features were derived from the multimodal MRI of each patient. Support vector machine-based recursive feature elimination (SVM-RFE) and a classification strategy were then adopted to find the optimal feature subset and build identification models for detecting IDH mutation. Results: Among the 152 patients, 92 and 60 were confirmed to be IDH-wild-type and IDH-mutant, respectively. Statistical analysis showed that patients without IDH mutation were significantly older than patients with IDH mutation (p<0.01), and the distribution of some histological subtypes was significantly different between the IDH-wild-type and mutant groups (p<0.01). After SVM-RFE, 15 optimal features were determined for IDH mutation detection. The accuracy, sensitivity, specificity and AUC after SVM-RFE and parameter optimization were 82.2%, 85.0%, 78.3% and 0.841, respectively. Conclusion: This study presents a radiomic strategy for noninvasively discriminating the IDH mutation status of patients with glioma. It effectively incorporates various texture features from multimodal MRI with an SVM-based classification strategy. The results suggest that features selected by SVM-RFE have greater potential for identifying IDH mutation. The proposed radiomics strategy could facilitate clinical decision-making in patients with glioma.
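SVM-RFE can be sketched with a crude linear SVM trained by sub-gradient descent on the hinge loss: at each round the model is retrained and the feature with the smallest absolute weight is eliminated. The two informative "texture features" below are synthetic, and the solver is a minimal stand-in for a production SVM.

```python
import numpy as np

rng = np.random.default_rng(9)

# 200 patients, 10 candidate texture features; only features 3 and 7
# actually carry IDH-status information in this toy setup
n = 200
y = rng.integers(0, 2, n) * 2 - 1              # labels in {-1, +1}
X = rng.normal(size=(n, 10))
X[:, 3] += 2.5 * y                             # informative feature
X[:, 7] += 2.0 * y                             # informative feature

def linear_svm_weights(X, y, lam=0.1, step=0.1, iters=500):
    """Crude linear SVM: sub-gradient descent on the regularized hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margin = y * (X @ w)
        viol = margin < 1                      # margin-violating samples
        grad = lam * w - (X[viol] * y[viol, None]).sum(axis=0) / len(y)
        w -= step * grad
    return w

# RFE: retrain, then drop the feature with the smallest |weight|
features = list(range(X.shape[1]))
while len(features) > 2:
    w = linear_svm_weights(X[:, features], y)
    features.pop(int(np.argmin(np.abs(w))))
```

Retraining after every elimination is what distinguishes RFE from one-shot weight ranking: weights are re-estimated as the feature set shrinks.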
Land mine detection using multispectral image fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.
1995-03-29
Our system fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400nm, 500nm, 600nm, 700nm, 800nm and 900nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts. We use a supervised learning pattern recognition approach to detecting the metal and plastic land mines. The overall process consists of four main parts: Preprocessing, feature extraction, feature selection, and classification. These parts are used in a two step process to classify a subimage. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the spectral bands add value to the detection system. The most important features from the various sensors are fused using a supervised learning pattern classifier (the probabilistic neural network). We present results of experiments to detect land mines from real data collected from an airborne platform, and evaluate the usefulness of fusing feature information from multiple spectral bands.
Enhanced Imaging of Corrosion in Aircraft Structures with Reverse Geometry X-ray(registered tm)
NASA Technical Reports Server (NTRS)
Winfree, William P.; Cmar-Mascis, Noreen A.; Parker, F. Raymond
2000-01-01
The application of Reverse Geometry X-ray to the detection and characterization of corrosion in aircraft structures is presented. Reverse Geometry X-ray is a unique system that utilizes an electronically scanned x-ray source and a discrete detector for real time radiographic imaging of a structure. The scanned source system has several advantages when compared to conventional radiography. First, the discrete x-ray detector can be miniaturized and easily positioned inside a complex structure (such as an aircraft wing) enabling images of each surface of the structure to be obtained separately. Second, using a measurement configuration with multiple detectors enables the simultaneous acquisition of data from several different perspectives without moving the structure or the measurement system. This provides a means for locating the position of flaws and enhances separation of features at the surface from features inside the structure. Data is presented on aircraft specimens with corrosion in the lap joint. Advanced laminographic imaging techniques utilizing data from multiple detectors are demonstrated to be capable of separating surface features from corrosion in the lap joint and locating the corrosion in multilayer structures. Results of this technique are compared to computed tomography cross sections obtained from a microfocus x-ray tomography system. A method is presented for calibration of the detectors of the Reverse Geometry X-ray system to enable quantification of the corrosion to within 2%.
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames using a neural network methodology in order to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. Each extracted feature is assigned a distance based on the Kinect depth data, which the robot can use to determine its navigation path as well as for obstacle detection.
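The last step described above, assigning each matched feature a distance from the Kinect depth map, can be sketched as a simple per-keypoint lookup. The function name and the zero-means-invalid convention are assumptions for illustration; a real pipeline would also need the color and depth images registered to each other.

```python
import numpy as np

def assign_feature_depths(keypoints, depth_map, invalid=0):
    """Look up a depth value for each (x, y) feature point in a
    registered depth map; returns NaN where the sensor gave no reading."""
    depths = []
    for x, y in keypoints:
        d = depth_map[int(round(y)), int(round(x))]
        depths.append(np.nan if d == invalid else float(d))
    return np.array(depths)

depth = np.array([[0, 800], [1200, 1500]])  # millimetres; 0 = no reading
pts = [(1, 0), (0, 1), (0, 0)]
print(assign_feature_depths(pts, depth))  # depths: 800 mm, 1200 mm, NaN
```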
Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †
Choi, Jinwoo; Choi, Hyun-Taek
2017-01-01
This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as an unstable acoustic source, heavy speckle noise, low resolution, a single channel, and so on. If, however, the status of an object (its existence and identity) is continuously evaluated over consecutive sonar images by a stochastic method, the recognition result carries an uncertainty estimate and is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark with increased detectability by an imaging sonar, exploiting characteristics of acoustic waves such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented. PMID:28837068
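The continuity-evaluation idea, repeatedly predicting and updating an object's status over consecutive frames, can be illustrated with a recursive Bayesian update of the existence probability of a candidate landmark. The detection and false-alarm rates below are illustrative assumptions, not values from the paper.

```python
def update_existence(prior, detected, p_det=0.8, p_false=0.1):
    """Recursive Bayesian update of the probability that a landmark
    exists, given one more noisy detection result from a sonar frame.
    p_det = P(detection | exists); p_false = P(detection | not exists).
    Both rates are illustrative assumptions."""
    if detected:
        num = p_det * prior
        den = p_det * prior + p_false * (1.0 - prior)
    else:
        num = (1.0 - p_det) * prior
        den = (1.0 - p_det) * prior + (1.0 - p_false) * (1.0 - prior)
    return num / den

p = 0.5  # uninformative prior on existence
for obs in [True, True, False, True]:  # detections in consecutive frames
    p = update_existence(p, obs)
print(round(p, 3))  # 0.991
```

Note how a single missed detection lowers the probability only slightly once several consistent detections have accumulated, which is exactly why consecutive-image evaluation is more robust than single-image recognition.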
Sajn, Luka; Kukar, Matjaž
2011-12-01
The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level that represents the third milestone. With the proposed approach clinical results are significantly improved throughout the study. The most significant result of our study is improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT
Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging because of low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it seeks to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature computed across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and several metrics are used to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965
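The PCA half of such a pixel-based fusion rule can be sketched as follows: the two registered images are weighted by the components of the dominant eigenvector of their joint pixel covariance. This is a common simplification and deliberately omits the DWT stage of the paper's full PCA+DWT scheme; the toy images are random data.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two registered images by weighting each with the first
    principal component of their joint pixel covariance (a standard
    PCA fusion rule; a simplified stand-in for PCA+DWT fusion)."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                    # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    pc = np.abs(vecs[:, -1])              # dominant component
    w = pc / pc.sum()                     # normalize to convex weights
    return w[0] * img_a + w[1] * img_b

a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
fused = pca_fuse(a, b)
print(fused.shape)  # (8, 8)
```

Because the weights are non-negative and sum to one, every fused pixel stays within the range spanned by the two input images.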
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a great deal of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales over different landcover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and from the POS.
NASA Astrophysics Data System (ADS)
Rice, J. B.; Strassmeier, K. G.; Kopf, M.
2011-02-01
We present Doppler images of the weak-lined T Tauri star V410 Tau obtained with two different Doppler-imaging codes. The images are consistent and show a cool extended spot, symmetric about the pole, at a temperature approximately 750 K below the average photospheric value. Smaller cool spots are found fairly uniformly distributed at latitudes below the polar cap with temperatures about 450 K below the average photospheric temperature. Resolution on the stellar surface is limited to about 7° of arc, so structure within these spots is not visible. Also at lower latitudes are hotter features with temperatures up to 1000 K above the photosphere. A trial Doppler image using a TiO molecular feature reproduced the cool polar cap at a temperature about 100 K below the value from the atomic line images. The equatorial features, however, were not properly reproduced since Doppler imaging relies on information in the wings of lines for reconstructing equatorial features, and for V410 Tau these molecular band lines overlap. In 1993, V410 Tau had a large photometric amplitude resulting from the concentration of cool spots on the hemisphere of the star visible at phase 0°, a phenomenon known as preferred longitude. In contrast, the small photometric amplitude observed currently is due to a strong symmetric polar spot and the uniform distribution in longitude of equatorial cool and warm spots. This redistribution of surface features may be the beginning of a slow "flip-flop" for V410 Tau where spot locations alternate between preferred longitudes. Flare events linked to two of the hotter spots in the Doppler image were observed.
NASA Astrophysics Data System (ADS)
Desai, Alok; Lee, Dah-Jye
2013-12-01
There has been significant research on the development of feature descriptors in the past few years, but most of it does not emphasize real-time applications. This paper presents the development of an affine invariant feature descriptor for low-resource applications such as UAVs and UGVs that are equipped with an embedded system containing a small microprocessor, a field programmable gate array (FPGA), or a smart phone device. UAVs and UGVs have proven suitable for many promising applications such as unknown environment exploration and search and rescue operations. These applications require on-board image processing for obstacle detection, avoidance and navigation. All these real-time vision applications require a camera to grab images and match features using a feature descriptor. A good feature descriptor uniquely describes a feature point, allowing it to be correctly identified and matched with its corresponding feature point in another image. Few feature description algorithms are available for resource-limited systems; they either demand too much of the device's resources or simplify the algorithm so heavily that performance suffers. This research is aimed at meeting the needs of these systems without sacrificing accuracy. This paper introduces a new feature descriptor called PRObabilistic model (PRO) for UGV navigation applications. It is a compact and efficient binary descriptor that is hardware-friendly and easy to implement.
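A binary descriptor such as PRO is typically matched with brute-force Hamming distance, which is what makes this class of descriptor hardware-friendly (XOR plus popcount map directly to FPGA logic). Since PRO's exact layout is not given here, the sketch below matches generic byte-packed binary descriptors; the descriptor values and threshold are illustrative.

```python
import numpy as np

def match_binary(desc_a, desc_b, max_dist=20):
    """Brute-force Hamming matching of byte-packed binary descriptors
    (uint8 rows). Returns (index_in_a, index_in_b) pairs whose nearest
    neighbour lies within max_dist differing bits."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = [np.unpackbits(np.bitwise_xor(d, e)).sum() for e in desc_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches

a = np.array([[0b10110010, 0b00001111]], dtype=np.uint8)
b = np.array([[0b10110011, 0b00001111],    # differs by a single bit
              [0b01001101, 0b11110000]],   # differs by 16 bits
             dtype=np.uint8)
print(match_binary(a, b, max_dist=4))  # [(0, 0)]
```

On embedded hardware the inner loop would be a fixed-function XOR/popcount rather than NumPy, but the matching logic is the same.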
Thermal imaging as a biometrics approach to facial signature authentication.
Guzman, A M; Goryawala, M; Wang, Jin; Barreto, A; Andrian, J; Rishe, N; Adjouadi, M
2013-01-01
A new thermal imaging framework with unique feature extraction and similarity measurements for face recognition is presented. The research premise is to design specialized algorithms that would extract vasculature information, create a thermal facial signature and identify the individual. The proposed algorithm is fully integrated and consolidates the critical steps of feature extraction through the use of morphological operators, registration using the Linear Image Registration Tool, and matching through unique similarity measures designed for this task. The novel approach to developing a thermal signature template using four images taken at various instants of time ensured that unforeseen changes in the vasculature over time did not affect the biometric matching process, as the authentication process relied only on consistent thermal features. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. The matching using the similarity measures showed an average accuracy of 88.46% for skeletonized signatures and 90.39% for anisotropically diffused signatures. The highly accurate results obtained in the matching process clearly demonstrate the ability of the thermal infrared system to extend in application to other thermal-imaging-based systems. Empirical results applying this approach to an existing database of thermal images prove this assertion.
Combination of image descriptors for the exploration of cultural photographic collections
NASA Astrophysics Data System (ADS)
Bhowmik, Neelanjan; Gouet-Brunet, Valérie; Bloch, Gabriel; Besson, Sylvain
2017-01-01
The rapid growth of image digitization and collections in recent years makes it challenging and burdensome to organize, categorize, and retrieve similar images from voluminous collections. Content-based image retrieval (CBIR) is immensely convenient in this context. A considerable number of local feature detectors and descriptors are present in the CBIR literature. We propose a model to anticipate the best feature combinations for image retrieval-related applications. Several spatial complementarity criteria of local feature detectors are analyzed and then engaged in a regression framework to find the optimal combination of detectors for a given dataset, better adapted to each given image; the proposed model is also useful for optimally fixing some other parameters, such as the k in k-nearest neighbor retrieval. Three public datasets of various contents and sizes are employed to evaluate the proposal, which notably improves retrieval quality compared with classical approaches. Finally, the proposed image search engine is applied to the cultural photographic collections of a French museum, where it demonstrates its added value for the exploration and promotion of these contents at different levels, from their archiving up to their exhibition in or ex situ.
Feature highlighting enhances learning of a complex natural-science category.
Miyatsu, Toshiya; Gouravajhala, Reshma; Nosofsky, Robert M; McDaniel, Mark A
2018-04-26
Learning naturalistic categories, which tend to have fuzzy boundaries and vary on many dimensions, can often be harder than learning well defined categories. One method for facilitating the category learning of naturalistic stimuli may be to provide explicit feature descriptions that highlight the characteristic features of each category. Although this method is commonly used in textbooks and classrooms, theoretically it remains uncertain whether feature descriptions should advantage learning complex natural-science categories. In three experiments, participants were trained on 12 categories of rocks, either without or with a brief description highlighting key features of each category. After training, they were tested on their ability to categorize both old and new rocks from each of the categories. Providing feature descriptions as a caption under a rock image failed to improve category learning relative to providing only the rock image with its category label (Experiment 1). However, when these same feature descriptions were presented such that they were explicitly linked to the relevant parts of the rock image (feature highlighting), participants showed significantly higher performance on both immediate generalization to new rocks (Experiment 2) and generalization after a 2-day delay (Experiment 3). Theoretical and practical implications are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Pituitary xanthogranulomas: clinical features, radiological appearances and post-operative outcomes.
Ved, R; Logier, N; Leach, P; Davies, J S; Hayhurst, C
2018-06-01
Xanthogranulomas are inflammatory masses most commonly found at peripheral sites such as the skin. Sellar and parasellar xanthogranulomas are rare and present a diagnostic challenge as they are difficult to differentiate from other sellar lesions such as craniopharyngiomas and Rathke's cleft cysts pre-operatively. Their radiological imaging features are yet to be clearly defined, and clinical outcomes after surgery are also uncertain. This study reviews clinical presentation, radiological appearances, and clinical outcomes in a cohort of patients with pituitary xanthogranulomas. A prospectively maintained pituitary surgery database was screened for histologically confirmed pituitary xanthogranulomas between May 2011-December 2016. Retrospective case note assessments were then performed by three independent reviewers. Patient demographics, clinical presentations, imaging, and clinical outcomes were analysed. During the study period 295 endoscopic endonasal pituitary surgeries were performed. Six patients had confirmed pituitary xanthogranulomas (2%). Patients most commonly presented with visual field deficits and/or endocrine dysfunction. Common imaging features included: a cystic consistency, hyperintensity on T1-weighted MR images, and contrast enhancement either peripherally (n = 3) or homogenously (n = 3). The most common pre-operative endocrine deficits were hyperprolactinaemia and hypoadrenalism (at least one of which was identified in 4/6 patients; 66%). Thirty-three percent (2/6) of patients presented with diabetes insipidus. The most common post-operative endocrinological deficits were adrenocortical dysfunction (66%) and gonadotropin deficiency (66%). Visual assessments normalised in all six patients post-operatively. Gross total resection was achieved in all patients, and at median follow up of 33.5 months there were no cases of tumour recurrence. The prevalence of pituitary xanthogranulomas in our series is higher than that suggested in the literature. 
Surgery restored normal vision in all cases; however, four patients (67%) required long-term hormonal replacement post-operatively. Imaging features such as peripheral rim enhancement, a suprasellar tumour epicentre, and the absence of both calcification and cavernous sinus invasion were identified as potential indicators that together should alert clinicians to the possibility of pituitary xanthogranuloma when assessing patients with cystic sellar and parasellar tumours.
ERIC Educational Resources Information Center
Greenberg, Seth N.; Goshen-Gottstein, Yonatan
2009-01-01
The present work considers the mental imaging of faces, with a focus in own-face imaging. Experiments 1 and 3 demonstrated an own-face disadvantage, with slower generation of mental images of one's own face than of other familiar faces. In contrast, Experiment 2 demonstrated that mental images of facial parts are generated more quickly for one's…
A computerized scheme of SARS detection in early stage based on chest image of digital radiograph
NASA Astrophysics Data System (ADS)
Zheng, Zhong; Lan, Rihui; Lv, Guozheng
2004-05-01
A computerized scheme for early severe acute respiratory syndrome (SARS) lesion detection in digital chest radiographs is presented in this paper. The scheme consists of two main parts: the first determines suspect lesions using the theory of locally orderless images (LOI) and their spatial features; the second selects real lesions among these suspects by their frequency features. The method used in the second part was first developed by Katsuragawa et al., with necessary modifications. Preliminary results indicate that these features are good criteria for telling early SARS lesions apart from other normal lung structures.
New features in Saturn's atmosphere revealed by high-resolution thermal infrared images
NASA Technical Reports Server (NTRS)
Gezari, D. Y.; Mumma, M. J.; Espenak, F.; Deming, D.; Bjoraker, G.; Woods, L.; Folz, W.
1989-01-01
Observations of the stratospheric IR emission structure on Saturn are presented. The high-spatial-resolution global images show a variety of new features, including a narrow equatorial belt of enhanced emission at 7.8 micron, a prominent symmetrical north polar hotspot at all three wavelengths, and a midlatitude structure which is asymmetrically brightened at the east limb. The results confirm the polar brightening and reversal in position predicted by recent models for seasonal thermal variations of Saturn's stratosphere.
DSouza, Alisha V.; Lin, Huiyun; Henderson, Eric R.; Samkoe, Kimberley S.; Pogue, Brian W.
2016-01-01
There is growing interest in using fluorescence imaging instruments to guide surgery, and the leading options for open-field imaging are reviewed here. While the clinical fluorescence-guided surgery (FGS) field has been focused predominantly on indocyanine green (ICG) imaging, there is accelerated development of more specific molecular tracers. These agents should help advance new indications for which FGS presents a paradigm shift in how molecular information is provided for resection decisions. There has been a steady growth in commercially marketed FGS systems, each with their own differentiated performance characteristics and specifications. A set of desirable criteria is presented to guide the evaluation of instruments, including: (i) real-time overlay of white-light and fluorescence images, (ii) operation within ambient room lighting, (iii) nanomolar-level sensitivity, (iv) quantitative capabilities, (v) simultaneous multiple fluorophore imaging, and (vi) ergonomic utility for open surgery. In this review, United States Food and Drug Administration 510(k) cleared commercial systems and some leading premarket FGS research systems were evaluated to illustrate the continual increase in this performance feature base. Generally, the systems designed for ICG-only imaging have sufficient sensitivity to ICG, but a fraction of the other desired features listed above, with both lower sensitivity and dynamic range. In comparison, the emerging research systems targeted for use with molecular agents have unique capabilities that will be essential for successful clinical imaging studies with low-concentration agents or where superior rejection of ambient light is needed. There is no perfect imaging system, but the feature differences among them are important differentiators in their utility, as outlined in the data and tables here. PMID:27533438
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first define a three-dimensional grid in which each thread calculates a sub-block of data, to facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. We then optimize the processing flow by computing the noise covariance matrix before the image covariance matrix, reducing the amount of hyperspectral image data transmitted. These optimization strategies greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive parts of the algorithm, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
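The noise-estimation step highlighted above is the heart of any MNF-style transform: noise covariance is estimated first (here via the common shift-difference estimate between adjacent pixels), then the image covariance is whitened against it. A compact CPU-side NumPy sketch of such a transform might look like this; it is a simplified stand-in for G-OMNF, not the paper's GPU code, and the cube dimensions are illustrative.

```python
import numpy as np

def mnf_transform(cube, n_components=3):
    """Minimum/maximum noise fraction transform sketch for an
    (height, width, bands) hyperspectral cube."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands)
    # Shift-difference noise estimate from horizontally adjacent pixels
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    cn = np.cov(noise, rowvar=False)   # noise covariance first
    cx = np.cov(X, rowvar=False)       # then image covariance
    # Whiten cx against cn, i.e. solve cx v = lambda cn v
    vals_n, vecs_n = np.linalg.eigh(cn)
    inv_sqrt = vecs_n @ np.diag(1.0 / np.sqrt(np.maximum(vals_n, 1e-12))) @ vecs_n.T
    vals, vecs = np.linalg.eigh(inv_sqrt @ cx @ inv_sqrt)
    proj = inv_sqrt @ vecs[:, ::-1][:, :n_components]  # best SNR first
    return (X - X.mean(axis=0)) @ proj

cube = np.random.rand(16, 16, 5)       # toy 5-band cube
out = mnf_transform(cube, n_components=2)
print(out.shape)  # (256, 2)
```

On a GPU the per-pixel difference and covariance accumulation steps parallelize naturally, which is where the paper's reported speedups come from.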
NASA Astrophysics Data System (ADS)
Liang, Yu-Li
Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing the data. Still images and videos are the most commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from online videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure, tracking lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in online video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home webcams, detecting misbehaving users is a highly challenging task. 
We propose SafeVchat, the first solution that achieves a satisfactory detection rate, using facial features and a skin color model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors along with the Bag-of-Visual-Words framework. In addition, an investigation of a new contour feature for detecting obscene content is presented.
Unsupervised feature learning for autonomous rock image classification
NASA Astrophysics Data System (ADS)
Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond
2017-09-01
Autonomous rock image classification can enhance the capability of robots for geological detection and increase scientific returns, both in investigations on Earth and in planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and hand-crafted features are not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that they can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed classes. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.
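One widely used form of unsupervised feature learning for texture-like images is k-means dictionary learning on raw patches (in the style of Coates et al.): centroids learned from unlabelled patches become a reusable feature bank, mirroring the self-taught-learning idea of training once on mixed unlabelled data and reusing the representation for any subclass. The sketch below is a generic stand-in for the paper's method, with all sizes chosen for illustration.

```python
import numpy as np

def learn_features(patches, k=8, iters=10, seed=0):
    """Learn k centroid 'features' from unlabelled image patches
    via plain k-means (a simple unsupervised feature learner)."""
    rng = np.random.default_rng(seed)
    P = patches - patches.mean(axis=1, keepdims=True)  # remove patch mean
    centers = P[rng.choice(len(P), k, replace=False)]
    for _ in range(iters):
        d = ((P[:, None, :] - centers[None]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = P[assign == j].mean(axis=0)
    return centers

def encode(patch, centers):
    """Feature vector: negative squared distance to each centroid."""
    p = patch - patch.mean()
    return -((centers - p) ** 2).sum(axis=1)

patches = np.random.rand(200, 16)   # e.g. 4x4 patches, flattened
bank = learn_features(patches, k=8)
print(bank.shape)  # (8, 16)
```

A downstream classifier (for instance an SVM over pooled `encode` outputs) would then be trained only on the labelled subset, which is exactly the division of labour that self-taught learning exploits.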
The Electronic View Box: a software tool for radiation therapy treatment verification.
Bosch, W R; Low, D A; Gerber, R L; Michalski, J M; Graham, M V; Perez, C A; Harms, W B; Purdy, J A
1995-01-01
We have developed a software tool for interactively verifying treatment plan implementation. The Electronic View Box (EVB) tool copies the paradigm of current practice but does so electronically. A portal image (online portal image or digitized port film) is displayed side by side with a prescription image (digitized simulator film or digitally reconstructed radiograph). The user can measure distances between features in prescription and portal images and "write" on the display, either to approve the image or to indicate required corrective actions. The EVB tool also provides several features not available in conventional verification practice using a light box. The EVB tool has been written in ANSI C using the X window system. The tool makes use of the Virtual Machine Platform and Foundation Library specifications of the NCI-sponsored Radiation Therapy Planning Tools Collaborative Working Group for portability into an arbitrary treatment planning system that conforms to these specifications. The present EVB tool is based on an earlier Verification Image Review tool, but with a substantial redesign of the user interface. A graphical user interface prototyping system was used in iteratively refining the tool layout to allow rapid modifications of the interface in response to user comments. Features of the EVB tool include 1) hierarchical selection of digital portal images based on physician name, patient name, and field identifier; 2) side-by-side presentation of prescription and portal images at equal magnification and orientation, and with independent grayscale controls; 3) "trace" facility for outlining anatomical structures; 4) "ruler" facility for measuring distances; 5) zoomed display of corresponding regions in both images; 6) image contrast enhancement; and 7) communication of portal image evaluation results (approval, block modification, repeat image acquisition, etc.). 
The EVB tool facilitates the rapid comparison of prescription and portal images and permits electronic communication of corrections in port shape and positioning.
Quantitative Wood Anatomy-Practical Guidelines.
von Arx, Georg; Crivellaro, Alan; Prendin, Angela L; Čufar, Katarina; Carrer, Marco
2016-01-01
Quantitative wood anatomy analyzes the variability of xylem anatomical features in trees, shrubs, and herbaceous species to address research questions related to plant functioning, growth, and environment. Among the most frequently considered anatomical features are the lumen dimensions and wall thickness of conducting cells, fibers, and several ray properties. The structural properties of each xylem anatomical feature are mostly fixed once it is formed, and largely define its functionality, including the transport and storage of water, nutrients, sugars, and hormones, and the provision of mechanical support. Anatomical features can often be localized within an annual growth ring, which makes it possible to establish intra-annual past and present structure-function relationships and their sensitivity to environmental variability. However, there are many methodological challenges to handle when aiming to produce (large) data sets of xylem anatomical data. Here we describe the different steps from wood sample collection to xylem anatomical data, provide guidance and identify pitfalls, and present different image-analysis tools for the quantification of anatomical features, in particular conducting cells. We show that each data-production step, from sample collection in the field, through microslide preparation in the lab and image capture through an optical microscope, to image analysis with specific tools, can readily introduce measurement errors of 5-30% or more, with the magnitude usually increasing as the anatomical features become smaller. Such measurement errors, if not avoided or corrected, may make it impossible to extract meaningful xylem anatomical data in light of the rather small range of variability in many anatomical features as observed, for example, within time series of individual plants.
Following a rigorous protocol and quality control as proposed in this paper is thus mandatory for using quantitative data on xylem anatomical features as a powerful source for many research topics.
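The scaling of measurement error with feature size noted above can be made concrete with a toy calculation (illustrative numbers only, not taken from the paper):

```python
# Illustrative only: a fixed absolute measurement error (e.g. from pixel
# resolution or imperfect edge detection) translates into a larger
# relative error for smaller anatomical features.

def relative_error_percent(true_size_um, absolute_error_um):
    """Relative error (%) caused by a fixed absolute measurement error."""
    return 100.0 * absolute_error_um / true_size_um

# A 1-um error matters little for a 50-um earlywood lumen,
# but is substantial for a 5-um latewood lumen.
print(relative_error_percent(50.0, 1.0))  # 2.0
print(relative_error_percent(5.0, 1.0))   # 20.0
```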
Texture-based approach to palmprint retrieval for personal identification
NASA Astrophysics Data System (ADS)
Li, Wenxin; Zhang, David; Xu, Z.; You, J.
2000-12-01
This paper presents a new approach to palmprint retrieval for personal identification. Three key issues in image retrieval are considered: feature selection, similarity measures, and dynamic search for the best match to the sample in the image database. We propose a texture-based method for palmprint feature representation. The concept of texture energy is introduced to define a palmprint's global and local features, which are characterized by high convergence of inner-palm similarities and good dispersion of inter-palm discrimination. The search is carried out in a layered fashion: global features are first used to guide the fast selection of a small set of similar candidates from the database, and local features are then used to decide the final output within the candidate set. The experimental results demonstrate the effectiveness and accuracy of the proposed method.
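The layered global-then-local search can be sketched as follows; the block-mean "texture energy" here is a simplified stand-in for the paper's energy measures, and the pixel-level refinement is a hypothetical second layer:

```python
import numpy as np

def texture_energy(img, k=4):
    """Crude global 'texture energy': mean absolute intensity of the image
    split into k x k blocks (stand-in for the paper's energy measures)."""
    h, w = img.shape
    bh, bw = h // k, w // k
    feats = [np.abs(img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]).mean()
             for i in range(k) for j in range(k)]
    return np.asarray(feats)

def layered_search(query, database, n_candidates=3):
    """Layer 1: rank by global-feature distance and keep a short list.
    Layer 2: refine the short list with a finer comparison (here, a
    plain pixel-level distance)."""
    q_glob = texture_energy(query)
    ranked = sorted(database.items(),
                    key=lambda kv: np.linalg.norm(texture_energy(kv[1]) - q_glob))
    shortlist = ranked[:n_candidates]
    best_id, _ = min(shortlist, key=lambda kv: np.linalg.norm(kv[1] - query))
    return best_id
```

A query image close to one database entry is first funneled into a small candidate set by the cheap global features, so the expensive local comparison only runs a few times.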
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device at the time the image was captured. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to it. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of those model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image, and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update.
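The core consistency test, projecting candidate 3D model points into the image under a hypothesized pose and checking agreement with the identified image feature points, can be sketched with a simple pinhole model (a generic sketch, not the patented implementation):

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project 3D model points into the image with pose (R, t) and
    intrinsics K (simple pinhole model, no distortion)."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ cam                              # camera frame -> pixels
    return (uv[:2] / uv[2]).T

def reprojection_error(points_3d, points_2d, R, t, K):
    """Mean pixel error; a candidate set of 2D-3D correspondences is
    'consistent' with the pose if this error is small."""
    proj = project(points_3d, R, t, K)
    return float(np.linalg.norm(proj - points_2d, axis=1).mean())
```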
An anti-disturbing real time pose estimation method and system
NASA Astrophysics Data System (ADS)
Zhou, Jian; Zhang, Xiao-hu
2011-08-01
Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer from feature loss. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, known features are tracked to calculate the pose in the current and next images. Second, unknown but good-to-track features are automatically detected in the current and next images. Third, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D positions of these unknown features can be solved from the object's pose at the two moments and their 2D positions in the two images, except in two cases: when camera and object have no relative motion and the camera parameters (focal length, principal point, etc.) do not change between the two moments; and when there is no shared scene or no matched feature in the two images. Finally, because the formerly unknown features are now known, pose estimation can continue in the subsequent images despite the initial loss of known features, by repeating the process above. The robustness of pose estimation with different feature detectors, namely Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU).
Compared with other pose estimation methods, this new method can estimate the pose between camera and object even when some or all known features are lost, and has a quick response time thanks to GPU parallel computing. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in autonomous navigation, positioning, and robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and achieves real-time performance. Theoretical analysis and experiments show the method is reasonable and efficient.
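Recovering the 3D position of a formerly unknown feature from its 2D observations at two known poses is, in essence, two-view triangulation; a minimal linear (DLT) sketch, assuming known projection matrices P1 and P2, is:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen at pixel x1 in the
    first view (3x4 projection matrix P1) and at x2 in the second (P2).
    Each observation contributes two rows to a homogeneous system A X = 0;
    the solution is the right singular vector of A with smallest value."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```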
Cortical Superficial Siderosis in Different Types of Cerebral Small Vessel Disease.
Wollenweber, Frank Arne; Baykara, Ebru; Zedde, Marialuisa; Gesierich, Benno; Achmüller, Melanie; Jouvent, Eric; Viswanathan, Anand; Ropele, Stefan; Chabriat, Hugues; Schmidt, Reinhold; Opherk, Christian; Dichgans, Martin; Linn, Jennifer; Duering, Marco
2017-05-01
Cortical superficial siderosis (cSS) has emerged as a clinically relevant imaging feature of cerebral amyloid angiopathy (CAA). However, it remains unknown whether cSS is also present in nonamyloid-associated small vessel disease and whether patients with cSS differ in terms of other small vessel disease imaging features. Three hundred sixty-four CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy) patients, 372 population-based controls, and 100 CAA patients with cSS (fulfilling the modified Boston criteria for possible/probable CAA) were included. cSS and cerebral microbleeds were visually rated on T2*-weighted magnetic resonance imaging. White matter hyperintensities were segmented on fluid-attenuated inversion recovery images, and their spatial distribution was compared between groups using colocalization analysis. Cerebral microbleed location was determined in an observer-independent way using an atlas in standard space. cSS was absent in CADASIL and present in only 2 population-based controls (0.5%). Cerebral microbleeds were present in 64% of CAA patients with cSS, 34% of patients with CADASIL, and 12% of population-based controls. Among patients with cerebral microbleeds, lobar location was found in 95% of CAA patients with cSS, 48% of CADASIL patients, and 69% of population-based controls. The spatial distribution of white matter hyperintensities was comparable between CAA with cSS and CADASIL, as indicated by high colocalization coefficients. cSS was absent in CADASIL, whereas other small vessel disease imaging features were similar to those of CAA patients with cSS. Our findings suggest that cSS in combination with other small vessel disease imaging markers is highly indicative of CAA. © 2017 American Heart Association, Inc.
Computer-aided-diagnosis (CAD) for colposcopy
NASA Astrophysics Data System (ADS)
Lange, Holger; Ferris, Daron G.
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method whereby a physician (colposcopist) visually inspects the lower genital tract (cervix, vulva, and vagina), with special emphasis on the subjective appearance of the metaplastic epithelium comprising the transformation zone on the cervix. Cervical cancer precursor lesions and invasive cancer exhibit certain distinctly abnormal morphologic features. Lesion characteristics such as margin; color or opacity; blood vessel caliber, intercapillary spacing and distribution; and contour are considered by colposcopists to derive a clinical diagnosis. Clinicians and academia have suggested, and shown proof of concept, that automated image analysis of cervical imagery can be used for cervical cancer screening and diagnosis, with the potential to directly improve women's health care and reduce associated costs. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD. At the heart of ColpoCAD is a complex multi-sensor, multi-data and multi-feature image analysis system. A functional description is presented of the envisioned ColpoCAD system, broken down into: Modality Data Management System, Image Enhancement, Feature Extraction, Reference Database, and Diagnosis and directed Biopsies. The system design and development process of the image analysis system is outlined. The system design provides a modular and open architecture built on feature-based processing. The core feature set includes the visual features used by colposcopists. This feature set can be extended to include new features introduced by new instrument technologies, like fluorescence and impedance, and any other plausible feature that can be extracted from the cervical data. Preliminary results of our research on detecting the three most important features (blood vessel structures, acetowhite regions, and lesion margins) are shown.
As this is a new and very complex field in medical image processing, the hope is that this paper can provide a framework and basis to encourage and facilitate collaboration and discussion between industry, academia, and medical practitioners.
Ben Chaabane, Salim; Fnaiech, Farhat
2014-01-23
Color image segmentation has so far been applied in many areas; hence, recently many different techniques have been developed and proposed. In the medical imaging area, image segmentation can help doctors follow up a patient's disease from processed breast cancer images. The main objective of this work is to rebuild and also enhance each cell in the three component images provided by an input image. Indeed, starting from an initial segmentation obtained using statistical features and histogram threshold techniques, the resulting segmentation can accurately represent incomplete and clumped cells and enhance them. This provides real help to doctors, as the cells become clear and easy to count. A novel method for color edge extraction based on statistical features and automatic thresholding is presented. The traditional edge detector, based on the first- and second-order neighborhood describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Hence, color edges in an image are obtained by combining statistical features with automatic threshold techniques. Finally, on the obtained color edges with specific primitive color, a combination rule is used to integrate the edge results over the three color components. Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. A visual and a numerical assessment based on the probability of correct classification (PC), the probability of false classification (Pf), and the classification accuracy (Sens(%)) are presented and compared with existing techniques. The proposed method shows its superiority in detecting points that really belong to the cells, and in facilitating counting of the processed cells.
Computer simulations highlight that the proposed method substantially enhances the segmented image, with smaller error rates than other existing algorithms under the same settings (patterns and parameters). Moreover, it provides high classification accuracy, reaching 97.94%. The segmentation method may also be extended to other medical imaging types with similar properties.
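A minimal sketch of the general idea, per-channel edges from neighborhood differences, an automatic statistics-based threshold, and a combination rule over the three color components, might look like this (a simplified stand-in, not the authors' exact detector):

```python
import numpy as np

def channel_edges(chan):
    """Edge map of one color channel: first-order neighborhood gradient
    plus an automatic threshold derived from the gradient statistics."""
    gx = np.abs(np.diff(chan, axis=1, prepend=chan[:, :1]))
    gy = np.abs(np.diff(chan, axis=0, prepend=chan[:1, :]))
    grad = gx + gy
    thresh = grad.mean() + grad.std()   # automatic, statistics-based
    return grad > thresh

def color_edges(rgb):
    """Combination rule: a pixel is an edge if any channel flags it."""
    return np.logical_or.reduce([channel_edges(rgb[..., c]) for c in range(3)])
```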
Fetal anterior abdominal wall defects: prenatal imaging by magnetic resonance imaging.
Victoria, Teresa; Andronikou, Savvas; Bowen, Diana; Laje, Pablo; Weiss, Dana A; Johnson, Ann M; Peranteau, William H; Canning, Douglas A; Adzick, N Scott
2018-04-01
Abdominal wall defects range from the mild umbilical cord hernia to the highly complex limb-body wall syndrome. The most common defects are gastroschisis and omphalocele, and the rarer ones include the exstrophy complex, pentalogy of Cantrell, and limb-body wall syndrome. Although all share the common feature of viscera herniating through a defect in the anterior body wall, their imaging features and, more importantly, their postnatal management differ widely. Correct diagnosis of each entity is imperative in order to achieve appropriate and accurate prenatal counseling and postnatal management. In this paper, we discuss fetal abdominal wall defects and present diagnostic pearls to aid with diagnosis.
Hyperspectral imaging with wavelet transform for classification of colon tissue biopsy samples
NASA Astrophysics Data System (ADS)
Masood, Khalid
2008-08-01
Automatic classification of medical images is part of our computerised medical imaging programme to support pathologists in their diagnosis. Hyperspectral data has found applications in medical imagery, and its usage in the biopsy analysis of medical images is increasing significantly. In this paper, we present a histopathological analysis for the classification of colon biopsy samples into benign and malignant classes. The proposed study compares 3D spectral/spatial analysis with 2D spatial analysis. Textural features computed in the wavelet domain are used in both approaches for the classification of colon biopsy samples. Experimental results indicate that incorporating wavelet textural features with a support vector machine, in the 2D spatial analysis, achieves the best classification accuracy.
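Wavelet-domain textural features of this kind can be illustrated with a hand-rolled Haar transform (a generic sketch; the paper's exact wavelet and feature set are not specified):

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, (lh, hl, hh)

def wavelet_energy_features(img, levels=2):
    """Energy of each detail band at each level: a common textural
    descriptor that could feed a classifier such as an SVM."""
    feats, approx = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        approx, details = haar_level(approx)
        feats += [float((band ** 2).mean()) for band in details]
    return np.asarray(feats)
```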
Visual feature extraction from voxel-weighted averaging of stimulus images in 2 fMRI studies.
Hart, Corey B; Rose, William J
2013-11-01
Multiple studies have provided evidence for distributed object representation in the brain, with several recent experiments leveraging basis function estimates for partial image reconstruction from fMRI data. Using a novel combination of statistical decomposition, generalized linear models, and stimulus averaging on previously examined image sets and Bayesian regression of recorded fMRI activity during presentation of these data sets, we identify a subset of relevant voxels that appear to code for covarying object features. Using a technique we term "voxel-weighted averaging," we isolate image filters that these voxels appear to implement. The results, though very cursory, appear to have significant implications for hierarchical and deep-learning-type approaches toward the understanding of neural coding and representation.
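The "voxel-weighted averaging" idea, estimating the filter a voxel implements by averaging the stimulus images weighted by the voxel's responses to them, reduces to a weighted sum; a minimal sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

def voxel_weighted_average(stimulus_images, voxel_responses):
    """Estimate the image filter a voxel appears to implement by
    averaging the stimulus images, each weighted by that voxel's
    response to it."""
    w = np.asarray(voxel_responses, dtype=float)
    imgs = np.asarray(stimulus_images, dtype=float)   # (n, h, w)
    return np.tensordot(w, imgs, axes=1) / np.abs(w).sum()
```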
Cranio-orbital primary intraosseous haemangioma.
Gupta, T; Rose, G E; Manisali, M; Minhas, P; Uddin, J M; Verity, D H
2013-11-01
Primary intraosseous haemangioma (IOH) is a rare benign neoplasm presenting in the fourth and fifth decades of life. The spine and skull are most commonly involved; orbital involvement is extremely rare. We describe six patients with cranio-orbital IOH, the largest case series to date, in a retrospective review of patients with histologically confirmed primary IOH involving the orbit. Clinical characteristics, imaging features, approach to management, and histopathological findings are described. Five patients were male, with a median age of 56 years. Pain and diplopia were the most common presenting features. A characteristic 'honeycomb' pattern on CT imaging was demonstrated in three cases. Complete surgical excision was performed in all cases, with presurgical embolisation carried out in one. In all cases, histological studies identified cavernous vascular spaces within the bony tissue, lined by a single layer of cytologically normal endothelial cells. IOH of the cranio-orbital region is rare; in the absence of typical imaging features, the differential diagnosis includes chondroma, chondrosarcoma, bony metastasis, and lymphoma. Surgical excision may be necessary to exclude more sinister pathology. Intraoperative haemorrhage can be severe and may be reduced by preoperative embolisation.
Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch
Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed
2012-01-01
Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision to automated ripeness classification of oil palm FFBs. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from the images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification was investigated using two methods: training with the full feature set, and training with a reduced feature set obtained by the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with the full feature set, the ANN trained with reduced features improves the classification accuracy by 1.66% and is more effective for developing an automated ripeness classifier for oil palm FFBs. The developed classifier can act as a sensor for determining the correct oil palm FFB ripeness category. PMID:23202043
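The PCA-based feature reduction step can be sketched with a plain SVD (generic PCA, not the authors' specific pipeline):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors (rows of X) onto the top principal
    components, as a stand-in for the PCA reduction applied before
    ANN training."""
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # scores in PC space
```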
Detection of hypertensive retinopathy using vessel measurements and textural features.
Agurto, Carla; Joshi, Vinayak; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
Features that indicate hypertensive retinopathy have been well described in the medical literature. This paper presents a new system to automatically classify subjects with hypertensive retinopathy (HR) using digital color fundus images. Our method consists of the following steps: 1) normalization and enhancement of the image; 2) determination of regions of interest based on automatic location of the optic disc; 3) segmentation of the retinal vasculature and measurement of vessel width and tortuosity; 4) extraction of color features; 5) classification of vessel segments as arteries or veins; 6) calculation of artery-vein ratios using the six widest (major) vessels for each category; 7) calculation of mean red intensity and saturation values for all arteries; 8) calculation of amplitude-modulation frequency-modulation (AM-FM) features for the entire image; and 9) classification of features into HR and non-HR using linear regression. This approach was tested on 74 digital color fundus photographs taken with TOPCON and CANON retinal cameras using leave-one-out cross-validation. An area under the ROC curve (AUC) of 0.84 was achieved, with sensitivity and specificity of 90% and 67%, respectively.
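Two of the vessel measurements in the pipeline, tortuosity (step 3) and the artery-vein ratio from the six widest vessels (step 6), have simple definitions that can be sketched directly (a simplification; the paper's width estimation is more involved):

```python
import numpy as np

def tortuosity(points):
    """Arc length over chord length of a vessel centreline (>= 1;
    1 means a perfectly straight vessel)."""
    pts = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

def artery_vein_ratio(artery_widths, vein_widths, k=6):
    """AVR from the k widest vessels of each class."""
    a = np.sort(np.asarray(artery_widths, dtype=float))[-k:]
    v = np.sort(np.asarray(vein_widths, dtype=float))[-k:]
    return a.mean() / v.mean()
```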
NASA Technical Reports Server (NTRS)
Barrett, Eamon B. (Editor); Pearson, James J. (Editor)
1989-01-01
Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.
Lu, Guolan; Wang, Dongsheng; Qin, Xulei; Halig, Luma; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Pogue, Brian W; Chen, Zhuo Georgia; Fei, Baowei
2015-01-01
Hyperspectral imaging (HSI) is an imaging modality that holds strong potential for rapid cancer detection during image-guided surgery. However, HSI data often need to be processed appropriately in order to extract the maximum useful information that differentiates cancer from normal tissue. We propose a framework for hyperspectral image processing and quantification comprising image preprocessing, glare removal, feature extraction, and image classification. The framework was tested on images from mice with head and neck cancer, using spectra from 450- to 900-nm wavelength. The image analysis computed Fourier coefficients, normalized reflectance, means, and spectral derivatives for improved accuracy. The experimental results demonstrated the feasibility of the framework for cancer detection during animal tumor surgery, in a challenging setting where sensitivity can be low due to the modest number of features present, but the potential for fast image classification is high. This HSI approach may have application in tumor margin assessment during image-guided surgery, where speed of assessment may be the dominant factor.
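Some of the per-pixel spectral features mentioned (normalized reflectance and spectral derivatives) can be sketched for a hyperspectral cube of shape height x width x bands (a generic sketch, not the authors' code):

```python
import numpy as np

def spectral_features(cube, wavelengths):
    """Per-pixel features from a hyperspectral cube (H x W x bands):
    reflectance normalized per spectrum, the mean reflectance, and the
    first spectral derivative with respect to wavelength."""
    norm = cube / (cube.sum(axis=2, keepdims=True) + 1e-12)
    mean = cube.mean(axis=2)
    deriv = np.gradient(cube, wavelengths, axis=2)
    return norm, mean, deriv
```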
ELM: super-resolution analysis of wide-field images of fluorescent shell structures.
Manton, James; Xiao, Yao; Turner, Robert; Christie, Graham; Rees, Eric
2018-05-04
It is often necessary to precisely quantify the size of specimens in biological studies. When measuring feature size in fluorescence microscopy, significant biases can arise from the blurring of a feature's edges if the feature is smaller than the diffraction limit of resolution. This problem is avoided if an equation describing the feature's entire image is fitted to its image data. In this paper we present open-source software, ELM, which uses this approach to measure the size of spheroidal or cylindrical fluorescent shells with a precision of around 10 nm. It has been used to measure coat protein locations in bacterial spores and cell wall diameter in vegetative bacilli, and may also be valuable in microbiological studies of algae, fungi, and viruses. ELM is available for download at https://github.com/quantitativeimaging/ELM (Creative Commons Attribution license).
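The fit-the-whole-image idea can be illustrated in 2D: generate a model image of a blurred ring and choose the radius that minimizes the squared difference with the data, so the size estimate is not biased by edge blurring (a toy sketch with an assumed Gaussian PSF, far simpler than ELM's shell models):

```python
import numpy as np

def ring_image(radius, sigma, size=64):
    """Model image of a thin fluorescent ring blurred by a Gaussian PSF."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - c, y - c)
    return np.exp(-((r - radius) ** 2) / (2.0 * sigma ** 2))

def fit_radius(data, sigma, candidates):
    """Fit the entire model image to the data and return the candidate
    radius with the smallest sum of squared residuals."""
    errs = [((ring_image(r, sigma, data.shape[0]) - data) ** 2).sum()
            for r in candidates]
    return candidates[int(np.argmin(errs))]
```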
Matching CCD images to a stellar catalog using locality-sensitive hashing
NASA Astrophysics Data System (ADS)
Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu
2018-02-01
Matching a subset of the stars observed in a CCD image to their counterparts in a stellar catalog is an important task in astronomical research. Subgraph-isomorphism-based algorithms are the most widely used methods in star catalog matching, and recognition of CCD images improves as more subgraph features are provided. However, when the navigation feature database is large, such methods require more time to match the observed model. To address this problem, this study improves on subgraph isomorphic matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates the quadrilateral models in the navigation feature database to different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model falls. Experimental results indicate the effectiveness of our method.
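The bucketing idea can be sketched as follows; the scale-invariant side-ratio code used as the hash key here is a hypothetical simplification of the paper's quadrilateral models:

```python
import numpy as np
from collections import defaultdict

def quad_code(quad, n_bins=16):
    """Hash a quadrilateral star pattern into a coarse bucket key:
    side-length ratios (scale-invariant) quantized into n_bins."""
    q = np.asarray(quad, dtype=float)
    d = np.linalg.norm(q - np.roll(q, -1, axis=0), axis=1)  # 4 side lengths
    ratios = np.sort(d / d.max())[:3]
    return tuple((ratios * n_bins).astype(int))

def build_index(catalog_quads):
    """Allocate every catalog quadrilateral to its hash bucket."""
    buckets = defaultdict(list)
    for qid, quad in catalog_quads.items():
        buckets[quad_code(quad)].append(qid)
    return buckets

def query(buckets, observed_quad):
    """Search only the bucket of the observed quad, not the whole database."""
    return buckets.get(quad_code(observed_quad), [])
```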
Thermography based diagnosis of ruptured anterior cruciate ligament (ACL) in canines
NASA Astrophysics Data System (ADS)
Lama, Norsang; Umbaugh, Scott E.; Mishra, Deependra; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
Anterior cruciate ligament (ACL) rupture in canines is a common orthopedic injury in veterinary medicine. Veterinarians use both imaging and non-imaging methods to diagnose the disease. Common imaging methods such as radiography, computed tomography (CT) and magnetic resonance imaging (MRI) have some disadvantages: expensive setup, high radiation dose, and long acquisition times. In this paper, we present an alternative diagnostic method based on feature extraction and pattern classification (FEPC) to diagnose abnormal patterns in ACL thermograms. The proposed method was evaluated on 30 thermograms for each camera view (anterior, lateral and posterior), including 14 diseased and 16 non-diseased cases provided by Long Island Veterinary Specialists. The normal and abnormal patterns in the thermograms are analyzed in two steps: feature extraction and pattern classification. Texture features based on gray-level co-occurrence matrices (GLCM), histogram features and spectral features are extracted from the color-normalized thermograms, and the computed feature vectors are applied to Nearest Neighbor (NN), K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers with the leave-one-out validation method. The algorithm gives its best classification success rate of 86.67%, with a sensitivity of 85.71% and a specificity of 87.5%, in ACL rupture detection using the NN classifier for the lateral view and the Norm-RGB-Lum color normalization method. Our results show that the proposed method has the potential to detect ACL rupture in canines.
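The GLCM texture stage can be illustrated with a numpy-only sketch; the function name, offset choice, and the two features shown are assumptions, and the study's histogram and spectral features are not reproduced here:

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix plus two classic Haralick-style
    features (contrast, energy). `img` must already be quantized to
    integer values in [0, levels). Illustrative sketch only."""
    img = np.asarray(img)
    h, w = img.shape
    glcm = np.zeros((levels, levels), dtype=float)
    # Count co-occurrences of gray levels at the (dy, dx) pixel offset
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                      # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = float(((i - j) ** 2 * glcm).sum())  # local intensity variation
    energy = float((glcm ** 2).sum())              # texture uniformity
    return glcm, contrast, energy
```

A perfectly uniform image has zero contrast and energy 1; rough thermogram textures shift probability mass off the GLCM diagonal, raising contrast.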
Radiomics-based features for pattern recognition of lung cancer histopathology and metastases.
Ferreira Junior, José Raniery; Koenigkam-Santos, Marcel; Cipriano, Federico Enrique Garcia; Fabro, Alexandre Todorovic; Azevedo-Marques, Paulo Mazzoncini de
2018-06-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its poor prognosis varies markedly according to tumor staging. Computed tomography (CT) is the imaging modality of choice for lung cancer evaluation, being used for diagnosis and clinical staging. Besides tumor stage, other features, like histopathological subtype, can also add prognostic information. In this work, radiomics-based CT features were used to predict lung cancer histopathology and metastases using machine learning models. Local image datasets of confirmed primary malignant pulmonary tumors were retrospectively evaluated for testing and validation. CT images acquired with the same protocol were semiautomatically segmented. Tumors were characterized by clinical features and computer attributes of intensity, histogram, texture, shape, and volume. Three machine learning classifiers used up to 100 selected features to perform the analysis. Radiomics-based features yielded areas under the receiver operating characteristic curve of 0.89, 0.97, and 0.92 at testing and 0.75, 0.71, and 0.81 at validation for lymph nodal metastasis, distant metastasis, and histopathology pattern recognition, respectively. The radiomics characterization approach presented great potential for use in a computational model to aid lung cancer histopathological subtype diagnosis as a "virtual biopsy" and metastatic prediction for therapy decision support, without the need for a whole-body imaging scan. Copyright © 2018 Elsevier B.V. All rights reserved.
Neuromuscular disease classification system
NASA Astrophysics Data System (ADS)
Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen
2013-06-01
Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies from fluorescence microscopy images of muscle biopsies is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, obtained by modeling the biopsy as a graph in which each fiber is a node and two nodes are connected if the corresponding fibers are adjacent. Feature selection using sequential forward selection and sequential backward selection methods, classification using a Fuzzy ARTMAP neural network, and a study of severity grading are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn class-specific feature vectors, as well as a representative dictionary, from a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. How to represent them explicitly therefore becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared dictionary and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, each class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.
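The low-rank terms in decomposition models of this kind are typically handled with singular-value thresholding, the proximal operator of the nuclear norm; this is a generic building block rather than the authors' full optimization:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: soft-threshold the singular values
    of M by tau. This is the standard proximal step for nuclear-norm
    (low-rank) regularization terms; illustrative only, not the paper's
    complete two-level algorithm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)     # shrink, zeroing small values
    return U @ np.diag(s_shrunk) @ Vt
```

Within an alternating scheme, repeatedly applying this operator drives the feature matrix toward a low-rank component while a companion shrinkage step isolates the sparse residual.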
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting tumours efficiently and accurately, with less computational time, from benign and malignant tumour images, especially for smaller tumour regions in computed tomography (CT) images. Region-based segmentation of tumours from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumours from CT images using combined grey and texture features, together with new edge features, and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from CT images, and the segmentation accuracy is evaluated for each slice of the tumour image. The method is applied to real data comprising 80 benign and malignant tumour images. The results are compared with radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure, the Dice metric. From this analysis it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized-cut segmentation method than with the fuzzy c-means clustering method.
NASA Astrophysics Data System (ADS)
Rightsell, Chris; Mimun, Lawrence C.; Kumar, Ajith G.; Sardar, Dhiraj K.
2015-03-01
Nanomaterials with multiple functionalities play a very important role in several high-technology applications. A major area of such applications is the biomedical industry, where contrast agents with multiple imaging modalities can provide better results than conventional materials. Many of the contrast agents available now have drawbacks such as toxicity, photobleaching, low contrast, size restrictions, and the overall cost of the imaging system. Rare-earth-doped inorganic nanophosphors are alternatives that circumvent several of these issues, with the added advantage of very high resolution imaging due to the excellent near-infrared sensitivity of the phosphors. In addition to optical imaging features, by adding a magnetic ion such as Gd3+ at suitable lattice positions, the phosphor can be made magnetic, yielding dual imaging functionalities. In this research, we present the optical and magnetic imaging features of sub-nanometer size MPO4:Nd3+ (M=Ca, Gd) phosphors for the potential application of these nanophosphors as multimodal contrast agents. Cytotoxicity, in vitro and in vivo imaging, penetration depth, etc., are studied for various phosphor compositions, and optimized compositions are explored. This research was funded by the National Science Foundation Partnerships for Research and Education in Materials (NSF-PREM) Grant No. DMR-0934218.
Texture analysis of high-resolution FLAIR images for TLE
NASA Astrophysics Data System (ADS)
Jafari-Khouzani, Kourosh; Soltanian-Zadeh, Hamid; Elisevich, Kost
2005-04-01
This paper presents a study of the texture information of high-resolution FLAIR images of the brain, with the aim of determining the abnormality, and consequently the candidacy, of the hippocampus for temporal lobe epilepsy (TLE) surgery. Intensity and volume features of the hippocampus from FLAIR images of the brain have previously been shown to be useful in detecting the abnormal hippocampus in TLE. However, the small size of the hippocampus may limit the texture information. High-resolution FLAIR images show more details of the abnormal intensity variations of the hippocampi and are therefore more suitable for texture analysis. We study and compare the low- and high-resolution FLAIR images of six epileptic patients. The hippocampi are segmented manually by an expert from T1-weighted MR images. The segmented regions are then mapped onto the corresponding FLAIR images for texture analysis. The 2-D wavelet transforms of the hippocampi are employed for feature extraction. We compare the ability of the texture features from regular and high-resolution FLAIR images to distinguish normal and abnormal hippocampi. Intracranial EEG results, as well as surgery outcome, are used as the gold standard. The results show that the intensity variations of the hippocampus are related to the abnormalities in TLE.
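The wavelet feature extraction can be sketched with a single-level 2-D Haar transform; the wavelet choice, normalization, and the subband-energy features are assumptions, and the study may use a different basis and deeper decompositions:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform, returning the
    approximation (LL) and three detail subbands (LH, HL, HH) whose
    statistics can serve as texture features. Requires even dimensions.
    Simplified sketch; not the study's exact transform."""
    img = np.asarray(img, dtype=float)
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def subband_energies(img):
    # Mean energy per subband: a simple texture feature vector
    return [float((b ** 2).mean()) for b in haar2d(img)]
```

A smooth region concentrates energy in LL, while abnormal intensity variations of the hippocampus push energy into the detail subbands.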
An Integrated Ransac and Graph Based Mismatch Elimination Approach for Wide-Baseline Image Matching
NASA Astrophysics Data System (ADS)
Hasheminasab, M.; Ebadi, H.; Sedaghat, A.
2015-12-01
In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, but wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local-feature-similarity principle use feature descriptors to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale, and remains robust across a substantial range of affine distortion, noise, and changes in illumination. The epipolar constraint based on the RANSAC (random sample consensus) method is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in matching results selected with epipolar geometry and RANSAC. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, but it has difficulties when mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, to remove the remaining mismatches, GTM is implemented based on the adjacent K-NN graph. Four different close-range image datasets with changes in viewpoint are used to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
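The consensus principle behind the RANSAC step can be shown on the simplest possible model, a 2-D line; the real pipeline fits epipolar geometry (a fundamental matrix) rather than a line, so this is only an illustration of hypothesize-and-verify outlier rejection:

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Minimal RANSAC: repeatedly fit a line to a random 2-point sample
    and keep the model with the most inliers. Illustrates the consensus
    idea used for epipolar-geometry-based mismatch elimination; the
    model, tolerance, and iteration count are toy assumptions."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2 and y1 == y2:
            continue                        # degenerate sample
        # Line through the sample as ax + by + c = 0, unit-normalized
        a, b = y2 - y1, x1 - x2
        norm = (a * a + b * b) ** 0.5
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # Points within `tol` of the line vote for this hypothesis
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Replacing the 2-point line sample with a minimal point set for the fundamental matrix, and point-to-line distance with distance to the epipolar line, gives the mismatch-elimination step described above.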
Acharya, U Rajendra; Sree, S Vinitha; Krishnan, M Muthu Rama; Molinari, Filippo; Zieleźnik, Witold; Bardales, Ricardo H; Witkowska, Agnieszka; Suri, Jasjit S
2014-02-01
Computer-aided diagnostic (CAD) techniques aid physicians in better diagnosis of diseases by extracting objective and accurate diagnostic information from medical data. Hashimoto thyroiditis is the most common type of inflammation of the thyroid gland. The inflammation changes the structure of the thyroid tissue, and these changes are reflected as echogenic changes on ultrasound images. In this work, we propose a novel CAD system (a class of systems called ThyroScan) that extracts textural features from a thyroid sonogram and uses them to aid in the detection of Hashimoto thyroiditis. In this paradigm, we extracted grayscale features based on stationary wavelet transform from 232 normal and 294 Hashimoto thyroiditis-affected thyroid ultrasound images obtained from a Polish population. Significant features were selected using a Student t test. The resulting feature vectors were used to build and evaluate the following 4 classifiers using a 10-fold stratified cross-validation technique: support vector machine, decision tree, fuzzy classifier, and K-nearest neighbor. Using 7 significant features that characterized the textural changes in the images, the fuzzy classifier had the highest classification accuracy of 84.6%, sensitivity of 82.8%, specificity of 87.0%, and a positive predictive value of 88.9%. The proposed ThyroScan CAD system uses novel features to noninvasively detect the presence of Hashimoto thyroiditis on ultrasound images. Compared to manual interpretations of ultrasound images, the CAD system offers a more objective interpretation of the nature of the thyroid. The preliminary results presented in this work indicate the possibility of using such a CAD system in a clinical setting after evaluating it with larger databases in multicenter clinical trials.
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
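The core idea of scoring a test sample by its deviation from a PCA model of normative data can be sketched as follows; this is a single-subspace simplification of the paper's iterative multi-subspace scheme, and the names and reconstruction-error score are assumptions:

```python
import numpy as np

def pca_abnormality(train, test, k=2):
    """Fit a k-dimensional PCA model to normative samples (rows of
    `train`) and score a test sample by its reconstruction error:
    large error = large deviation from the normative variation.
    Simplified stand-in for the paper's iterative subspace sampling."""
    mu = train.mean(axis=0)
    X = train - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                          # top-k principal directions
    z = (test - mu) @ basis.T               # project onto the model
    recon = mu + z @ basis                  # reconstruct from the model
    return float(np.linalg.norm(test - recon))
```

Samples consistent with the normative variation reconstruct almost exactly; abnormalities leave a large residual, which the paper accumulates across many lower-dimensional subspaces rather than a single one.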
Grouping of optic flow stimuli during binocular rivalry is driven by monocular information.
Holten, Vivian; Stuit, Sjoerd M; Verstraten, Frans A J; van der Smagt, Maarten J
2016-10-01
During binocular rivalry, perception alternates between two dissimilar images presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images, primarily affected by its eye-of-origin. Furthermore, global motion can affect grouping durations, but only under specific conditions, namely when the two full optic flow patterns were presented locally. These results suggest that grouping during rivalry is primarily driven by monocular information, even for motion stimuli thought to rely on higher-level motion areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cassini UVIS Observations of Saturn during the Grand Finale Orbits
NASA Astrophysics Data System (ADS)
Pryor, W. R.; Esposito, L. W.; West, R. A.; Jouchoux, A.; Radioti, A.; Grodent, D. C.; Gerard, J. C. M. C.; Gustin, J.; Lamy, L.; Badman, S. V.
2017-12-01
In 2016 and 2017, the Cassini Saturn orbiter executed a final series of high-inclination, low-periapsis orbits ideal for studies of Saturn's polar regions. The Cassini Ultraviolet Imaging Spectrograph (UVIS) obtained an extensive set of auroral images, some at the highest spatial resolution obtained during Cassini's long orbital mission (2004-2017). In some cases, two or three spacecraft slews at right angles to the long slit of the spectrograph were required to cover the entire auroral region and form auroral images. We will present selected images from this set showing narrow arcs of emission, more diffuse auroral emissions, multiple auroral arcs in a single image, discrete spots of emission, small-scale vortices, large-scale spiral forms, and parallel linear features that appear to cross in places like twisted wires. Some shorter features are transverse to the main auroral arcs, like barbs on a wire. UVIS observations were in some cases simultaneous with auroral observations from the Hubble Space Telescope's Space Telescope Imaging Spectrograph (STIS) that will also be presented. UVIS polar images also contain spectral information suitable for studies of the auroral electron energy distribution. The long-wavelength part of the UVIS polar images contains a signal from reflected sunlight with absorption signatures of acetylene and other Saturn hydrocarbons. The hydrocarbon spatial distribution will also be examined.
Facebook photo activity associated with body image disturbance in adolescent girls.
Meier, Evelyn P; Gray, James
2014-04-01
The present study examined the relationship between body image and adolescent girls' activity on the social networking site (SNS) Facebook (FB). Research has shown that elevated Internet "appearance exposure" is positively correlated with increased body image disturbance among adolescent girls, and there is a particularly strong association with FB use. The present study sought to replicate and extend upon these findings by identifying the specific FB features that correlate with body image disturbance in adolescent girls. A total of 103 middle and high school females completed questionnaire measures of total FB use, specific FB feature use, weight dissatisfaction, drive for thinness, thin ideal internalization, appearance comparison, and self-objectification. An appearance exposure score was calculated based on subjects' use of FB photo applications relative to total FB use. Elevated appearance exposure, but not overall FB usage, was significantly correlated with weight dissatisfaction, drive for thinness, thin ideal internalization, and self-objectification. Implications for eating disorder prevention programs and best practices in researching SNSs are discussed.
Passive range estimation for rotorcraft low-altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, B.; Suorsa, R.; Hussien, B.
1991-01-01
The automation of rotorcraft low-altitude flight presents challenging problems in control, computer vision and image understanding. A critical element in this problem is the ability to detect and locate obstacles, using on-board sensors, and modify the nominal trajectory. This capability is also necessary for the safe landing of an autonomous lander on Mars. This paper examines some of the issues in locating objects using a sequence of images from a passive sensor, and describes a Kalman filter approach to estimating the range to obstacles. The Kalman filter is also used to track features in the images, leading to a significant reduction of search effort in the feature extraction step of the algorithm. The method can compute range for both straight-line and curvilinear motion of the sensor. A laboratory experiment was designed to acquire a sequence of images along with sensor motion parameters under conditions similar to helicopter flight. Range estimation results using this imagery are presented.
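The recursive estimation idea can be shown with a scalar Kalman filter smoothing noisy measurements toward a constant true value; the state and measurement models here are toy assumptions, far simpler than the paper's range-estimation dynamics:

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state observed with
    noise. q: process noise, r: measurement noise, (x0, p0): prior.
    A toy stand-in for the paper's range estimator, whose state and
    measurement models depend on sensor motion."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                   # predict: state persists, uncertainty grows
        k = p / (p + r)          # Kalman gain: trust in the new measurement
        x += k * (z - x)         # update toward the measurement residual
        p *= (1 - k)             # shrink posterior uncertainty
        estimates.append(x)
    return estimates
```

As measurements accumulate the gain decays, so the estimate settles near the true range even from a poor prior; the same recursion, with a motion model in the predict step, also tracks feature positions to shrink the search window between frames.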
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.
Hepatitis Diagnosis Using Facial Color Image
NASA Astrophysics Data System (ADS)
Liu, Mingjia; Guo, Zhenhua
Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent these subjective and qualitative problems, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. A KNN classifier is then employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups, healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with an accuracy higher than 73%.
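The classifier stage can be sketched generically; the feature extraction from facial images is not shown, and the function name and majority-vote scheme are assumptions:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """k-nearest-neighbor vote on quantitative color feature vectors.
    Generic sketch of the KNN stage; the paper's distance measure and
    choice of k are not specified here and are assumptions."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training samples
    nearest = np.argsort(d)[:k]               # indices of the k closest
    votes = y_train[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]          # majority class among neighbors
```

Each face is reduced to a color feature vector, and a new patient is assigned the group (healthy, hepatitis with jaundice, hepatitis without jaundice) most common among its nearest training samples.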
NASA Technical Reports Server (NTRS)
Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.
1984-01-01
The problem of displaying information on the seafloor morphology is attacked by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
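Organizing a collection into a pseudo-video can be approximated with a greedy nearest-neighbor chain in feature space; this is a simplification of the paper's global prediction-cost minimization, and the starting image and distance measure are assumptions:

```python
import numpy as np

def greedy_order(features):
    """Greedily chain images into a pseudo-video: start from image 0
    and repeatedly append the unvisited image closest in feature space,
    so consecutive frames are similar and inter-image prediction is
    cheap. Simplified sketch of the collection-ordering step."""
    n = len(features)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = features[order[-1]]
        nxt = min(remaining,
                  key=lambda i: np.linalg.norm(features[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Once similar images are adjacent, video-style disparity compensation between neighbors removes most of the inter-image redundancy before the frequency-domain stage.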
Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification
Pham, Tuan D.
2014-01-01
The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744
Automated Analysis of Fluorescence Microscopy Images to Identify Protein-Protein Interactions
Venkatraman, S.; Doktycz, M. J.; Qi, H.; ...
2006-01-01
The identification of protein interactions is important for elucidating biological networks. One obstacle in comprehensive interaction studies is the analyses of large datasets, particularly those containing images. Development of an automated system to analyze an image-based protein interaction dataset is needed. Such an analysis system is described here, to automatically extract features from fluorescence microscopy images obtained from a bacterial protein interaction assay. These features are used to relay quantitative values that aid in the automated scoring of positive interactions. Experimental observations indicate that identifying at least 50% positive cells in an image is sufficient to detect a protein interaction. Based on this criterion, the automated system presents 100% accuracy in detecting positive interactions for a dataset of 16 images. Algorithms were implemented using MATLAB and the software developed is available on request from the authors.
Automatic crack detection and classification method for subway tunnel safety monitoring.
Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun
2014-10-16
Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring of Beijing Subway Line 1. The experimental results established rules for parameter settings and proved that the proposed approach is effective and efficient for automatic crack detection and classification.
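One plausible reading of a distance-histogram shape descriptor is a normalized histogram of foreground-pixel distances to the region centroid, which separates elongated cracks from compact blobs; the exact distance definition and binning used in the paper are assumptions here:

```python
import numpy as np

def distance_histogram(mask, bins=8):
    """Distance-histogram shape descriptor: normalized histogram of each
    foreground pixel's distance to the region centroid. Elongated
    crack-like regions spread mass over many bins; compact blobs
    concentrate it. Hypothetical sketch of the paper's descriptor."""
    ys, xs = np.nonzero(mask)               # foreground pixel coordinates
    pts = np.stack([ys, xs], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0, d.max() + 1e-9))
    return hist / hist.sum()                # normalize to a distribution
```

Fed to a classifier alongside other features, such a descriptor helps reject blob-like stains and fixtures that share a crack's darkness but not its shape.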
Automated Coronal Loop Identification Using Digital Image Processing Techniques
NASA Technical Reports Server (NTRS)
Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.
2003-01-01
The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses a pattern recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized coronal loop images to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
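Of the three techniques listed, the Hough transform is the most compact to demonstrate: each edge point votes for every (rho, theta) line it could lie on, and collinear structures show up as peaks in the accumulator. A minimal sketch for straight lines (curved loop detection would use a richer parameter space):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Vote in (rho, theta) space; a peak marks a dominant line.

    points: iterable of (y, x) edge coordinates.
    Returns the accumulator, the theta samples, and the rho offset.
    """
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))            # max |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta
    return acc, thetas, diag

# 20 points on the horizontal line y = 7 of a 20x20 image.
pts = [(7, x) for x in range(20)]
acc, thetas, diag = hough_lines(pts, (20, 20))
r, t = np.unravel_index(acc.argmax(), acc.shape)
# Peak at theta ~ 90 degrees, rho = 7: the line y = 7.
```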
Mitochondrial Encephalomyopathy With Lactic Acidosis and Stroke-Like Episodes—MELAS Syndrome
Henry, Caitlin; Patel, Neema; Shaffer, William; Murphy, Lillian; Park, Joe
2017-01-01
Background: Mitochondrial encephalomyopathy with lactic acidosis and stroke-like episodes (MELAS) syndrome is a rare inherited disorder that results in waxing and waning nervous system and muscle dysfunction. MELAS syndrome may overlap with other neurologic disorders but shows distinctive imaging features. Case Report: We present the case of a 28-year-old female with atypical stroke-like symptoms, a strong family history of stroke-like symptoms, and a relapsing-remitting course for several years. We discuss the imaging features distinctive to the case, the mechanism of the disease, typical presentation, imaging diagnosis, and disease management. Conclusion: This case is a classic example of the relapsing-remitting MELAS syndrome progression with episodic clinical flares and fluctuating patterns of stroke-like lesions on imaging. MELAS is an important diagnostic consideration when neuroimaging reveals a pattern of disappearing and relapsing cortical brain lesions that may occur in different areas of the brain and are not necessarily limited to discrete vascular territories. Future studies should investigate disease mechanisms at the cellular level and the value of advanced magnetic resonance imaging techniques for a targeted approach to therapy. PMID:29026367
Content-based image retrieval by matching hierarchical attributed region adjacency graphs
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.
2004-05-01
Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available retrieval systems of general purpose, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context, or that they are only suitable for specific applications. For a general-purpose content-based comparison of medical images, local, i.e., regional, features collected at multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible, while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material built from simple geometric objects, all visually similar regions were matched accordingly, showing the framework's general applicability to content-based image retrieval of medical images.
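The core data structure here, a region adjacency graph, is built by scanning a segmented label image and recording which regions touch. A minimal sketch (plain adjacency edges only; a HARAG additionally attaches attributes and a scale hierarchy to each node):

```python
def region_adjacency(labels):
    """Build the set of adjacency edges from a 2-D array of region labels.

    Two regions are adjacent if any of their pixels are 4-neighbours.
    Each edge is an unordered pair of region labels.
    """
    h, w = len(labels), len(labels[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):   # 4-neighbourhood
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    edges.add(frozenset((labels[y][x], labels[ny][nx])))
    return edges

# Three regions: 0 (top-left), 1 (right column), 2 (bottom-left).
segmentation = [
    [0, 0, 1],
    [0, 2, 1],
    [2, 2, 1],
]
graph = region_adjacency(segmentation)
```

On this toy segmentation every pair of regions touches, so the graph is the complete triangle {0,1}, {0,2}, {1,2}.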
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
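The robustness to albedo claimed above comes from a simple algebraic fact: if each pixel is (approximately) albedo times shading, the albedo cancels in a pixel-wise ratio of two images of the same subject. A toy numerical sketch of that cancellation (illustrative only; the paper's image ratio features are defined more carefully than this):

```python
import numpy as np

def image_ratio(expr, neutral, eps=1e-6):
    """Pixel-wise ratio of an expression image to a neutral image.

    A multiplicative albedo term appears in both images, so it cancels;
    eps guards against division by zero."""
    return expr / (neutral + eps)

rng = np.random.default_rng(0)
shading_neutral = np.ones((8, 8))
shading_smile = np.ones((8, 8))
shading_smile[4:, :] = 1.5                 # deformation brightens the lower face

albedo_a = rng.uniform(0.2, 1.0, (8, 8))   # subject A's skin reflectance
albedo_b = rng.uniform(0.2, 1.0, (8, 8))   # subject B's skin reflectance

ratio_a = image_ratio(albedo_a * shading_smile, albedo_a * shading_neutral)
ratio_b = image_ratio(albedo_b * shading_smile, albedo_b * shading_neutral)
# Both ratios recover the shading change, independent of albedo.
```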
Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Smirnova, Z. N.
2015-05-01
Human emotion identification from image sequences is in high demand nowadays. Possible applications range from an automatic smile-shutter function in consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main difficulty arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique provides robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
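The LBP features mentioned above assign each pixel an 8-bit code describing which of its eight neighbours are at least as bright as it is; histograms of these codes make a cheap, illumination-robust texture descriptor. A minimal sketch of the basic (non-uniform, radius-1) operator:

```python
import numpy as np

def lbp_codes(gray):
    """8-bit LBP code for each interior pixel: bit i is set when the i-th
    neighbour (clockwise from top-left) is >= the centre pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= center) << bit
    return codes

flat = np.full((5, 5), 7)          # flat patch: every neighbour ties -> 255
peak = np.zeros((3, 3), dtype=int)
peak[1, 1] = 5                     # local maximum: no neighbour >= centre -> 0
```

A histogram of the codes over a face patch (e.g. `np.bincount(codes.ravel(), minlength=256)`) is the feature vector typically fed to the tracker or classifier.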
Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.
2015-01-01
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069
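The auto-encoder idea above is unsupervised: learn an encoder that maps a patch to a compact code such that a decoder can reconstruct the patch, so the code captures the patch's structure without labels. A deliberately tiny linear, tied-weight sketch (the paper uses convolutional stacked auto-encoders; this only illustrates the reconstruction objective):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 16))             # 64 image patches, 16 "pixels" each
W = rng.normal(scale=0.1, size=(16, 4))   # encoder: 16-D patch -> 4-D code

def loss(W):
    """Mean squared reconstruction error with tied weights (decoder = W.T)."""
    return ((X @ W @ W.T - X) ** 2).mean()

before = loss(W)
lr = 0.01
for _ in range(200):
    E = X @ W @ W.T - X                          # reconstruction error
    grad = 2 * (X.T @ E @ W + E.T @ X @ W) / X.size
    W -= lr * grad                               # gradient descent step
after = loss(W)
# Training drives the 4-D code toward the patches' principal subspace,
# so reconstruction error drops without any labels.
```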
Alexander, Nathan S; Palczewska, Grazyna; Palczewski, Krzysztof
2015-08-01
Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE.
Lempicki, Marta; Rothenbuhler, Anya; Merzoug, Valérie; Franchi-Abella, Stéphanie; Chaussain, Catherine; Adamsbaum, Catherine; Linglart, Agnès
2017-01-01
X-linked hypophosphatemic rickets (XLH) is the most common form of inheritable rickets. Rickets treatment is monitored by assessing alkaline phosphatase (ALP) levels, clinical features, and radiographs. Our objectives were to describe the magnetic resonance imaging (MRI) features of XLH and to assess correlations with disease activity. Twenty-seven XLH patients (median age 9.2 years) were included in this prospective single-center observational study. XLH activity was assessed using height, leg bowing, dental abscess history, and serum ALP levels. We looked for correlations between MRI features and markers of disease activity. On MRI, the median maximum width of the physis was 5.6 mm (range 4.8-7.8; normal <1.5), being >1.5 mm in all of the patients. The appearance of the zone of provisional calcification was abnormal on 21 MRI images (78%), Harris lines were present on 24 (89%), and bone marrow signal abnormalities were present on 16 (59%). ALP levels correlated with the maximum physeal widening and with the transverse extent of the widening. MRI of the knee provides precise rickets patterns that are correlated with ALP, an established biochemical marker of the disease, avoiding X-ray exposure and providing surrogate quantitative markers of disease activity. © 2017 S. Karger AG, Basel.
Intracranial solitary fibrous tumor: imaging findings.
Clarençon, Frédéric; Bonneville, Fabrice; Rousseau, Audrey; Galanaud, Damien; Kujas, Michèle; Naggara, Olivier; Cornu, Philippe; Chiras, Jacques
2011-11-01
To study the neuroimaging features of intracranial solitary fibrous tumors (ISFTs). Retrospective study of neuroimaging features of 9 consecutive histopathologically proven ISFT cases. Location, size, shape, density, signal intensity and gadolinium uptake were studied at CT and MRI. Data collected from diffusion-weighted imaging (DWI) (3 patients), perfusion imaging and MR spectroscopy (2 patients), and DSA (4 patients) were also analyzed. The tumors most frequently arose from the intracranial meninges (7/9), while the other lesions were intraventricular. Tumor size ranged from 2.5 to 10 cm (mean=6.6 cm). They presented multilobular shape in 6/9 patients. Most ISFTs were heterogeneous (7/9) with areas of low T2 signal intensity that strongly enhanced after gadolinium administration (6/8). Erosion of the skull was present in about half of the cases (4/9). Components with decreased apparent diffusion coefficient were seen in 2/3 ISFTs on DWI. Spectroscopy revealed elevated peaks of choline and myo-inositol. MR perfusion showed features of hyperperfusion. ISFT should be considered in cases of extra-axial, supratentorial, heterogeneous, hypervascular tumor. Areas of low T2 signal intensity that strongly enhance after gadolinium injection are suggestive of this diagnosis. Restricted diffusion and elevated peak of myo-inositol may be additional valuable features. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Effective method for detecting regions of given colors and the features of the region surfaces
NASA Astrophysics Data System (ADS)
Gong, Yihong; Zhang, HongJiang
1994-03-01
Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature of objects, such as packaged goods, advertising signs, etc. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by content). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlight, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates in the method because of their ability to completely separate the luminance and chromatic components of colors. Three basis functions, functionally serving as low-pass, high-pass, and band-pass filters, respectively, are introduced.
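The key move, separating luminance from chroma so shading does not break a color match, can be illustrated with the standard-library HSV conversion as a stand-in for the paper's HVC coordinates (an assumption; HVC is a Munsell-based space, but the separation principle is the same):

```python
import colorsys

def hue_distance(h1, h2):
    """Hue is circular in [0, 1), so wrap the difference around 1.0."""
    d = abs(h1 - h2)
    return min(d, 1.0 - d)

def color_region_mask(rgb_rows, target_hue, tol=0.05):
    """Mark pixels whose hue matches the target; the value (luminance)
    channel is ignored, so shaded and highlighted parts of the same
    surface still match."""
    return [[hue_distance(colorsys.rgb_to_hsv(*px)[0], target_hue) <= tol
             for px in row]
            for row in rgb_rows]

# One row of pixels: bright red, shaded red, green (RGB components in [0, 1]).
row = [(1.0, 0.1, 0.1), (0.4, 0.04, 0.04), (0.1, 0.9, 0.1)]
mask = color_region_mask([row], target_hue=0.0)
# Both reds match the target hue despite very different brightness.
```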
Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains
Souza, Junior Silva; da Silva, Gercina Gonçalves
2016-01-01
The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology, and melissopalynology. This paper presents the first annotated image dataset for Brazilian Savannah pollen types that can be used to train and test computer-vision-based automatic pollen classifiers. A first baseline of human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. To assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned, and tested. The results of these tests are also presented in this paper. PMID:27276196
Recognition and classification of colon cells applying the ensemble of classifiers.
Kruk, M; Osowski, S; Koktysz, R
2009-02-01
The paper presents the application of an ensemble of classifiers for the recognition of colon cells on the basis of microscope colon images. The tasks solved include: segmentation of the individual cells from the image using morphological operations, the preprocessing stages leading to the extraction of features, selection of the most important features, and the classification stage applying classifiers arranged in the form of an ensemble. The paper presents and discusses the results concerning the recognition of the four most important colon cell types: eosinophilic granulocyte, neutrophilic granulocyte, lymphocyte, and plasmocyte. The proposed system is able to recognize the cells with accuracy comparable to that of a human expert (around 5% discrepancy between the two results).
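The simplest way to fuse an ensemble's decisions is per-sample majority voting over the member classifiers' labels. A minimal sketch (the paper does not specify its fusion rule, so this is the generic technique, with illustrative data):

```python
from collections import Counter

def majority_vote(per_classifier_preds):
    """Fuse an ensemble's label predictions by majority vote per sample.

    per_classifier_preds: one list of predicted labels per classifier,
    all of the same length. Ties go to the earliest-listed classifier.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_preds)]

# Three classifiers, four cell images; labels are cell-type ids.
preds = [
    [0, 1, 2, 3],   # classifier A
    [0, 1, 2, 0],   # classifier B
    [1, 1, 3, 3],   # classifier C
]
fused = majority_vote(preds)
```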
Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat
2017-09-01
Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The convolution process produces textons, the basic microstructures of typical images. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.
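Sequential floating forward search extends plain greedy forward selection with a backward "floating" step that can drop previously added features. A sketch of the greedy core only (the floating step is omitted for brevity; the scoring function and feature names are illustrative, not from the paper):

```python
def forward_select(features, score, k):
    """Greedy sequential forward selection: repeatedly add the feature
    that most improves the subset score, stopping at k features or when
    no candidate helps. SFFS would additionally try removing features
    after each addition."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break                      # no candidate improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: two features are informative, every extra feature costs 0.1.
informative = {"lcp_4", "lcp_9"}
score = lambda subset: len(informative & set(subset)) - 0.1 * len(subset)
chosen = forward_select(["lcp_1", "lcp_4", "lcp_7", "lcp_9"], score, k=3)
```

In a real system `score` would be cross-validated classifier accuracy on the candidate subset, which is what makes this a wrapper method.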
Relative Pose Estimation Using Image Feature Triplets
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Rottensteiner, F.; Heipke, C.
2015-03-01
A fully automated reconstruction of the trajectory of image sequences using point correspondences is turning into routine practice. However, there are cases in which point features are hardly detectable, cannot be localized in a stable distribution, and consequently lead to an insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of feature integration for relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one, and then quantifying the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. In order to also work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
Feature Tracking for High Speed AFM Imaging of Biopolymers.
Hartman, Brett; Andersson, Sean B
2018-03-31
The scanning speed of atomic force microscopes continues to advance with some current commercial microscopes achieving on the order of one frame per second and at least one reaching 10 frames per second. Despite the success of these instruments, even higher frame rates are needed with scan ranges larger than are currently achievable. Moreover, there is a significant installed base of slower instruments that would benefit from algorithmic approaches to increasing their frame rate without requiring significant hardware modifications. In this paper, we present an experimental demonstration of high speed scanning on an existing, non-high speed instrument, through the use of a feedback-based, feature-tracking algorithm that reduces imaging time by focusing on features of interest to reduce the total imaging area. Experiments on both circular and square gratings, as well as silicon steps and DNA strands show a reduction in imaging time by a factor of 3-12 over raster scanning, depending on the parameters chosen.
Image steganalysis using Artificial Bee Colony algorithm
NASA Astrophysics Data System (ADS)
Sajedi, Hedieh
2017-09-01
Steganography is the science of secure communication in which the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed that can reduce the execution time and computational resources. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for good food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are jointly optimized using a wrapper-based approach. The experiments are performed using two large datasets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
Xu, Jun; Luo, Xiaofei; Wang, Guanhao; Gilmore, Hannah; Madabhushi, Anant
2016-01-01
Epithelium (EP) and stroma (ST) are two types of tissue in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), in classifying the two regions. Compared to handcrafted-feature-based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that may be directly learned from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN-based models with three handcrafted-feature-extraction-based approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemical (IHC) stained images of colorectal cancer, respectively. The DCNN-based feature learning approach was shown to have an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained (NKI and VGH) datasets and the IHC stained data, respectively. Our DCNN-based approach was shown to outperform the three handcrafted-feature-extraction-based approaches in terms of the classification of EP and ST regions. PMID:28154470
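The three metrics reported above (F1, accuracy, MCC) are all functions of the binary confusion matrix. A minimal sketch of how they are computed, with illustrative labels:

```python
def f1_acc_mcc(y_true, y_pred):
    """F1, accuracy, and Matthews correlation coefficient for binary labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    acc = (tp + tn) / len(pairs)
    f1 = 2 * tp / (2 * tp + fp + fn)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, acc, mcc

# Toy evaluation: 1 = epithelium, 0 = stroma.
f1, acc, mcc = f1_acc_mcc([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

Unlike accuracy, MCC stays informative under class imbalance, which is one reason it is reported alongside F1 and ACC.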
NASA Astrophysics Data System (ADS)
Orton, Glenn; Hansen, Candice; Momary, Thomas; Caplinger, Michael; Ravine, Michael; Atreya, Sushil; Ingersoll, Andrew; Bolton, Scott; Rogers, John; Eichstaedt, Gerald
2017-04-01
Juno's visible imager, JunoCam, is a wide-angle camera (58° field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm, designed for optimal imaging of Jupiter's poles. Juno's elliptical polar orbit offers unique views of Jupiter's polar regions with spatial scales as good as 50 km/pixel. At closest approach ("perijove") the images have spatial scale down to ˜3 km/pixel. As a push-frame imager on a rotating spacecraft, JunoCam uses time-delayed integration to take advantage of the spacecraft spin to extend integration time to increase signal. Images of Jupiter's poles reveal a largely uncharted region of Jupiter, as nearly all earlier spacecraft except Pioneer 11 have orbited or flown by close to the equatorial plane. Poleward of 64-68° planetocentric latitude, Jupiter's familiar east-west banded structure breaks down. Several types of discrete features appear on a darker, bluish-cast background. Clusters of circular cyclonic spirals are found immediately around the north and south poles. Oval-shaped features are also present, ranging in size down to JunoCam's resolution limits. The largest and brightest features usually have chaotic shapes; animations over ˜1 hour can reveal cyclonic motion in them. Narrow linear features traverse tens of degrees of longitude and are not confined in latitude. JunoCam also detected optically thin clouds or hazes that are illuminated beyond the nightside ˜1-bar terminator; one of these detected at Perijove lay some 3 scale heights above the main cloud deck. Tests have been made to detect the aurora and lightning. Most close-up images of Jupiter have been acquired at lower latitudes within 2 hours of closest approach. These images aid in understanding the data collected by other instruments on Juno that probe deeper in the atmosphere. 
When Jupiter was too close to the sun for ground-based observers to collect data between perijoves 1 and 2, JunoCam took a sequence of routine images to monitor large-scale features, which fortuitously yielded the earliest images of a very energetic outbreak on the rapid jet at 24°N. Images taken around perijove 3 (PJ3) allow a closer inspection of the outbreak features in a later state of evolution. Methane band images covering both polar regions within about four hours, around PJ3, show the shape and extent of the polar-haze features from favorable vantage points. Occasional, opportunistic images of the Galilean moons and the ring system were also acquired.
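The push-frame time-delayed integration described above can be sketched in a few lines: successive short exposures of the same scene line, displaced by the known spacecraft spin, are co-added to boost signal. This is an illustrative toy only, not JunoCam flight software; the frame geometry and shift values are invented.

```python
# Sketch of time-delayed integration (TDI): as the spacecraft spins, the same
# scene line falls on successive detector rows; summing the co-registered rows
# increases signal without smearing. Illustrative only -- not JunoCam code.

def tdi_sum(frames, shift_per_frame):
    """Co-add frames, shifting each by the known per-frame scene motion (rows)."""
    n_rows = len(frames[0])
    n_cols = len(frames[0][0])
    out = [[0.0] * n_cols for _ in range(n_rows)]
    for k, frame in enumerate(frames):
        offset = k * shift_per_frame
        for r in range(n_rows):
            src = r + offset           # row where this scene line landed in frame k
            if 0 <= src < n_rows:
                for c in range(n_cols):
                    out[r][c] += frame[src][c]
    return out

# A bright scene line starts at row 0 and drifts down one row per frame.
frames = []
for k in range(4):
    f = [[0.0] * 3 for _ in range(6)]
    for c in range(3):
        f[k][c] = 10.0                 # signal sits at row k in frame k
    frames.append(f)

stacked = tdi_sum(frames, shift_per_frame=1)
```

After stacking, the co-added scene line carries roughly four single-frame exposures' worth of signal, which is the point of extending integration time via the spin.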
NASA Astrophysics Data System (ADS)
Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.
2011-02-01
In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieval of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. Such capabilities provide additional information impacting conditional probabilities about images (e.g., objects made of steel are more likely to be buildings), and are useful in Building Information Modeling (BIM) and in automatic enrichment of images. Image-to-text (I2T) methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single-material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images, since it assigned multiple classes to the images. A framework for the I2T methodology is presented.
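The difference between the study's two set-ups can be shown with a toy sketch: a single multi-class classifier must return exactly one material, while per-material binary classifiers can flag several materials in one image. Simple threshold rules stand in for the trained SVMs here, and the feature values are invented for illustration.

```python
# Toy contrast of the two supervised set-ups: one multi-class classifier
# (argmax -> a single label) vs. one binary classifier per material
# (every positive score is reported). Not the paper's actual models.

MATERIALS = ["brick", "grass", "sand", "wood"]

# Hypothetical per-material decision rules over (texture_energy, green_ratio).
def binary_score(material, feat):
    texture, green = feat
    rules = {
        "brick": texture - 0.5,
        "grass": green - 0.6,
        "sand":  0.3 - texture,
        "wood":  texture - 0.4,
    }
    return rules[material]

def multiclass_predict(feat):
    # Single multi-class model: argmax over per-class scores -> one label only.
    return max(MATERIALS, key=lambda m: binary_score(m, feat))

def multilabel_predict(feat):
    # One binary classifier per material: all positive scores are reported.
    return [m for m in MATERIALS if binary_score(m, feat) > 0]

image = (0.7, 0.8)                 # textured and green: a brick-and-grass scene
print(multiclass_predict(image))   # forced to pick a single material
print(multilabel_predict(image))   # can report several materials at once
```

This is why the second model "gathered more information": an image of a brick wall behind a lawn can legitimately carry more than one material label.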
Comparisons of Supergranule Properties from SDO/HMI with Other Datasets
NASA Technical Reports Server (NTRS)
Pesnell, William Dean; Williams, Peter E.
2010-01-01
While supergranules, a component of solar convection, have been well studied through the use of Dopplergrams, other datasets also exhibit these features. Quiet Sun magnetograms show local magnetic field elements distributed around the boundaries of supergranule cells, notably clustering at the common apex points of adjacent cells, while more solid cellular features are seen near active regions. Ca II K images are notable for exhibiting the chromospheric network representing a cellular distribution of local magnetic field lines across the solar disk that coincides with supergranulation boundaries. Measurements at 304 Å further above the solar surface also show a similar pattern to the chromospheric network, but the boundaries are more nebulous in nature. While previous observations of these different solar features were obtained with a variety of instruments, SDO provides a single platform from which the relevant data products are delivered at high cadence and high-definition image quality. The images may also be cross-referenced due to their coincident times of observation. We present images of these different solar features from HMI & AIA and use them to make composite images of supergranules at the different atmospheric layers in which they manifest. We also compare each data product to equivalent data from previous observations, for example HMI magnetograms with those from MDI.
Deep Learning for Low-Textured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using the nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found at Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
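The codebook matching step can be sketched as a nearest-neighbour search followed by vote counting. Hand-made 2D vectors stand in for the auto-encoder descriptor codes, and a plain majority tally stands in for the paper's modified voting algorithm; names like `pt_a` are invented for illustration.

```python
# Minimal sketch of descriptor matching against a codebook via
# nearest-neighbour search plus voting, in the spirit of the WIZARD pipeline.
# Auto-encoder codes are replaced by hand-made 2D vectors.

import math

def nearest(code, codebook):
    """Return the codebook entry id closest to `code` in Euclidean distance."""
    best_id, best_d = None, math.inf
    for entry_id, entry in codebook.items():
        d = math.dist(code, entry)
        if d < best_d:
            best_id, best_d = entry_id, d
    return best_id

def vote_matches(patches, codebook):
    """Each patch votes for its nearest codebook entry; tally the votes."""
    votes = {}
    for code in patches:
        winner = nearest(code, codebook)
        votes[winner] = votes.get(winner, 0) + 1
    return votes

codebook = {"pt_a": (0.0, 0.0), "pt_b": (1.0, 1.0)}     # two learned codes
patches = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9)]          # codes from new images
print(vote_matches(patches, codebook))
```

In a real multi-view pipeline the vote tally would be used to accept or reject candidate point correspondences across images before SfM.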
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h/mm) using 1 core, and 1:04 (h/mm) hours using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
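Object-level parallelization works here because each tumor's feature computation is independent, so objects can simply be dispatched to a worker pool. A thread pool and placeholder statistics stand in below for the engine's actual multiprocess scheduling and radiomics feature set.

```python
# Sketch of object-level parallelization: independent per-tumor feature
# computations are mapped over a worker pool. The "features" here are
# placeholder statistics, not the QIFE feature set.

from concurrent.futures import ThreadPoolExecutor

def compute_features(tumor_voxels):
    """Placeholder per-object feature computation (volume and mean intensity)."""
    return {"volume": len(tumor_voxels),
            "mean": sum(tumor_voxels) / len(tumor_voxels)}

def run_engine(objects, workers):
    # More workers cut wall time, at the cost of holding more objects in
    # memory at once -- the trade-off noted in the abstract.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compute_features, objects))

tumors = [[1.0, 2.0, 3.0], [4.0, 4.0], [10.0]]   # made-up voxel lists
results = run_engine(tumors, workers=4)
print(results[0])
```

For CPU-bound radiomics features a process pool (rather than threads) would be the natural choice; the structure of the dispatch is the same.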
Using deep learning for detecting gender in adult chest radiographs
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2018-03-01
In this paper, we present a method for automatically identifying the gender of an imaged person using their frontal chest x-ray images. Our work is motivated by the need to determine missing gender information in some datasets. The proposed method employs the technique of convolutional neural network (CNN) based deep learning and transfer learning to overcome the challenge of developing handcrafted features in limited data. Specifically, the method consists of four main steps: pre-processing, CNN feature extractor, feature selection, and classifier. The method is tested on a combined dataset obtained from several sources with varying acquisition quality resulting in different pre-processing steps that are applied for each. For feature extraction, we tested and compared four CNN architectures, viz., AlexNet, VggNet, GoogLeNet, and ResNet. We applied a feature selection technique, since the feature length is larger than the number of images. Two popular classifiers: SVM and Random Forest, are used and compared. We evaluated the classification performance by cross-validation and used seven performance measures. The best performer is the VggNet-16 feature extractor with the SVM classifier, with accuracy of 86.6% and ROC Area being 0.932 for 5-fold cross validation. We also discuss several misclassified cases and describe future work for performance improvement.
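Because the CNN feature vectors are longer than the number of training images, a selection step trims the dimensionality before the SVM. The paper does not specify the criterion used, so a simple variance ranking is sketched below on invented data.

```python
# Hedged sketch of feature selection for the "more features than images" case:
# keep the k feature columns with the highest variance across the dataset.
# The criterion is an assumption for illustration, not the paper's method.

def variance(column):
    mean = sum(column) / len(column)
    return sum((v - mean) ** 2 for v in column) / len(column)

def select_top_k(features, k):
    """Keep the k columns with the highest variance across all rows."""
    n_cols = len(features[0])
    cols = [[row[j] for row in features] for j in range(n_cols)]
    ranked = sorted(range(n_cols), key=lambda j: variance(cols[j]), reverse=True)
    keep = sorted(ranked[:k])
    return [[row[j] for j in keep] for row in features], keep

# Four images, five-dimensional features; columns 1 and 3 carry the signal,
# the constant columns carry none.
X = [[0.0, 1.0, 0.5, 9.0, 0.5],
     [0.0, 5.0, 0.5, 1.0, 0.5],
     [0.0, 2.0, 0.5, 8.0, 0.5],
     [0.0, 6.0, 0.5, 2.0, 0.5]]
reduced, kept = select_top_k(X, k=2)
print(kept)   # indices of the retained feature columns
```

The reduced matrix is what would then be passed to the SVM or Random Forest classifier.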
First results of ground-based LWIR hyperspectral imaging remote gas detection
NASA Astrophysics Data System (ADS)
Zheng, Wei-jian; Lei, Zheng-gang; Yu, Chun-chao; Wang, Hai-yang; Fu, Yan-peng; Liao, Ning-fang; Su, Jun-hong
2014-11-01
New progress in ground-based long-wave infrared (LWIR) remote sensing is presented. LWIR hyperspectral imaging using windowing spatial and temporal modulation Fourier-transform spectroscopy, together with the results of outdoor ether gas detection, verifies the capabilities of LWIR hyperspectral imaging remote sensing and its technical approach. It provides a new technical means for ground-based remote gas sensing.
Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection
NASA Astrophysics Data System (ADS)
Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.
2014-02-01
Disorders associated with repeated trauma account for about 60% of all occupational illnesses, with Carpal Tunnel Syndrome (CTS) being the most frequently consulted today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to prevalent methods for diagnosis of CTS (i.e., nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken from healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave-One-Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for temperature spatial variability.
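Leave-one-out validation, used to score the feature space above, holds each patient out once, fits on the rest, and averages the errors. A nearest-centroid rule stands in below for the SVM, and the two-dimensional feature vectors are invented for illustration.

```python
# Sketch of leave-one-out (LOO) validation. A nearest-centroid classifier
# stands in for the SVM; the feature vectors and labels are made up.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def nearest_centroid_predict(x, train):
    groups = {}
    for feat, label in train:
        groups.setdefault(label, []).append(feat)
    cents = {lab: centroid(rows) for lab, rows in groups.items()}
    return min(cents, key=lambda lab: sum((a - b) ** 2
               for a, b in zip(x, cents[lab])))

def loo_error(samples):
    """Hold each sample out once; return the fraction misclassified."""
    errors = 0
    for i, (feat, label) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        if nearest_centroid_predict(feat, train) != label:
            errors += 1
    return errors / len(samples)

data = [((0.1, 0.2), "healthy"), ((0.2, 0.1), "healthy"),
        ((0.9, 0.8), "ill"),     ((0.8, 0.9), "ill")]
print(loo_error(data))   # linearly separable toy data
```

LOO is attractive for small clinical cohorts like this one because every patient serves as a test case exactly once.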
Magnetic resonance imaging findings in patients presenting with (sub)acute cerebellar ataxia.
Schneider, Tanja; Thomalla, Götz; Goebell, Einar; Piotrowski, Anna; Yousem, David Mark
2015-06-01
Acute or subacute cerebellar inflammation is mainly caused by postinfectious, toxic, neoplastic, vascular, or idiopathic processes and can result in cerebellar ataxia. Previous magnetic resonance (MR) studies in single patients who developed acute or subacute ataxia showed varying imaging features. Eighteen patients presenting with acute and subacute onset of ataxia were included in this study. Cases of chronic-progressive/hereditary and noncerebellar causes (ischemia, multiple sclerosis lesions, metastasis, hemorrhage) were excluded. MR imaging findings were then matched with the clinical history of the patient. An underlying etiology for ataxic symptoms was found in 14/18 patients (postinfectious/infectious, paraneoplastic, autoimmune, drug-induced). In two of five patients without MR imaging findings and three of eight patients with minimal imaging features (cerebellar atrophy, slight signal alterations, and small areas of restricted diffusion), adverse clinical outcomes were documented. Of the five patients with prominent MR findings (cerebellar swelling, contrast enhancement, or broad signal abnormalities), two were lost to follow-up and two showed long-term sequelae. No correlation was found between the presence of initial MRI findings in subacute or acute ataxia patients and their long-term clinical outcome. MR imaging was more flagrantly positive in cases due to encephalitis.
NASA Astrophysics Data System (ADS)
Raizada, Rashika; Lee, Anthony M. D.; Liu, Kelly Y.; MacAulay, Calum E.; Ng, Samson; Poh, Catherine F.; Lane, Pierre M.
2017-02-01
Worldwide, there are over 450,000 new cases of oral cancer reported each year. Late-stage diagnosis remains a significant factor responsible for its high mortality rate (>50%). In-vivo, non-invasive, rapid imaging techniques that can visualize clinically significant changes in the oral mucosa may improve the management of oral cancer. We present an analysis of features extracted from oral images obtained using our hand-held wide-field Optical Coherence Tomography (OCT) instrument. The images were analyzed for epithelial scattering, overall tissue scattering, and 3D basement membrane topology. The associations between these three features and disease state (benign, pre-cancer, or cancer), as measured by clinical assessment or pathology, were determined. While the scattering coefficient has previously been shown to be sensitive to cancer and dysplasia, likely due to changes in nuclear and cellular density, the addition of basement membrane topology may increase diagnostic ability, as it is known that the presence of bulbous rete pegs in the basement membrane is characteristic of dysplasia. The resolution and field-of-view of our oral OCT system allowed analysis of these features over large areas of up to 2.5 mm x 90 mm in a timely fashion, allowing for application in clinical settings.
Pre-trained convolutional neural networks as feature extractors for tuberculosis detection.
Lopes, U K; Valiati, J F
2017-10-01
It is estimated that in 2015, approximately 1.8 million people infected with tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost-prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiograph by properly trained radiologists. Significant research can be found on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent advances in deep learning have produced excellent results in image classification across diverse domains, but its application to tuberculosis diagnosis remains limited. Thus, the focus of this work is to produce an investigation that will advance research in the area, presenting three proposals for the application of pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals presented in this work are implemented and compared to the current literature. The obtained results are competitive with published works, demonstrating the potential of pre-trained convolutional networks as medical image feature extractors.
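The "pre-trained network as feature extractor" pattern means the backbone is frozen and only a small classifier is fit on its output features. In the sketch below a fixed hand-made feature map stands in for the pre-trained CNN; in practice one would use, e.g., an ImageNet-trained network with its classification layers removed.

```python
# Sketch of the frozen-extractor pattern: the feature function is fixed
# (never updated), and only a tiny classifier head is trained on its output.
# The extractor and images here are toy stand-ins for a pre-trained CNN.

def frozen_extractor(image):
    """Stand-in for a pre-trained backbone: fixed, never updated."""
    flat = [p for row in image for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))  # mean, range

def fit_threshold(features, labels):
    """Tiny trainable head: a threshold on the first feature dimension."""
    pos = [f[0] for f, y in zip(features, labels) if y == 1]
    neg = [f[0] for f, y in zip(features, labels) if y == 0]
    return (min(pos) + max(neg)) / 2.0

images = [[[0, 0], [0, 1]], [[0, 1], [1, 1]],     # label 0 (mostly dark)
          [[9, 8], [9, 9]], [[8, 8], [9, 9]]]     # label 1 (mostly bright)
labels = [0, 0, 1, 1]
feats = [frozen_extractor(im) for im in images]
thr = fit_threshold(feats, labels)
pred = [1 if f[0] > thr else 0 for f in feats]
print(pred)
```

The appeal for medical imaging is exactly what the abstract notes: the expensive representation is reused, and only the cheap head needs the scarce labeled radiographs.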
NASA Technical Reports Server (NTRS)
2002-01-01
[figure removed for brevity, see original site] (Released 5 July 2002) This is an image of a crater within part of Amazonis Planitia, located at 22.9N, 152.5W. This image displays a number of features common to Martian craters. The crater is sufficiently large to exhibit a central peak, which is seen in the upper right-hand corner of the image. Also apparent are the slump blocks on the inside of the crater walls. When the crater first formed, its walls were unstable and subsequently produced a series of landslides over time that formed the hummocky terrain just inside the present crater wall. While these cratering features are common to craters formed on other planetary bodies, such as the Moon, the ejecta blanket surrounding the crater displays a morphology that is unique to Mars. The lobate morphology implies that the ejecta blanket was emplaced in an almost fluid fashion rather than by traditional ballistic ejecta emplacement. This crater morphology occurs on Mars where water ice is suspected to be present just beneath the surface. The impact that created the crater would have had enough energy to melt large amounts of water ice, forming the mud or debris flows that characterize the ejecta morphology seen in this image.
A general prediction model for the detection of ADHD and Autism using structural and functional MRI.
Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G
2018-01-01
This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.
NASA Astrophysics Data System (ADS)
Chiarot, C. B.; Siewerdsen, J. H.; Haycocks, T.; Moseley, D. J.; Jaffray, D. A.
2005-11-01
Development, characterization, and quality assurance of advanced x-ray imaging technologies require phantoms that are quantitative and well suited to such modalities. This note reports on the design, construction, and use of an innovative phantom developed for advanced imaging technologies (e.g., multi-detector CT and the numerous applications of flat-panel detectors in dual-energy imaging, tomosynthesis, and cone-beam CT) in diagnostic and image-guided procedures. The design addresses shortcomings of existing phantoms by incorporating criteria satisfied by no other single phantom: (1) inserts are fully 3D—spherically symmetric rather than cylindrical; (2) modules are quantitative, presenting objects of known size and contrast for quality assurance and image quality investigation; (3) features are incorporated in ideal and semi-realistic (anthropomorphic) contexts; and (4) the phantom allows devices to be inserted and manipulated in an accessible module (right lung). The phantom consists of five primary modules: (1) head, featuring contrast-detail spheres approximating brain lesions; (2) left lung, featuring contrast-detail spheres approximating lung nodules; (3) right lung, an accessible hull in which devices may be placed and manipulated; (4) liver, featuring contrast-detail spheres approximating metastases; and (5) abdomen/pelvis, featuring simulated kidneys, colon, rectum, bladder, and prostate. The phantom represents a two-fold evolution in design philosophy—from 2D (cylindrically symmetric) to fully 3D, and from exclusively qualitative or quantitative to a design accommodating quantitative study within an anatomical context. It has proven a valuable tool in investigations throughout our institution, including low-dose CT, dual-energy radiography, and cone-beam CT for image-guided radiation therapy and surgery.
Point- and line-based transformation models for high resolution satellite image rectification
NASA Astrophysics Data System (ADS)
Abd Elrahman, Ahmed Mohamed Shaker
Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. 
To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)
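The simplest point-based empirical model mentioned above is a 2D affine transform fitted from ground control points: three non-collinear GCPs determine its six parameters exactly. The sketch below solves the two resulting 3x3 systems with Cramer's rule; all coordinates are invented, and real HRSI work would use more GCPs with least squares (or the line-based LBTM).

```python
# Fitting a 2D affine transform (x' = a*x + b*y + c per output coordinate)
# from three ground control points, via Cramer's rule. Illustrative only.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(img_pts, obj_pts):
    """Solve [x y 1] . (a b c) = X for each output coordinate."""
    A = [[x, y, 1.0] for x, y in img_pts]
    d = det3(A)
    params = []
    for dim in (0, 1):
        rhs = [p[dim] for p in obj_pts]
        coeffs = []
        for col in range(3):
            Ac = [row[:] for row in A]        # replace one column with rhs
            for r in range(3):
                Ac[r][col] = rhs[r]
            coeffs.append(det3(Ac) / d)
        params.append(coeffs)
    return params

def apply_affine(params, pt):
    x, y = pt
    return tuple(a * x + b * y + c for a, b, c in params)

# Made-up GCPs relating image pixels to map coordinates (a pure translation
# plus unit scale, so the fitted parameters are easy to check by eye).
img = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
obj = [(500.0, 2000.0), (600.0, 2000.0), (500.0, 2100.0)]
P = fit_affine(img, obj)
print(apply_affine(P, (50.0, 50.0)))
```

The dependence of such models on well-distributed GCPs is exactly the limitation that motivates the line-based LBTM developed in the thesis.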
Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.
Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai
2018-05-01
Choroidal volume is more closely related to eye disease than choroidal thickness, because volume reflects the disease more comprehensively. The purpose of this work is to automatically segment the choroid in three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid-sclera junction (CSJ), is almost invisible in SD-OCT images, while visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images for each cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were -1.96 µm³ and 88.56%, respectively. Our method is effective for 3D choroid segmentation of SD-OCT images because its segmentation accuracy and stability are comparable to those of manual segmentation.
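Boundary extraction by graph search can be illustrated in 2D: in each image column the boundary is allowed to move at most one row, and dynamic programming picks the lowest-cost smooth trace. This is a generic sketch of the technique, not the paper's 3D CSJ search, and the cost matrix is invented.

```python
# Minimum-cost left-to-right path through a cost image, with the row change
# limited to +/-1 per column -- the 2D analogue of graph-search layer
# segmentation in OCT. Illustrative only.

def trace_boundary(cost):
    """Return the boundary row chosen in each column."""
    n_rows, n_cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (n_cols - 1) for r in range(n_rows)]
    back = [[0] * n_cols for _ in range(n_rows)]
    for c in range(1, n_cols):
        for r in range(n_rows):
            prev_rows = [p for p in (r - 1, r, r + 1) if 0 <= p < n_rows]
            best = min(prev_rows, key=lambda p: acc[p][c - 1])
            acc[r][c] = cost[r][c] + acc[best][c - 1]
            back[r][c] = best
    r = min(range(n_rows), key=lambda rr: acc[rr][n_cols - 1])
    path = [r]
    for c in range(n_cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

# Low cost (a dark junction) drifts from row 1 down to row 2.
cost = [[9, 9, 9, 9],
        [1, 1, 9, 9],
        [9, 9, 1, 1],
        [9, 9, 9, 9]]
print(trace_boundary(cost))
```

In the paper's setting the cost image is the CSJ feature space, and the search runs in 3D so that the recovered boundary forms a smooth surface across B-scans.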
Atypically presenting kaposiform hemangioendothelioma of the knee: ultrasound findings.
Erdem Toslak, Iclal; Stegman, Matthew; Reiter, Michael P; Barkan, Güliz A; Borys, Dariusz; Lim-Dunham, Jennifer E
2018-04-10
Kaposiform hemangioendothelioma (KHE) is a rare vascular tumor of early childhood and infancy. Kasabach-Merritt phenomenon, a common complication of KHE, is characterized by life-threatening thrombocytopenia, hemolytic anemia, and consumption coagulopathy. However, there may be atypical cases that do not present with Kasabach-Merritt phenomenon and that have atypical imaging findings. Knowledge of atypical imaging features may assist radiologists in identifying KHE. In this report, we present the case of a 4-year-old with KHE and atypical ultrasound findings.