Method for indexing and retrieving manufacturing-specific digital imagery based on image content
Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.
2004-06-15
A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector can be extracted from a manufacturing-specific digital image stored in an image database. In particular, each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, and an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector can be indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, the retrieving step can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction can result in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.
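The indexing step above (unsupervised clustering of feature vectors into a hierarchical search tree whose nodes are summarized by vector averages) can be sketched as follows. The recursive 2-means split, the leaf size, and all function names are assumptions of this illustration, not the patent's actual procedure:

```python
import numpy as np

def kmeans2(X, iters=10, seed=0):
    # simple 2-means used to split a node into two children
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), 2, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2)
        lab = d.argmin(axis=1)
        for k in range(2):
            if np.any(lab == k):
                C[k] = X[lab == k].mean(axis=0)
    return lab

def build_tree(X, ids, leaf_size=2):
    # hierarchical clustering tree; every node keeps only the vector
    # average of the feature vectors it encapsulates
    if len(X) <= leaf_size:
        return {"mean": X.mean(axis=0), "ids": list(ids)}
    lab = kmeans2(X)
    if lab.min() == lab.max():            # degenerate split -> leaf
        return {"mean": X.mean(axis=0), "ids": list(ids)}
    kids = [build_tree(X[lab == k], ids[lab == k]) for k in range(2)]
    return {"mean": X.mean(axis=0), "children": kids}

def query(node, q):
    # descend toward the child whose vector average is nearest the query
    while "children" in node:
        node = min(node["children"],
                   key=lambda c: np.linalg.norm(c["mean"] - q))
    return node["ids"]
```

A query vector then reaches a small leaf of candidate images, which is the first data reduction described above.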
WND-CHARM: Multi-purpose image classification using compound image transforms
Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.
2008-01-01
We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301
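The feature weighting in (ii) can be illustrated with a per-feature Fisher discriminant score driving a weighted nearest-neighbor rule. This is a plausible sketch of that idea, not necessarily the exact WND-CHARM algorithm:

```python
import numpy as np

def fisher_scores(X, y):
    # per-feature Fisher discriminant: variance of class means around the
    # global mean, divided by the summed within-class variances
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += (Xc.mean(axis=0) - mu) ** 2
        within += Xc.var(axis=0)
    within[within == 0] = 1e-12           # guard against constant features
    return between / within

def classify(x, X, y, w):
    # weighted neighbor distance: label of the nearest training sample
    # under feature weights w
    d = np.sqrt(((X - x) ** 2 * w).sum(axis=1))
    return y[d.argmin()]
```

Discriminative features get large weights, so noisy features contribute little to the neighbor distance.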
Histological Image Feature Mining Reveals Emergent Diagnostic Properties for Renal Cancer
Kothari, Sonal; Phan, John H.; Young, Andrew N.; Wang, May D.
2016-01-01
Computer-aided histological image classification systems are important for making objective and timely cancer diagnostic decisions. These systems use combinations of image features that quantify a variety of image properties. Because researchers tend to validate their diagnostic systems on specific cancer endpoints, it is difficult to predict which image features will perform well given a new cancer endpoint. In this paper, we define a comprehensive set of common image features (consisting of 12 distinct feature subsets) that quantify a variety of image properties. We use a data-mining approach to determine which feature subsets and image properties emerge as part of an “optimal” diagnostic model when applied to specific cancer endpoints. Our goal is to assess the performance of such comprehensive image feature sets for application to a wide variety of diagnostic problems. We perform this study on 12 endpoints including 6 renal tumor subtype endpoints and 6 renal cancer grade endpoints. Keywords-histology, image mining, computer-aided diagnosis PMID:28163980
Chen, Qinghua; Raghavan, Prashant; Mukherjee, Sugoto; Jameson, Mark J; Patrie, James; Xin, Wenjun; Xian, Junfang; Wang, Zhenchang; Levine, Paul A; Wintermark, Max
2015-10-01
The aim of this study was to systematically compare a comprehensive array of magnetic resonance (MR) imaging features in terms of their sensitivity and specificity to diagnose cervical lymph node metastases in patients with thyroid cancer. The study included 41 patients with thyroid malignancy who underwent surgical excision of cervical lymph nodes and had preoperative MR imaging ≤4 weeks prior to surgery. Three head and neck neuroradiologists independently evaluated all the MR images. Using the pathology results as reference, the sensitivity, specificity and interobserver agreement of each MR imaging characteristic were calculated. On multivariate analysis, no single imaging feature was significantly correlated with metastasis. In general, imaging features demonstrated high specificity, but poor sensitivity and moderate interobserver agreement at best. Commonly used MR imaging features have limited sensitivity at correctly identifying cervical lymph node metastases in patients with thyroid cancer. A negative neck MR scan should not dissuade a surgeon from performing a neck dissection in patients with thyroid carcinomas.
Hiasat, Jamila G; Saleh, Alaa; Al-Hussaini, Maysa; Al Nawaiseh, Ibrahim; Mehyar, Mustafa; Qandeel, Monther; Mohammad, Mona; Deebajah, Rasha; Sultan, Iyad; Jaradat, Imad; Mansour, Asem; Yousef, Yacoub A
2018-06-01
To evaluate the predictive value of magnetic resonance imaging in retinoblastoma for the likelihood of high-risk pathologic features. A retrospective study of 64 eyes enucleated from 60 retinoblastoma patients. Contrast-enhanced magnetic resonance imaging was performed before enucleation. Main outcome measures included demographics, laterality, accuracy, sensitivity, and specificity of magnetic resonance imaging in detecting high-risk pathologic features. Optic nerve invasion and choroidal invasion were seen microscopically in 34 (53%) and 28 (44%) eyes, respectively, while they were detected in magnetic resonance imaging in 22 (34%) and 15 (23%) eyes, respectively. The accuracy of magnetic resonance imaging in detecting prelaminar invasion was 77% (sensitivity 89%, specificity 98%), 56% for laminar invasion (sensitivity 27%, specificity 94%), 84% for postlaminar invasion (sensitivity 42%, specificity 98%), and 100% for optic cut edge invasion (sensitivity 100%, specificity 100%). The accuracy of magnetic resonance imaging in detecting focal choroidal invasion was 48% (sensitivity 33%, specificity 97%), and 84% for massive choroidal invasion (sensitivity 53%, specificity 98%), and the accuracy in detecting extrascleral extension was 96% (sensitivity 67%, specificity 98%). Magnetic resonance imaging should not be the only method used to stratify high-risk patients from those who are not, even though it can predict with high accuracy extensive postlaminar optic nerve invasion, massive choroidal invasion, and extrascleral tumor extension.
Image counter-forensics based on feature injection
NASA Astrophysics Data System (ADS)
Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.
2014-02-01
Starting from the concept that many image forensic tools are based on the detection of some features revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history of an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain, producing an attacked image ~x, perceptually similar to x, whose feature f(~x) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists of the constrained minimization of the feature distance Φ(z) = |f(z) - f(y)| through iterative methods based on gradient descent. To overcome the intrinsic limit due to the numerical estimation of the gradient on large images, we propose the application of a feature decomposition process that reduces the problem to many subproblems defined on the blocks into which the image is partitioned. The proposed strategy has been tested by attacking three different features, and its performance has been compared to state-of-the-art counter-forensic methods.
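The constrained minimization of Φ(z) by gradient descent can be sketched with a finite-difference gradient on a toy scalar feature. The feature, step size, and perceptual constraint (an ℓ∞ ball of radius eps around x) are illustrative assumptions; on large images the abstract's block decomposition would replace the full-image finite differences used here:

```python
import numpy as np

def feature(img):
    # toy stand-in for a forensic feature vector f(.)
    return np.array([img.mean(), img.var()])

def attack(x, y, steps=200, lr=0.5, eps=0.1, h=1e-3):
    # minimize ||f(z) - f(y)||^2 by finite-difference gradient descent,
    # keeping z within eps of x (perceptual constraint)
    target = feature(y)
    z = x.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in np.ndindex(z.shape):
            zp = z.copy(); zp[i] += h
            zm = z.copy(); zm[i] -= h
            grad[i] = (np.sum((feature(zp) - target) ** 2)
                       - np.sum((feature(zm) - target) ** 2)) / (2 * h)
        z -= lr * grad
        z = np.clip(z, x - eps, x + eps)   # stay perceptually close to x
    return z
```

The numerical gradient costs two feature evaluations per pixel, which is exactly the scaling problem the block decomposition in the abstract is meant to address.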
Recognizing ovarian cancer from co-registered ultrasound and photoacoustic images
NASA Astrophysics Data System (ADS)
Alqasemi, Umar; Kumavor, Patrick; Aguirre, Andres; Zhu, Quing
2013-03-01
Unique features in co-registered ultrasound and photoacoustic images of ex vivo ovarian tissue are introduced, along with the hypotheses of how these features may relate to the physiology of tumors. The images are compressed with wavelet transform, after which the mean Radon transform of the photoacoustic image is computed and fitted with a Gaussian function to find the centroid of the suspicious area for a shift-invariant recognition process. In the next step, 24 features are extracted from a training set of images by several methods, including features from the Fourier domain, image statistics, and the outputs of different composite filters constructed from the joint frequency response of different cancerous images. The features were chosen from more than 400 training images obtained from 33 ex vivo ovaries of 24 patients, and used to train a support vector machine (SVM) structure. The SVM classifier was able to exclusively separate the cancerous from the non-cancerous cases with 100% sensitivity and specificity. At the end, the classifier was used to test 95 new images, obtained from 37 ovaries of 20 additional patients. The SVM classifier achieved 76.92% sensitivity and 95.12% specificity. Furthermore, if we assume that recognizing one image as a cancerous case is sufficient to consider the ovary as malignant, then the SVM classifier achieves 100% sensitivity and 87.88% specificity.
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected for these images. Next, a group of autoencoders is applied to obtain an initial distance metric in the different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Next, alternating optimization is conducted to train the ranking model, which is then used to rank new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. Experiments analyzing the proposed Deep-MDML on two benchmark data sets validate the effectiveness of the method.
Container Surface Evaluation by Function Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
Container images are analyzed for specific surface features, such as pits, cracks, and corrosion. The detection of these features is confounded with complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of a method for determining the kind and size of the features present.
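The estimate-background / residual / post-process pipeline might look like the following sketch, where the box-filter background and the robust threshold are assumptions of this illustration rather than the report's actual method:

```python
import numpy as np

def estimate_background(img, k=5):
    # crude background estimate: local mean over a k x k box
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def detect_features(img, k=5, thresh=3.0):
    # residual = image - background; flag pixels whose residual exceeds
    # thresh robust (MAD-based) standard deviations
    r = img - estimate_background(img, k)
    sigma = 1.4826 * np.median(np.abs(r - np.median(r)))
    return np.abs(r) > thresh * max(sigma, 1e-12)
```

Pits and cracks survive in the residual because they are small relative to the smooth background, while broad shape/curvature variation is absorbed by the background estimate.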
Gale, Heather I; Sharatz, Steven M; Taphey, Mayureewan; Bradley, William F; Nimkin, Katherine; Gee, Michael S
2017-09-01
Assessment for active Crohn disease by CT enterography and MR enterography relies on identifying mural and perienteric imaging features. To evaluate the performance of established imaging features of active Crohn disease in children and adolescents on CT and MR enterography compared with histological reference. We included patients ages 18 years and younger who underwent either CT or MR enterography from 2007 to 2014 and had endoscopic biopsy within 28 days of imaging. Two pediatric radiologists blinded to the histological results reviewed imaging studies and scored the bowel for the presence or absence of mural features (wall thickening >3 mm, mural hyperenhancement) and perienteric features (mesenteric hypervascularity, edema, fibrofatty proliferation and lymphadenopathy) of active disease. We performed univariate analysis and multivariate logistic regression to compare imaging features with histological reference. We evaluated 452 bowel segments (135 from CT enterography, 317 from MR enterography) from 84 patients. Mural imaging features had the highest association with active inflammation both for MR enterography (wall thickening had 80% accuracy, 69% sensitivity and 91% specificity; mural hyperenhancement had 78%, 53% and 96%, respectively) and CT enterography (wall thickening had 84% accuracy, 72% sensitivity and 91% specificity; mural hyperenhancement had 76%, 51% and 91%, respectively), with perienteric imaging features performing significantly worse on MR enterography relative to CT enterography (P < 0.001). Mural features are predictors of active inflammation for both CT and MR enterography, while perienteric features can be distinguished better on CT enterography compared with MR enterography. This likely reflects the increased conspicuity of the mesentery on CT enterography and suggests that mural features are the most reliable imaging features of active Crohn disease in children and adolescents.
Automated detection of diabetic retinopathy on digital fundus images.
Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D
2002-02-01
The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
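The recursive region-growing segmentation mentioned above can be sketched as a breadth-first flood fill against a running region mean; the 4-neighbour rule and tolerance are illustrative choices, and the 'Moat Operator' itself is not reproduced here:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    # grow a region from seed, accepting 4-neighbours whose intensity
    # lies within tol of the running region mean
    H, W = img.shape
    mask = np.zeros((H, W), bool)
    q = deque([seed])
    mask[seed] = True
    total, n = float(img[seed]), 1
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < H and 0 <= b < W and not mask[a, b]:
                if abs(img[a, b] - total / n) <= tol:
                    mask[a, b] = True
                    total += img[a, b]
                    n += 1
                    q.append((a, b))
    return mask
```

Seeding inside a bright lesion candidate grows a mask that stops at the darker retinal background, which is the basic mechanism behind exudate segmentation.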
Quality evaluation of no-reference MR images using multidirectional filters and image statistics.
Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik
2018-09-01
This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function for which the shape parameters yielded different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with the conventional full-reference assessment methods; accordingly, this proposed method yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, thus demonstrating its superior performance over other no-reference IQAs. Our proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
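Fitting feature histograms with a generalized Gaussian and scoring quality by the difference in shape parameters can be sketched with a moment-matching estimator. The moment-ratio method and bisection solver here are assumptions of this sketch, not necessarily the authors' fitting procedure:

```python
import numpy as np
from math import gamma

def ggd_shape(x):
    # moment-matching estimate of the generalized Gaussian shape
    # parameter b: solve Gamma(2/b)^2 / (Gamma(1/b) Gamma(3/b)) = r,
    # where r = (E|x|)^2 / E[x^2]; the left side increases with b
    x = np.asarray(x, float)
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    rho = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b))
    lo, hi = 0.1, 10.0
    for _ in range(60):                 # bisection on the monotone ratio
        mid = (lo + hi) / 2
        if rho(mid) < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def quality_score(test_feats, ref_shape):
    # quality = distance between test and reference shape parameters
    return abs(ggd_shape(test_feats) - ref_shape)
```

A Gaussian-distributed feature gives a shape parameter near 2 and a Laplacian one near 1, so distortions that change the feature distribution move the score away from the undistorted reference.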
NASA Astrophysics Data System (ADS)
Alqasemi, Umar; Kumavor, Patrick; Aguirre, Andres; Zhu, Quing
2012-12-01
Unique features and the underlying hypotheses of how these features may relate to the tumor physiology in coregistered ultrasound and photoacoustic images of ex vivo ovarian tissue are introduced. The images were first compressed with wavelet transform. The mean Radon transform of photoacoustic images was then computed and fitted with a Gaussian function to find the centroid of a suspicious area for a shift-invariant recognition process. Twenty-four features were extracted from a training set by several methods, including Fourier transform, image statistics, and different composite filters. The features were chosen from more than 400 training images obtained from 33 ex vivo ovaries of 24 patients, and used to train three classifiers, including generalized linear model, neural network, and support vector machine (SVM). The SVM achieved the best training performance and was able to exclusively separate cancerous from non-cancerous cases with 100% sensitivity and specificity. At the end, the classifiers were used to test 95 new images obtained from 37 ovaries of 20 additional patients. The SVM classifier achieved 76.92% sensitivity and 95.12% specificity. Furthermore, if we assume that recognizing one image as cancerous is sufficient to consider an ovary as malignant, the SVM classifier achieves 100% sensitivity and 87.88% specificity.
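The centroid-finding step (mean Radon transform fitted with a Gaussian) can be approximated with axis projections, which equal the Radon transform at 0° and 90°; the moment-based Gaussian fit is an illustrative simplification of whatever fitting the authors used:

```python
import numpy as np

def gaussian_centroid(profile):
    # fit a Gaussian to a 1-D profile via its first two moments
    p = np.clip(profile, 0, None).astype(float)
    p /= p.sum()
    x = np.arange(len(p))
    mu = (x * p).sum()
    sigma = np.sqrt(((x - mu) ** 2 * p).sum())
    return mu, sigma

def find_centroid(img):
    # mean projections along rows/columns stand in for the mean Radon
    # transform at 0 and 90 degrees
    row_mu, _ = gaussian_centroid(img.mean(axis=1))
    col_mu, _ = gaussian_centroid(img.mean(axis=0))
    return row_mu, col_mu
```

Recentering each image on this centroid before feature extraction is what makes the subsequent recognition shift-invariant.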
NASA Astrophysics Data System (ADS)
Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas
1996-04-01
The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2015-08-01
In this work various approaches are investigated for X-ray image retrieval and specifically chest pathology retrieval. Given a query image taken from a data set of 443 images, the objective is to rank images according to similarity. Different features, including binary features, texture features, and deep learning (CNN) features are examined. In addition, two approaches are investigated for the retrieval task. One approach is based on the distance of image descriptors using the above features (hereon termed the "descriptor"-based approach); the second approach ("classification"-based approach) is based on a probability descriptor, generated by a pair-wise classification of each two classes (pathologies) and their decision values using an SVM classifier. Best results are achieved using deep learning features in a classification scheme.
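The "descriptor"-based approach above reduces to ranking database images by descriptor distance; a minimal sketch, assuming Euclidean distance over whatever feature vectors (binary, texture, or CNN) are in use:

```python
import numpy as np

def rank_images(query_desc, descriptors):
    # "descriptor"-based retrieval: order database images by Euclidean
    # distance between their descriptors and the query descriptor
    d = np.linalg.norm(descriptors - query_desc, axis=1)
    return np.argsort(d)
```

The "classification"-based variant would instead rank by distances between per-class probability descriptors produced by the pairwise SVMs.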
Quantification of photoacoustic microscopy images for ovarian cancer detection
NASA Astrophysics Data System (ADS)
Wang, Tianheng; Yang, Yi; Alqasemi, Umar; Kumavor, Patrick D.; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing
2014-03-01
In this paper, human ovarian tissues with malignant and benign features were imaged ex vivo by using an optical-resolution photoacoustic microscopy (OR-PAM) system. Several features were quantitatively extracted from PAM images to describe photoacoustic signal distributions and fluctuations. 106 PAM images from 18 human ovaries were classified by applying those extracted features to a logistic prediction model. 57 images from 9 ovaries were used as a training set to train the logistic model, and 49 images from another 9 ovaries were used to test our prediction model. We assumed that if one image from one malignant ovary was classified as malignant, it is sufficient to classify this ovary as malignant. For the training set, we achieved 100% sensitivity and 83.3% specificity; for the testing set, we achieved 100% sensitivity and 66.7% specificity. These preliminary results demonstrate that PAM could be extremely valuable in assisting and guiding surgeons for in vivo evaluation of ovarian tissue.
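The logistic prediction model plus the ovary-level decision rule ("one malignant image suffices") can be sketched as follows; the gradient-descent fit and the 0.5 threshold are assumptions of this illustration:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    # logistic prediction model fitted by batch gradient descent
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                            # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def ovary_is_malignant(image_feats, w, b, thr=0.5):
    # one image classified as malignant is sufficient to call the
    # whole ovary malignant
    p = 1 / (1 + np.exp(-(image_feats @ w + b)))
    return bool((p > thr).any())
```

The per-image probabilities are aggregated with a simple "any" rule, which trades specificity for the 100% sensitivity reported above.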
NASA Astrophysics Data System (ADS)
Arimura, Hidetaka; Yoshiura, Takashi; Kumazawa, Seiji; Tanaka, Kazuhiro; Koga, Hiroshi; Mihara, Futoshi; Honda, Hiroshi; Sakai, Shuji; Toyofuku, Fukai; Higashida, Yoshiharu
2008-03-01
Our goal in this study was to develop a computer-aided diagnostic (CAD) method for classification of Alzheimer's disease (AD) using atrophic image features derived from specific anatomical regions in three-dimensional (3-D) T1-weighted magnetic resonance (MR) images. In this study, the specific regions related to the cerebral atrophy of AD were the white matter, gray matter, and CSF regions. Cerebral cortical gray matter regions were determined by extracting a brain and white matter regions based on a level set based method, whose speed function depended on gradient vectors in an original image and pixel values in grown regions. The CSF regions in cerebral sulci and lateral ventricles were extracted by wrapping the brain tightly with a zero level set determined from a level set function. Volumes of the specific regions and the cortical thickness were determined as atrophic image features. Average cortical thickness was calculated in 32 subregions, which were obtained by dividing each brain region. Finally, AD patients were classified by using a support vector machine, which was trained by the image features of AD and non-AD cases. We applied our CAD method to MR images of whole brains obtained from 29 clinically diagnosed AD cases and 25 non-AD cases. As a result, the area under a receiver operating characteristic (ROC) curve obtained by our computerized method was 0.901 based on a leave-one-out test in identification of AD cases among 54 cases including 8 AD patients at early stages. The accuracy for discrimination between 29 AD patients and 25 non-AD subjects was 0.840, which was determined at the point where the sensitivity was the same as the specificity on the ROC curve. This result showed that our CAD method based on atrophic image features may be promising for detecting AD patients by using 3-D MR images.
Histopathological Image Classification using Discriminative Feature-oriented Dictionary Learning
Vu, Tiep Huu; Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, UK Arvind
2016-01-01
In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that under a sparsity constraint, the learned dictionaries allow representing a new image sample parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, the dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy against the number of training images which is highly desirable in practice where generous training is often not available. PMID:26513781
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, R; Aguilera, T; Shultz, D
2014-06-15
Purpose: This study aims to develop predictive models of patient outcome by extracting advanced imaging features (i.e., Radiomics) from FDG-PET images. Methods: We acquired pre-treatment PET scans for 51 stage I NSCLC patients treated with SABR. We calculated 139 quantitative features from each patient PET image, including 5 morphological features, 8 statistical features, 27 texture features, and 100 features from the intensity-volume histogram. Based on the imaging features, we aim to distinguish between 2 risk groups of patients: those with regional failure or distant metastasis versus those without. We investigated 3 pattern classification algorithms: linear discriminant analysis (LDA), naive Bayes (NB), and logistic regression (LR). To avoid the curse of dimensionality, we performed feature selection by first removing redundant features and then applying sequential forward selection using the wrapper approach. To evaluate the predictive performance, we performed 10-fold cross validation with 1000 random splits of the data and calculated the area under the ROC curve (AUC). Results: Feature selection identified 2 texture features (homogeneity and/or wavelet decompositions) for NB and LR, while for LDA SUVmax and one texture feature (correlation) were identified. All 3 classifiers achieved statistically significant improvements over conventional PET imaging metrics such as tumor volume (AUC = 0.668) and SUVmax (AUC = 0.737). Overall, NB achieved the best predictive performance (AUC = 0.806). This also compares favorably with MTV using the best threshold at an SUV of 11.6 (AUC = 0.746). At a sensitivity of 80%, NB achieved 69% specificity, while SUVmax and tumor volume only had 36% and 47% specificity. Conclusion: Through a systematic analysis of advanced PET imaging features, we are able to build models with improved predictive value over conventional imaging metrics. If validated in a large independent cohort, the proposed techniques could potentially aid in identifying patients who might benefit from adjuvant therapy.
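The wrapper-style sequential forward selection used above can be sketched with a greedy loop around a simple criterion; the nearest-centroid training accuracy used here is a stand-in for the authors' cross-validated classifier score:

```python
import numpy as np

def centroid_accuracy(X, y, feats):
    # wrapper criterion: nearest-class-centroid accuracy on the
    # candidate feature subset
    Xs = X[:, feats]
    cents = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    pred = [min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
            for x in Xs]
    return np.mean(np.array(pred) == y)

def forward_select(X, y, k):
    # greedy sequential forward selection: at each round, add the single
    # feature that most improves the wrapper criterion
    chosen = []
    while len(chosen) < k:
        rest = [f for f in range(X.shape[1]) if f not in chosen]
        best = max(rest,
                   key=lambda f: centroid_accuracy(X, y, chosen + [f]))
        chosen.append(best)
    return chosen
```

In the study, k stayed very small (1-2 features per classifier), which is the intended effect of the wrapper: avoiding the curse of dimensionality with only 51 patients.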
SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY.
Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang
2009-08-07
This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application.
SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY
Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang
2010-01-01
This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application. PMID:21197416
NASA Astrophysics Data System (ADS)
Montejo, Ludguier D.; Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.
2013-03-01
We apply the Fourier Transform to absorption and scattering coefficient images of proximal interphalangeal (PIP) joints and evaluate the performance of these coefficients as classifiers using receiver operator characteristic (ROC) curve analysis. We find 25 features that yield a Youden index over 0.7, 3 features that yield a Youden index over 0.8, and 1 feature that yields a Youden index over 0.9 (90.0% sensitivity and 100% specificity). In general, scattering coefficient images yield better one-dimensional classifiers compared to absorption coefficient images. Using features derived from scattering coefficient images we obtain an average Youden index of 0.58 +/- 0.16, and an average Youden index of 0.45 +/- 0.15 when using features from absorption coefficient images.
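The Youden index J = sensitivity + specificity - 1, maximized over classification thresholds, can be computed directly from feature scores and labels; the exhaustive threshold sweep below is a straightforward sketch of that ROC-based evaluation:

```python
import numpy as np

def youden_index(scores, labels):
    # maximum of J = sensitivity + specificity - 1 over all thresholds
    best = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        best = max(best, sens + spec - 1)
    return best
```

A Youden index of 1 corresponds to a feature that separates the classes perfectly at some threshold, matching the 90%-sensitivity/100%-specificity feature (J > 0.9) reported above.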
Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.
2015-01-01
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration methods that scale well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069
Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang
2016-07-01
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art.
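The core idea of the autoencoder-based feature learning above is that features are learned without labels, purely by minimizing reconstruction error. A toy single-layer autoencoder with tied weights illustrates the principle; this is only a sketch, not the convolutional stacked architecture the papers describe, and the patch data are invented for illustration:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_tiny_autoencoder(patches, n_hidden=2, lr=0.1, epochs=300, seed=0):
    """Toy autoencoder with tied weights: code h = sigmoid(W x),
    reconstruction r = W^T h, trained to minimize squared reconstruction
    error by coordinate-wise numerical-gradient descent. No labels needed."""
    rng = random.Random(seed)
    n_in = len(patches[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]

    def reconstruct(x):
        h = [sigmoid(sum(W[j][i] * x[i] for i in range(n_in)))
             for j in range(n_hidden)]
        return [sum(W[j][i] * h[j] for j in range(n_hidden)) for i in range(n_in)]

    def total_loss():
        return sum(sum((xi - ri) ** 2 for xi, ri in zip(x, reconstruct(x)))
                   for x in patches)

    start, eps = total_loss(), 1e-5
    for _ in range(epochs):
        for j in range(n_hidden):
            for i in range(n_in):
                W[j][i] += eps; up = total_loss()
                W[j][i] -= 2 * eps; down = total_loss()
                W[j][i] += eps
                W[j][i] -= lr * (up - down) / (2 * eps)  # gradient step
    return start, total_loss()

# Four tiny 4-pixel "patches" with two underlying patterns:
patches = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
before, after = train_tiny_autoencoder(patches)
# reconstruction error drops as the hidden code captures the two patterns
```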
Guha Mazumder, Arpan; Chatterjee, Swarnadip; Chatterjee, Saunak; Gonzalez, Juan Jose; Bag, Swarnendu; Ghosh, Sambuddha; Mukherjee, Anirban; Chatterjee, Jyotirmoy
2017-01-01
Introduction: Image-based early detection of diabetic retinopathy (DR) needs value addition due to the lack of well-defined disease-specific quantitative imaging biomarkers (QIBs) for neuroretinal degeneration and of spectropathological information at the systemic level. Retinal neurodegeneration is an early event in the pathogenesis of DR. Therefore, the development of an integrated assessment method for detecting neuroretinal degeneration using spectropathology and QIBs is necessary for the early diagnosis of DR. Methods: The present work explored the efficacy of intensity and textural features extracted from optical coherence tomography (OCT) images after selecting a specific subset of features for the precise classification of retinal layers using variants of the support vector machine (SVM). Fourier transform infrared (FTIR) spectroscopy and nuclear magnetic resonance (NMR) spectroscopy were also performed to confirm the spectropathological attributes of serum for further value addition to the OCT, fundoscopy, and fluorescein angiography (FA) findings. The serum metabolomic findings were also incorporated for characterizing retinal layer thickness alterations and vascular asymmetries. Results: The results suggested that OCT features could differentiate the retinal lesions indicating retinal neurodegeneration with high sensitivity and specificity. OCT, fundoscopy, and FA provided geometrical as well as optical features. NMR revealed elevated levels of ribitol, glycerophosphocholine, and uridine diphosphate N-acetyl glucosamine, while FTIR of serum samples confirmed the higher expressions of lipids and β-sheet-containing proteins responsible for neoangiogenesis, vascular fragility, vascular asymmetry, and subsequent neuroretinal degeneration in DR.
Conclusion: Our data indicated that disease-specific spectropathological alterations could be the major phenomena behind the vascular attenuations observed through fundoscopy and FA, as well as the variations in the intensity and textural features observed in OCT images. Finally, we propose a model that uses spectropathology corroborated with specific QIBs for detecting neuroretinal degeneration in the early diagnosis of DR. PMID:29200821
Gross, G W
1992-10-01
The highlight of recent articles published on pediatric chest imaging is the potential advantage of digital imaging of the infant's chest. Digital chest imaging allows accurate determination of functional residual capacity as well as manipulation of the image to highlight specific anatomic features. Reusable photostimulable phosphor imaging systems provide wide imaging latitude and lower patient dose. In addition, digital radiology permits multiple remote-site viewing on monitor displays. Several excellent reviews of the imaging features of various thoracic abnormalities and the application of newer imaging modalities, such as ultrafast CT and MR imaging to the pediatric chest, are additional highlights.
Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.
Chi, Jianning; Walia, Ekta; Babyn, Paul; Wang, Jimmy; Groot, Gary; Eramian, Mark
2017-08-01
With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a cost-sensitive random forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show that the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity, and 93.90% specificity for the images in an open access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity, and 99% specificity for the images in our local health region database.
SU-F-R-35: Repeatability of Texture Features in T1- and T2-Weighted MR Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahon, R; Weiss, E; Karki, K
Purpose: To evaluate the repeatability of lung tumor texture features from inspiration/expiration MR image pairs for potential use in patient-specific care models and applications. Repeatability is a desirable and necessary characteristic of features included in such models. Methods: T1-weighted Volumetric Interpolated Breath-Hold Examination (VIBE) and/or T2-weighted MRI scans were acquired for 15 patients with non-small cell lung cancer before and during radiotherapy, for a total of 32 and 34 same-session inspiration-expiration breath-hold image pairs, respectively. Bias correction was applied to the VIBE (VIBE-BC) and T2-weighted (T2-BC) images. Fifty-nine texture features at five wavelet decomposition ratios were extracted from the delineated primary tumor, including histogram (HIST), gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), gray level size zone matrix (GLSZM), and neighborhood gray tone difference matrix (NGTDM) based features. Repeatability of the texture features for VIBE, VIBE-BC, T2-weighted, and T2-BC image pairs was evaluated by the concordance correlation coefficient (CCC) between corresponding image pairs, with a value greater than 0.90 indicating repeatability. Results: For the VIBE image pairs, the percentage of repeatable texture features by wavelet ratio was between 20% and 24% of the 59 extracted features; the T2-weighted image pairs exhibited repeatability in the range of 44–49%. The percentage dropped to 10–20% for the VIBE-BC images, and 12–14% for the T2-BC images. In addition, five texture features were found to be repeatable in all four image sets, including two GLRLM, two GLSZM, and one NGTDM features. No single texture feature category was repeatable among all three image types; however, certain categories performed more consistently on a per image type basis. Conclusion: We identified repeatable texture features on T1- and T2-weighted MRI scans.
These texture features should be further investigated for use in specific applications such as tissue classification and changes during radiation therapy utilizing a standard imaging protocol. Authors have the following disclosures: a research agreement with Philips Medical Systems (Hugo, Weiss), a license agreement with Varian Medical Systems (Hugo, Weiss), research grants from the National Institutes of Health (Hugo, Weiss), UpToDate royalties (Weiss), and none (Mahon, Ford, Karki). Authors have no potential conflicts of interest to disclose.
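The repeatability criterion above is Lin's concordance correlation coefficient between paired measurements; a minimal sketch using population (1/n) variances:

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between paired measurements:

        CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)

    CCC > 0.90 was the threshold for calling a texture feature repeatable."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical inspiration/expiration measurements give perfect concordance:
print(concordance_cc([1, 2, 3], [1, 2, 3]))  # 1.0
```

Unlike the Pearson correlation, the CCC also penalizes a systematic offset between the two measurements through the `(mx - my)**2` term, which is why it suits test-retest repeatability.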
Information based universal feature extraction
NASA Astrophysics Data System (ADS)
Amiri, Mohammad; Brause, Rüdiger
2015-02-01
In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such tasks always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, in our case we found that we could indeed extract features which are valid in all three kinds of tasks.
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among the input samples and learned features will generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared dictionary and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves a competitive or better performance in terms of classification accuracy.
Yoo, Youngjin; Tang, Lisa Y W; Brosch, Tom; Li, David K B; Kolind, Shannon; Vavasour, Irene; Rauscher, Alexander; MacKay, Alex L; Traboulsee, Anthony; Tam, Roger C
2018-01-01
Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at an early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches is selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then utilized to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier.
Using the myelin and T1w images of 55 relapsing-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
Kong, Jun; Wang, Fusheng; Teodoro, George; Cooper, Lee; Moreno, Carlos S; Kurc, Tahsin; Pan, Tony; Saltz, Joel; Brat, Daniel
2013-12-01
In this paper, we present a novel framework for microscopic image analysis of nuclei, data management, and high performance computation to support translational research involving nuclear morphometry features, molecular data, and clinical outcomes. Our image analysis pipeline consists of nuclei segmentation and feature computation facilitated by high performance computing with coordinated execution on multi-core CPUs and Graphics Processing Units (GPUs). All data derived from image analysis are managed in a spatial relational database supporting highly efficient scientific queries. We applied our image analysis workflow to 159 glioblastomas (GBM) from The Cancer Genome Atlas dataset. With integrative studies, we found that the statistics of four specific nuclear features were significantly associated with patient survival. Additionally, we correlated nuclear features with molecular data and found interesting results that support pathologic domain knowledge. We found that Proneural subtype GBMs had the smallest mean nuclear Eccentricity and the largest mean nuclear Extent and MinorAxisLength. We also found that gene expression of the stem cell marker MYC and the cell proliferation marker MKI67 correlated with nuclear features. To complement and inform pathologists of relevant diagnostic features, we queried the most representative nuclear instances from each patient population based on genetic and transcriptional classes. Our results demonstrate that specific nuclear features carry prognostic significance and associations with transcriptional and genetic classes, highlighting the potential of high throughput pathology image analysis as a complementary approach to human-based review and translational research.
High resolution satellite image indexing and retrieval using SURF features and bag of visual words
NASA Astrophysics Data System (ADS)
Bouteldja, Samia; Kourgli, Assia
2017-03-01
In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
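The bag-of-visual-words (BoVW) representation above quantizes local descriptors against a learned codebook and pools them into a histogram; a minimal sketch (the codebook would normally come from k-means over training descriptors, and the toy 2-D "descriptors" below stand in for real SURF vectors):

```python
def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor (e.g., a SURF vector) to its nearest
    codeword by Euclidean distance and return the normalized histogram of
    codeword counts, i.e., the BoVW image signature."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)), key=lambda k: dist2(d, codebook[k]))
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist

# Toy 2-D "descriptors" and a 2-word codebook:
codebook = [[0.0, 0.0], [1.0, 1.0]]
descs = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.8], [0.0, 0.2]]
print(bovw_histogram(descs, codebook))  # [0.5, 0.5]
```

Two images can then be compared (for retrieval ranking) by any histogram distance over their signatures, independent of how many local features each image produced.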
Automated simultaneous multiple feature classification of MTI data
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.
2002-08-01
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
Thekkek, Nadhi; Lee, Michelle H.; Polydorides, Alexandros D.; Rosen, Daniel G.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-01-01
Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett's-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early detection of Barrett's-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett's esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC=0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett's-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance. PMID:25950645
Thekkek, Nadhi; Lee, Michelle H; Polydorides, Alexandros D; Rosen, Daniel G; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-05-01
Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett's-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early detection of Barrett's-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett's esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC = 0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett's-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance.
High-order distance-based multiview stochastic learning in image classification.
Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng
2014-12-01
How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications in content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional concatenation scheme, which links the features of different views into one long vector, is inappropriate because each view has its own specific statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from the hypergraph to replace the pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective functions of HD-MSL and obtain the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.
Jiang, Jun; Wu, Yao; Huang, Meiyan; Yang, Wei; Chen, Wufan; Feng, Qianjin
2013-01-01
Brain tumor segmentation is a clinical requirement for brain tumor diagnosis and radiotherapy planning. Automating this process is a challenging task due to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this paper, we propose a method to construct a graph by learning the population- and patient-specific feature sets of multimodal magnetic resonance (MR) images and by utilizing the graph-cut to achieve a final segmentation. The probabilities of each pixel that belongs to the foreground (tumor) and the background are estimated by global and custom classifiers that are trained through learning population- and patient-specific feature sets, respectively. The proposed method is evaluated using 23 glioma image sequences, and the segmentation results are compared with other approaches. The encouraging evaluation results obtained, i.e., DSC (84.5%), Jaccard (74.1%), sensitivity (87.2%), and specificity (83.1%), show that the proposed method can effectively make use of both population- and patient-specific information.
Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan
2017-12-01
This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was a convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with the CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81~85% ACC and 0.87~0.92 AUC, which were significantly higher than the results of the texture features. The CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of the CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. The present study shows that the performance of the CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images.
Because the CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into the CNN is a promising direction for future research.
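The evaluation criteria used throughout the comparison (sensitivity, specificity, ACC) reduce to simple ratios of confusion-matrix counts; a minimal sketch, with illustrative counts chosen only to mirror the CNN percentages reported above:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    sensitivity = TP / (TP + FN)   (fraction of metastatic nodes caught)
    specificity = TN / (TN + FP)   (fraction of benign nodes cleared)
    accuracy    = (TP + TN) / all
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with the CNN results above:
sens, spec, acc = diagnostic_metrics(tp=84, fp=12, tn=88, fn=16)
# sens = 0.84, spec = 0.88, acc = 0.86
```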
Features and limitations of mobile tablet devices for viewing radiological images.
Grunert, J H
2015-03-01
Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security, and the range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray Ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed.
Shift-invariant discrete wavelet transform analysis for retinal image classification.
Khademi, April; Krishnan, Sridhar
2007-12-01
This work involves retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion, and more) were used, and a specificity of 79% and a sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale-, and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
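The leave-one-out protocol used above (each of the 86 images classified by a model trained on the other 85) is easy to sketch; the nearest-mean classifier below is only a toy stand-in for the paper's linear discriminant analysis, and the 1-D data are invented:

```python
def loocv_accuracy(features, labels, classify):
    """Leave-one-out cross-validation: classify each sample with a model
    trained on all remaining samples; return the fraction correct."""
    correct = 0
    n = len(features)
    for i in range(n):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if classify(train_x, train_y, features[i]) == labels[i]:
            correct += 1
    return correct / n

def nearest_mean(train_x, train_y, query):
    """Toy classifier: predict the class whose per-class feature mean
    is closest (squared Euclidean distance) to the query vector."""
    classes = sorted(set(train_y))
    def class_mean(cls):
        rows = [x for x, y in zip(train_x, train_y) if y == cls]
        return [sum(col) / len(rows) for col in zip(*rows)]
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(classes, key=lambda c: dist2(query, class_mean(c)))

# Invented 1-D features for six images, two well-separated classes:
x = [[0.0], [0.2], [0.1], [1.0], [0.9], [1.1]]
y = ["normal"] * 3 + ["abnormal"] * 3
print(loocv_accuracy(x, y, nearest_mean))  # 1.0
```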
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.
Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E
2014-02-21
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUV(peak)) over lesions of interest. Relative differences in SUV(peak) between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUV(peak) values. 
The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
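The leave-one-subject-out cross-validation used to train and test the multiple-regression model above can be sketched with ordinary least squares. Everything below (the feature matrix, target, and the `loso_cv_predict` helper) is a synthetic illustration under assumed dimensions (22 subjects, 6 selected features), not the study's data or code:

```python
import numpy as np

def loso_cv_predict(X, y, subject_ids):
    """Leave-one-subject-out CV: for each subject, fit ordinary least
    squares on all other subjects and predict the held-out one."""
    preds = np.empty_like(y, dtype=float)
    for s in np.unique(subject_ids):
        test = subject_ids == s
        train = ~test
        # Add an intercept column and solve the least-squares problem.
        Xtr = np.column_stack([np.ones(train.sum()), X[train]])
        Xte = np.column_stack([np.ones(test.sum()), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        preds[test] = Xte @ beta
    return preds

# Toy data: 22 "subjects", 6 respiratory-trace features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(22, 6))
true_beta = rng.normal(size=6)
y = X @ true_beta + 0.1 * rng.normal(size=22)
subjects = np.arange(22)
pred = loso_cv_predict(X, y, subjects)
```

Because each subject contributes one observation here, this reduces to leave-one-out regression; with repeated traces per subject the grouping by `subject_ids` is what keeps a subject's own data out of its training fold.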
Evaluation of Imaging Methods in Tick-Borne Encephalitis.
Zawadzki, Radosław; Garkowski, Adam; Kubas, Bożena; Zajkowska, Joanna; Hładuński, Marcin; Jurgilewicz, Dorota; Łebkowska, Urszula
2017-01-01
Tick-borne encephalitis (TBE) is caused by a virus that belongs to the Flaviviridae family and is transmitted by tick bites. The disease has a biphasic course. Diagnosis is based on laboratory examinations because of non-specific clinical features, which usually entails the detection of specific IgM antibodies in either blood or cerebrospinal fluid that appear in the second phase of the disease. Neurological symptoms, time course of the disease, and imaging findings are multifaceted. During the second phase of the disease, after the onset of neurological symptoms, magnetic resonance imaging (MRI) abnormalities are observed in a limited number of cases. However, imaging features may aid in predicting the prognosis of the disease.
Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D
2008-04-01
This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, yielding more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics are used to constrain the deformable contour; as more images of the same patient are acquired, the patient-specific shape statistics collected online from the previous segmentation results gradually play a larger role. The patient-specific shape statistics are thus updated each time a new segmentation result is obtained, and are further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
Quantitative imaging features: extension of the oncology medical image database
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.
2015-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important to determine whether a disease is present or a therapy is effective by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features and data from the acquired images would be valuable and facilitate further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.
Insights into multimodal imaging classification of ADHD
Colby, John B.; Rudie, Jeffrey D.; Brown, Jesse A.; Douglas, Pamela K.; Cohen, Mark S.; Shehzad, Zarrar
2012-01-01
Attention deficit hyperactivity disorder (ADHD) currently is diagnosed in children by clinicians via subjective ADHD-specific behavioral instruments and by reports from the parents and teachers. Considering its high prevalence and large economic and societal costs, a quantitative tool that aids in diagnosis by characterizing underlying neurobiology would be extremely valuable. This provided motivation for the ADHD-200 machine learning (ML) competition, a multisite collaborative effort to investigate imaging classifiers for ADHD. Here we present our ML approach, which used structural and functional magnetic resonance imaging data, combined with demographic information, to predict diagnostic status of individuals with ADHD from typically developing (TD) children across eight different research sites. Structural features included quantitative metrics from 113 cortical and non-cortical regions. Functional features included Pearson correlation functional connectivity matrices, nodal and global graph theoretical measures, nodal power spectra, voxelwise global connectivity, and voxelwise regional homogeneity. We performed feature ranking for each site and modality using the multiple support vector machine recursive feature elimination (SVM-RFE) algorithm, and feature subset selection by optimizing the expected generalization performance of a radial basis function kernel SVM (RBF-SVM) trained across a range of the top features. Site-specific RBF-SVMs using these optimal feature sets from each imaging modality were used to predict the class labels of an independent hold-out test set. A voting approach was used to combine these multiple predictions and assign final class labels. With this methodology we were able to predict diagnosis of ADHD with 55% accuracy (versus a 39% chance level in this sample), 33% sensitivity, and 80% specificity. 
This approach also allowed us to evaluate predictive structural and functional features giving insight into abnormal brain circuitry in ADHD. PMID:22912605
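The SVM-RFE ranking step described above can be sketched with scikit-learn's `RFE` wrapper around a linear SVM: at each iteration the feature with the smallest absolute weight is dropped. The data below are synthetic and the hyperparameters illustrative; the competition entry additionally ran this per site and per modality:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# Toy data: 100 samples, 20 features; only the first 3 carry class signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

# SVM-RFE: recursively eliminate the feature with the smallest |weight|
# of a fitted linear SVM until the requested number of features remains.
selector = RFE(SVC(kernel="linear"), n_features_to_select=3, step=1)
selector.fit(X, y)
selected = np.flatnonzero(selector.support_)
```

The paper's "multiple" SVM-RFE variant aggregates rankings over resampled folds for stability; the single-pass version above shows only the core elimination loop.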
Das, D K; Maiti, A K; Chakraborty, C
2015-03-01
In this paper, we propose a comprehensive image characterization cum classification framework for malaria-infected stage detection using microscopic images of thin blood smears. The methodology mainly includes microscopic imaging of Leishman-stained blood slides, noise reduction and illumination correction, erythrocyte segmentation, and feature selection followed by machine classification. Amongst three image segmentation algorithms (namely, rule-based, Chan-Vese-based and marker-controlled watershed methods), the marker-controlled watershed technique provides better boundary detection of erythrocytes, especially in overlapping situations. Microscopic features at intensity, texture and morphology levels are extracted to discriminate infected and noninfected erythrocytes. In order to obtain a subgroup of potential features, feature selection techniques, namely F-statistic and information gain criteria, are considered here for ranking. Finally, five different classifiers, namely Naive Bayes, multilayer perceptron neural network, logistic regression, classification and regression tree (CART), and RBF neural network, have been trained and tested on 888 erythrocytes (infected and noninfected) for each feature subset. Performance evaluation of the proposed methodology shows that the multilayer perceptron network provides higher accuracy for malaria-infected erythrocyte recognition and infected stage classification. Results show that the top 90 features ranked by F-statistic (specificity: 98.64%, sensitivity: 100%, PPV: 99.73% and overall accuracy: 96.84%) and the top 60 features ranked by information gain (specificity: 97.29%, sensitivity: 100%, PPV: 99.46% and overall accuracy: 96.73%) provide the best results for malaria-infected stage classification. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
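F-statistic feature ranking of the kind used in that framework can be sketched with SciPy's one-way ANOVA; the two-class data below are synthetic stand-ins, not real erythrocyte features:

```python
import numpy as np
from scipy.stats import f_oneway

# Toy two-class data: 40 "infected" and 40 "non-infected" samples with
# 10 hypothetical features; only feature 0 is shifted between classes.
rng = np.random.default_rng(2)
infected = rng.normal(size=(40, 10))
infected[:, 0] += 2.0
noninfected = rng.normal(size=(40, 10))

# Rank features by one-way ANOVA F-statistic (larger = more discriminative).
F = np.array([f_oneway(infected[:, j], noninfected[:, j]).statistic
              for j in range(10)])
ranking = np.argsort(F)[::-1]
```

The top-k features under this ranking would then feed the downstream classifiers (Naive Bayes, MLP, CART, etc.).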
Multi-test cervical cancer diagnosis with missing data estimation
NASA Astrophysics Data System (ADS)
Xu, Tao; Huang, Xiaolei; Kim, Edward; Long, L. Rodney; Antani, Sameer
2015-03-01
Cervical cancer is one of the most common types of cancer among women worldwide. Existing screening programs for cervical cancer suffer from low sensitivity. Using images of the cervix (cervigrams) as an aid in detecting pre-cancerous changes to the cervix has good potential to improve sensitivity and help reduce the number of cervical cancer cases. In this paper, we present a method that utilizes multi-modality information extracted from multiple tests of a patient's visit to classify the visit as either low-risk or high-risk. Our algorithm integrates image features and text features to make a diagnosis. We also present two strategies to estimate the missing values in text features: Image Classifier Supervised Mean Imputation (ICSMI) and Image Classifier Supervised Linear Interpolation (ICSLI). We evaluate our method on a large medical dataset and compare it with several alternative approaches. The results show that the proposed method with the ICSLI strategy achieves the best result of 83.03% specificity and 76.36% sensitivity. When higher specificity is desired, our method can achieve 90% specificity with 62.12% sensitivity.
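The ICSMI strategy supervises imputation with an image classifier's predicted label. A minimal sketch of the underlying idea, class-conditional mean imputation, is given below; the labels here are hypothetical stand-ins for the image classifier's output, and this is not the paper's implementation:

```python
import numpy as np

def class_conditional_mean_impute(X, labels):
    """Replace NaNs in each column with the mean of that column computed
    only over samples sharing the same (predicted) class label."""
    X = X.astype(float).copy()
    for c in np.unique(labels):
        rows = labels == c
        block = X[rows]
        col_means = np.nanmean(block, axis=0)
        idx = np.where(np.isnan(block))
        block[idx] = np.take(col_means, idx[1])
        X[rows] = block
    return X

# Toy feature matrix with missing entries and two predicted classes.
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0],
              [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
filled = class_conditional_mean_impute(X, labels)
```

Conditioning the fill value on the predicted class (rather than the global column mean) is what makes the imputation "image classifier supervised" in spirit.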
Comparison of Texture Features Used for Classification of Life Stages of Malaria Parasite.
Bairagi, Vinayak K; Charpe, Kshipra C
2016-01-01
Malaria is a vector-borne disease that occurs widely in equatorial regions. Even after decades of malaria-control campaigns, it remains a high-mortality disease, largely because of improper and late diagnosis. To reduce the number of people affected by malaria, diagnosis should be early and accurate. This paper presents an automatic method for diagnosing the malaria parasite in blood images. Image processing techniques are used to diagnose the malaria parasite and to detect its stages. The diagnosis of parasite stages is performed using statistical and textural features of the malaria parasite in blood images. This paper compares texture-based features used individually and in combination. The comparison is made by considering the accuracy, sensitivity, and specificity of the features for the same images in the database.
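The GLCM-derived texture features that such comparisons typically include (contrast, inverse difference moment, etc.) can be sketched from first principles. This is a minimal, unoptimized illustration rather than the authors' pipeline; the function names are ours:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized so its entries sum to 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def homogeneity(p):  # a.k.a. inverse difference moment
    i, j = np.indices(p.shape)
    return float(np.sum(p / (1.0 + (i - j) ** 2)))

# A constant image has zero contrast and perfect homogeneity; a 0/1
# checkerboard maximizes horizontal neighbor disagreement.
flat = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2
p_flat = glcm(flat, 2)
p_ck = glcm(checker, 2)
```

In practice libraries such as scikit-image provide optimized equivalents; the loop form above just makes the co-occurrence counting explicit.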
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) evidence theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features that effectively capture emotional semantics from the short text associated with images, based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
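Dempster-Shafer combination of a visual and a textual evidence source can be sketched as follows. The mass assignments are illustrative, not values from the paper, and the frame is a hypothetical two-class affect label:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    {frozenset_of_hypotheses: mass} dicts over the same frame."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Renormalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

POS, NEG = frozenset({"pos"}), frozenset({"neg"})
BOTH = POS | NEG  # the whole frame: "don't know"
visual = {POS: 0.6, NEG: 0.1, BOTH: 0.3}  # evidence from visual features
text = {POS: 0.5, NEG: 0.2, BOTH: 0.3}    # evidence from text features
fused = dempster_combine(visual, text)
```

When two independent sources lean the same way, the fused belief in that class exceeds either source's alone, which is the appeal of D-S fusion for combining modalities.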
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Murukeshan, Vadakke Matham
2017-04-01
An optical imaging probe with targeted multispectral and spatiotemporal illumination features has applications in many diagnostic biomedical studies. However, these systems are mostly integrated into conventional microscopes, limiting their use for in vitro applications. We present a variable-resolution imaging probe using a digital micromirror device (DMD) with an achievable maximum lateral resolution of 2.7 μm and an axial resolution of 5.5 μm, along with precise shape-selective targeted illumination. We have demonstrated switching of different wavelengths to image multiple regions in the field of view. Moreover, the targeted illumination feature allows enhanced image contrast by time-averaged imaging of selected regions with different optical exposure. The region-specific multidirectional scanning feature of this probe has facilitated high-speed targeted confocal imaging.
Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images
NASA Astrophysics Data System (ADS)
Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan
2017-08-01
Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney from ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to de-noise the speckle, ensuring the image preserves the pixels in a region of interest (ROI) for further extraction. A Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented by creating a foreground and background of the image, where a mask is created to eliminate unwanted intensity values. Statistical texture feature methods are used, namely Intensity Histogram (IH), Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run-Length Matrix (GLRLM). These methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three features (Contrast, Difference Variance and Inverse Difference Moment Normalized) from GLCM are not statistically significantly different across machines; this suggests that these three features describe healthy kidney characteristics regardless of the ultrasound image quality.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated using the disparity between the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
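The ranging step rests on the standard pinhole stereo relation Z = f·B/d. A minimal sketch follows; the focal length, baseline, and disparity values are illustrative, not taken from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters, and d the
    horizontal disparity (in pixels) of a feature matched between the
    left and right images."""
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_m / disparity_px

# A feature matched at 64 px disparity, 800 px focal length, 0.2 m baseline.
z = depth_from_disparity(800.0, 0.2, 64.0)
```

The hard part, which the Queen Victoria Algorithm addresses, is producing reliable matched features; once the disparity is known, the range computation itself is this one line.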
Lakhman, Yulia; Veeraraghavan, Harini; Chaim, Joshua; Feier, Diana; Goldman, Debra A; Moskowitz, Chaya S; Nougaret, Stephanie; Sosa, Ramon E; Vargas, Hebert Alberto; Soslow, Robert A; Abu-Rustum, Nadeem R; Hricak, Hedvig; Sala, Evis
2017-07-01
To investigate whether qualitative magnetic resonance (MR) features can distinguish leiomyosarcoma (LMS) from atypical leiomyoma (ALM) and assess the feasibility of texture analysis (TA). This retrospective study included 41 women (ALM = 22, LMS = 19) imaged with MRI prior to surgery. Two readers (R1, R2) evaluated each lesion for qualitative MR features. Associations between MR features and LMS were evaluated with Fisher's exact test. Accuracy measures were calculated for the four most significant features. TA was performed for 24 patients (ALM = 14, LMS = 10) with uniform imaging following lesion segmentation on axial T2-weighted images. Texture features were pre-selected using the Wilcoxon signed-rank test with Bonferroni correction and analyzed with unsupervised clustering to separate LMS from ALM. The four qualitative MR features most strongly associated with LMS were nodular borders, haemorrhage, "T2 dark" area(s), and central unenhanced area(s) (p ≤ 0.0001 for each feature/reader). The highest sensitivity [1.00 (95%CI:0.82-1.00)/0.95 (95%CI:0.74-1.00)] and specificity [0.95 (95%CI:0.77-1.00)/1.00 (95%CI:0.85-1.00)] were achieved for R1/R2, respectively, when a lesion had ≥3 of these four features. Sixteen texture features differed significantly between LMS and ALM (p-values: <0.001-0.036). Unsupervised clustering achieved an accuracy of 0.75 (sensitivity: 0.70; specificity: 0.79). A combination of ≥3 of the qualitative MR features accurately distinguished LMS from ALM. TA was feasible. • Four qualitative MR features demonstrated the strongest statistical association with LMS. • A combination of ≥3 of these features could accurately differentiate LMS from ALM. • Texture analysis was a feasible semi-automated approach for lesion categorization.
BCC skin cancer diagnosis based on texture analysis techniques
NASA Astrophysics Data System (ADS)
Chuang, Shao-Hui; Sun, Xiaoyan; Chang, Wen-Yu; Chen, Gwo-Shing; Huang, Adam; Li, Jiang; McKenzie, Frederic D.
2011-03-01
In this paper, we present a texture analysis based method for diagnosing Basal Cell Carcinoma (BCC) skin cancer using optical images taken from suspicious skin regions. We first extracted Run Length Matrix and Haralick texture features from the images and used a feature selection algorithm to identify the most effective feature set for the diagnosis. We then utilized a Multi-Layer Perceptron (MLP) classifier to classify the images as BCC or normal cases. Experiments showed that detecting BCC based on optical images is feasible. The best sensitivity and specificity we achieved on our data set were 94% and 95%, respectively.
Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A
2011-01-01
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093
NASA Astrophysics Data System (ADS)
Xiong, Wei; Qiu, Bo; Tian, Qi; Mueller, Henning; Xu, Changsheng
2005-04-01
Medical image retrieval is still mainly a research domain with a large variety of applications and techniques. With the ImageCLEF 2004 benchmark, an evaluation framework has been created that includes a database, query topics and ground truth data. Eleven systems (with a total of more than 50 runs) compared their performance in various configurations. The results show that no single feature performs well on all query tasks. Key to successful retrieval is rather the selection of features and feature weights based on a specific set of input features, and thus on the query task. In this paper we propose a novel method based on query topic dependent image features (QTDIF) for content-based medical image retrieval. These feature sets are designed to capture both inter-category and intra-category statistical variations to achieve good retrieval performance in terms of recall and precision. We have used Gaussian Mixture Models (GMM) and blob representation to model medical images and construct the proposed novel QTDIF for CBIR. Finally, trained multi-class support vector machines (SVM) are used for image similarity ranking. The proposed methods have been tested over the Casimage database with around 9000 images, for the 26 image topics used for ImageCLEF 2004. The retrieval performance has been compared with the medGIFT system, which is based on the GNU Image Finding Tool (GIFT). The experimental results show that the proposed QTDIF-based CBIR can provide significantly better performance than systems based on general features only.
A two-view ultrasound CAD system for spina bifida detection using Zernike features
NASA Astrophysics Data System (ADS)
Konur, Umut; Gürgen, Fikret; Varol, Füsun
2011-03-01
In this work, we address a very specific CAD (Computer Aided Detection/Diagnosis) problem: detecting spina bifida, a relatively common birth defect, in the prenatal period. Fetal ultrasound images are used as the input imaging modality, which is so far the most convenient. Our approach makes decisions using two particular views of the fetal neural tube. Transcerebellar head (i.e. brain) and transverse (axial) spine images are processed to extract features, which are then used to classify healthy (normal), suspicious (probably defective) and non-decidable cases. Decisions raised by the two independent classifiers may be treated individually or, if data for both modalities are available, combined for greater reliability. Even more reliability can be attained by using more than two modalities and basing the final decision on all those classifiers. Our current system relies on feature extraction from images for particular patients. The first step is image preprocessing and segmentation, to discard useless image pixels and represent the input in a more compact domain that is more representative for good classification performance. Next, features are extracted using Zernike moments computed on either B/W or gray-scale image segments. The aim is to obtain values for indicative markers that signal the presence of spina bifida. Markers differ depending on the image modality being used; either shape or texture information captured by the moments may yield useful features. Finally, SVMs are used to train classifiers that act as decision makers. Our experimental results show that a promising CAD system can be realized for this specific purpose. On the other hand, the performance of such a system depends highly on the quality of image preprocessing, segmentation and feature extraction, and on the comprehensiveness of the image data.
NASA Astrophysics Data System (ADS)
Futia, Gregory L.; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A.
2016-04-01
Circulating tumor cell (CTC) identification has applications in both early detection and monitoring of solid cancers. The rarity of CTCs, expected at ~1-50 CTCs per million nucleated blood cells (WBCs), requires identification methods based on biomarkers with high sensitivity and specificity. Discovery of biomarkers with ever higher sensitivity and specificity to CTCs is always desirable to potentially find more CTCs in cancer patients, thus increasing their clinical utility. Here, we investigate quantitative image cytometry measurements of lipids alongside the biomarker panel of DNA, Cytokeratin (CK), and CD45 commonly used to identify CTCs. We engineered a device for labeling suspended cell samples with fluorescent antibodies and dyes. We used it to prepare samples for 4-channel confocal laser scanning microscopy. The total data acquired at high resolution from one sample is ~1.3 GB. We developed software to perform automated segmentation of these images into regions of interest (ROIs) containing individual cells. We quantified image features of total signal, spatial second moment, spatial frequency second moment, and their product for each ROI. We performed measurements on pure WBCs, the cancer cell line MCF7 and mixed samples. Multivariable regressions and feature selection were used to determine combination features that are more sensitive and specific than any individual feature separately. We also demonstrate that computation of spatial characteristics provides higher sensitivity and specificity than intensity alone. Statistical models allowed quantification of the sensitivity and specificity required for detecting small levels of CTCs in a human blood sample.
Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina
2012-05-01
This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.
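The Cartesian-to-polar mapping at the heart of this extension can be sketched with simple nearest-neighbour polar resampling, so that a closed contour around a chosen center becomes a (nearly) horizontal line that layer-segmentation machinery can follow column by column. `unwrap_to_polar` is our hypothetical helper, not the authors' quasi-polar transform:

```python
import numpy as np

def unwrap_to_polar(img, center, n_theta=360, n_r=100):
    """Sample an image on a polar grid about `center`; rows index radius,
    columns index angle, so a closed contour around the center maps to a
    roughly horizontal line in the output."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.arange(n_r)
    rr = (cy + radii[:, None] * np.sin(thetas)[None, :]).round().astype(int)
    cc = (cx + radii[:, None] * np.cos(thetas)[None, :]).round().astype(int)
    rr = np.clip(rr, 0, img.shape[0] - 1)
    cc = np.clip(cc, 0, img.shape[1] - 1)
    return img[rr, cc]  # shape (n_r, n_theta)

# A filled disk of radius 30: its boundary becomes one nearly flat row.
yy, xx = np.indices((200, 200))
disk = ((yy - 100) ** 2 + (xx - 100) ** 2 <= 30 ** 2).astype(float)
polar = unwrap_to_polar(disk, (100, 100), n_theta=180, n_r=60)
edge_row = np.argmin(polar, axis=0)  # first radius outside the disk, per angle
```

After this unwrapping, finding the closed contour reduces to the layer problem GTDP already solves: a shortest path across the columns of the polar image.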
Oelze, Michael L; Mamou, Jonathan
2016-02-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. 
Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.
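To make the spectral-based parameterization concrete: the spectral slope and 0-MHz intercept mentioned above are commonly obtained by a linear fit to the log power spectrum over the usable bandwidth. A minimal sketch with made-up numbers (illustrative data, not values from the article):

```python
def linear_fit(xs, ys):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# toy normalized power spectrum in dB over a usable bandwidth (MHz)
freqs = [2.0, 3.0, 4.0, 5.0, 6.0]
power_db = [-10.0, -13.0, -16.0, -19.0, -22.0]  # perfectly linear toy data

slope, intercept0 = linear_fit(freqs, power_db)
print(slope)       # -3.0, spectral slope in dB/MHz
print(intercept0)  # -4.0, extrapolated 0-MHz intercept in dB
```

The slope relates to scatterer size and the intercept to acoustic concentration, which is why these two regression parameters carry diagnostic information.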
Oelze, Michael L.; Mamou, Jonathan
2017-01-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient, estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter and the effective acoustic concentration of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. 
Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and pre-clinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy. PMID:26761606
Caie, Peter D.; Zhou, Ying; Turnbull, Arran K.; Oniscu, Anca; Harrison, David J.
2016-01-01
A number of candidate histopathologic factors show promise in identifying stage II colorectal cancer (CRC) patients at a high risk of disease-specific death; however, they can suffer from low reproducibility and none have replaced classical pathologic staging. We developed an image analysis algorithm which standardized the quantification of specific histopathologic features and exported a multi-parametric feature set captured without bias. The image analysis algorithm was executed across a training set (n = 50) and the resultant big data was distilled through decision tree modelling to identify the most informative parameters to sub-categorize stage II CRC patients. The most significant, and novel, parameter identified was the ‘sum area of poorly differentiated clusters’ (AreaPDC). This feature was validated across a second cohort of stage II CRC patients (n = 134) (HR = 4; 95% CI, 1.5–11). Finally, the AreaPDC was integrated with the significant features within the clinical pathology report, pT stage and differentiation, into a novel prognostic index (HR = 7.5; 95% CI, 3–18.5) which improved upon current clinical staging (HR = 4.26; 95% CI, 1.7–10.3). The identification of poorly differentiated clusters as being highly significant in disease progression presents evidence to suggest that these features could be the source of novel targets to decrease the risk of disease-specific death. PMID:27322148
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks on WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
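The split/map/aggregate flow described above can be sketched outside Hadoop. In this toy sketch a thread pool stands in for the slave nodes, a plain list stands in for HDFS, and the per-image "feature" is a trivial mean intensity (all stand-ins, not the paper's extractor):

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image):
    # stand-in for a per-image feature extractor (here: mean intensity)
    return sum(image) / len(image)

def map_split(split):
    # "map" phase: extract features for every image in one split
    return [extract_features(img) for img in split]

def reduce_results(per_split):
    # "reduce" phase: aggregate feature records across all splits
    feats = []
    for part in per_split:
        feats.extend(part)
    return feats

# toy dataset: 6 "images", divided into 3 splits of 2 images each
images = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 1, 1], [2, 2, 2], [0, 0, 3]]
splits = [images[i:i + 2] for i in range(0, len(images), 2)]

with ThreadPoolExecutor(max_workers=3) as pool:  # "slave nodes"
    per_split = list(pool.map(map_split, splits))

features = reduce_results(per_split)
print(features)  # [2.0, 5.0, 8.0, 1.0, 2.0, 1.0]
```

The point of the design is that per-image extraction is embarrassingly parallel, so the framework scales by adding nodes rather than changing the extractor.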
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we examine the validity of OCT for identifying changes in skin cancer using texture analysis based on Haralick texture features, fractal dimension, the Markov random field method, and complex directional features from different tissues. These features were used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in the OCT images. Haralick texture features such as contrast, correlation, energy, and homogeneity were calculated in various directions. A box-counting method was applied to evaluate the fractal dimension of the skin samples. Markov random fields were used to enhance classification quality. Additionally, we used the complex directional field, calculated by the local gradient method, to increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and of tumors, including Basal Cell Carcinoma (BCC), Malignant Melanoma (MM), and Nevus. All images were acquired with our laboratory SD-OCT setup, based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained sensitivity of about 97% and specificity of about 73% for the task of discriminating between MM and Nevus.
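The Haralick features named above are derived from a grey-level co-occurrence matrix (GLCM): a normalized count of how often grey-level pairs occur at a fixed pixel offset. A minimal pure-Python sketch for one horizontal offset (illustrative only, not the paper's implementation):

```python
def glcm(img, dx=1, dy=0, levels=4):
    # grey-level co-occurrence matrix for one offset, normalised to sum to 1
    h, w = len(img), len(img[0])
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def haralick(p):
    # three of the classic Haralick statistics over the GLCM p
    levels = len(p)
    pairs = [(i, j) for i in range(levels) for j in range(levels)]
    contrast = sum(p[i][j] * (i - j) ** 2 for i, j in pairs)
    energy = sum(p[i][j] ** 2 for i, j in pairs)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs)
    return contrast, energy, homogeneity

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
contrast, energy, homogeneity = haralick(glcm(img))
print(contrast, energy, homogeneity)
```

Computing the same statistics over several offsets (directions) yields the directional texture signature used for classification.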
Deep Learning in Medical Image Analysis
Shen, Dinggang; Wu, Guorong; Suk, Heung-Il
2016-01-01
Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap in helping to identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly on the basis of domain-specific knowledge, lies at the core of these advances. In that way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvement. PMID:28301734
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tixier, F; INSERM UMR1101 LaTIM, Brest; Cheze-Le-Rest, C
2015-06-15
Purpose: Several quantitative features can be extracted from 18F-FDG PET images, such as standardized uptake values (SUVs), metabolic tumor volume (MTV), shape characterization (SC) or intra-tumor radiotracer heterogeneity quantification (HQ). Some of these features calculated from baseline 18F-FDG PET images have shown a prognostic and predictive clinical value. It has been hypothesized that these features highlight underlying tumor patho-physiological processes at smaller scales. The objective of this study was to investigate the ability of recovering alterations of signaling pathways from FDG PET image-derived features. Methods: 52 patients were prospectively recruited from two medical centers (Brest and Poitiers). All patients underwent an FDG PET scan for staging and biopsies of both healthy and primary tumor tissues. Biopsies went through a transcriptomic analysis performed in four batches on 4×44k chips (Agilent™). Primary tumors were delineated in the PET images using the Fuzzy Locally Adaptive Bayesian algorithm and characterized using 10 features including SUVs, SC and HQ. A module network algorithm followed by functional annotation was exploited in order to link PET features with signaling pathway alterations. Results: Several PET-derived features were found to discriminate differentially expressed genes between tumor and healthy tissue (fold-change >2, p<0.01) into 30 co-regulated groups (p<0.05). Functional annotations applied to these groups of genes highlighted associations with well-known pathways involved in cancer processes, such as cell proliferation and apoptosis, as well as with more specific ones such as unsaturated fatty acids. Conclusion: Quantitative features extracted from baseline 18F-FDG PET images, usually exploited only for diagnosis and staging, were identified in this work as being related to specific altered pathways and may show promise as tools for personalizing treatment decisions.
Comparison of k-means related clustering methods for nuclear medicine images segmentation
NASA Astrophysics Data System (ADS)
Borys, Damian; Bzowski, Pawel; Danch-Wierzchowska, Marta; Psiuk-Maksymowicz, Krzysztof
2017-03-01
In this paper, we evaluate the performance of the SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
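In a BoVW model, each local descriptor (such as SURF) is assigned to its nearest codeword in a learned dictionary, and the image is represented by the normalized histogram of assignments. A minimal sketch with toy 2-D descriptors and a hypothetical 3-word codebook (illustrative only):

```python
def quantize(desc, codebook):
    # assign a descriptor to its nearest codeword (squared Euclidean distance)
    best, best_d = 0, float("inf")
    for k, word in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(desc, word))
        if d < best_d:
            best, best_d = k, d
    return best

def bovw_histogram(descriptors, codebook):
    # bag-of-visual-words: L1-normalised histogram of codeword assignments
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1
    total = sum(hist)
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]          # hypothetical dictionary
descs = [[0.1, 0.0], [0.9, 1.1], [0.0, 0.9], [1.0, 1.0]]  # toy local descriptors
hist = bovw_histogram(descs, codebook)
print(hist)  # [0.25, 0.5, 0.25]
```

Learning a separate codebook per image category, as proposed above, amounts to replacing the single shared `codebook` with one dictionary per class before building the histograms.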
NASA Astrophysics Data System (ADS)
Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2010-03-01
Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best-performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best-performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
Automatic detection of anomalies in screening mammograms
2013-01-01
Background Diagnostic performance in breast screening programs may be influenced by the prior probability of disease. Since breast cancer incidence is roughly half a percent in the general population there is a large probability that the screening exam will be normal. That factor may contribute to false negatives. Screening programs typically exhibit about 83% sensitivity and 91% specificity. This investigation was undertaken to determine if a system could be developed to pre-sort screening-images into normal and suspicious bins based on their likelihood to contain disease. Wavelets were investigated as a method to parse the image data, potentially removing confounding information. The development of a classification system based on features extracted from wavelet transformed mammograms is reported. Methods In the multi-step procedure images were processed using 2D discrete wavelet transforms to create a set of maps at different size scales. Next, statistical features were computed from each map, and a subset of these features was the input for a concerted-effort set of naïve Bayesian classifiers. The classifier network was constructed to calculate the probability that the parent mammography image contained an abnormality. The abnormalities were not identified, nor were they regionalized. The algorithm was tested on two publicly available databases: the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society's database (MIAS). These databases contain radiologist-verified images and feature common abnormalities including spiculations, masses, geometric deformations, and fibroid tissues. Results The classifier-network designs tested achieved sensitivities and specificities sufficient to be potentially useful in a clinical setting. This first series of tests identified networks with 100% sensitivity and up to 79% specificity for abnormalities. 
This performance significantly exceeds the mean sensitivity reported in the literature for the unaided human expert. Conclusions Classifiers based on wavelet-derived features proved to be highly sensitive to a range of pathologies; as a result, Type II errors were nearly eliminated. Pre-sorting the images changed the prior probability in the sorted database from 37% to 74%. PMID:24330643
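The first two steps of the pipeline above, a 2D discrete wavelet transform followed by per-map statistics, can be sketched with a one-level Haar decomposition (illustrative only; the study's choice of wavelet and feature set may differ):

```python
def haar2d(img):
    # one-level 2D Haar decomposition into approximation and detail maps
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for y in range(0, h, 2):
        ll, lh, hl, hh = [], [], [], []
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            ll.append((a + b + c + d) / 4)  # smooth map
            lh.append((a - b + c - d) / 4)  # horizontal detail
            hl.append((a + b - c - d) / 4)  # vertical detail
            hh.append((a - b - c + d) / 4)  # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

def map_stats(m):
    # simple statistical features of one map: mean and variance
    vals = [v for row in m for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var

img = [[8, 8, 0, 0],
       [8, 8, 0, 0],
       [4, 4, 4, 4],
       [4, 4, 4, 4]]
LL, LH, HL, HH = haar2d(img)
print(map_stats(LL))  # (4.0, 8.0)
```

In the full system, such statistics from every map at every scale feed the naïve Bayesian classifier network that estimates the probability of an abnormality.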
Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.
Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil
2018-01-25
Due to recent developments in technology, the complexity of multimedia has increased significantly and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) is a process that provides a framework for image search, in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape, and texture are examples of low-level image features. Features play a significant role in image processing. The representation of an image by such features is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. As features define the behavior of an image, they determine its storage requirements, classification efficiency, and processing time. In this paper, we discuss various types of features and feature extraction techniques, and explain which feature extraction technique is better suited to which scenario. The effectiveness of the CBIR approach is fundamentally based on feature extraction. In image-processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed image retrieval method is built on YCbCr color with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is additionally compared to determine the suitability of a specific wavelet function for image retrieval. 
The proposed algorithm is implemented and tested on the Wang image database. For image retrieval, an Artificial Neural Network (ANN) is used and applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and comparing them with other proposed methods to demonstrate the superiority of our method. The efficiency and effectiveness of the proposed approach outperform existing research in terms of average precision and recall values.
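The precision and recall values used for evaluation are defined over the set of retrieved images and the set of relevant images. A minimal sketch with hypothetical image IDs:

```python
def precision_recall(retrieved, relevant):
    # precision: fraction of retrieved images that are relevant
    # recall: fraction of relevant images that were retrieved
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

retrieved = ["img3", "img7", "img1", "img9"]  # top results for one query
relevant = {"img1", "img3", "img5"}           # ground-truth relevant set
p, r = precision_recall(retrieved, relevant)
print(p, r)  # precision 0.5, recall 2/3
```

Averaging these values over many queries gives the average precision and recall figures on which the comparison above is based.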
IDH mutation assessment of glioma using texture features of multimodal MR images
NASA Astrophysics Data System (ADS)
Zhang, Xi; Tian, Qiang; Wu, Yu-Xia; Xu, Xiao-Pan; Li, Bao-Juan; Liu, Yi-Xiong; Liu, Yang; Lu, Hong-Bing
2017-03-01
Purpose: To 1) find effective texture features from multimodal MRI that can distinguish IDH mutant and wild-type status, and 2) propose a radiomic strategy for preoperatively detecting IDH mutation in patients with glioma. Materials and Methods: 152 patients with glioma were retrospectively included from the Cancer Genome Atlas. Corresponding pre- and post-contrast T1-weighted images, T2-weighted images and fluid-attenuation inversion recovery images from the Cancer Imaging Archive were analyzed. Specific statistical tests were applied to analyze the different kinds of baseline information of LrGG patients. Finally, 168 texture features were derived from multimodal MRI per patient. Then a support vector machine-based recursive feature elimination (SVM-RFE) and classification strategy was adopted to find the optimal feature subset and build the identification models for detecting the IDH mutation. Results: Among 152 patients, 92 and 60 were confirmed to be IDH-wild and mutant, respectively. Statistical analysis showed that the patients without IDH mutation were significantly older than patients with IDH mutation (p<0.01), and the distribution of some histological subtypes was significantly different between IDH wild and mutant groups (p<0.01). After SVM-RFE, 15 optimal features were determined for IDH mutation detection. The accuracy, sensitivity, specificity, and AUC after SVM-RFE and parameter optimization were 82.2%, 85.0%, 78.3%, and 0.841, respectively. Conclusion: This study presented a radiomic strategy for noninvasively discriminating the IDH mutation status of patients with glioma. It effectively incorporated various texture features from multimodal MRI with an SVM-based classification strategy. The results suggest that features selected by SVM-RFE have greater potential for identifying IDH mutation. The proposed radiomics strategy could facilitate clinical decision making in patients with glioma.
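SVM-RFE works by repeatedly training a classifier, ranking features by weight magnitude, and discarding the weakest until the desired subset size remains. A schematic of the elimination loop, using absolute feature-label correlation as a stand-in for the SVM weight magnitude (the actual method ranks by |w| from a trained linear SVM):

```python
def score_features(X, y):
    # stand-in for |w| from a linear SVM: absolute Pearson correlation with the label
    n = len(y)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((c - mx) * (t - my) for c, t in zip(col, y))
        sx = sum((c - mx) ** 2 for c in col) ** 0.5
        sy = sum((t - my) ** 2 for t in y) ** 0.5
        scores.append(abs(cov / (sx * sy)) if sx and sy else 0.0)
    return scores

def rfe(X, y, keep):
    # recursive feature elimination: drop the weakest feature until `keep` remain
    active = list(range(len(X[0])))
    while len(active) > keep:
        sub = [[row[j] for j in active] for row in X]
        scores = score_features(sub, y)
        active.pop(scores.index(min(scores)))
    return active

X = [[1, 0.9, 5], [2, 0.1, 5], [3, 3.1, 5], [4, 3.9, 5]]  # feature 2 is constant
y = [0, 0, 1, 1]
print(rfe(X, y, 2))  # [0, 1]
```

The re-scoring at every iteration is what distinguishes RFE from a one-shot filter ranking: removing one feature can change the relative importance of those that remain.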
A short feature vector for image matching: The Log-Polar Magnitude feature descriptor
Hast, Anders; Wählby, Carolina; Sintorn, Ida-Maria
2017-01-01
The choice of an optimal feature detector-descriptor combination for image matching often depends on the application and the image type. In this paper, we propose the Log-Polar Magnitude feature descriptor—a rotation, scale, and illumination invariant descriptor that achieves comparable performance to SIFT on a large variety of image registration problems but with much shorter feature vectors. The descriptor is based on the Log-Polar Transform followed by a Fourier Transform and selection of the magnitude spectrum components. Selecting different frequency components allows optimizing for image patterns specific for a particular application. In addition, by relying only on coordinates of the found features and (optionally) feature sizes our descriptor is completely detector independent. We propose feature vectors of length 48 or 56 that potentially can be shortened even further depending on the application. Shorter feature vectors result in better memory usage and faster matching. This combined with the fact that the descriptor does not require a time-consuming feature orientation estimation (the rotation invariance is achieved solely by using the magnitude spectrum of the Log-Polar Transform) makes it particularly attractive to applications with limited hardware capacity. Evaluation is performed on the standard Oxford dataset and two different microscopy datasets; one with fluorescence and one with transmission electron microscopy images. Our method performs better than SURF and comparably to SIFT on the Oxford dataset, and better than SIFT on both microscopy datasets, indicating that it is particularly useful in applications with microscopy images. PMID:29190737
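The rotation invariance claimed above follows from two facts: an image rotation becomes a circular shift along the angular axis of the (log-)polar transform, and the DFT magnitude spectrum is invariant to circular shifts. A 1-D sketch of that second property (a toy intensity profile, not the descriptor itself):

```python
import cmath

def dft_magnitudes(seq):
    # magnitude spectrum of the discrete Fourier transform
    n = len(seq)
    return [abs(sum(seq[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# toy intensity profile along the angular axis of a log-polar patch
ring = [1.0, 3.0, 2.0, 0.0, 0.0, 1.0, 4.0, 2.0]
rotated = ring[3:] + ring[:3]  # image rotation = circular shift in polar domain

m1 = dft_magnitudes(ring)
m2 = dft_magnitudes(rotated)
print(all(abs(a - b) < 1e-9 for a, b in zip(m1, m2)))  # True
```

Because the magnitudes are identical before and after the shift, no explicit orientation estimation is needed, which is where the descriptor saves time over SIFT-style approaches.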
A new approach to modeling the influence of image features on fixation selection in scenes
Nuthmann, Antje; Einhäuser, Wolfgang
2015-01-01
Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogeneous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239
Learning to rank using user clicks and visual features for image retrieval.
Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong
2015-04-01
The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank models based on visual features and user clicks outperform state-of-the-art algorithms.
A multiparametric assay for quantitative nerve regeneration evaluation.
Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G
2005-08-01
We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheath thickness). The assay's performance and correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.
Diagnosis of Tempromandibular Disorders Using Local Binary Patterns.
Haghnegahdar, A A; Kolahi, S; Khojastepour, L; Tajeripour, F
2018-03-01
Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and two coronal cuts were prepared from each condyle; images were limited to the head of the mandibular condyle. In order to extract features from the images, we first use LBP and then the histogram of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) is applied to the feature-vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers. We used Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. The K nearest neighbor classifier achieves very good accuracy (0.9242), with desirable sensitivity (0.9470) and specificity (0.9015), while the other classifiers have lower accuracy, sensitivity and specificity. We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages.
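For reference, the basic 8-neighbour LBP operator thresholds each neighbour at the centre pixel and packs the results into a byte; histograms of these codes over the image form the texture feature. A minimal sketch (illustrative, not necessarily the authors' exact LBP variant):

```python
def lbp_code(img, y, x):
    # 8-neighbour local binary pattern at pixel (y, x):
    # each neighbour >= centre contributes one bit, clockwise from top-left
    centre = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

img = [[9, 9, 1],
       [1, 5, 1],
       [9, 9, 9]]
print(lbp_code(img, 1, 1))  # 115
```

Because each code depends only on local intensity ordering, LBP histograms are robust to monotonic illumination changes, which is one reason they are popular for radiographic texture analysis.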
Plaque echodensity and textural features are associated with histologic carotid plaque instability.
Doonan, Robert J; Gorgui, Jessica; Veinot, Jean P; Lai, Chi; Kyriacou, Efthyvoulos; Corriveau, Marc M; Steinmetz, Oren K; Daskalopoulou, Stella S
2016-09-01
Carotid plaque echodensity and texture features predict cerebrovascular symptomatology. Our purpose was to determine the association of echodensity and textural features obtained from a digital image analysis (DIA) program with histologic features of plaque instability as well as to identify the specific morphologic characteristics of unstable plaques. Patients scheduled to undergo carotid endarterectomy were recruited and underwent carotid ultrasound imaging. DIA was performed to extract echodensity and textural features using Plaque Texture Analysis software (LifeQ Medical Ltd, Nicosia, Cyprus). Carotid plaque surgical specimens were obtained and analyzed histologically. Principal component analysis (PCA) was performed to reduce imaging variables. Logistic regression models were used to determine if PCA variables and individual imaging variables predicted histologic features of plaque instability. Image analysis data from 160 patients were analyzed. Individual imaging features of plaque echolucency and homogeneity were associated with a more unstable plaque phenotype on histology. These results were independent of age, sex, and degree of carotid stenosis. PCA reduced 39 individual imaging variables to five PCA variables. PCA1 and PCA2 were significantly associated with overall plaque instability on histology (both P = .02), whereas PCA3 did not achieve statistical significance (P = .07). DIA features of carotid plaques are associated with histologic plaque instability as assessed by multiple histologic features. Importantly, unstable plaques on histology appear more echolucent and homogeneous on ultrasound imaging. These results are independent of stenosis, suggesting that image analysis may have a role in refining the selection of patients who undergo carotid endarterectomy. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Breed-Specific Magnetic Resonance Imaging Characteristics of Necrotizing Encephalitis in Dogs
Flegel, Thomas
2017-01-01
Diagnosing necrotizing encephalitis, with its subcategories of necrotizing leukoencephalitis and necrotizing meningoencephalitis, based on magnetic resonance imaging alone can be challenging. However, there are breed-specific imaging characteristics in both subcategories that allow establishing a clinical diagnosis with a relatively high degree of certainty. Typical breed specific imaging features, such as lesion distribution, signal intensity, contrast enhancement, and gross changes of brain structure (midline shift, ventriculomegaly, and brain herniation) are summarized here, using current literature, for the most commonly affected canine breeds: Yorkshire Terrier, French Bulldog, Pug, and Chihuahua. PMID:29255715
Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A
2017-07-01
Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease-positive (D+) populations (MCF7 cells) and pure disease-negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and Cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
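The Bayesian argument above can be made concrete: for a fixed sensitivity and specificity, the positive predictive value (PPV) of a single positive call collapses as the target cell becomes rarer, because false positives from the vast D- population swamp the few true positives. A sketch with illustrative numbers (not the panel's measured values):

```python
def ppv(sensitivity, specificity, prevalence):
    # Bayes' rule: probability that a test-positive cell is truly a target cell
    tp = sensitivity * prevalence            # true-positive mass
    fp = (1 - specificity) * (1 - prevalence)  # false-positive mass
    return tp / (tp + fp)

# the same hypothetical panel applied to progressively rarer target cells
for rarity in (10, 100, 1000, 10000):
    print(rarity, round(ppv(0.95, 0.999, 1.0 / rarity), 3))
```

Even with 99.9% specificity in this toy setting, PPV falls below one half somewhere around a 1-in-1000 rarity, which is why improving specificity (e.g., by combining features via regression) directly extends the detectable rarity.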
NASA Astrophysics Data System (ADS)
Li, Hai; Kumavor, Patrick; Salman Alqasemi, Umar; Zhu, Quing
2015-01-01
A composite set of ovarian tissue features extracted from photoacoustic spectral data, the beam envelope, and co-registered ultrasound and photoacoustic images is used to characterize malignant and normal ovaries using logistic and support vector machine (SVM) classifiers. Normalized power spectra were calculated from the Fourier transform of the photoacoustic beamformed data, from which the spectral slopes and 0-MHz intercepts were extracted. Five features were extracted from the beam envelope and another 10 features were extracted from the photoacoustic images. These 17 features were ranked by their p-values from t-tests, and a filter-type feature selection method was used to determine the optimal number of features for the final classification. A total of 169 samples from 19 ex vivo ovaries were randomly distributed into training and testing groups. Both classifiers achieved a minimum mean misclassification error when the seven features with the lowest p-values were selected. Using these seven features, the logistic and SVM classifiers obtained sensitivities of 96.39±3.35% and 97.82±2.26%, and specificities of 98.92±1.39% and 100%, respectively, for the training group. For the testing group, the logistic and SVM classifiers achieved sensitivities of 92.71±3.55% and 92.64±3.27%, and specificities of 87.52±8.78% and 98.49±2.05%, respectively.
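The p-value-ranked, filter-type feature selection step described above can be sketched roughly as follows, using SciPy's two-sample t-test; the function names and toy data are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.stats import ttest_ind

def rank_features_by_ttest(X_malignant, X_normal):
    """Rank feature columns by two-sample t-test p-value, smallest first."""
    pvals = np.array([ttest_ind(X_malignant[:, j], X_normal[:, j]).pvalue
                      for j in range(X_malignant.shape[1])])
    return np.argsort(pvals), pvals

def select_top_k(X, order, k):
    """Keep only the k most discriminative feature columns."""
    return X[:, order[:k]]
```

In a filter method like this, the ranking is computed once from the class labels and is independent of the downstream classifier.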
NASA Astrophysics Data System (ADS)
Li, Hai; Kumavor, Patrick D.; Alqasemi, Umar; Zhu, Quing
2014-03-01
Human ovarian tissue features extracted from photoacoustic spectral data, beam envelopes, and co-registered ultrasound and photoacoustic images are used to characterize cancerous versus normal tissue using a support vector machine (SVM) classifier. The centers of suspicious tumor areas are estimated from Gaussian fits of the mean Radon transforms of the photoacoustic image along 0 and 90 degrees. Normalized power spectra are calculated using the Fourier transform of the photoacoustic beamformed data across these suspicious areas, from which the spectral slopes and 0-MHz intercepts are extracted. Image statistics, envelope histogram fitting, and the maximum outputs of six composite filters of cancerous or normal patterns, along with other previously used features, are calculated to compose a total of 17 features. These features are extracted from 169 datasets of 19 ex vivo ovaries. Half of the cancerous and normal datasets are randomly chosen to train an SVM classifier with a polynomial kernel, and the remainder is used for testing. With 50 rounds of data resampling, the SVM classifier gives 100% sensitivity and 100% specificity for the training group. For the testing group, it gives 89.68±6.37% sensitivity and 93.16±3.70% specificity. These results are superior to those obtained earlier by our group using features extracted from photoacoustic raw data or image statistics only.
Automated detection of retinal whitening in malarial retinopathy
NASA Astrophysics Data System (ADS)
Joshi, V.; Agurto, C.; Barriga, S.; Nemeth, S.; Soliz, P.; MacCormick, I.; Taylor, T.; Lewallen, S.; Harding, S.
2016-03-01
Cerebral malaria (CM) is a severe neurological complication of malarial infection. Malaria affects approximately 200 million people worldwide and claims 600,000 lives annually, 75% of whom are African children under five years of age. Because many of these deaths involve misdiagnosis of CM, there is a need for an accurate diagnostic test to confirm its presence. The retinal lesions associated with malarial retinopathy (MR), such as retinal whitening, vessel discoloration, and hemorrhages, are highly specific to CM, and their detection can improve the accuracy of CM diagnosis. This paper focuses on the development of an automated method for the detection of retinal whitening, a sign unique to MR that manifests due to retinal ischemia resulting from CM. We propose to detect the whitening region in retinal color images based on multiple color and textural features. First, we preprocess the image using color and textural features of the CMYK and CIE-XYZ color spaces to minimize camera reflex. Next, we utilize color features of the HSL, CMYK, and CIE-XYZ channels, along with the structural features of difference of Gaussians. A watershed segmentation algorithm is used to assign each image region a probability of being inside the whitening, based on the extracted features. The algorithm was applied to a dataset of 54 images (40 with whitening and 14 controls) and achieved an image-based (binary) classification AUC of 0.80, providing 88% sensitivity at a specificity of 65%. For a clinical application that requires a high-specificity setting, the algorithm can be tuned to a specificity of 89% at a sensitivity of 82%. This is the first published method for retinal whitening detection, and combining it with detection methods for vessel discoloration and hemorrhages can further improve the detection accuracy for malarial retinopathy.
Detection and clustering of features in aerial images by neuron network-based algorithm
NASA Astrophysics Data System (ADS)
Vozenilek, Vit
2015-12-01
The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on a combination of general feature analysis and its use for clustering and backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and basic backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main findings include the observation that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
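The learning rule summarized above (bipolar sigmoid activation, weight updates that descend the error gradient) can be sketched for a single neuron; this is a generic illustration under assumed names, not the paper's network:

```python
import math

def bipolar_sigmoid(x):
    # f(x) = 2 / (1 + e^{-x}) - 1, equivalent to tanh(x / 2)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def bipolar_sigmoid_deriv(y):
    # derivative expressed in terms of the output y = f(x): f' = (1 - y^2) / 2
    return 0.5 * (1.0 - y * y)

def backprop_step(w, b, x, target, lr=0.5):
    """One gradient-descent step on squared error for a single neuron."""
    y = bipolar_sigmoid(w * x + b)
    err = y - target
    grad = err * bipolar_sigmoid_deriv(y)  # dE/d(net input)
    return w - lr * grad * x, b - lr * grad, 0.5 * err * err
```

Repeating the step drives the squared error down, which is the "change of weights of the network to minimize the error" the abstract refers to.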
Ordinal measures for iris recognition.
Sun, Zhenan; Tan, Tieniu
2009-12-01
Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
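The core idea of an ordinal measure, comparing the order of regional averages rather than their precise values, can be sketched as follows; the lobe geometry here is a simplified stand-in for the paper's multilobe differential filters:

```python
import numpy as np

def ordinal_bit(image, lobe_a, lobe_b):
    """1 if mean intensity in region A exceeds region B, else 0.
    Each lobe is a (row_slice, col_slice) pair."""
    return int(image[lobe_a].mean() > image[lobe_b].mean())

def ordinal_code(image, lobe_pairs):
    """Concatenated ordinal bits; only the *order* of regional averages
    matters, so the code survives monotonic illumination changes."""
    return [ordinal_bit(image, a, b) for a, b in lobe_pairs]
```

Because any monotonically increasing intensity transform preserves which region is brighter, the code is unchanged under such illumination shifts, which is the robustness-versus-distinctiveness trade-off the abstract describes.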
NASA Astrophysics Data System (ADS)
Alvandipour, Mehrdad; Umbaugh, Scott E.; Mishra, Deependra K.; Dahal, Rohini; Lama, Norsang; Marino, Dominic J.; Sackman, Joseph
2017-05-01
Thermography and pattern classification techniques are used to classify three different pathologies in veterinary images. Thermographic images of both normal and diseased animals were provided by the Long Island Veterinary Specialists (LIVS). The three pathologies are ACL rupture, bone cancer, and feline hyperthyroidism. The diagnosis of these diseases usually involves radiology and laboratory tests, whereas the method we propose uses thermographic images and image analysis techniques and is intended for use as a prescreening tool. Images in each pathology category are first filtered by Gabor filters, and then various features are extracted and used for classification into normal and abnormal classes. Gabor filters are linear filters that can be characterized by two parameters: wavelength λ and orientation θ. With two different wavelengths and five different orientations, a total of ten different filters were studied. Different combinations of camera views, filters, feature vectors, normalization methods, and classification methods produce different tests, each of which was examined, and the sensitivity, specificity, and success rate for each test were computed. Using the Gabor features alone, sensitivity, specificity, and overall success rates of 85% were achieved for each of the pathologies.
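A bank of ten Gabor kernels (two wavelengths x five orientations) as described above can be sketched like this; the kernel size, sigma, gamma, and the specific wavelength values are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(wavelength, theta, size=21, sigma=4.0, gamma=0.5):
    """Real part of a Gabor filter with wavelength lambda and orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# a 2-wavelength x 5-orientation bank: ten filters in total
bank = [gabor_kernel(lam, th)
        for lam in (4.0, 8.0)
        for th in np.linspace(0, np.pi, 5, endpoint=False)]
```

Each kernel is convolved with the image, and statistics of the ten responses form the Gabor feature vector.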
Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features
NASA Astrophysics Data System (ADS)
Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija
2017-04-01
We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.
NASA Astrophysics Data System (ADS)
Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.
2009-02-01
Features calculated from different dimensions of images capture quantitative information about lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions or of the impact of combining them. The aim of this study is to determine the importance of combining features calculated in different dimensions. We performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity, and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five combinations of features. Our results showed that 3D image features generate the best result compared with the other combinations. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
Cross-Modal Retrieval With CNN Visual Features: A New Baseline.
Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng
2017-02-01
Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from a CNN model pretrained on ImageNet, with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of the CNN visual features, a fine-tuning step is performed for each target data set, starting from the ImageNet-pretrained CNN model and using the open-source Caffe CNN library. In addition, we propose a deep semantic matching method to address the cross-modal retrieval problem for samples annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets demonstrate the superiority of CNN visual features for cross-modal retrieval.
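At retrieval time, off-the-shelf CNN features are typically compared by cosine similarity; a minimal sketch, in which the toy vectors stand in for CNN activations (the paper's actual matching method is not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den

def retrieve(query_vec, gallery):
    """Rank gallery items (name, feature vector) by cosine similarity
    to the query's feature vector, best match first."""
    return sorted(gallery, key=lambda item: -cosine(query_vec, item[1]))
```

The same ranking works whichever modality produced the vectors, which is what makes a shared embedding space usable for cross-modal retrieval.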
Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying
2016-09-01
This study aimed to identify the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and then principal features were selected. Scale transformation was then performed for the MRI images. Afterward, SVM-based classifiers of MRI images were analyzed based on the radial basis function and decomposition levels. The cost parameter C and kernel parameter [Formula: see text] were used for classification. Then, the optimal SVM-based classifier, expressed as [Formula: see text], was identified by performance evaluation (sensitivity, specificity, and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). Sixteen SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performance, especially the classifier of [Formula: see text]. The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The T1W images in the classifier of [Formula: see text] at level-2 decomposition showed the highest performance of all, with overall sensitivity, specificity, and accuracy reaching 96.9, 97.3, and 97.1%, respectively, demonstrating that it was the optimal classifier for the diagnosis of DMD.
Automatic brain MR image denoising based on texture feature-based artificial neural networks.
Chang, Yu-Ning; Chang, Herng-Hua
2015-01-01
Noise is one of the main sources of quality deterioration, not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation, and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictive parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish the predictive parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
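The sequential forward selection (SFS) step mentioned above can be sketched generically; the score function and feature names below are illustrative assumptions, not the paper's discrimination measure:

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: repeatedly add the feature whose inclusion gives the
    highest score for the growing subset, until k features are chosen."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

SFS evaluates subsets rather than individual features, so it can pick up complementary features that a one-at-a-time ranking would miss.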
Image annotation based on positive-negative instances learning
NASA Astrophysics Data System (ADS)
Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
Automatic image annotation is a challenging task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as the mid-level SIFT feature, and combining the pic2pic, label2pic, and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on positive-negative instances learning. Experiments performed on the Corel5K dataset provide quite promising results compared with other existing methods.
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to focus selectively on specific segments of a query frame during associative retrieval. As a result, this model can perform search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present an analysis of the focus characteristics of MHAC.
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in the early diagnosis of skin cancer, or even to monitor skin lesions. However, improving classifiers for the diagnosis of such skin lesions remains a challenge. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. The input feature manipulation processes are based on feature subset selections from shape properties, colour variation, and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied to a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model, which generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model, with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
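The majority-voting ensemble over per-subset base classifiers can be sketched as follows; the base classifier here is a toy threshold rule standing in for the optimum-path forest, and all names are illustrative:

```python
from collections import Counter

def majority_vote(labels):
    """Most common label among the base classifiers' outputs."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(classifiers, feature_subsets, x):
    """Each base classifier sees only its own feature subset (e.g. shape,
    colour, or texture columns), which is what creates ensemble diversity."""
    votes = [clf([x[i] for i in idx])
             for clf, idx in zip(classifiers, feature_subsets)]
    return majority_vote(votes)
```

Training each base classifier on a different feature group makes their errors less correlated, so the combined vote can outperform any single member.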
MO-AB-BRA-10: Cancer Therapy Outcome Prediction Based On Dempster-Shafer Theory and PET Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, C; University of Rouen, QuantIF - EA 4108 LITIS, 76000 Rouen; Li, H
2015-06-15
Purpose: In cancer therapy, utilizing FDG-18 PET image-based features for accurate outcome prediction is challenging because of 1) the limited discriminative information within a small number of PET image sets, and 2) fluctuating feature characteristics caused by the inferior spatial resolution and system noise of PET imaging. In this study, we proposed a new Dempster-Shafer theory (DST) based approach, evidential low-dimensional transformation with feature selection (ELT-FS), to accurately predict cancer therapy outcome with both PET imaging features and clinical characteristics. Methods: First, a specific loss function with a sparse penalty was developed to learn an adaptive low-rank distance metric for representing the dissimilarity between different patients' feature vectors. By minimizing this loss function, a linear low-dimensional transformation of the input features was achieved. Also, imprecise features were excluded simultaneously by applying an l2,1-norm regularization of the learnt dissimilarity metric in the loss function. Finally, the learnt dissimilarity metric was applied in an evidential K-nearest-neighbor (EK-NN) classifier to predict treatment outcome. Results: Twenty-five patients with stage II-III non-small-cell lung cancer and thirty-six patients with esophageal squamous cell carcinomas treated with chemo-radiotherapy were collected. For the two groups of patients, 52 and 29 features, respectively, were utilized. The leave-one-out cross-validation (LOOCV) protocol was used for evaluation. Compared to three existing linear transformation methods (PCA, LDA, NCA), the proposed ELT-FS leads to higher prediction accuracy for the training and testing sets both for lung-cancer patients (100+/-0.0, 88.0+/-33.17) and for esophageal-cancer patients (97.46+/-1.64, 83.33+/-37.8). The ELT-FS also provides superior class separation in both test data sets.
Conclusion: A novel DST-based approach has been proposed to predict cancer treatment outcome using PET image features and clinical characteristics. A specific loss function has been designed for robust accommodation of feature-set incertitude and imprecision, facilitating adaptive learning of the dissimilarity metric for the EK-NN classifier.
NASA Astrophysics Data System (ADS)
Hancock, Matthew C.; Magnan, Jerry F.
2017-03-01
To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset and employing only the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (+/-1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (+/-0.012), which increases to 0.949 (+/-0.007) when diameter and volume features are included, with the accuracy rising to 88.08 (+/-1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
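A theoretical upper bound of this kind exists because identical discrete feature vectors can carry different malignancy labels, so even an ideal classifier can do no better than assigning each distinct vector its majority label. A minimal sketch of that bound (assuming discrete feature values; this is an illustration, not the paper's exact estimator):

```python
from collections import Counter, defaultdict

def upper_bound_accuracy(feature_vectors, labels):
    """Best achievable accuracy for any classifier that sees only these
    (discrete) feature vectors: label each distinct vector with its
    majority class, so only the minority duplicates are misclassified."""
    groups = defaultdict(Counter)
    for fv, y in zip(feature_vectors, labels):
        groups[tuple(fv)][y] += 1
    correct = sum(c.most_common(1)[0][1] for c in groups.values())
    return correct / len(labels)
```

Adding features such as diameter and volume splits previously identical vectors apart, which is why the bound (and the achievable accuracy) can rise when they are included.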
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, the appearance metric feature and the kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension-reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest-neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
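The kinematics feature, the centroid instantaneous velocity, can be sketched directly from consecutive binary silhouettes; the masks here are toy lists rather than full video frames, and the names are illustrative:

```python
def centroid(mask):
    """Centroid (row, col) of a binary silhouette given as a list of rows."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def centroid_velocity(mask_prev, mask_next, dt=1.0):
    """Instantaneous centroid velocity between two consecutive frames."""
    (r0, c0), (r1, c1) = centroid(mask_prev), centroid(mask_next)
    return ((r1 - r0) / dt, (c1 - c0) / dt)
```

In practice the masks come from the background-subtraction step, and dt is the frame interval.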
Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula
2017-12-01
Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.
ERIC Educational Resources Information Center
Chung, EunKyung; Yoon, JungWon
2009-01-01
Introduction: The purpose of this study is to compare characteristics and features of user supplied tags and search query terms for images on the "Flickr" Website in terms of categories of pictorial meanings and level of term specificity. Method: This study focuses on comparisons between tags and search queries using Shatford's categorization…
Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.
Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe
2018-06-02
This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmarks, stripes, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability of different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, leading to high computing efficiency and good generalization. Experimental results on practical concrete images demonstrate that the developed algorithm achieves favorable crack detection performance compared with traditional crack detectors.
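An extreme learning machine of the kind used above trains quickly because only the output weights are learned, in closed form by least squares; a generic single-output sketch (the hidden-layer size, seed, and toy data are assumptions, and the paper's multi-task formulation is not reproduced):

```python
import numpy as np

def elm_train(X, y, hidden=20, seed=0):
    """Extreme learning machine: random fixed hidden weights, output
    weights solved in closed form by least squares (hence fast training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))  # random input-to-hidden weights
    b = rng.normal(size=hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because no iterative backpropagation is needed, training cost is dominated by one least-squares solve.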
2015-01-01
Retinal fundus images are widely used in diagnosing and providing treatment for several eye diseases. Prior works using retinal fundus images detected the presence of exudation with the aid of a publicly available dataset using an extensive segmentation process. Though proved to be computationally efficient, these works failed to create a diabetic retinopathy feature selection system for transparently diagnosing the disease state, and their diagnoses did not employ machine learning methods to categorize candidate fundus images by true positive and true negative ratio. Several candidate fundus images also lacked a more detailed feature selection technique for diabetic retinopathy. To apply machine learning methods and classify the candidate fundus images on the basis of sliding windows, a method called Diabetic Fundus Image Recuperation (DFIR) is designed in this paper. The initial phase of the DFIR method selects the features of the optic cup in digital retinal fundus images based on a sliding-window approach, with which the disease state for diabetic retinopathy is assessed. Feature selection in the DFIR method uses a collection of sliding windows to obtain features based on histogram values; histogram-based feature selection with the aid of a Group Sparsity Non-overlapping function provides more detailed feature information. Using a Support Vector Model in the second phase, the DFIR method, based on a Spiral Basis Function, effectively ranks the diabetic retinopathy disease levels. The ranking of disease level for each candidate set provides a promising basis for developing a practical automated diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method examines factors such as sensitivity, specificity rate, ranking efficiency, and feature selection time. PMID:25974230
Shu, Ting; Zhang, Bob; Yan Tang, Yuan
2017-04-01
Researchers have recently discovered that Diabetes Mellitus can be detected through non-invasive computerized methods. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from specific facial regions for detecting Diabetes Mellitus using eight texture extractors. The eight methods come from four texture feature families: (1) the statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) the structural texture feature family: Voronoi Tessellation; (3) the signal processing based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) the model based texture feature family: Markov Random Field. To determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor were tested. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross-validation) were used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
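The best-performing extractor above, the image gray-scale histogram with 256 bins, reduces to a normalized histogram over 8-bit intensities; a minimal sketch (assuming 8-bit pixel values, with the bin count as a parameter):

```python
def grayscale_histogram(pixels, bins=256):
    """Normalized gray-level histogram of 8-bit pixel values.
    With bins=256 each intensity gets its own bin."""
    hist = [0] * bins
    width = 256 // bins  # intensities covered by each bin
    for p in pixels:
        hist[min(p // width, bins - 1)] += 1
    n = float(len(pixels))
    return [h / n for h in hist]
```

The resulting bins-dimensional vector is the feature fed to the k-NN or SVM classifier.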
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two-dimensional CT images for the diagnosis of lung cancer. A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). In addition, nineteen patient information categories, comprising seven demographic parameters and twelve morphological features, were collected. A contourlet transform was used to extract fourteen types of textural features. These were then used to establish three support vector machine models: one built on a database of the nineteen collected patient information categories, another on the contourlet textural features, and a third containing both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Using the database containing textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93 respectively. These results were higher than those derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83 respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85 respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated, comprising 5,816 benign ROIs and 5,815 malignant ROIs, with which accuracy was 0.93.
Our results indicate that the combined contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer.
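Two of the less common criteria above, the Youden index and F-measure, are simple functions of the other reported quantities; as a consistency check (our code, not the authors'), the reported sensitivity 0.95, specificity 0.71, and precision 0.92 do reproduce the reported Youden index 0.66 and F-measure 0.93:

```python
def youden_index(sensitivity, specificity):
    """J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

def f_measure(precision, recall):
    """Harmonic mean of precision and recall (recall = sensitivity)."""
    return 2.0 * precision * recall / (precision + recall)
```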
Diagnosis of Temporomandibular Disorders Using Local Binary Patterns
Haghnegahdar, A.A.; Kolahi, S.; Khojastepour, L.; Tajeripour, F.
2018-01-01
Background: Temporomandibular joint disorder (TMD) may be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histograms of oriented gradients computed on the recorded images as a diagnostic tool in TMD assessment. Material and Methods: CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and 2 coronal cuts were prepared from each condyle; images were limited to the head of the mandibular condyle. To extract image features, we first applied LBP and then the histogram of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) was applied to the matrix of feature vectors of all images. For evaluation, we used K-nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, with Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. Results: The K-nearest neighbor classifier achieves very good accuracy (0.9242), along with desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers yield lower accuracy, sensitivity and specificity. Conclusion: We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments at distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages. PMID:29732343
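Two of the building blocks above, the basic 8-neighbour LBP code and SVD-based dimensionality reduction, can be sketched as follows (our illustration; the paper's exact LBP variant and parameters are not specified here):

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern code for each interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    codes = np.zeros((h-2, w-2), dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        # set this bit wherever the neighbour is >= the centre pixel
        codes += (neighbour >= center).astype(np.int64) << bit
    return codes

def svd_reduce(X, k):
    """Project a feature matrix (rows = images) onto its k strongest right-singular directions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T
```

On a flat image every neighbour equals the centre, so every code is 255 (all eight bits set).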
Image standards in tissue-based diagnosis (diagnostic surgical pathology).
Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian
2008-04-18
Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange require image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human-diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL 7 including history and previous examinations, information of image display hardware and software, of image resolution and fields of view, of relation between sizes of biological objects and image sizes, and of access to archives and retrieval. 
Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus, brightness, and quality evaluation procedures), display resolution data, implemented image formats, storage, cycle frequency, backup procedures, operation system, and external system accessibility. The lowest third level describes the permitted limits and threshold in detail. At present, an applicable standard including all mentioned features does not exist to our knowledge; some aspects can be taken from radiological standards (PACS, DICOM 3); others require specific solutions or are not covered yet. The progress in virtual microscopy and application of artificial intelligence (AI) in tissue-based diagnosis demands fast preparation and implementation of an internationally acceptable standard. The described hierarchic order as well as analytic investigation in all potentially necessary aspects and details offers an appropriate tool to specifically determine standardized requirements.
NASA Astrophysics Data System (ADS)
Salehi, Hassan S.; Li, Hai; Merkulov, Alex; Kumavor, Patrick D.; Vavadi, Hamed; Sanders, Melinda; Kueck, Angela; Brewer, Molly A.; Zhu, Quing
2016-04-01
Most ovarian cancers are diagnosed at advanced stages due to the lack of efficacious screening techniques. Photoacoustic tomography (PAT) has a potential to image tumor angiogenesis and detect early neovascular changes of the ovary. We have developed a coregistered PAT and ultrasound (US) prototype system for real-time assessment of ovarian masses. Features extracted from PAT and US angular beams, envelopes, and images were input to a logistic classifier and a support vector machine (SVM) classifier to diagnose ovaries as benign or malignant. A total of 25 excised ovaries of 15 patients were studied and the logistic and SVM classifiers achieved sensitivities of 70.4 and 87.7%, and specificities of 95.6 and 97.9%, respectively. Furthermore, the ovaries of two patients were noninvasively imaged using the PAT/US system before surgical excision. By using five significant features and the logistic classifier, 12 out of 14 images (86% sensitivity) from a malignant ovarian mass and all 17 images (100% specificity) from a benign mass were accurately classified; the SVM correctly classified 10 out of 14 malignant images (71% sensitivity) and all 17 benign images (100% specificity). These initial results demonstrate the clinical potential of the PAT/US technique for ovarian cancer diagnosis.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. 
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
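The object-linking step can be sketched directly: features in adjacent slices are connected whenever their centroids fall within the threshold radius, producing the edges of a directed graph (a simplified illustration; the names and the brute-force pairwise search are ours):

```python
import numpy as np

def link_slices(slices, radius=2.0):
    """slices: list of per-slice feature centroids [(y, x), ...].
    Returns directed edges ((slice_i, feat_a), (slice_i+1, feat_b)) for
    features within `radius` of each other in adjacent slices."""
    edges = []
    for i in range(len(slices) - 1):
        for a, p in enumerate(slices[i]):
            for b, q in enumerate(slices[i + 1]):
                if np.hypot(p[0] - q[0], p[1] - q[1]) <= radius:
                    edges.append(((i, a), (i + 1, b)))
    return edges

# Three slices: the first two features line up, the third is far away
edges = link_slices([[(0.0, 0.0)], [(1.0, 0.0)], [(10.0, 10.0)]])
```

Chains of such edges are then read off as candidate 3D objects (e.g. pipe segments).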
Farhan, Saima; Fahiem, Muhammad Abuzar; Tauseef, Huma
2014-01-01
Structural brain imaging plays a vital role in identifying changes that occur in the brain in association with Alzheimer's disease (AD). This paper proposes an automated image-processing approach for the identification of AD from MRI of the brain. The proposed approach is novel in the sense that it achieves higher specificity/accuracy despite using a smaller feature set than existing approaches. Moreover, it is capable of identifying AD patients in early stages. The selected dataset consists of 85 age- and gender-matched individuals from the OASIS database. The selected features are the volumes of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF), and the size of the hippocampus. Three different classification models (SVM, MLP, and J48) are used to distinguish patients from controls. In addition, an ensemble of classifiers based on majority voting is adopted to overcome errors made by any single base classifier. A ten-fold cross-validation strategy is applied for evaluation. Moreover, to assess the performance of the proposed approach, individual features and combinations of features are fed to the individual classifiers and to the ensemble-based classifier. Using the size of the left hippocampus as the feature, the accuracy achieved with the ensemble of classifiers is 93.75%, with 100% specificity and 87.5% sensitivity.
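The majority-voting ensemble takes only a few lines: given each base classifier's label predictions per sample, the ensemble outputs the most frequent label (a generic sketch, not the authors' code):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of per-sample labels per classifier.
    Returns the per-sample majority label across classifiers."""
    return [Counter(column).most_common(1)[0][0] for column in zip(*predictions)]

# Three classifiers (e.g. SVM, MLP, J48) voting on three samples
ensemble = majority_vote([[1, 0, 1], [1, 1, 0], [0, 1, 0]])
```

With an odd number of classifiers and binary labels, ties cannot occur.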
Image manipulation as research misconduct.
Parrish, Debra; Noonan, Bridget
2009-06-01
A growing number of research misconduct cases handled by the Office of Research Integrity involve image manipulations. Manipulations may include simple image enhancements, misrepresenting an image as something different from what it is, and altering specific features of an image. Through a study of specific cases, the misconduct findings associated with image manipulation, detection methods and those likely to identify such manipulations, are discussed. This article explores sanctions imposed against guilty researchers and the factors that resulted in no misconduct finding although relevant images clearly were flawed. Although new detection tools are available for universities and journals to detect questionable images, this article explores why these tools have not been embraced.
Computer vision applications for coronagraphic optical alignment and image processing.
Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A
2013-05-10
Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
Classification of skin cancer images using local binary pattern and SVM classifier
NASA Astrophysics Data System (ADS)
Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra
2016-11-01
In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP captures local texture information from the skin cancer images, which is then used to compute statistical features capable of discriminating melanoma from non-melanoma skin tissue. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a classification accuracy of 76.1%, with a sensitivity of 75.6% and a specificity of 76.7%.
Automatic detection of solar features in HSOS full-disk solar images using guided filter
NASA Astrophysics Data System (ADS)
Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang
2018-02-01
A procedure is introduced for the automatic detection of solar features in full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter, applied here to astronomical target detection for the first time, is adopted to enhance the edges of solar features and suppress solar limb darkening. Specific features are then detected by the Otsu algorithm and a further thresholding technique. Compared with other automatic detection procedures, our procedure offers advantages such as real-time operation and reliability, with no need for local thresholds. It also largely reduces the amount of computation, thanks to the efficient guided filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the number of features it detects agrees well with manual detection.
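The Otsu step above picks the gray level that maximizes the between-class variance of the image histogram; a standard numpy implementation of that criterion (our sketch, independent of the HSOS pipeline):

```python
import numpy as np

def otsu_threshold(img):
    """Gray level maximizing between-class variance of an 8-bit image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))  # cumulative mean
    mu_t = mu[-1]                       # global mean
    with np.errstate(invalid='ignore', divide='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # empty-class levels carry no information
    return int(np.argmax(sigma_b))

# Bimodal toy image: the threshold lands between the two modes
img = np.array([10] * 5 + [200] * 5, dtype=np.uint8)
t = otsu_threshold(img)
```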
NASA Astrophysics Data System (ADS)
Suciati, Nanik; Herumurti, Darlis; Wijaya, Arya Yudhi
2017-02-01
Batik is one of Indonesia's traditional cloths. A motif or pattern drawn on a piece of batik fabric has a specific name and philosophy. Although batik cloths are widely used in everyday life, only a few people understand their motifs and philosophy. This research develops a batik motif recognition system that identifies the motif of a batik image automatically. First, a batik image is decomposed into sub-images using the wavelet transform. Six texture descriptors, i.e. maximum probability, correlation, contrast, uniformity, homogeneity and entropy, are extracted from the gray-level co-occurrence matrix of each sub-image. The texture features are then matched to template features using the Canberra distance. The experiment is performed on a batik dataset consisting of 1088 batik images grouped into seven motifs. The best recognition rate, 92.1%, is achieved using feature extraction with 5-level wavelet decomposition and a 4-directional gray-level co-occurrence matrix.
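The template-matching step uses the Canberra distance; a minimal sketch of the distance and of nearest-template motif assignment (function names are ours):

```python
import numpy as np

def canberra(u, v):
    """Canberra distance; 0/0 terms are treated as 0, as in common implementations."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    with np.errstate(invalid='ignore', divide='ignore'):
        terms = np.abs(u - v) / (np.abs(u) + np.abs(v))
    return float(np.nansum(terms))

def match_motif(feature, templates):
    """Index of the template closest to `feature` under the Canberra distance."""
    return min(range(len(templates)), key=lambda i: canberra(feature, templates[i]))
```

Because each term is normalized by the component magnitudes, the distance weights small feature values (such as rare co-occurrence entries) relatively heavily.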
NASA Astrophysics Data System (ADS)
Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.
2016-10-01
Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in an image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we explore possible reasons for the algorithms' reduced ability to detect and identify features in X-ray radiographs.
Niederhauser, Blake D; Spinner, Robert J; Jentoft, Mark E; Everist, Brian M; Matsumoto, Jane M; Amrami, Kimberly K
2013-04-01
To describe imaging characteristics of neuromuscular choristomas (NMC) and to differentiate them from fibrolipomatous hamartomas (FLH). Clinical and imaging characteristics of six patients with biopsy-proven NMC and six patients with FLH were reviewed by a musculoskeletal radiologist, a pediatric radiologist, and two in-training radiologists, together with a literature review, to define typical magnetic resonance imaging features by consensus. Five radiology trainees blinded to the cases and naive to the diagnosis of NMC, and a musculoskeletal-trained radiologist, rated each lesion as having more than or less than 50% intralesional fat and gave an overall impression using axial T1 images. Sensitivity, specificity, accuracy, and the interobserver agreement kappa were determined. Typical features of NMC include smoothly tapering, fusiform enlargement of the sciatic nerve or brachial plexus elements, with T1 and T2 signal characteristics closely following those of muscle. Longitudinal bands of intervening low T1 and T2 signal were often present and likely corresponded to fibrous tissue on pathology. Four of five patients with long-term follow-up (80%) developed aggressive fibromatosis after percutaneous or surgical biopsy. Nerve fascicle thickening often resulted in a "coaxial cable" appearance similar to classic FLH; however, using a cutoff of <50% intralesional fat allowed differentiation with 100% sensitivity by all reviewers, and with 100% specificity when all imaging features were utilized for impressions. Agreement was excellent with all differentiating methods (kappa 0.861-1.0). NMC can be confidently differentiated from FLH and malignancies using characteristic imaging and clinical features. When the diagnosis is made, biopsy should be avoided given frequent complication by aggressive fibromatosis.
Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo
2013-10-01
Phase-contrast microscopy is one of the most common and convenient imaging modalities to observe long-term multi-cellular processes, which generates images by the interference of light passing through transparent specimens and the background medium with different retarded phases. Despite many years of study, computer-aided phase contrast microscopy analysis of cell behavior is challenged by image qualities and artifacts caused by phase contrast optics. Addressing the unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analysis. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches. Copyright © 2013 Elsevier B.V. All rights reserved.
Microscopic medical image classification framework via deep learning and shearlet transform.
Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H
2016-10-01
Cancer is the second leading cause of death in US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians to efficiently diagnose cancers in early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and recently, histogram of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and handle the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. Particularly, we apply shearlet transform on images and extract the magnitude and phase of shearlet coefficients. Then we feed shearlet features along with the original images to our CNN consisting of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information to the network can improve the accuracy of detection and generalize better compared to the state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, which is a difficult domain considering the limited medical data available for such analysis.
The effect of defect cluster size and interpolation on radiographic image quality
NASA Astrophysics Data System (ADS)
Töpfer, Karin; Yip, Kwok L.
2011-03-01
For digital X-ray detectors, the need to control factory yield and cost invariably leads to the presence of some defective pixels. Recently, a standard procedure was developed to identify such pixels for industrial applications. However, no quality standards exist in medical or industrial imaging regarding the maximum allowable number and size of detector defects. While the answer may be application specific, the minimum requirement for any defect specification is that the diagnostic quality of the images be maintained. A more stringent criterion is to keep any changes in the images due to defects below the visual threshold. Two highly sensitive image simulation and evaluation methods were employed to specify the fraction of allowable defects as a function of defect cluster size in general radiography. First, the most critical situation of the defect being located in the center of the disease feature was explored using image simulation tools and a previously verified human observer model, incorporating a channelized Hotelling observer. Detectability index d' was obtained as a function of defect cluster size for three different disease features on clinical lung and extremity backgrounds. Second, four concentrations of defects of four different sizes were added to clinical images with subtle disease features and then interpolated. Twenty observers evaluated the images against the original on a single display using a 2-AFC method, which was highly sensitive to small changes in image detail. Based on a 50% just-noticeable difference, the fraction of allowed defects was specified vs. cluster size.
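The interpolation of defective pixels studied above can be illustrated with the simplest scheme, replacing each defective pixel by the mean of its valid 4-neighbours (our toy sketch; the procedures evaluated in the study may use more elaborate kernels):

```python
import numpy as np

def interpolate_defects(img, defect_mask):
    """Replace defective pixels with the mean of their non-defective 4-neighbours."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(defect_mask)):
        vals = [img[yy, xx] for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                if 0 <= yy < h and 0 <= xx < w and not defect_mask[yy, xx]]
        if vals:
            out[y, x] = float(np.mean(vals))
    return out

# 3x3 flat field with one dead pixel in the centre
img = np.full((3, 3), 10, dtype=np.uint8)
img[1, 1] = 0
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
fixed = interpolate_defects(img, mask)
```

For larger defect clusters, fewer valid neighbours are available, which is exactly why visibility grows with cluster size in the study.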
Manichon, Anne-Frédérique; Bancel, Brigitte; Durieux-Millon, Marion; Ducerf, Christian; Mabrut, Jean-Yves; Lepogam, Marie-Annick; Rode, Agnès
2012-01-01
Purpose. To review the contrast-enhanced ultrasonographic (CEUS) and magnetic resonance (MR) imaging findings in 25 patients with 26 hepatocellular adenomas (HCAs) and to compare imaging features with histopathologic results from resected specimens in light of the new immunophenotypical classification. Material and Methods. Two abdominal radiologists retrospectively reviewed CEUS cineloops and MR images in 26 HCA. All pathological specimens were reviewed and classified into four subgroups (steatotic or HNF1α-mutated, inflammatory, atypical or β-catenin-mutated, and unspecified). Inflammatory infiltrates were scored, and steatosis and telangiectasia were evaluated semiquantitatively. Results. CEUS and MRI features were well correlated: among the 16 inflammatory HCA, 7/16 presented typical imaging features: T2 hyperintensity, strong arterial enhancement with centripetal filling, persistent on the delayed phase. Six HCA were classified as steatotic, with typical imaging features: a signal drop-out, slight arterial enhancement, vanishing on the late phase. Four HCA were classified as atypical, with an HCC having developed in one. Five lesions displayed substantial steatosis (>50%) without belonging to the HNF1α group. Conclusion. In half the cases, inflammatory HCA have specific imaging features well correlated with the amount of telangiectasia and inflammatory infiltrates. An HCA with a substantial amount of steatosis noticed on chemical-shift images does not always belong to the HNF1α group. PMID:22811588
Pulmonary nodule characterization, including computer analysis and quantitative features.
Bartholmai, Brian J; Koo, Chi Wan; Johnson, Geoffrey B; White, Darin B; Raghunath, Sushravya M; Rajagopalan, Srinivasan; Moynagh, Michael R; Lindell, Rebecca M; Hartman, Thomas E
2015-03-01
Pulmonary nodules are commonly detected in computed tomography (CT) chest screening of a high-risk population. The specific visual or quantitative features on CT or other modalities can be used to characterize the likelihood that a nodule is benign or malignant. Visual features on CT such as size, attenuation, location, morphology, edge characteristics, and other distinctive "signs" can be highly suggestive of a specific diagnosis and, in general, be used to determine the probability that a specific nodule is benign or malignant. Change in size, attenuation, and morphology on serial follow-up CT, or features on other modalities such as nuclear medicine studies or MRI, can also contribute to the characterization of lung nodules. Imaging analytics can objectively and reproducibly quantify nodule features on CT, nuclear medicine, and magnetic resonance imaging. Some quantitative techniques show great promise in helping to differentiate benign from malignant lesions or to stratify the risk of aggressive versus indolent neoplasm. In this article, we (1) summarize the visual characteristics, descriptors, and signs that may be helpful in management of nodules identified on screening CT, (2) discuss current quantitative and multimodality techniques that aid in the differentiation of nodules, and (3) highlight the power, pitfalls, and limitations of these various techniques.
Machine vision based quality inspection of flat glass products
NASA Astrophysics Data System (ADS)
Zauner, G.; Schagerl, M.
2014-03-01
This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (std. deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. The following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
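The histogram-based features named above (standard deviation, skewness, kurtosis) are simple moments of the gray-value distribution; a numpy sketch (our code, returning excess kurtosis):

```python
import numpy as np

def histogram_shape_features(values):
    """Standard deviation, skewness, and excess kurtosis of a gray-value sample."""
    x = np.asarray(values, dtype=float)
    mu, sd = x.mean(), x.std()
    z = (x - mu) / sd          # standardized values
    return sd, (z ** 3).mean(), (z ** 4).mean() - 3.0
```

A symmetric sample has zero skewness; a flat three-point sample has excess kurtosis -1.5.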
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
Biomorphic networks: approach to invariant feature extraction and segmentation for ATR
NASA Astrophysics Data System (ADS)
Baek, Andrew; Farhat, Nabil H.
1998-10-01
Invariant features in two dimensional binary images are extracted in a single layer network of locally coupled spiking (pulsating) model neurons with prescribed synapto-dendritic response. The feature vector for an image is represented as invariant structure in the aggregate histogram of interspike intervals obtained by computing time intervals between successive spikes produced from each neuron over a given period of time and combining such intervals from all neurons in the network into a histogram. Simulation results show that the feature vectors are more pattern-specific and invariant under translation, rotation, and change in scale or intensity than achieved in earlier work. We also describe an application of such networks to segmentation of line (edge-enhanced or silhouette) images. The biomorphic spiking network's capabilities in segmentation and invariant feature extraction may prove to be, when they are combined, valuable in Automated Target Recognition (ATR) and other automated object recognition systems.
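The aggregate interspike-interval histogram described above is straightforward once spike times are available; the spiking network itself is out of scope here, so the sketch below assumes precomputed spike times per neuron (hypothetical inputs, not the paper's model):

```python
import numpy as np

def isi_feature_vector(spike_trains, bins, t_range):
    """Invariant feature vector: aggregate histogram of inter-spike
    intervals pooled over all neurons in the network.

    spike_trains: list of 1-D arrays of spike times, one per neuron.
    """
    intervals = []
    for times in spike_trains:
        t = np.sort(np.asarray(times, dtype=float))
        intervals.extend(np.diff(t))  # successive-spike intervals
    hist, _ = np.histogram(intervals, bins=bins, range=t_range)
    total = hist.sum()
    return hist / total if total else hist.astype(float)
```

Normalizing the histogram makes the feature vector insensitive to the total number of spikes, which is one ingredient of the intensity invariance the paper reports.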
NASA Astrophysics Data System (ADS)
Yang, Wei; Zhang, Su; Li, Wenying; Chen, Yaqing; Lu, Hongtao; Chen, Wufan; Chen, Yazhu
2010-04-01
Various computerized features extracted from breast ultrasound images are useful in assessing the malignancy of breast tumors. However, the underlying relationship between the computerized features and tumor malignancy may not be linear in nature. We use the decision tree ensemble trained by the cost-sensitive boosting algorithm to approximate the target function for malignancy assessment and to reflect this relationship qualitatively. Partial dependence plots are employed to explore and visualize the effect of features on the output of the decision tree ensemble. In the experiments, 31 image features are extracted to quantify the sonographic characteristics of breast tumors. Patient age is used as an external feature because of its high clinical importance. The area under the receiver-operating characteristic curve of the tree ensembles can reach 0.95 with sensitivity of 0.95 (61/64) at the associated specificity 0.74 (77/104). The partial dependence plots of the four most important features are demonstrated to show the influence of the features on malignancy, and they are in accord with the empirical observations. The results can provide visual and qualitative references on the computerized image features for physicians, and can be useful for enhancing the interpretability of computer-aided diagnosis systems for breast ultrasound.
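Partial dependence, as used above, has a simple model-agnostic definition: fix one feature at each grid value across all samples and average the model's output. A generic sketch of that computation (not the authors' code):

```python
import numpy as np

def partial_dependence(model, X, feature_idx, grid):
    """One-dimensional partial dependence: for each grid value, overwrite
    the chosen feature in every sample and average the model output."""
    X = np.asarray(X, dtype=float)
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature_idx] = v  # hold the feature fixed at v
        pd.append(np.mean(model(Xv)))
    return np.array(pd)
```

Plotting the returned values against the grid visualizes how the ensemble's malignancy score responds to that single feature, marginalized over the others.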
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, V; Jacobs, MA
Purpose: Multiparametric radiological imaging is used for diagnosis in patients. The ability to extract useful features specific to a patient's pathology would be a crucial step toward personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T MRI breast imaging were used for this study. We tested the MIRaGe algorithm to extract features for classification of breast tumors into benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast-enhanced MR imaging (DCE-MRI) and diffusion-weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic-curve-characterizing RGFs, wash-in-rate-characterizing RGFs, wash-out-rate-characterizing RGFs and morphology-characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging.
The results demonstrated the power of the MIRaGe algorithm at automatically discovering useful feature representations directly from the raw multiparametric MRI data. In conclusion, the MIRaGe informatics model provides a powerful tool with applicability in cancer diagnosis and a possibility of extension to other kinds of pathologies. Supported by NIH (P50CA103175, 5P30CA006973 (IRAT), R01CA190299, U01CA140204), Siemens Medical Systems (JHU-2012-MR-86-01) and Nvidia Graphics Corporation.
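The "geodesics" ingredient of Isomap-style methods such as t-Isomap is the replacement of Euclidean distances by shortest-path distances over a neighborhood graph. A minimal sketch of that core step (generic, not the MIRaGe algorithm):

```python
import numpy as np

def geodesic_distances(X, k):
    """Isomap-style geodesic distances: build a Euclidean k-nearest-
    neighbour graph, then compute all-pairs shortest paths
    (Floyd-Warshall) over it."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]  # skip self at position 0
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]  # symmetrize the graph
    for m in range(n):  # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, [m]] + G[[m], :])
    return G
```

Embedding these geodesic distances (e.g. via classical MDS) yields the manifold coordinates that a downstream SVM can classify.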
Hancock, Matthew C.; Magnan, Jerry F.
2016-01-01
Abstract. In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists’ annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification. PMID:27990453
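One common way to compute such a theoretical upper bound on accuracy is to note that an ideal classifier seeing only discrete feature values can do no better than predict the majority label within each distinct feature vector. A small illustration of that construction (an assumed simplification, not necessarily the authors' exact procedure):

```python
from collections import Counter, defaultdict

def ideal_accuracy_bound(feature_rows, labels):
    """Upper bound on accuracy for any classifier restricted to these
    (discrete) feature values: within each group of identical feature
    vectors, the best possible strategy predicts the majority label."""
    groups = defaultdict(Counter)
    for row, y in zip(feature_rows, labels):
        groups[tuple(row)][y] += 1
    correct = sum(c.most_common(1)[0][1] for c in groups.values())
    return correct / len(labels)
```

Any real classifier's accuracy on the same feature set is bounded above by this quantity, which is what makes the reported 4.43% gap meaningful.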
The Gemini NICI Planet-Finding Campaign: asymmetries in the HD 141569 disc
NASA Astrophysics Data System (ADS)
Biller, Beth A.; Liu, Michael C.; Rice, Ken; Wahhaj, Zahed; Nielsen, Eric; Hayward, Thomas; Kuchner, Marc J.; Close, Laird M.; Chun, Mark; Ftaclas, Christ; Toomey, Douglas W.
2015-07-01
We report here the highest resolution near-IR imaging to date of the HD 141569A disc taken as part of the NICI (near infrared coronagraphic imager) Science Campaign. We recover four main features in the NICI images of the HD 141569 disc discovered in previous Hubble Space Telescope (HST) imaging: (1) an inner ring/spiral feature, which does not appear circular once deprojected; (2) an outer ring which is considerably brighter on the western side compared to the eastern side, but looks fairly circular in the deprojected image; (3) an additional arc-like feature between the inner and outer ring only evident on the east side, which in the deprojected image appears to complete the circle of the west-side inner ring; and (4) an evacuated cavity from 175 au inwards. Compared to the previous HST imaging with relatively large coronagraphic inner working angles (IWA), the NICI coronagraph allows imaging down to an IWA of 0.3 arcsec. Thus, the inner edge of the inner ring/spiral feature is well resolved and we do not find any additional disc structures within 175 au. We note some additional asymmetries in this system. Specifically, while the outer ring structure looks circular in this deprojection, the inner bright ring looks rather elliptical. This suggests that a single deprojection angle is not appropriate for this system and that there may be an offset in inclination between the two ring/spiral features. We find an offset of 4 ± 2 au between the inner ring and the star centre, potentially pointing to unseen inner companions.
NASA Astrophysics Data System (ADS)
Janaki Sathya, D.; Geetha, K.
2017-12-01
Automatic mass or lesion classification systems are developed to aid in distinguishing between malignant and benign lesions present in breast DCE-MR images; to be successful for clinical use, such systems need to improve both the sensitivity and specificity of DCE-MR image interpretation. A new classifier (a set of features together with a classification method) based on artificial neural networks trained using the artificial fish swarm optimization (AFSO) algorithm is proposed in this paper. The basic idea behind the proposed classifier is to use the AFSO algorithm to search for the best combination of synaptic weights for the neural network. An optimal set of features based on statistical textural features is presented. The experimental outcomes confirm that the proposed suspicious-lesion classifier performs better than other such classifiers reported in the literature, demonstrating that improvements in both sensitivity and specificity are possible through automated image analysis.
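The idea of searching weight space with a population of candidates can be illustrated with a toy accept-if-better search; this is a generic stand-in for population-based optimizers, not the AFSO algorithm itself:

```python
import numpy as np

def swarm_search(loss, dim, n_fish=20, n_iter=200, step=0.5, seed=0):
    """Toy population search over synaptic-weight vectors (a simplified
    stand-in for artificial fish swarm optimization): each candidate
    accepts a random perturbation whenever it lowers the loss.

    loss: vectorized function mapping an (n, dim) array of candidate
    weight vectors to an (n,) array of loss values.
    """
    rng = np.random.default_rng(seed)
    swarm = rng.normal(size=(n_fish, dim))
    for _ in range(n_iter):
        trial = swarm + step * rng.normal(size=swarm.shape)
        better = loss(trial) < loss(swarm)
        swarm[better] = trial[better]  # keep only improving moves
    return swarm[np.argmin(loss(swarm))]
```

In the classifier setting, `loss` would evaluate the neural network's training error for a given flattened weight vector; real AFSO adds swarming, following and preying behaviours on top of this skeleton.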
Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando
2009-01-01
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. At the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step, the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor constitutes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134
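The similarity and uniqueness constraints can be illustrated with a greedy matcher: rank candidate pairs by similarity and let each feature participate in at most one match. A hypothetical sketch (the epipolar and ordering constraints would add further filters on the candidate pairs):

```python
def match_features(left, right, similarity, threshold):
    """Greedy feature matching under the similarity and uniqueness
    constraints: candidate pairs are taken in decreasing order of
    similarity, and each feature is matched at most once."""
    pairs = sorted(((similarity(l, r), i, j)
                    for i, l in enumerate(left)
                    for j, r in enumerate(right)), reverse=True)
    used_l, used_r, matches = set(), set(), []
    for s, i, j in pairs:
        if s >= threshold and i not in used_l and j not in used_r:
            matches.append((i, j, s))
            used_l.add(i)
            used_r.add(j)
    return matches
```

Here `left` and `right` would be the trunk-region attribute sets from the two fish-eye images, and `similarity` any score over those attributes.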
Image-based overlay measurement using subsurface ultrasonic resonance force microscopy
NASA Astrophysics Data System (ADS)
Tamer, M. S.; van der Lans, M. J.; Sadeghian, H.
2018-03-01
Image Based Overlay (IBO) measurement is one of the most common techniques used in Integrated Circuit (IC) manufacturing to extract the overlay error values. The overlay error is measured using dedicated overlay targets which are optimized to increase the accuracy and the resolution, but these features are much larger than the IC feature size. IBO measurements are realized on the dedicated targets instead of product features, because the current overlay metrology solutions, mainly based on optics, cannot provide sufficient resolution on product features. However, considering the fact that the overlay error tolerance is approaching 2 nm, the overlay error measurement on product features becomes a need for the industry. For sub-nanometer resolution metrology, Scanning Probe Microscopy (SPM) is widely used, though at the cost of very low throughput. The semiconductor industry is interested in non-destructive imaging of buried structures under one or more layers for the application of overlay and wafer alignment, specifically through optically opaque media. Recently an SPM technique has been developed for imaging subsurface features which can be potentially considered as a solution for overlay metrology. In this paper we present the use of Subsurface Ultrasonic Resonance Force Microscopy (SSURFM) used for IBO measurement. We used SSURFM for imaging the most commonly used overlay targets on a silicon substrate and photoresist. As a proof of concept we have imaged surface and subsurface structures simultaneously. The surface and subsurface features of the overlay targets are fabricated with programmed overlay errors of +/-40 nm, +/-20 nm, and 0 nm. The top layer thickness changes between 30 nm and 80 nm. Using SSURFM the surface and subsurface features were successfully imaged and the overlay errors were extracted, via a rudimentary image processing algorithm. The measurement results are in agreement with the nominal values of the programmed overlay errors.
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema or age-related macular degeneration), which demonstrated its effectiveness.
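Kernel fusion of the kind described can be as simple as a convex combination of precomputed Gram matrices; a minimal sketch of that step (a generic composite kernel, not the PCANet-CK weighting itself):

```python
import numpy as np

def composite_kernel(kernels, weights):
    """Fuse several precomputed kernel (Gram) matrices into one
    composite kernel via a convex combination of the inputs."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to one
    return sum(wi * K for wi, K in zip(w, kernels))
```

The fused matrix remains positive semidefinite (a nonnegative combination of PSD matrices), so it can be passed directly to any kernel classifier.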
Automatic Detection of Blue-White Veil and Related Structures in Dermoscopy Images
Celebi, M. Emre; Iyatomi, Hitoshi; Stoecker, William V.; Moss, Randy H.; Rabinovitz, Harold S.; Argenziano, Giuseppe; Soyer, H. Peter
2011-01-01
Dermoscopy is a non-invasive skin imaging technique, which permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. One of the most important features for the diagnosis of melanoma in dermoscopy images is the blue-white veil (irregular, structureless areas of confluent blue pigmentation with an overlying white “ground-glass” film). In this article, we present a machine learning approach to the detection of blue-white veil and related structures in dermoscopy images. The method involves contextual pixel classification using a decision tree classifier. The percentage of blue-white areas detected in a lesion combined with a simple shape descriptor yielded a sensitivity of 69.35% and a specificity of 89.97% on a set of 545 dermoscopy images. The sensitivity rises to 78.20% for detection of blue veil in those cases where it is a primary feature for melanoma recognition. PMID:18804955
Image processing for x-ray inspection of pistachio nuts
NASA Astrophysics Data System (ADS)
Casasent, David P.
2001-03-01
A review is provided of image processing techniques that have been applied to the inspection of pistachio nuts using X-ray images. X-ray sensors provide non-destructive internal product detail not available from other sensors. The primary concern in this data is detecting the presence of worm infestations in nuts, since they have been linked to the presence of aflatoxin. We describe new techniques for segmentation, feature selection, selection of product categories (clusters), classifier design, etc. Specific novel results include: a new segmentation algorithm to produce images of isolated product items; preferable classifier operation (the classifier with the best probability of correct recognition Pc is not best); higher-order discrimination information is present in standard features (thus, high-order features appear useful); classifiers that use new cluster categories of samples achieve improved performance. Results are presented for X-ray images of pistachio nuts; however, all techniques have use in other product inspection applications.
Feature-Based Morphometry: Discovering Group-related Anatomical Patterns
Toews, Matthew; Wells, William; Collins, D. Louis; Arbel, Tal
2015-01-01
This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer's (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1). PMID:19853047
Chitalia, Rhea; Mueller, Jenna; Fu, Henry L; Whitley, Melodi Javid; Kirsch, David G; Brown, J Quincy; Willett, Rebecca; Ramanujam, Nimmi
2016-09-01
Fluorescence microscopy can be used to acquire real-time images of tissue morphology and with appropriate algorithms can rapidly quantify features associated with disease. The objective of this study was to assess the ability of various segmentation algorithms to isolate fluorescent positive features (FPFs) in heterogeneous images and identify an approach that can be used across multiple fluorescence microscopes with minimal tuning between systems. Specifically, we show a variety of image segmentation algorithms applied to images of stained tumor and muscle tissue acquired with 3 different fluorescence microscopes. Results indicate that a technique called maximally stable extremal regions followed by thresholding (MSER + Binary) yielded the greatest contrast in FPF density between tumor and muscle images across multiple microscopy systems.
Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
Xiang, Lei; Wang, Qian; Nie, Dong; Zhang, Lichi; Jin, Xiyao; Qiao, Yu; Shen, Dinggang
2018-07-01
Recently, more and more attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between them is highly complex due to the large gap in appearance between the two modalities. In this work, we aim to tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images, and then transform these feature maps forward through convolutional layers in the network. We can further compute a tentative CT synthesis from the midway of the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation results in better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we can eventually synthesize the final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, comparing with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing a CT image. Copyright © 2018. Published by Elsevier B.V.
Measurement of glucose concentration by image processing of thin film slides
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David
2012-02-01
Measurement of glucose concentration is important for diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose, present in the sample, reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with glucose at different concentration levels. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough based technique and then an intensity based feature is calculated from the segmented region. Subsequently, a mathematical model that describes a relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented followed by feature extraction. These two initial steps are similar to those done in training. However, in the final step, the algorithm uses the model (feature vs. concentration) obtained from the training and feature generated from test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
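The training/testing split described above amounts to fitting a calibration curve from the dye-intensity feature to glucose concentration and then evaluating it on new slides. A simplified sketch, with a polynomial model assumed purely for illustration:

```python
import numpy as np

def fit_calibration(features, concentrations, degree=2):
    """Training phase: fit a polynomial model mapping the segmented
    dye region's intensity feature to glucose concentration."""
    return np.polyfit(features, concentrations, degree)

def predict_concentration(model, feature):
    """Testing phase: evaluate the fitted model on the feature
    extracted from a new slide image."""
    return np.polyval(model, feature)
```

The actual feature-vs-concentration relationship (and hence the model family) would be determined from the training images at each wavelength.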
Hippocampus shape analysis for temporal lobe epilepsy detection in magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Kohan, Zohreh; Azmi, Reza
2016-03-01
There is evidence in the literature that temporal lobe epilepsy (TLE) causes lateralized atrophy and deformation of the hippocampus and other substructures of the brain. Magnetic resonance imaging (MRI), due to its high-contrast soft-tissue imaging, is one of the most popular imaging modalities used in TLE diagnosis and treatment procedures. An algorithm that helps clinicians analyse shape deformations more effectively could improve the diagnosis and treatment of the disease. In this project, our purpose was to design, implement and test a classification algorithm for MRIs based on hippocampal asymmetry detection using shape- and size-based features. Our method consisted of two main parts: (1) shape feature extraction, and (2) image classification. We tested 11 different shape and size features and selected four of them that detect the asymmetry in the hippocampus significantly in a randomly selected subset of the dataset. Then, we employed a support vector machine (SVM) classifier to classify the remaining images of the dataset into normal and epileptic images using our selected features. The dataset contains 25 patient images, of which 12 cases were used as a training set and the remaining 13 cases for testing the performance of the classifier. We measured an accuracy, specificity and sensitivity of, respectively, 76%, 100%, and 70% for our algorithm. These preliminary results show that using shape and size features for detecting hippocampal asymmetry could be helpful in TLE diagnosis in MRI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knogler, Thomas; El-Rabadi, Karem; Weber, Michael
2014-12-15
Purpose: To determine the diagnostic performance of three-dimensional (3D) texture analysis (TA) of contrast-enhanced computed tomography (CE-CT) images for treatment response assessment in patients with Hodgkin lymphoma (HL), compared with F-18-fludeoxyglucose (FDG) positron emission tomography/CT. Methods: 3D TA of 48 lymph nodes in 29 patients was performed on venous-phase CE-CT images before and after chemotherapy. All lymph nodes showed pathologically elevated FDG uptake at baseline. A stepwise logistic regression with forward selection was performed to identify classic CT parameters and texture features (TF) that enable the separation of complete response (CR) and persistent disease. Results: The TF fraction of image in runs, calculated for the 45° direction, was able to correctly identify CR with an accuracy of 75%, a sensitivity of 79.3%, and a specificity of 68.4%. Classical CT features achieved an accuracy of 75%, a sensitivity of 86.2%, and a specificity of 57.9%, whereas the combination of TF and CT imaging achieved an accuracy of 83.3%, a sensitivity of 86.2%, and a specificity of 78.9%. Conclusions: 3D TA of CE-CT images is potentially useful to identify nodal residual disease in HL, with a performance comparable to that of classical CT parameters. Best results are achieved when TA and classical CT features are combined.
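The run-length feature "fraction of image in runs" (often called run percentage) divides the number of grey-level runs by the number of pixels. A sketch along the horizontal direction (the study used the 45° direction; this is a simplified 2-D illustration, not the study's 3D implementation):

```python
import numpy as np

def run_percentage(image):
    """Grey-level run-length feature 'fraction of image in runs':
    number of runs of equal grey level, counted here along image rows,
    divided by the total number of pixels."""
    img = np.asarray(image)
    runs = 0
    for row in img:
        # each change of grey level along the row starts a new run
        runs += 1 + int(np.count_nonzero(np.diff(row)))
    return runs / img.size
```

The feature approaches 1 for highly heterogeneous texture (every pixel its own run) and falls toward 0 for large homogeneous regions.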
Pseudo CT estimation from MRI using patch-based random forest
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian
2017-02-01
Recently, MR simulators have gained popularity because they avoid the unnecessary radiation exposure of the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images, and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed that the proposed method can accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
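PSNR, one of the two quality indexes reported above, has a standard definition; a generic sketch of its computation between an original CT and a pseudo CT:

```python
import numpy as np

def psnr(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio (in dB) between a reference image
    (e.g. the original CT) and an estimate (e.g. the pseudo CT)."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()  # peak-to-peak of reference
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR indicates a pseudo CT closer to the original; for CT data the range would typically be taken over the Hounsfield-unit window of interest.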
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.
1995-01-01
One of the challenges of imaging spectroscopy is the identification, mapping and abundance determination of materials, whether mineral, vegetable, or liquid, given enough spectral range, spectral resolution, signal-to-noise ratio, and spatial resolution. Many materials show diagnostic absorption features in the visual and near-infrared region (0.4 to 2.5 micrometers) of the spectrum. This region is covered by modern imaging spectrometers such as AVIRIS. The challenge is to identify the materials from absorption bands in their spectra, and to determine what specific analyses must be done to derive particular parameters of interest, ranging from simply identifying a material's presence to deriving its abundance or determining its specific chemistry. Recently, a new analysis algorithm was developed that uses a digital spectral library of known materials and a fast, modified-least-squares method of determining whether a single spectral feature for a given material is present. Clark et al. made another advance in the mapping algorithm: simultaneously mapping multiple minerals using multiple spectral features. This was done by a modified-least-squares fit of spectral features, from data in a digital spectral library, to corresponding spectral features in the image data. This version has now been superseded by a more comprehensive spectral analysis system called Tricorder.
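A least-squares feature fit of this kind can be illustrated by fitting a library spectral feature to the observed spectrum with a free offset and scale over the feature's wavelength window, scoring the fit by R². This is a simplified sketch, not the Tricorder implementation:

```python
import numpy as np

def fit_spectral_feature(observed, library):
    """Least-squares fit of a library spectral feature to an observed
    spectrum: observed ≈ a + b * library over the feature's wavelength
    window. Returns the fit coefficients and an R² goodness of fit."""
    A = np.column_stack([np.ones_like(library), library])
    (a, b), *_ = np.linalg.lstsq(A, observed, rcond=None)
    fit = a + b * library
    ss_res = np.sum((observed - fit) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 0.0
    return (a, b), r2
```

Mapping then amounts to evaluating this fit per pixel and per library feature, and assigning each pixel to the material whose features fit best.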
Mid-infrared (5.0-7.0 microns) imaging spectroscopy of the moon from the KAO
NASA Technical Reports Server (NTRS)
Bell, James F., III; Bregman, Jesse D.; Rank, David M.; Temi, Pasquale; Roush, Ted L.; Hawke, B. Ray; Lucey, Paul G.; Pollack, James B.
1995-01-01
A series of 71 mid-infrared images of a small region of the Moon were obtained from the KAO in October, 1993. These images have been assembled into a 5.0 to 7.0 micron image cube that has been calibrated relative to the average spectrum of this region of the Moon at these wavelengths. The data show that clear, detectable spectral differences exist on the Moon in the mid-IR. Some of the spectral differences are correlated with morphologic features such as craters. Specific spectral features near 5.6 and 6.7 microns may be related to the presence of plagioclase or pyroxene.
Laparoscopic optical coherence tomography imaging of human ovarian cancer
Hariri, Lida P.; Bonnema, Garret T.; Schmidt, Kathy; Winkler, Amy M.; Korde, Vrushali; Hatch, Kenneth D.; Davis, John R.; Brewer, Molly A.; Barton, Jennifer K.
2011-01-01
Objectives Ovarian cancer is the fourth leading cause of cancer-related death among women in the US, largely due to late detection secondary to unreliable symptomology and screening tools without adequate resolution. Optical coherence tomography (OCT) is a recently emerging imaging modality with promise in ovarian cancer diagnostics, providing non-destructive subsurface imaging at imaging depths up to 2 mm with near-histological grade resolution (10–20 μm). In this study, we developed the first ever laparoscopic OCT (LOCT) device, evaluated the safety and feasibility of LOCT, and characterized the microstructural features of human ovaries in vivo. Methods A custom LOCT device was fabricated specifically for laparoscopic imaging of the ovaries in patients undergoing oophorectomy. OCT images were compared with histopathology to identify preliminary architectural imaging features of normal and pathologic ovarian tissue. Results Thirty ovaries in 17 primarily peri- or post-menopausal women were successfully imaged with LOCT: 16 normal, 5 endometriosis, 3 serous cystadenoma, and 4 adenocarcinoma. Preliminary imaging features developed for each category reveal qualitative differences in the homogeneous character of normal post-menopausal ovary, the ability to image small subsurface inclusion cysts, and distinguishable features for endometriosis, cystadenoma, and adenocarcinoma. Conclusions We present the development and successful implementation of the first laparoscopic OCT probe. Comparison of OCT images and corresponding histopathology allowed for the description of preliminary microstructural features for normal ovary, endometriosis, and benign and malignant surface epithelial neoplasms. These results support the potential of OCT both as a diagnostic tool and imaging modality for further evaluation of ovarian cancer pathogenesis. PMID:19481241
NASA Astrophysics Data System (ADS)
Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari
2008-03-01
In this paper, we characterize the tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumors because of the prevalence of such tumors in pediatric patients. Due to their varying appearance in MRI, we propose to model the tumor texture with a multi-fractal process, such as multi-fractional Brownian motion (mBm). In mBm, the time-varying Hölder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multi-fractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multi-fractal feature, along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, is fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images, respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the tumor-discriminating power of our novel multi-fractal texture along with intensity and fractal features in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROC curves and demonstrate how sharply they reach a specificity of 1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in automatic detection of PF tumors in pediatric MRIs.
The value of nodal information in predicting lung cancer relapse using 4DPET/4DCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Heyse, E-mail: heyse.li@mail.utoronto.ca; Becker, Nathan; Raman, Srinivas
2015-08-15
Purpose: There is evidence that computed tomography (CT) and positron emission tomography (PET) imaging metrics are prognostic and predictive in non-small cell lung cancer (NSCLC) treatment outcomes. However, few studies have explored the use of standardized uptake value (SUV)-based image features of nodal regions as predictive features. The authors investigated and compared the use of tumor and node image features extracted from the radiotherapy target volumes to predict relapse in a cohort of NSCLC patients undergoing chemoradiation treatment. Methods: A prospective cohort of 25 patients with locally advanced NSCLC underwent 4DPET/4DCT imaging for radiation planning. Thirty-seven image features were derived from the CT-defined volumes and SUVs of the PET image from both the tumor and nodal target regions. The machine learning methods of logistic regression and repeated stratified five-fold cross-validation (CV) were used to predict local and overall relapses in 2 yr. The authors used well-known feature selection methods (Spearman’s rank correlation, recursive feature elimination) within each fold of CV. Classifiers were ranked on their Matthews correlation coefficient (MCC) after CV. Area under the curve, sensitivity, and specificity values are also presented. Results: For predicting local relapse, the best classifier found had a mean MCC of 0.07 and was composed of eight tumor features. For predicting overall relapse, the best classifier found had a mean MCC of 0.29 and was composed of a single feature: the volume greater than 0.5 times the maximum SUV (N). Conclusions: The best classifier for predicting local relapse had only tumor features. In contrast, the best classifier for predicting overall relapse included a node feature. Overall, the methods showed that nodes add value in predicting overall relapse but not local relapse.
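The Matthews correlation coefficient used above to rank classifiers can be computed directly from confusion-matrix counts; a minimal sketch (the function name and toy counts are illustrative):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Returns 0.0 when any marginal sum is zero (the usual convention)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# A perfect classifier scores 1.0; a chance-level one scores 0.0.
print(matthews_corrcoef(10, 10, 0, 0))  # 1.0
print(matthews_corrcoef(5, 5, 5, 5))    # 0.0
```

Unlike raw accuracy, MCC stays near zero for chance-level classifiers even on unbalanced data, which is why it suits small cohorts like the one above.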
NASA Astrophysics Data System (ADS)
Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.
2018-02-01
Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify robust features across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions by cancer status and field strength of acquisition were compared using a Kolmogorov-Smirnov test for the null hypothesis that two groups being compared came from the same distribution, with p-values being corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strength of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in static field strength and radiofrequency field could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strength.
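The Holm-Bonferroni step-down correction applied above to the per-feature p-values can be sketched in pure Python (a simplified illustration, not the authors' code):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is
    rejected after Holm's step-down multiple-comparison correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k)
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-significant p-value
    return reject

print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```

The smallest p-value faces the strictest threshold (alpha/m), so Holm's method controls the family-wise error rate while being uniformly more powerful than plain Bonferroni.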
Brownian motion curve-based textural classification and its application in cancer diagnosis.
Mookiah, Muthu Rama Krishnan; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K
2011-06-01
To develop an automated diagnostic methodology based on textural features of the oral mucosal epithelium to discriminate normal and oral submucous fibrosis (OSF). A total of 83 normal and 29 OSF images from histopathologic sections of the oral mucosa are considered. The proposed diagnostic mechanism consists of two parts: feature extraction using the Brownian motion curve (BMC) and design of a suitable classifier. The discrimination ability of the features has been substantiated by statistical tests. An error back-propagation neural network (BPNN) is used to classify OSF vs. normal. In development of an automated oral cancer diagnostic module, BMC has played an important role in characterizing textural features of the oral images. Fisher's linear discriminant analysis yields 100% sensitivity and 85% specificity, whereas BPNN leads to 92.31% sensitivity and 100% specificity. In addition to intensity and morphology-based features, textural features are also very important, especially in histopathologic diagnosis of oral cancer. In view of this, a set of textural features is extracted using the BMC for the diagnosis of OSF. Finally, a textural classifier is designed using BPNN, which leads to a diagnostic performance with 96.43% accuracy.
Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing
2017-12-28
Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop an extensively applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to a single specific imaging device and a relatively narrow taxonomic scope; a truly practical system for automatic plankton classification does not yet exist, and this study is partly intended to fill that gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular by adding features like the Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, by optimally combining different kernel matrices computed from different types of features via multiple kernel learning. Moreover, we also applied a feature selection method to choose the optimal feature subsets from redundant features to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton.
The experimental results show that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrates an automatic plankton image classification system combining multiple view features via multiple kernel learning. The results indicate that multiple view features combined by nonlinear MKL (NLMKL) using three kernel functions (linear, polynomial, and Gaussian) describe and exploit the feature information better, thereby achieving higher classification accuracy.
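The kernel-combination idea behind MKL, a weighted sum of linear, polynomial, and Gaussian base kernels, can be illustrated with a toy sketch; the fixed weights below stand in for what MKL would actually optimize:

```python
import math

def linear_k(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_k(x, y, degree=2, c=1.0):
    return (linear_k(x, y) + c) ** degree

def gaussian_k(x, y, gamma=0.5):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def combined_kernel(x, y, weights=(0.3, 0.3, 0.4)):
    """Convex combination of base kernels. In MKL the weights are
    optimized jointly with the classifier; here they are fixed."""
    w1, w2, w3 = weights
    return w1 * linear_k(x, y) + w2 * poly_k(x, y) + w3 * gaussian_k(x, y)

# Self-similarity: 0.3*1 + 0.3*(1+1)**2 + 0.4*1 = 1.9
print(combined_kernel([1.0, 0.0], [1.0, 0.0]))  # 1.9
```

Because a non-negative sum of positive semi-definite kernels is itself positive semi-definite, the combined kernel can be dropped into any standard SVM solver.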
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
The research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from the varying levels of noise caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on the existing steganalytic features. First, cover images with some statistical properties similar to those of a given test image are searched from a retrieval cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training. Thus, it does not suffer from model mismatch problems. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
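The outlier-detection step, flagging the one sample whose statistics deviate from the aided cover samples, can be illustrated with a simple k-nearest-neighbour distance score (a generic stand-in, not the paper's detector; the feature vectors are toy values):

```python
def knn_outlier_score(features, k=2):
    """Mean Euclidean distance to the k nearest neighbours for each
    sample; the sample with the largest score is the outlier candidate."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    scores = []
    for i, f in enumerate(features):
        d = sorted(dist(f, g) for j, g in enumerate(features) if j != i)
        scores.append(sum(d[:k]) / k)
    return scores

# Aided cover samples cluster together; the test vector sits far away.
aided = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
test_image_vec = [5.0, 5.0]
scores = knn_outlier_score(aided + [test_image_vec])
print(scores.index(max(scores)))  # 4: the test image is flagged as the outlier
```

In the SRISP setting, a test image far from its retrieved, statistically similar covers would be declared stego; one close to them would be declared cover.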
Iris Matching Based on Personalized Weight Map.
Dong, Wenbo; Sun, Zhenan; Tan, Tieniu
2011-09-01
Iris recognition typically involves three steps, namely, iris image preprocessing, feature extraction, and feature matching. The first two steps of iris recognition have been well studied, but the last step is less addressed. Each human iris has its unique visual pattern and local image features also vary from region to region, which leads to significant differences in robustness and distinctiveness among the feature codes derived from different iris regions. However, most state-of-the-art iris recognition methods use a uniform matching strategy, where features extracted from different regions of the same person or the same region for different individuals are considered to be equally important. This paper proposes a personalized iris matching strategy using a class-specific weight map learned from the training images of the same iris class. The weight map can be updated online during the iris recognition procedure when the successfully recognized iris images are regarded as the new training data. The weight map reflects the robustness of an encoding algorithm on different iris regions by assigning an appropriate weight to each feature code for iris matching. Such a weight map trained by sufficient iris templates is convergent and robust against various noise. Extensive and comprehensive experiments demonstrate that the proposed personalized iris matching strategy achieves much better iris recognition performance than uniform strategies, especially for poor quality iris images.
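The core of such a personalized matching strategy is a weighted distance between binary iris codes; a minimal sketch (the codes and weight values are toy illustrations):

```python
def weighted_hamming(code_a, code_b, weight_map):
    """Weighted Hamming distance between two binary iris codes;
    per-bit weights reflect the robustness of each iris region."""
    mismatch = sum(w for a, b, w in zip(code_a, code_b, weight_map) if a != b)
    return mismatch / sum(weight_map)

a = [1, 0, 1, 1, 0]
b = [1, 1, 1, 0, 0]
uniform = [1.0] * 5                       # the conventional strategy
personalized = [1.0, 0.2, 1.0, 0.2, 1.0]  # unreliable regions down-weighted
print(weighted_hamming(a, b, uniform))       # 0.4
print(weighted_hamming(a, b, personalized))  # smaller: noisy bits count less
```

If the low-weight positions correspond to regions where a given iris class produces unstable codes (occlusion by eyelashes, specular highlights), the personalized distance between genuine pairs shrinks while impostor distances are largely unaffected.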
Ay, Hakan; Arsava, E Murat; Johnston, S Claiborne; Vangel, Mark; Schwamm, Lee H; Furie, Karen L; Koroshetz, Walter J; Sorensen, A Gregory
2009-01-01
Predictive instruments based on clinical features for early stroke risk after transient ischemic attack suffer from limited specificity. We sought to combine imaging and clinical features to improve predictions for 7-day stroke risk after transient ischemic attack. We studied 601 consecutive patients with transient ischemic attack who had MRI within 24 hours of symptom onset. A logistic regression model was developed using stroke within 7 days as the response criterion and diffusion-weighted imaging findings and dichotomized ABCD(2) score (ABCD(2) >/=4) as covariates. Subsequent stroke occurred in 25 patients (5.2%). Dichotomized ABCD(2) score and acute infarct on diffusion-weighted imaging were each independent predictors of stroke risk. The 7-day risk was 0.0% with no predictor, 2.0% with ABCD(2) score >/=4 alone, 4.9% with acute infarct on diffusion-weighted imaging alone, and 14.9% with both predictors (an automated calculator is available at http://cip.martinos.org). Adding imaging increased the area under the receiver operating characteristic curve from 0.66 (95% CI, 0.57 to 0.76) using the ABCD(2) score to 0.81 (95% CI, 0.74 to 0.88; P=0.003). The sensitivity of 80% on the receiver operating characteristic curve corresponded to a specificity of 73% for the CIP model and 47% for the ABCD(2) score. Combining acute imaging findings with clinical transient ischemic attack features substantially improves the accuracy of early stroke-risk prediction over clinical features alone. If validated in relevant clinical settings, risk stratification by the CIP model may assist in early implementation of therapeutic measures and effective use of hospital resources.
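Because both predictors are dichotomized, the reported point estimates reduce to a two-predictor lookup; a sketch using the percentages from the abstract (the function name is illustrative, and this is not the published calculator):

```python
def cip_7day_risk(abcd2_high, dwi_infarct):
    """7-day stroke risk (%) from the two dichotomized predictors:
    ABCD(2) score >= 4 and acute infarct on diffusion-weighted imaging.
    Point estimates are those reported in the abstract."""
    table = {
        (False, False): 0.0,
        (True, False): 2.0,
        (False, True): 4.9,
        (True, True): 14.9,
    }
    return table[(abcd2_high, dwi_infarct)]

print(cip_7day_risk(abcd2_high=True, dwi_infarct=True))  # 14.9
```

The roughly 7-fold spread between the ABCD(2)-only stratum (2.0%) and the both-predictors stratum (14.9%) is the clinical payoff of adding imaging.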
NASA Astrophysics Data System (ADS)
Scott, Richard; Khan, Faisal M.; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo
2016-03-01
The Gleason score is the most common architectural and morphological assessment of prostate cancer severity and prognosis. Numerous quantitative techniques have been developed to approximate and duplicate the Gleason scoring system. Most of these approaches have been developed in standard H&E brightfield microscopy. Immunofluorescence (IF) image analysis of tissue pathology has recently proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architectures in discrete gland rings. In this work we leverage a new method of segmenting gland rings in IF images for predicting the pathological Gleason grade, both the clinical and the image-specific grade, which may not necessarily be the same. We combine these measures with nuclear-specific characteristics as assessed by the MST algorithm. Our individual features correlate well univariately with the Gleason grades, and in a multivariate setting have an accuracy of 85% in predicting the Gleason grade. Additionally, these features correlate strongly with clinical progression outcomes (CI of 0.89), significantly outperforming the clinical Gleason grades (CI of 0.78). This work presents the first assessment of morphological gland-unit features from IF images for predicting the Gleason grade.
Fire detection system using random forest classification for image sequences of complex background
NASA Astrophysics Data System (ADS)
Kim, Onecue; Kang, Dong-Joong
2013-06-01
We present a fire alarm system based on image processing that detects fire accidents in various environments. To reduce the false alarms that frequently appeared in earlier systems, we combine image features including color, motion, and blinking information. We specifically define the color conditions of fires in the HSV (hue, saturation, value) and RGB color spaces. Fire features are represented as intensity variation, color mean and variance, motion, and image differences. Moreover, blinking fire features are modeled by using crossing patches. We propose an algorithm that classifies patches into fire or nonfire areas by using random forest supervised learning. We design an embedded surveillance device made with acrylonitrile butadiene styrene housing for stable fire detection in outdoor environments. The experimental results show that our algorithm works robustly in complex environments and is able to detect fires in real time.
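A rule-based color condition of the kind described can be illustrated in a few lines; the thresholds below are assumptions chosen for illustration, not the paper's values:

```python
def is_fire_pixel(r, g, b):
    """Illustrative RGB rule-of-thumb for flame-coloured pixels:
    strong red channel, red >= green > blue, and a wide red-blue gap.
    Thresholds are assumptions, not the published conditions."""
    return r > 190 and r >= g > b and (r - b) > 60

print(is_fire_pixel(255, 160, 40))   # True: typical flame colour
print(is_fire_pixel(100, 120, 200))  # False: sky blue
```

In the full system such a color test would only nominate candidate patches; the motion and blinking features then suppress flame-colored but static objects before the random forest makes the final call.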
Feature selection and classification of multiparametric medical images using bagging and SVM
NASA Astrophysics Data System (ADS)
Fan, Yong; Resnick, Susan M.; Davatzikos, Christos
2008-03-01
This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
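The bagging construction, training each base classifier on a bootstrap resample and aggregating by vote, can be sketched with a toy 1-nearest-neighbour base learner (an illustrative stand-in, not the paper's SVM setup):

```python
import random

def bagging_predict(train, build_base, x, n_estimators=25, seed=0):
    """Majority vote over base classifiers, each trained on a
    bootstrap resample (sampling with replacement) of the training set."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        sample = [train[rng.randrange(len(train))] for _ in train]
        votes.append(build_base(sample)(x))
    return max(set(votes), key=votes.count)

# Toy base learner: 1-nearest-neighbour on labelled 1-D points.
def build_1nn(sample):
    def predict(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return predict

train = [(0.0, "A"), (0.2, "A"), (0.9, "B"), (1.1, "B")]
print(bagging_predict(train, build_1nn, 1.0))
```

In the paper's framework each bootstrap replicate additionally leaves out samples on which the base classifier's ROC area can be estimated, guiding the choice of classification parameters.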
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
Objective To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two-dimensional CT images for the diagnosis of lung cancer. Materials and Methods A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). In addition, nineteen patient information categories, which included seven demographic parameters and twelve morphological features, were also collected. A contourlet transform was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of the nineteen collected patient information categories, another included the contourlet textural features, and the third contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Results Using the database containing both textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93, respectively. These results were higher than those derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83, respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85, respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated, comprising 5,816 benign ROIs and 5,815 malignant ROIs, and the resulting accuracy was 0.93.
Conclusion Our results indicate that the combined contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer. PMID:25250576
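The SMOTE pre-processing step used above generates synthetic minority samples by interpolating between a minority sample and one of its nearest neighbours; a simplified pure-Python sketch (not the reference implementation):

```python
import random

def smote_samples(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating
    between a random minority sample and one of its k nearest
    neighbours (simplified SMOTE)."""
    rng = random.Random(seed)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    new = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not base),
                            key=lambda m: sq_dist(base, m))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        new.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return new

minority = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.1]]
synthetic = smote_samples(minority, n_new=4)
print(len(synthetic))  # 4
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class occupies a plausible region of feature space rather than merely duplicating existing points.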
Jing, Xiao-Yuan; Zhu, Xiaoke; Wu, Fei; Hu, Ruimin; You, Xinge; Wang, Yunhong; Feng, Hui; Yang, Jing-Yu
2017-03-01
Person re-identification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images are high resolution (HR), while probe images are usually low resolution (LR) in identification scenarios with large variation of illumination, weather, or quality of cameras. Person re-identification in this kind of scenario, which we call super-resolution (SR) person re-identification, has not been well studied. In this paper, we propose a semi-coupled low-rank discriminant dictionary learning (SLD²L) approach for the SR person re-identification task. With the HR and LR dictionary pair and mapping matrices learned from the features of HR and LR training images, SLD²L can convert the features of the LR probe images into HR features. To ensure that the converted features have favorable discriminative capability and the learned dictionaries can well characterize intrinsic feature spaces of the HR and LR images, we design a discriminant term and a low-rank regularization term for SLD²L. Moreover, considering that low resolution results in different degrees of loss for different types of visual appearance features, we propose a multi-view SLD²L (MVSLD²L) approach, which can learn the type-specific dictionary pair and mappings for each type of feature. Experimental results on multiple publicly available data sets demonstrate the effectiveness of our proposed approaches for the SR person re-identification task.
Improving depth estimation from a plenoptic camera by patterned illumination
NASA Astrophysics Data System (ADS)
Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.
2015-05-01
Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.
Noise-gating to Clean Astrophysical Image Data
NASA Astrophysics Data System (ADS)
DeForest, C. E.
2017-04-01
I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
PTBS segmentation scheme for synthetic aperture radar
NASA Astrophysics Data System (ADS)
Friedland, Noah S.; Rothwell, Brian J.
1995-07-01
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado, has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background, and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a 'most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22 nm semidense/dense contacts/posts, isolated elbows, and line-ends are then investigated in terms of lithographic results. After that, 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the causes of EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Gignac, Lynne M; Mittal, Surbhi; Bangsaruntip, Sarunya; Cohen, Guy M; Sleight, Jeffrey W
2011-12-01
The ability to prepare multiple cross-section transmission electron microscope (XTEM) samples from one XTEM sample of specific sub-10 nm features was demonstrated. Sub-10 nm diameter Si nanowire (NW) devices were initially cross-sectioned using a dual-beam focused ion beam system in a direction running parallel to the device channel. From this XTEM sample, both low- and high-resolution transmission electron microscope (TEM) images were obtained from six separate, specific site Si NW devices. The XTEM sample was then re-sectioned in four separate locations in a direction perpendicular to the device channel: 90° from the original XTEM sample direction. Three of the four XTEM samples were successfully sectioned in the gate region of the device. From these three samples, low- and high-resolution TEM images of the Si NW were taken and measurements of the NW diameters were obtained. This technique demonstrated the ability to obtain high-resolution TEM images in directions 90° from one another of multiple, specific sub-10 nm features that were spaced 1.1 μm apart.
Hand pose estimation in depth image using CNN and random forest
NASA Astrophysics Data System (ADS)
Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen
2018-03-01
Thanks to the availability of low-cost depth cameras, such as the Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variations in hand viewpoint and the high dimensionality of hand motion, 3D hand pose estimation is still challenging. In this paper we propose a two-stage framework that combines a CNN and a random forest to boost the performance of hand pose estimation. First, we use a standard Convolutional Neural Network (CNN) to regress the hand joint locations. Second, a random forest refines the joints from the first stage. In the second stage, we propose a pyramid feature that merges the information flow of the CNN. Specifically, we obtain the rough joint locations from the first stage and rotate the convolutional feature maps (and image). Then, for each joint, we map its location onto each feature map (and image), crop features around that location from each feature map (and image), and finally pass the extracted features to the random forest for refinement. Experimentally, we evaluate the proposed method on the ICVL dataset and obtain a mean error of about 11 mm; our method also runs in real time on a desktop.
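As an illustrative aside, the per-joint cropping step described above can be sketched in a few lines of numpy. The array shapes, the normalized joint location, and the 3x3 `patch` size are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

def crop_pyramid_features(feature_maps, joint_xy, patch=3):
    """Crop a (patch x patch) window around a joint's projected location
    from each feature map of a CNN pyramid, then concatenate the crops
    into one vector for a downstream random-forest refiner (sketch)."""
    feats = []
    for fmap in feature_maps:  # each fmap: (H, W) array at its own scale
        h, w = fmap.shape
        # map the joint's normalized [0, 1) location into this map's grid
        x = int(joint_xy[0] * w)
        y = int(joint_xy[1] * h)
        padded = np.pad(fmap, patch // 2, mode="edge")  # handle borders
        window = padded[y:y + patch, x:x + patch]
        feats.append(window.ravel())
    return np.concatenate(feats)
```

The concatenated vector plays the role of the "pyramid feature" fed to the second-stage regressor.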
Thapa, S S; Lakhey, R B; Sharma, P; Pokhrel, R K
2016-05-01
Magnetic resonance imaging is routinely performed for the diagnosis of lumbar disc prolapse, yet many disc abnormalities are observed even in asymptomatic patients. This study was conducted to correlate the abnormalities observed on magnetic resonance imaging with the clinical features of lumbar disc prolapse. This prospective analytical study included 57 cases of lumbar disc prolapse presenting to the Department of Orthopedics, Tribhuvan University Teaching Hospital, from March 2011 to August 2012. All patients had magnetic resonance imaging of the lumbar spine, and the findings regarding the type, level, and position of lumbar disc prolapse, along with any neural canal or foraminal compromise, were recorded. These imaging findings were then correlated with clinical signs and symptoms. The chi-square test was used to find the p-value for the correlation between clinical features and magnetic resonance imaging findings, using SPSS 17.0. The study included 57 patients with a mean age of 36.8 years. Of them, 41 (71.9%) patients had radicular leg pain along a specific dermatome. Magnetic resonance imaging showed 104 lumbar disc prolapse levels; disc prolapse at the L4-L5 and L5-S1 levels constituted 85.5%. Magnetic resonance imaging findings of neural foramina compromise and nerve root compression were fairly correlated with the clinical findings of radicular pain and neurological deficit. Clinical features and magnetic resonance imaging findings of lumbar disc prolapse had fair correlation, but not all imaging abnormalities have clinical significance.
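The chi-square test of association used above (via SPSS) can be reproduced for a 2x2 contingency table in plain Python. The table contents below are illustrative counts, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. MRI nerve-root compression (yes/no)
    cross-tabulated against clinical radicular pain (yes/no)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n          # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2
```

The statistic is then compared against the chi-square distribution with 1 degree of freedom to obtain the p-value.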
NASA Astrophysics Data System (ADS)
Huang, Xin; Chen, Huijun; Gong, Jianya
2018-01-01
Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. 
The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
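A minimal numpy sketch of the pixel-level angular difference feature (ADF-pixel) and the segment-wise refinement described above. The two co-registered views, the absolute-difference formulation, and the integer segment labels standing in for superpixels are simplifying assumptions:

```python
import numpy as np

def adf_pixel(nadir, forward):
    """ADF-pixel sketch: absolute per-pixel difference between two
    co-registered multi-angle views of the same scene."""
    return np.abs(nadir.astype(float) - forward.astype(float))

def refine_with_segments(adf, labels):
    """Replace each pixel's ADF value with the mean over its segment,
    a stand-in for the superpixel-based refinement that suppresses
    salt-and-pepper noise."""
    out = np.empty_like(adf)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = adf[mask].mean()
    return out
```

The refined map can then be stacked with spectral bands as extra features for the scene classifier.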
Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology
NASA Technical Reports Server (NTRS)
Larson, Stephen M.; Slaughter, Charles D.
1992-01-01
Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.
Deep Learning in Medical Image Analysis.
Shen, Dinggang; Wu, Guorong; Suk, Heung-Il
2017-06-21
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang
2016-01-01
Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable to cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses stacked autoencoders (SAE) to automatically learn deep feature representations. Specifically, our model includes two successive parts, where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and these features are then sent to a Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of the human brain. PMID:27057543
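The inference side of the SAE-plus-Softmax pipeline above can be sketched compactly: stacked sigmoid encoders map a patch to a deep feature, and a softmax layer turns class scores into probabilities. The layer sizes, toy weights, and the three-class (WM/GM/CSF) reading are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sae_softmax_predict(patch, encoder_weights, softmax_w):
    """Forward pass of a tiny stacked autoencoder followed by a softmax
    classifier. `encoder_weights` is a list of (W, b) pairs; each stage
    computes h <- sigmoid(W h + b)."""
    h = patch.ravel()
    for W, b in encoder_weights:      # stacked encoder layers
        h = sigmoid(W @ h + b)
    logits = softmax_w @ h            # class scores from the deep feature
    e = np.exp(logits - logits.max()) # numerically stable softmax
    return e / e.sum()                # class probabilities
```

In the full method the encoder weights come from layer-wise autoencoder pre-training on image patches; here they are just placeholders.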
Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo
2017-12-01
Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. The existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNN to fold recognition is that contact likelihood maps are essentially analogous to images, as they both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that, even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When further combining these features with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at the family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level and 1% higher at the family level.
An independent assessment on the SCOP_TEST dataset showed consistent performance improvement, indicating the robustness of our approach. Furthermore, bi-clustering results of the extracted features are compatible with the fold hierarchy of proteins, implying that these features are fold-specific. Together, these results suggest that the features extracted from predicted contacts are orthogonal to alignment-related features, and that combining them could greatly facilitate fold recognition at the superfamily/fold levels and template-based prediction of protein structures. Source code of DeepFR is freely available through https://github.com/zhujianwei31415/deepfr, and a web server is available through http://protein.ict.ac.cn/deepfr.
Representation learning: a unified deep learning framework for automatic prostate MR segmentation.
Liao, Shu; Gao, Yaozong; Oto, Aytekin; Shen, Dinggang
2013-01-01
Image representation plays an important role in medical image analysis. The success of different medical image analysis algorithms depends heavily on how we represent the input data, namely the features used to characterize the input image. In the literature, feature engineering remains an active research topic, and many novel hand-crafted features have been designed, such as the Haar wavelet, histogram of oriented gradients, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning-based manner, namely representation learning, which can be adapted to the patient dataset at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high-level semantic anatomical information. The proposed method is evaluated on the application of automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvement can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.
Hu, Shan; Xu, Chao; Guan, Weiqiao; Tang, Yong; Liu, Yana
2014-01-01
Osteosarcoma is the most common malignant bone tumor among children and adolescents. In this study, image texture analysis was performed to extract texture features from bone CR images to evaluate the recognition rate of osteosarcoma. To obtain the optimal set of features, Sym4 and Db4 wavelet transforms and gray-level co-occurrence matrices (GLCM) were applied to the images, with statistical methods being used to maximize the feature selection. To evaluate the performance of these methods, a support vector machine algorithm was used. The experimental results demonstrated that the Sym4 wavelet had a higher classification accuracy (93.44%) than the Db4 wavelet with respect to osteosarcoma occurrence in the epiphysis, whereas the Db4 wavelet had a higher classification accuracy (96.25%) for osteosarcoma occurrence in the diaphysis. Results including accuracy, sensitivity, specificity, and ROC curves obtained using the wavelets were all higher than those obtained using the features derived from the GLCM method. It is concluded that a set of texture features can be extracted from the wavelets and used in computer-aided osteosarcoma diagnosis systems. In addition, this study confirms that multi-resolution analysis is a useful tool for texture feature extraction during bone CR image processing.
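The GLCM texture features used as the baseline above can be sketched directly in numpy for one pixel offset. The offset, the number of gray levels, and the choice of contrast/energy statistics are generic textbook choices, not the study's exact configuration:

```python
import numpy as np

def glcm_features(img, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset (dx, dy) over an
    integer-quantized image, plus two standard texture statistics."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                       # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()   # local intensity variation
    energy = (glcm ** 2).sum()               # texture uniformity
    return contrast, energy
```

In practice several offsets and more statistics (correlation, homogeneity, etc.) are pooled into the feature vector fed to the SVM.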
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion, a single target image is first registered to several atlas images; after registration, a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture the complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible.
Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results. In particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
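The core patch-based voting step that these contributions refine can be sketched as similarity-weighted label voting. The Gaussian similarity kernel and the `sigma` bandwidth are common non-local-means-style choices assumed for illustration, not the paper's exact weighting:

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Similarity-weighted label voting (sketch): each atlas patch votes
    for its label with a weight that decays with its squared distance
    to the target patch."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        dist2 = float(np.sum((target_patch - patch) ** 2))
        weight = np.exp(-dist2 / (2 * sigma ** 2))
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)  # label with the largest total weight
```

The paper's method replaces the raw intensity patches here with multi-scale, label-specific representations and iterates this vote coarse-to-fine.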
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method, based on feature extraction and quantitative grading, for computer-aided disease diagnosis; it is a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired with an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods, based on geometric features and morphological features, were proposed, and a retinal abnormality grading decision-making method was put forward and used in the analysis and evaluation of multiple OCT images. The detailed analysis process is illustrated with four retinal OCT images of differing abnormal degrees, and the final grading results verified that the analysis method can distinguish abnormal severity and lesion regions. On the 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status: it obtains the parameters and features associated with retinal morphology, and quantitative analysis and evaluation of these features, combined with the reference model, enable abnormality judgment of a target image and provide a reference for disease diagnosis.
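The sensitivity and specificity figures reported above follow directly from a confusion of predicted versus actual abnormal/normal labels; the label lists below are toy data, not the study's 150 test cases:

```python
def sensitivity_specificity(predicted, actual):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) from paired
    abnormal(True)/normal(False) labels."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    return tp / (tp + fn), tn / (tn + fp)
```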
Atypical β-Catenin Activated Child Hepatocellular Tumor
Unlu, Havva Akmaz; Karakus, Esra; Yazal Erdem, Arzu; Yakut, Zeynep Ilerisoy
2015-01-01
Hepatocellular adenoma is a benign, focal hepatic neoplasm that has been divided into four subtypes according to genetic and pathological features. The β-catenin activated subtype accounts for 10-15% of all hepatocellular adenomas, and specific magnetic resonance imaging features have been defined for the different hepatocellular adenoma subtypes. The current study aimed to report the magnetic resonance imaging features of a well differentiated hepatocellular carcinoma that developed on the basis of a β-catenin activated hepatocellular adenoma in a child. In this case, atypical diffuse steatosis was determined in the lesion. In the literature, diffuse steatosis, which is defined as a feature of the hepatocyte nuclear factor-1α-inactivated hepatocellular adenoma subtype, has not previously been reported in any β-catenin activated hepatocellular adenoma case. Overlapping magnetic resonance imaging findings between subtypes show that there are still many mysteries about this topic, and larger studies are warranted. PMID:26157702
Towards Dynamic Contrast Specific Ultrasound Tomography
NASA Astrophysics Data System (ADS)
Demi, Libertario; van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo
2016-10-01
We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Clark, Kal; Czarnek, Nicholas M.; Shamsesfandabadi, Parisa; Peters, Katherine B.; Saha, Ashirbani
2017-03-01
Recent studies showed that genomic analysis of lower grade gliomas can be very effective for stratification of patients into groups with different prognosis and proposed specific genomic classifications. In this study, we explore the association of one of those genomic classifications with imaging parameters to determine whether imaging could serve a similar role to genomics in cancer patient treatment. Specifically, we analyzed imaging and genomics data for 110 patients from 5 institutions from The Cancer Genome Atlas and The Cancer Imaging Archive datasets. The analyzed imaging data contained preoperative FLAIR sequence for each patient. The images were analyzed using the in-house algorithms which quantify 2D and 3D aspects of the tumor shape. Genomic data consisted of a cluster of clusters classification proposed in a very recent and leading publication in the field of lower grade glioma genomics. Our statistical analysis showed that there is a strong association between the tumor cluster-of-clusters subtype and two imaging features: bounding ellipsoid volume ratio and angular standard deviation. This result shows high promise for the potential use of imaging as a surrogate measure for genomics in the decision process regarding treatment of lower grade glioma patients.
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi, Anant; Lee, George
2016-10-01
With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned, represented, and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images. Additionally, there has been recent substantial interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification.
We also briefly review some of the state of the art in the fusion of radiology and pathology images, and in combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology.
Tissues segmentation based on multi spectral medical images
NASA Astrophysics Data System (ADS)
Li, Ya; Wang, Ying
2017-11-01
In multispectral medical images, each band image contains the tissue features that are most evident in that band, according to the optical characteristics of the different tissues in specific bands. In this paper, the tissues were segmented by their spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray-level difference between the blood vessels and their neighbors. The segmented tissue in each band image was then merged into a single clear image.
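The basic 8-neighbor local binary pattern computation underlying such descriptors can be sketched in numpy. The neighbor ordering, the `>=` comparison against the center pixel, and restriction to interior pixels are standard textbook conventions, not the paper's specific descriptor variants:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbor local binary pattern codes for the interior pixels of a
    grayscale image: bit i is set when neighbor i >= the center pixel."""
    # neighbor offsets, clockwise from top-left; bit i weights 2**i
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for i, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (nb >= img[1:h - 1, 1:w - 1]).astype(int) << i
    return codes
```

Histograms of these codes (or thresholded variants of them) then serve as the texture descriptors for separating vessels from background.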
Fine-tuning convolutional deep features for MRI based brain tumor classification
NASA Astrophysics Data System (ADS)
Ahmed, Kaoutar B.; Hall, Lawrence O.; Goldgof, Dmitry B.; Liu, Renhao; Gatenby, Robert A.
2017-03-01
Prediction of survival time from brain tumor magnetic resonance images (MRI) is not commonly performed and would ordinarily be a time-consuming process. However, current cross-sectional imaging techniques, particularly MRI, can be used to generate many features that may provide information on the patient's prognosis, including survival. This information can potentially be used to identify individuals who would benefit from more aggressive therapy. Rather than using pre-defined and hand-engineered features as with current radiomics methods, we investigated the use of deep features extracted from pre-trained convolutional neural networks (CNNs) in predicting survival time. We also provide evidence for the power of domain-specific fine-tuning in improving the performance of a pre-trained CNN, even though our dataset is small. We fine-tuned a CNN initially trained on a large natural image recognition dataset (ImageNet ILSVRC) and transferred the learned feature representations to the survival time prediction task, obtaining over 81% accuracy in leave-one-out cross-validation.
New method for identifying features of an image on a digital video display
NASA Astrophysics Data System (ADS)
Doyle, Michael D.
1991-04-01
The MetaMap process extends the concept of direct-manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information, and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology is described, along with other possible applications. The MetaMap process is protected by U. S. patent #4
Hepatocellular carcinoma: Advances in diagnostic imaging.
Sun, Haoran; Song, Tianqiang
2015-10-01
Thanks to the growing knowledge on biological behaviors of hepatocellular carcinomas (HCC), as well as continuous improvement in imaging techniques and experienced interpretation of imaging features of the nodules in cirrhotic liver, the detection and characterization of HCC has improved in the past decade. A number of practice guidelines for imaging diagnosis have been developed to reduce interpretation variability and standardize management of HCC, and they are constantly updated with advances in imaging techniques and evidence based data from clinical series. In this article, we strive to review the imaging techniques and the characteristic features of hepatocellular carcinoma associated with cirrhotic liver, with emphasis on the diagnostic value of advanced magnetic resonance imaging (MRI) techniques and utilization of hepatocyte-specific MRI contrast agents. We also briefly describe the concept of liver imaging reporting and data systems and discuss the consensus and controversy of major practice guidelines.
Multiresolution texture models for brain tumor segmentation in MRI.
Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir
2011-01-01
In this study we discuss different types of texture features, such as fractal dimension (FD) and multifractional Brownian motion (mBm), for estimating the random structure and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques, including Kullback-Leibler divergence (KLD), for ranking different texture and intensity features. We then exploit graph cut, self-organizing map (SOM) and expectation maximization (EM) techniques to fuse selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate the quality and robustness of these selected features for tumor segmentation in MRI for real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme using improved AdaBoost classification based on these image features.
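The Kullback-Leibler divergence used here for feature ranking has a short closed form for discrete distributions. A minimal sketch follows; the two example histograms are hypothetical (e.g. a feature's distribution over tumor vs. non-tumor regions).

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions. A larger value means the feature separates the
    two tissue classes more strongly, so it ranks higher."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # hypothetical feature histogram in tumor regions
q = [0.9, 0.1]   # hypothetical feature histogram in normal tissue
print(round(kl_divergence(p, q), 4))  # → 0.5108
```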
Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A
2015-07-01
Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Facebook photo activity associated with body image disturbance in adolescent girls.
Meier, Evelyn P; Gray, James
2014-04-01
The present study examined the relationship between body image and adolescent girls' activity on the social networking site (SNS) Facebook (FB). Research has shown that elevated Internet "appearance exposure" is positively correlated with increased body image disturbance among adolescent girls, and there is a particularly strong association with FB use. The present study sought to replicate and extend upon these findings by identifying the specific FB features that correlate with body image disturbance in adolescent girls. A total of 103 middle and high school females completed questionnaire measures of total FB use, specific FB feature use, weight dissatisfaction, drive for thinness, thin ideal internalization, appearance comparison, and self-objectification. An appearance exposure score was calculated based on subjects' use of FB photo applications relative to total FB use. Elevated appearance exposure, but not overall FB usage, was significantly correlated with weight dissatisfaction, drive for thinness, thin ideal internalization, and self-objectification. Implications for eating disorder prevention programs and best practices in researching SNSs are discussed.
Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Shin, Eun Seok; Kim, Sung Min
2018-01-01
The purpose of this study was to propose a hybrid ensemble classifier to characterize coronary plaque regions in intravascular ultrasound (IVUS) images. Pixels were allocated to one of four tissues (fibrous tissue (FT), fibro-fatty tissue (FFT), necrotic core (NC), and dense calcium (DC)) through processes of border segmentation, feature extraction, feature selection, and classification. Grayscale IVUS images and their corresponding virtual histology images were acquired from 11 patients with known or suspected coronary artery disease using a 20 MHz catheter. A total of 102 hybrid textural features including first order statistics (FOS), gray level co-occurrence matrix (GLCM), extended gray level run-length matrix (GLRLM), Laws, local binary pattern (LBP), intensity, and discrete wavelet features (DWF) were extracted from IVUS images. To select optimal feature sets, a genetic algorithm was implemented. A hybrid ensemble classifier based on histogram and texture information was then used for plaque characterization in this study. The optimal feature set was used as input of this ensemble classifier. After tissue characterization, parameters including sensitivity, specificity, and accuracy were calculated to validate the proposed approach. A ten-fold cross validation approach was used to determine the statistical significance of the proposed method. Our experimental results showed that the proposed method had reliable performance for tissue characterization in IVUS images. The hybrid ensemble classification method outperformed other existing methods by achieving characterization accuracy of 81% for FFT and 75% for NC. In addition, this study showed that Laws features (SSV and SAV) were key indicators for coronary tissue characterization. The proposed method had high clinical applicability for image-based tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
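A gray level co-occurrence matrix (GLCM) and one classic Haralick-style statistic (contrast) can be computed directly. This is a generic illustration on a hypothetical 3-level toy image, not the authors' 102-feature pipeline.

```python
def glcm(image, dx, dy, levels):
    """Grey level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability table."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def glcm_contrast(p):
    # one of the classic texture statistics computed from a GLCM:
    # weights co-occurrences by the squared grey-level difference
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
g = glcm(img, 1, 0, 3)   # horizontal neighbour offset
print(round(glcm_contrast(g), 4))  # → 0.3333
```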
NASA Astrophysics Data System (ADS)
Leighs, J. A.; Halling-Brown, M. D.; Patel, M. N.
2018-03-01
The UK currently has a national breast cancer-screening program and images are routinely collected from a number of screening sites, representing a wealth of invaluable data that is currently under-used. Radiologists evaluate screening images manually and recall suspicious cases for further analysis such as biopsy. Histological testing of biopsy samples confirms the malignancy of the tumour, along with other diagnostic and prognostic characteristics such as disease grade. Machine learning is becoming increasingly popular for clinical image classification problems, as it is capable of discovering patterns in data otherwise invisible. This is particularly true when applied to medical imaging features; however clinical datasets are often relatively small. A texture feature extraction toolkit has been developed to mine a wide range of features from medical images such as mammograms. This study analysed a dataset of 1,366 radiologist-marked, biopsy-proven malignant lesions obtained from the OPTIMAM Medical Image Database (OMI-DB). Exploratory data analysis methods were employed to better understand extracted features. Machine learning techniques including Classification and Regression Trees (CART), ensemble methods (e.g. random forests), and logistic regression were applied to the data to predict the disease grade of the analysed lesions. Prediction scores of up to 83% were achieved; sensitivity and specificity of the models trained have been discussed to put the results into a clinical context. The results show promise in the ability to predict prognostic indicators from the texture features extracted and thus enable prioritisation of care for patients at greatest risk.
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron
2014-03-01
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presenting them on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images look different when visualized on different display systems. The importance of these visual differences is elucidated when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, it seems that the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibration to the DICOM GSDF performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising as it suggests that further research work could lead to better definition of an optimized calibration target for digital pathology images resulting in a positive effect on clinical performance.
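The dE2000 formula itself is lengthy, but the underlying idea, a perceptual distance between the background colour and a clinically relevant feature in CIELAB space, can be illustrated with the much simpler CIE76 difference. The two Lab triplets below are hypothetical; the paper uses the full dE2000 computation.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: plain Euclidean distance in CIELAB.
    The paper uses the more elaborate dE2000 formula, but the
    principle -- a perceptual distance between feature and
    background colours -- is the same."""
    return math.dist(lab1, lab2)

background = (70.0, 10.0, 12.0)   # hypothetical stained-background Lab
feature = (55.0, 30.0, 5.0)       # hypothetical nuclear-feature Lab
print(round(delta_e_76(background, feature), 2))  # → 25.96
```

A display calibration that compresses this distance reduces the perceived contrast of the feature against its background.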
NASA Astrophysics Data System (ADS)
Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.
2018-05-01
Huge efforts have been put into developing diagnostic methods for skin cancer. In this paper, two different approaches are addressed for detecting skin cancer in dermoscopy images. The first is a global approach that classifies skin lesions using global features, whereas the second is a local approach that uses local features. The aim of this paper is to select the best approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The global approach achieved sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97%, while the local approach achieved about 100% on all four measures. These results show that the local approach achieved acceptable accuracy and outperformed the global approach for skin cancer lesion classification.
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Saliency Detection for Stereoscopic 3D Images in the Quaternion Frequency Domain
NASA Astrophysics Data System (ADS)
Cai, Xingyu; Zhou, Wujie; Cen, Gang; Qiu, Weiwei
2018-06-01
Recent studies have shown that a remarkable distinction exists between human binocular and monocular viewing behaviors. Compared with two-dimensional (2D) saliency detection models, stereoscopic three-dimensional (S3D) image saliency detection is a more challenging task. In this paper, we propose a saliency detection model for S3D images. The final saliency map of this model is constructed from a local quaternion Fourier transform (QFT) sparse feature and a global QFT log-Gabor feature. More specifically, the local QFT feature measures the saliency map of an S3D image by analyzing the location of a similar patch. The similar patch is chosen using a sparse representation method. The global saliency map is generated by applying the weak-edge-enhanced gradient QFT map through a band-pass filter. The results of experiments on two public datasets show that the proposed model outperforms existing computational saliency models for estimating S3D image saliency.
NASA Astrophysics Data System (ADS)
Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen
2011-12-01
Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in the electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert-system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique enables one to assign a class to the most abundant mineral in each pixel with high accuracy. The technique is based on the derivation of information from reflectance spectra of the image. This can be done through extraction of the spectral absorption features of any mineral from its respective laboratory-measured reflectance spectrum, and comparing them with those extracted from the pixels in the image. The CDAF technique was executed on an AVIRIS image, where the results show an overall accuracy of better than 96%.
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
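A learned linear transform inducing a task-specific distance metric can be sketched as follows. The 1x2 matrix `W` here is a hypothetical stand-in for the projection multiclass LDA would produce from labeled training spectra; the point is that distance is measured after projection, so discriminative bands dominate.

```python
def learned_distance(W, x, y):
    """Distance under a learned linear transform W: project both
    spectra with W, then take the Euclidean distance between the
    projections. Equivalent to a Mahalanobis-style metric."""
    proj = lambda v: [sum(w_ij * v_j for w_ij, v_j in zip(row, v))
                      for row in W]
    px, py = proj(x), proj(y)
    return sum((a - b) ** 2 for a, b in zip(px, py)) ** 0.5

# Hypothetical 1x2 transform that down-weights the second (noisy) band
W = [[1.0, 0.1]]
a, b = [0.2, 5.0], [0.3, 1.0]
print(round(learned_distance(W, a, b), 4))  # → 0.3
```

Although `a` and `b` differ wildly in the noisy band, the learned metric judges them close, which is exactly the behavior that improves segment homogeneity.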
NASA Astrophysics Data System (ADS)
Singla, Neeru; Srivastava, Vishal; Singh Mehta, Dalip
2018-02-01
We report the first fully automated detection of human skin burn injuries in vivo, with the goal of automatic surgical margin assessment based on optical coherence tomography (OCT) images. Our proposed automated procedure entails building a machine-learning-based classifier by extracting quantitative features from normal and burn tissue images recorded by OCT. In this study, 56 samples (28 normal, 28 burned) were imaged by OCT and eight features were extracted. A linear model classifier was trained using 34 samples and 22 samples were used to test the model. Sensitivity of 91.6% and specificity of 90% were obtained. Our results demonstrate the capability of a computer-aided technique for accurately and automatically identifying burn tissue resection margins during surgical treatment.
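The sensitivity and specificity figures reported above come straight from a confusion matrix. The counts below are hypothetical, chosen only to land near the reported 91.6% and 90% for the 22 held-out samples.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    the two figures reported for the burn-tissue classifier."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for the held-out test samples
sens, spec = sensitivity_specificity(tp=11, fn=1, tn=9, fp=1)
print(round(sens, 3), round(spec, 3))  # → 0.917 0.9
```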
Pareek, Gyan; Acharya, U Rajendra; Sree, S Vinitha; Swapna, G; Yantri, Ratna; Martis, Roshan Joy; Saba, Luca; Krishnamurthi, Ganapathy; Mallarini, Giorgio; El-Baz, Ayman; Al Ekish, Shadi; Beland, Michael; Suri, Jasjit S
2013-12-01
In this work, we have proposed an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image into cancerous or non-cancerous with the help of non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. The UroImage system consists of an on-line system where five significant features (one DWT-based feature and four HOS-based features) are extracted from the test image. These on-line features are transformed by the classifier parameters obtained using the training dataset to determine the class. We trained and tested six classifiers. The dataset used for evaluation had 144 TRUS images which were split into training and testing sets. Three-fold and ten-fold cross-validation protocols were adopted for training and estimating the accuracy of the classifiers. The ground truth used for training was obtained using the biopsy results. Among the six classifiers, using 10-fold cross-validation technique, Support Vector Machine and Fuzzy Sugeno classifiers presented the best classification accuracy of 97.9% with equally high values for sensitivity, specificity and positive predictive value. Our proposed automated system, which achieved more than 95% values for all the performance measures, can be an adjunct tool to provide an initial diagnosis for the identification of patients with prostate cancer. The technique, however, is limited by the limitations of 2D ultrasound guided biopsy, and we intend to improve our technique by using 3D TRUS images in the future.
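The k-fold splitting behind the three-fold and ten-fold protocols can be sketched as follows. Contiguous folds are used for simplicity; a real protocol would shuffle the 144 image indices first.

```python
def k_fold_indices(n, k):
    """Partition sample indices 0..n-1 into k near-equal folds; each
    fold serves once as the test set while the rest train."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(144, 3)   # the 144 TRUS images, three folds
print([len(f) for f in folds])   # → [48, 48, 48]
```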
A novel methodology for querying web images
NASA Astrophysics Data System (ADS)
Prabhakara, Rashmi; Lee, Ching Cheng
2005-01-01
Ever since the advent of the Internet, there has been an immense growth in the amount of image data available on the World Wide Web. With such a magnitude of image availability, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improves upon existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It consists of a focused crawler that allows the user to enter not only the keyword for the topic-based search but also the scope in which the user wants to find the images. The second phase uses the query-by-example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results of the example image. Information related to the image feature is automatically extracted from the query image by the image processing system. A computationally inexpensive technique based on color features is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.
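A computationally light colour-based match of the kind described for the query-by-example phase is often implemented as histogram intersection; whether this paper uses exactly that measure is an assumption. A minimal sketch with hypothetical 3-bin colour histograms:

```python
def histogram_intersection(h1, h2):
    """Similarity of two normalised colour histograms: sums the
    overlap in each bin, so 1.0 means identical colour
    distributions and 0.0 means no overlap at all."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = [0.5, 0.3, 0.2]      # hypothetical 3-bin colour histogram
candidate = [0.4, 0.4, 0.2]
print(histogram_intersection(query, candidate))
```

Ranking candidates by this score gives a cheap content-based ordering of the topic-filtered results from the first phase.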
Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain
Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu
2014-01-01
In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high dimensional fields, with each voxel exhibiting a vector valued feature comprising of mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific vector feature. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved based on the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, strong correlation was found between the volume size of the automated and the manual parcellation. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images, with average variation of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD, respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486
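The per-element mixture-of-Gaussians likelihood described above can be sketched as follows. The weights, means, and standard deviations are hypothetical, standing in for parameters that would be estimated from the labeled atlases.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density; each element of the (MD, FA,
    fiber angle) feature vector is modelled with a mixture of these."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def mixture_likelihood(x, weights, mus, sigmas):
    # likelihood of one feature value under a Gaussian mixture whose
    # parameters would come from the labeled atlases for one structure
    return sum(w * gaussian_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# Hypothetical two-component mixture for one structure's FA value
print(round(mixture_likelihood(0.5, [0.6, 0.4], [0.4, 0.9], [0.1, 0.2]), 4))
```

In the full method these per-element likelihoods are combined across the feature vector, iteratively updated, and fused across atlases with the EM algorithm.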
NASA Astrophysics Data System (ADS)
Sebatubun, M. M.; Haryawan, C.; Windarta, B.
2018-03-01
Lung cancer causes a higher mortality rate worldwide than any other cancer. This can be minimised if the symptoms and cancer cells are detected early. One of the techniques used to detect lung cancer is computed tomography (CT) scanning. CT scan images were used in this study to identify one of the lesion characteristics, named ground glass opacity (GGO), which is used to determine the level of malignancy of the lesion. There were three phases in identifying GGO: image cropping, feature extraction using grey level co-occurrence matrices (GLCM), and classification using a Naïve Bayes classifier. In order to improve the classification results, the most significant features were sought through feature selection using gain ratio evaluation. Based on the results obtained, the most significant features could be identified using the feature selection method applied in this research. The accuracy rate increased from 83.33% to 91.67%, the sensitivity from 82.35% to 94.11% and the specificity from 84.21% to 89.47%.
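The gain ratio criterion used for feature selection divides a feature's information gain by its split (intrinsic) information, which penalises features that fragment the data into many small groups. A minimal sketch on a hypothetical discretised feature:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def gain_ratio(feature_values, labels):
    """Information gain of a discrete feature divided by its split
    information -- the criterion used here to rank GLCM features."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    gain = entropy(labels) - cond
    split_info = entropy(feature_values)
    return gain / split_info if split_info else 0.0

# Hypothetical discretised GLCM feature that perfectly splits classes
feature = ["low", "low", "high", "high"]
labels = ["benign", "benign", "malignant", "malignant"]
print(gain_ratio(feature, labels))  # → 1.0
```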
Liu, Tongtong; Ge, Xifeng; Yu, Jinhua; Guo, Yi; Wang, Yuanyuan; Wang, Wenping; Cui, Ligang
2018-06-21
B-mode ultrasound (B-US) and strain elastography ultrasound (SE-US) images have a potential to distinguish thyroid tumor with different lymph node (LN) status. The purpose of our study is to investigate whether the application of multi-modality images including B-US and SE-US can improve the discriminability of thyroid tumor with LN metastasis based on a radiomics approach. Ultrasound (US) images including B-US and SE-US images of 75 papillary thyroid carcinoma (PTC) cases were retrospectively collected. A radiomics approach was developed in this study to estimate LNs status of PTC patients. The approach included image segmentation, quantitative feature extraction, feature selection and classification. Three feature sets were extracted from B-US, SE-US, and multi-modality containing B-US and SE-US. They were used to evaluate the contribution of different modalities. A total of 684 radiomics features have been extracted in our study. We used sparse representation coefficient-based feature selection method with 10-bootstrap to reduce the dimension of feature sets. Support vector machine with leave-one-out cross-validation was used to build the model for estimating LN status. Using features extracted from both B-US and SE-US, the radiomics-based model produced an area under the receiver operating characteristic curve (AUC) = 0.90, accuracy (ACC) = 0.85, sensitivity (SENS) = 0.77 and specificity (SPEC) = 0.88, which was better than using features extracted from B-US or SE-US separately. Multi-modality images provided more information in radiomics study. Combining use of B-US and SE-US could improve the LN metastasis estimation accuracy for PTC patients.
NASA Astrophysics Data System (ADS)
Leo, Patrick; Lee, George; Madabhushi, Anant
2016-03-01
Quantitative histomorphometry (QH) is the process of computerized extraction of features from digitized tissue slide images. Typically these features are used in machine learning classifiers to predict disease presence, behavior and outcome. Successful robust classifiers require features that both discriminate between classes of interest and are stable across data from multiple sites. Feature stability may be compromised by variation in slide staining and scanning procedures. These laboratory specific variables include dye batch, slice thickness and the whole slide scanner used to digitize the slide. The key therefore is to be able to identify features that are not only discriminating between the classes of interest (e.g. cancer and non-cancer, or biochemical recurrence and non-recurrence) but also features that will not wildly fluctuate on slides representing the same tissue class but from across multiple different labs and sites. While there have been some recent efforts at understanding feature stability in the context of radiomics applications (i.e. feature analysis of radiographic images), relatively few attempts have been made at studying the trade-off between feature stability and discriminability for histomorphometric and digital pathology applications. In this paper we present two new measures, preparation-induced instability score (PI) and latent instability score (LI), to quantify feature instability across and within datasets. Dividing PI by LI yields a ratio for how often a feature for a specific tissue class (e.g. low grade prostate cancer) is different between datasets from different sites versus what would be expected from random chance alone. Using this ratio we seek to quantify feature vulnerability to variations in slide preparation and digitization. Since our goal is to identify stable QH features we evaluate these features for their stability and thus inclusion in machine learning based classifiers in a use case involving prostate cancer.
Specifically we examine QH features which may predict 5 year biochemical recurrence for prostate cancer patients who have undergone radical prostatectomy from digital slide images of surgically excised tissue specimens, 5 year biochemical recurrence being a strong predictor of disease recurrence. In this study we evaluated the ability of our feature robustness indices to identify the most stable and predictive features of 5 year biochemical recurrence using digitized slide images of surgically excised prostate cancer specimens from 80 different patients across 4 different sites. A total of 242 features from 5 different feature families were investigated to identify the most stable QH features from our set. Our feature robustness indices (PI and LI) suggested that five feature families (graph, shape, co-occurring gland tensors, gland sub-graphs, texture) were susceptible to variations in slide preparation and digitization across various sites. The family least affected was shape features in which 19.3% of features varied across laboratories while the most vulnerable family, at 55.6%, was the gland disorder features. However the disorder features were the most stable within datasets being different between random halves of a dataset in an average of just 4.1% of comparisons while texture features were the most unstable being different at a rate of 4.7%. We also compared feature stability across two datasets before and after color normalization. Color normalization decreased feature stability with 8% and 34% of features different between the two datasets in two outcome groups prior to normalization and 49% and 51% different afterwards. Our results appear to suggest that evaluation of QH features across multiple sites needs to be undertaken to assess robustness and class discriminability alone should not represent the benchmark for selection of QH features to build diagnostic and prognostic digital pathology classifiers.
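The PI/LI ratio described above is a simple quotient of two instability rates; a sketch with hypothetical rates follows.

```python
def instability_ratio(pi, li):
    """Ratio of preparation-induced instability (fraction of
    cross-site comparisons where a feature differs, PI) to latent
    instability (fraction differing between random halves of a
    single dataset, LI). Values well above 1 flag features that
    are vulnerable to slide preparation and digitization effects
    rather than ordinary sampling variation."""
    if li == 0:
        raise ValueError("latent instability must be non-zero")
    return pi / li

# Hypothetical feature: differs in 40% of cross-site comparisons but
# only 5% of within-site comparisons.
print(instability_ratio(0.40, 0.05))  # → 8.0
```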
DSouza, Alisha V.; Lin, Huiyun; Henderson, Eric R.; Samkoe, Kimberley S.; Pogue, Brian W.
2016-01-01
There is growing interest in using fluorescence imaging instruments to guide surgery, and the leading options for open-field imaging are reviewed here. While the clinical fluorescence-guided surgery (FGS) field has been focused predominantly on indocyanine green (ICG) imaging, there is accelerated development of more specific molecular tracers. These agents should help advance new indications for which FGS presents a paradigm shift in how molecular information is provided for resection decisions. There has been a steady growth in commercially marketed FGS systems, each with their own differentiated performance characteristics and specifications. A set of desirable criteria is presented to guide the evaluation of instruments, including: (i) real-time overlay of white-light and fluorescence images, (ii) operation within ambient room lighting, (iii) nanomolar-level sensitivity, (iv) quantitative capabilities, (v) simultaneous multiple-fluorophore imaging, and (vi) ergonomic utility for open surgery. In this review, United States Food and Drug Administration 510(k)-cleared commercial systems and some leading premarket FGS research systems were evaluated to illustrate the continual increase in this performance feature base. Generally, the systems designed for ICG-only imaging have sufficient sensitivity to ICG but offer only a fraction of the other desired features listed above, with lower sensitivity and dynamic range. In comparison, the emerging research systems targeted for use with molecular agents have unique capabilities that will be essential for successful clinical imaging studies with low-concentration agents or where superior rejection of ambient light is needed. There is no perfect imaging system, but the feature differences among them are important differentiators in their utility, as outlined in the data and tables here. PMID:27533438
Sato, Anna; Koda, Hiroki; Lemasson, Alban; Nagumo, Sumiharu; Masataka, Nobuo
2012-01-01
Despite not knowing the exact age of individuals, humans can estimate their rough age using age-related physical features. Nonhuman primates show some age-related physical features; however, the cognitive traits underlying their recognition of age class have not been revealed. Here, we tested the ability of two species of Old World monkey, Japanese macaques (JM) and Campbell's monkeys (CM), to spontaneously discriminate age classes using visual paired comparison (VPC) tasks based on the two distinct categories of infant and adult images. First, VPCs were conducted in JM subjects using conspecific JM stimuli. When analyzing the side of the first look, JM subjects significantly looked more often at novel images. Based on analyses of total looking durations, JM subjects looked at a novel infant image longer than they looked at a familiar adult image, suggesting the ability to spontaneously discriminate between the two age classes and a preference for infant over adult images. Next, VPCs were tested in CM subjects using heterospecific JM stimuli. CM subjects showed no difference in the side of their first look, but looked at infant JM images longer than they looked at adult images; the fact that CMs were totally naïve to JMs suggested that the attractiveness of infant images transcends species differences. This is the first report of visual age class recognition and a preference for infant over adult images in nonhuman primates. Our results suggest not only species-specific processing for age class recognition but also the evolutionary origins of the instinctive human perception of baby cuteness schema, proposed by the ethologist Konrad Lorenz.
Specific feature of magnetooptical images of stray fields of magnets of various geometrical shapes
NASA Astrophysics Data System (ADS)
Ivanov, V. E.; Koveshnikov, A. V.; Andreev, S. V.
2017-08-01
Specific features of magnetooptical images (MOIs) of stray fields near the faces of prismatic hard magnetic elements have been studied. Attention has primarily been focused on MOIs of fields near faces oriented perpendicular to the magnetic moment of the hard magnetic elements. With regard to the polar sensitivity, MOIs have practically uniform brightness and geometrically coincide with the figures of the bases of the elements. With regard to longitudinal sensitivity, MOIs consist of several sectors, the number of which is determined by the number of corners of the figure. Each corner angle is divided by its bisectrix between two sectors of different brightness; therefore, the MOI of a triangular magnet consists of three sectors, and that of a rectangular magnet consists of four sectors separated by the bisectrices of the interior angles. In all types of figures, these lines converge at the center of the figure and form a singular point of the source or sink type.
NASA Technical Reports Server (NTRS)
Phillips, M. S.; Moersch, J. E.; Cabrol, N. A.; Davila, A. F.
2018-01-01
The guiding theme of Mars exploration is shifting from global and regional habitability assessment to biosignature detection. To locate features likely to contain biosignatures, it is useful to focus on the reliable identification of specific habitats with high biosignature preservation potential. Proposed chloride deposits on Mars may represent evaporitic environments conducive to the preservation of biosignatures. Analogous chloride-bearing, salt-encrusted playas (salars) are a habitat for life in the driest parts of the Atacama Desert, and are also environments with a taphonomic window. The specific geologic features that harbor and preserve microorganisms in Atacama salars are sub-meter to meter scale salt protuberances, or halite nodules. This study focuses on the ability to recognize and map halite nodules using images acquired from an unmanned aerial vehicle (UAV) at spatial resolutions ranging from mm/pixel to that of the highest resolution orbital images available for Mars.
Can CT imaging features of ground-glass opacity predict invasiveness? A meta-analysis.
Dai, Jian; Yu, Guoyou; Yu, Jianqiang
2018-04-01
A meta-analysis was conducted to investigate the diagnostic performance of computed tomography (CT) imaging features of ground-glass opacity (GGO) for predicting invasiveness. Two reviewers independently searched PubMed, Medline, Web of Science, Cochrane, Embase, and CNKI for relevant studies. CT imaging signs of bubble lucency, spiculation, lobulated margin, and pleural indentation were used as diagnostic references to discriminate pre-invasive and invasive disease. The sensitivity, specificity, diagnostic odds ratio (DOR), summary receiver operating characteristic (SROC) curves, and the area under the SROC curve (AUC) were calculated to evaluate diagnostic efficiency. Twelve studies were finally included. Diagnostic performance ranged from 0.41 to 0.52 for sensitivity and 0.56 to 0.63 for specificity. The diagnostic positive and negative likelihood ratios ranged from 1.03 to 2.13 and 0.52 to 1.05, respectively. The DORs of the GGO CT features for discriminating invasive disease ranged from 1.02 to 4.00. The area under the SROC curve was also low, with a range of 0.60 to 0.67 for discriminating pre-invasive and invasive disease. The diagnostic value of a single CT imaging sign of GGO, such as bubble lucency, spiculation, lobulated margin, or pleural indentation, is limited for discriminating pre-invasive and invasive disease because of low sensitivity, specificity, and AUC. © 2018 The Authors. Thoracic Cancer published by China Lung Oncology Group and John Wiley & Sons Australia, Ltd.
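The summary statistics named above are linked by simple odds identities; a minimal sketch (the meta-analytic pooling across studies is omitted):

```python
def diagnostic_indices(sens, spec):
    """Likelihood ratios and diagnostic odds ratio from a test's
    sensitivity and specificity."""
    lr_pos = sens / (1 - spec)      # how much a positive sign raises disease odds
    lr_neg = (1 - sens) / spec      # how much a negative sign lowers disease odds
    dor = lr_pos / lr_neg           # single summary of discriminative power
    return lr_pos, lr_neg, dor
```

Plugging in the upper end of the reported ranges (sensitivity 0.52, specificity 0.63) gives a DOR below 2, which is why the abstract concludes that any single GGO sign is a weak discriminator.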
Crowley, J.K.; Brickey, D.W.; Rowan, L.C.
1989-01-01
Airborne imaging spectrometer data collected in the near-infrared (1.2-2.4 μm) wavelength range were used to study the spectral expression of metamorphic minerals and rocks in the Ruby Mountains of southwestern Montana. The data were analyzed by using a new data enhancement procedure: the construction of relative absorption band-depth (RBD) images. RBD images, like band-ratio images, are designed to detect diagnostic mineral absorption features, while minimizing reflectance variations related to topographic slope and albedo differences. To produce an RBD image, several data channels near an absorption band shoulder are summed and then divided by the sum of several channels located near the band minimum. RBD images are both highly specific and sensitive to the presence of particular mineral absorption features. Further, the technique does not distort or subdue spectral features as sometimes occurs when using other data normalization methods. By using RBD images, a number of rock and soil units were distinguished in the Ruby Mountains, including weathered quartz-feldspar pegmatites, marbles of several compositions, and soils developed over poorly exposed mica schists. The RBD technique is especially well suited for detecting weak near-infrared spectral features produced by soils, which may permit improved mapping of subtle lithologic and structural details in semiarid terrains. The observation of soils rich in talc, an important industrial commodity in the study area, also indicates that RBD images may be useful for mineral exploration. © 1989.
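The RBD construction described above (per-pixel sum of shoulder channels divided by sum of band-minimum channels) can be sketched as follows; the cube layout and band indices are illustrative, not tied to any specific instrument:

```python
def rbd_image(cube, shoulder_bands, minimum_bands):
    """Relative absorption band-depth image: for each pixel, the sum of
    channels near the absorption band shoulder divided by the sum of
    channels near the band minimum. cube[b][y][x] holds reflectance
    for band b at pixel (y, x)."""
    h, w = len(cube[0]), len(cube[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            top = sum(cube[b][y][x] for b in shoulder_bands)
            bot = sum(cube[b][y][x] for b in minimum_bands)
            out[y][x] = top / bot if bot else 0.0
    return out
```

Because a multiplicative illumination factor (slope, albedo) scales shoulder and minimum channels alike, it cancels in the ratio; values above 1 indicate an absorption feature.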
2017-04-19
This enhanced color Jupiter image, taken by the JunoCam imager on NASA's Juno spacecraft, showcases several interesting features on the apparent edge (limb) of the planet. Prior to Juno's fifth flyby over Jupiter's mysterious cloud tops, members of the public voted on which targets JunoCam should image. This picture not only captures a fascinating variety of textures in Jupiter's atmosphere but also features three specific points of interest: "String of Pearls," "Between the Pearls," and "An Interesting Band Point." Also visible is what's known as the STB Spectre, a feature in Jupiter's South Temperate Belt where multiple atmospheric conditions appear to collide. JunoCam images of Jupiter sometimes appear to have an odd shape. This is because the Juno spacecraft is so close to Jupiter that it cannot capture the entire illuminated area in one image; the sides get cut off. Juno acquired this image on March 27, 2017, at 2:12 a.m. PDT (5:12 a.m. EDT), as the spacecraft performed a close flyby of Jupiter. When the image was taken, the spacecraft was about 12,400 miles (20,000 kilometers) from the planet. This enhanced color image was created by citizen scientist Bjorn Jonsson. https://photojournal.jpl.nasa.gov/catalog/PIA21389
NASA Astrophysics Data System (ADS)
Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron
2005-04-01
Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and for the automatic identification and classification of regions exhibiting mosaicism and punctation. The success of such algorithms depends primarily on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features that yield excellent identification of the regions of interest. Distinct classification of mosaic regions from non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential of developing into an image-based screening tool for cervical cancer.
Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas
2015-06-30
We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 first-episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; that mMLDA with a classification boundary calculated as the weighted mean of the groups' discriminative scores had improved sensitivity but similar accuracy compared to the original MLDA; and that reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as signal important for classification were removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
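Of the three combined classifiers, the centroid method is the simplest to sketch: assign a subject to the class whose training-set mean is nearest in the (PCA-reduced) feature space. A toy version, assuming Euclidean distance; mMLDA and average linkage are omitted:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def centroid_classify(train_by_class, x):
    """Assign feature vector x to the class whose training centroid is
    nearest in Euclidean distance (illustrative nearest-centroid rule)."""
    best, best_d = None, float("inf")
    for label, vectors in train_by_class.items():
        d = math.dist(centroid(vectors), x)
        if d < best_d:
            best, best_d = label, d
    return best
```

The class labels and two-dimensional features below are hypothetical; in the study each vector would be a PCA-reduced image feature set.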
Computer assisted optical biopsy for colorectal polyps
NASA Astrophysics Data System (ADS)
Navarro-Avila, Fernando J.; Saint-Hill-Febles, Yadira; Renner, Janis; Klare, Peter; von Delius, Stefan; Navab, Nassir; Mateus, Diana
2017-03-01
We propose a method for computer-assisted optical biopsy for colorectal polyps, with the final goal of assisting the medical expert during colonoscopy. In particular, we target the problem of automatic classification of polyp images into two classes: adenomatous vs. non-adenomatous. Our approach is based on recent advancements in convolutional neural networks (CNNs) for image representation. In the paper, we describe and compare four different methodologies to address the binary classification task: a baseline with classical features and a Random Forest classifier, two methods based on features obtained from a pre-trained network, and finally, end-to-end training of a CNN. With the pre-trained network, we show the feasibility of transferring a feature extraction mechanism trained on millions of natural images to the task of classifying adenomatous polyps. We then demonstrate further performance improvements when training the CNN for our specific classification task. In our study, 776 polyp images were acquired and histologically analyzed after polyp resection. We report a performance increase of the CNN-based approaches with respect to both the conventional engineered features and a state-of-the-art method based on videos and 3D shape features.
Bag-of-features approach for improvement of lung tissue classification in diffuse lung disease
NASA Astrophysics Data System (ADS)
Kato, Noriji; Fukui, Motofumi; Isozaki, Takashi
2009-02-01
Many automated techniques have been proposed to classify diffuse lung disease patterns. Most of the techniques utilize texture analysis approaches with second- and higher-order statistics, and show successful classification results among various lung tissue patterns. However, these approaches do not work well for patterns with an inhomogeneous texture distribution within a region of interest (ROI), such as reticular and honeycombing patterns, because the statistics can only capture an averaged feature over the ROI. In this work, we have introduced the bag-of-features approach to overcome this difficulty. In this approach, texture images are represented as histograms or distributions of a few basic primitives, which are obtained by clustering local image features. The intensity descriptor and the Scale-Invariant Feature Transform (SIFT) descriptor are utilized to extract the local features, which have significant discriminatory power due to their specificity to a particular image class. In contrast, the drawback of the local features is their lack of invariance under translation and rotation. We improved the invariance by sampling many local regions so that the distribution of the local features is unchanged. We evaluated the performance of our system in a classification task with 5 image classes (ground glass, reticular, honeycombing, emphysema, and normal) using 1109 ROIs from 211 patients. Our system achieved a high classification accuracy of 92.8%, which is superior to that of the conventional system with the gray-level co-occurrence matrix (GLCM) feature, especially for inhomogeneous texture patterns.
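The bag-of-features representation, a normalized histogram of an ROI's local descriptors over a codebook of basic primitives, can be sketched as follows; here the codebook is assumed to be given, whereas in practice it would come from clustering descriptors pooled over training images:

```python
import math

def bag_of_features(descriptors, codebook):
    """Represent an ROI as a normalized histogram of its local
    descriptors, each assigned to the nearest codeword. Because the
    histogram ignores where in the ROI each descriptor came from, it
    tolerates inhomogeneous texture within the region."""
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda k: math.dist(codebook[k], d))
        hist[nearest] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

In the study the descriptors would be intensity or SIFT vectors; the two-dimensional toy descriptors below are purely illustrative.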
NASA Astrophysics Data System (ADS)
Lee, Youngjoo; Kim, Namkug; Seo, Joon Beom; Lee, JuneGoo; Kang, Suk Ho
2007-03-01
In this paper, we propose novel shape features to improve the classification performance of differentiating obstructive lung diseases, based on HRCT (High-Resolution Computed Tomography) images. The images were selected from HRCT images obtained from 82 subjects. For each image, two experienced radiologists selected rectangular ROIs of various sizes (16x16, 32x32, and 64x64 pixels) representing each disease or normal lung parenchyma. Besides thirteen textural features, we employed seven additional shape features: cluster shape features and top-hat transform features. To evaluate the contribution of shape features to the differentiation of obstructive lung diseases, several experiments were conducted with two different types of classifiers and various ROI sizes. For automated classification, a Bayesian classifier and a support vector machine (SVM) were implemented. To assess the performance and cross-validation of the system, 5-fold cross-validation was used. In comparison to employing only textural features, adding shape features yields a significant enhancement of overall sensitivity (5.9, 5.4, 4.4% in the Bayesian classifier and 9.0, 7.3, 5.3% in the SVM), in the order of ROI size 16x16, 32x32, 64x64 pixels, respectively (t-test, p<0.01). Moreover, this enhancement was largely due to the improvement in class-specific sensitivity for mild centrilobular emphysema and bronchiolitis obliterans, which are the hardest for radiologists to differentiate. According to these experimental results, adding shape features to conventional texture features is useful for improving the classification performance for obstructive lung diseases in both Bayesian and SVM classifiers.
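The top-hat transform mentioned among the shape features is the image minus its morphological opening (erosion followed by dilation), which isolates bright structures narrower than the structuring element. A 1-D sketch for brevity; the study applies the transform to 2-D ROIs:

```python
def erode(signal, k):
    """Sliding-window minimum with window size k (edge-clipped)."""
    r = k // 2
    return [min(signal[max(0, i - r):i + r + 1]) for i in range(len(signal))]

def dilate(signal, k):
    """Sliding-window maximum with window size k (edge-clipped)."""
    r = k // 2
    return [max(signal[max(0, i - r):i + r + 1]) for i in range(len(signal))]

def white_top_hat(signal, k=3):
    """Signal minus its morphological opening: keeps bright peaks
    narrower than the structuring element, removes broad plateaus."""
    opening = dilate(erode(signal, k), k)
    return [s - o for s, o in zip(signal, opening)]
```

A narrow spike survives the transform unchanged, while a plateau wider than the window is suppressed to zero, which is exactly the behavior that makes the transform useful as a small-structure shape descriptor.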
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2017-07-01
Mammograms acquired with full-field digital mammography (FFDM) systems are provided in both "for-processing" and "for-presentation" image formats. For-presentation images are traditionally intended for visual assessment by radiologists. In this study, we investigate the feasibility of using for-presentation images in computerized analysis and diagnosis of microcalcification (MC) lesions. We make use of a set of 188 matched mammogram image pairs of MC lesions from 95 cases (biopsy proven), in which both for-presentation and for-processing images are provided for each lesion. We then analyze and characterize the MC lesions from for-presentation images and compare them with their counterparts in for-processing images. Specifically, we consider three important aspects in computer-aided diagnosis (CAD) of MC lesions. First, we quantify each MC lesion with a set of 10 image features of clustered MCs and 12 textural features of the lesion area. Second, we assess the detectability of individual MCs in each lesion from the for-presentation images by a commonly used difference-of-Gaussians (DoG) detector. Finally, we study the diagnostic accuracy in discriminating between benign and malignant MC lesions from the for-presentation images by a pretrained support vector machine (SVM) classifier. To accommodate the underlying background suppression and image enhancement in for-presentation images, a normalization procedure is applied. The quantitative image features of MC lesions from for-presentation images are highly consistent with those from for-processing images. The values of Pearson's correlation coefficient between features from the two formats range from 0.824 to 0.961 for the 10 MC image features, and from 0.871 to 0.963 for the 12 textural features. In detection of individual MCs, the FROC curve from for-presentation is similar to that from for-processing. In particular, at a sensitivity level of 80%, the average number of false-positives (FPs) per image region is 9.55 for both for-presentation and for-processing images. Finally, for classifying MC lesions as malignant or benign, the area under the ROC curve is 0.769 in for-presentation, compared to 0.761 in for-processing (P = 0.436). The quantitative results demonstrate that MC lesions in for-presentation images are highly consistent with those in for-processing images in terms of image features, detectability of individual MCs, and classification accuracy between malignant and benign lesions. These results indicate that for-presentation images can be compatible with for-processing images for use in CAD algorithms for MC lesions. © 2017 American Association of Physicists in Medicine.
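The difference-of-Gaussians (DoG) detector referenced above is a band-pass filter, narrow blur minus wide blur, that responds most strongly to blob-like bright spots such as individual MCs. A 1-D sketch with illustrative sigmas (the study's detector parameters are not reproduced here):

```python
import math

def gauss_kernel(sigma):
    """Normalized Gaussian kernel sampled out to about 3 sigma."""
    r = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolution with edge replication at the boundaries."""
    r = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[j + r] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def dog_response(signal, s1=1.0, s2=2.0):
    """Narrow-sigma blur minus wide-sigma blur: positive peaks mark
    bright spots roughly matched to the sigma pair."""
    a = convolve(signal, gauss_kernel(s1))
    b = convolve(signal, gauss_kernel(s2))
    return [x - y for x, y in zip(a, b)]
```

Thresholding the response (not shown) yields candidate MC detections; sweeping that threshold is what traces out the FROC curve reported in the abstract.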
MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J., E-mail: bje@mayo.edu
Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of the current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run-length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78-0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding, at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
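Co-occurrence texture features such as energy and entropy, two of the four features in the best-performing model above, derive from a normalized gray-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset; real pipelines aggregate several offsets and quantization levels:

```python
import math

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one offset.
    img is a 2-D list of integer gray levels in [0, levels)."""
    p = [[0.0] * levels for _ in range(levels)]
    n = 0
    for y in range(len(img)):
        for x in range(len(img[0])):
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(img) and 0 <= xx < len(img[0]):
                p[img[y][x]][img[yy][xx]] += 1
                n += 1
    return [[v / n for v in row] for row in p]

def energy_entropy(p):
    """Two classic co-occurrence texture descriptors."""
    energy = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log2(v) for row in p for v in row if v > 0)
    return energy, entropy
```

A uniform patch concentrates all co-occurrence mass in one cell (energy 1, entropy 0), while a checkerboard spreads it, which is the intuition behind using these statistics to separate texture classes.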
The Electronic View Box: a software tool for radiation therapy treatment verification.
Bosch, W R; Low, D A; Gerber, R L; Michalski, J M; Graham, M V; Perez, C A; Harms, W B; Purdy, J A
1995-01-01
We have developed a software tool for interactively verifying treatment plan implementation. The Electronic View Box (EVB) tool copies the paradigm of current practice but does so electronically. A portal image (online portal image or digitized port film) is displayed side by side with a prescription image (digitized simulator film or digitally reconstructed radiograph). The user can measure distances between features in prescription and portal images and "write" on the display, either to approve the image or to indicate required corrective actions. The EVB tool also provides several features not available in conventional verification practice using a light box. The EVB tool has been written in ANSI C using the X Window System. The tool makes use of the Virtual Machine Platform and Foundation Library specifications of the NCI-sponsored Radiation Therapy Planning Tools Collaborative Working Group for portability into an arbitrary treatment planning system that conforms to these specifications. The present EVB tool is based on an earlier Verification Image Review tool, but with a substantial redesign of the user interface. A graphical user interface prototyping system was used in iteratively refining the tool layout to allow rapid modifications of the interface in response to user comments. Features of the EVB tool include 1) hierarchical selection of digital portal images based on physician name, patient name, and field identifier; 2) side-by-side presentation of prescription and portal images at equal magnification and orientation, and with independent grayscale controls; 3) a "trace" facility for outlining anatomical structures; 4) a "ruler" facility for measuring distances; 5) zoomed display of corresponding regions in both images; 6) image contrast enhancement; and 7) communication of portal image evaluation results (approval, block modification, repeat image acquisition, etc.).
The EVB tool facilitates the rapid comparison of prescription and portal images and permits electronic communication of corrections in port shape and positioning.
NASA Astrophysics Data System (ADS)
Litjens, G. J. S.; Elliott, R.; Shih, N.; Feldman, M.; Barentsz, J. O.; Hulsbergen-van de Kaa, C. A.; Kovacs, I.; Huisman, H. J.; Madabhushi, A.
2014-03-01
Learning how to separate benign confounders from prostate cancer is important because the imaging characteristics of these confounders are poorly understood. Furthermore, the typical representations of the MRI parameters might not be enough to allow discrimination. The diagnostic uncertainty this causes leads to a lower diagnostic accuracy. In this paper a new cascaded classifier is introduced to separate prostate cancer and benign confounders on MRI, in conjunction with specific computer-extracted features to distinguish each of the benign classes (benign prostatic hyperplasia (BPH), inflammation, atrophy, or prostatic intra-epithelial neoplasia (PIN)). In this study we tried to (1) calculate different mathematical representations of the MRI parameters which more clearly express subtle differences between different classes, (2) learn which of the MRI image features allow distinguishing specific benign confounders from prostate cancer, and (3) find the combination of computer-extracted MRI features that best discriminates cancer from the confounding classes using a cascaded classifier. One of the most important requirements for identifying MRI signatures for adenocarcinoma, BPH, atrophy, inflammation, and PIN is accurate mapping of the location and spatial extent of the confounder and cancer categories from ex vivo histopathology to MRI. Towards this end we employed an annotated prostatectomy data set of 31 patients, all of whom underwent a multi-parametric 3 Tesla MRI prior to radical prostatectomy. The prostatectomy slides were carefully co-registered to the corresponding MRI slices using an elastic registration technique. We extracted texture features from the T2-weighted imaging, pharmacokinetic features from the dynamic contrast-enhanced imaging, and diffusion features from the diffusion-weighted imaging for each of the confounder classes and prostate cancer. These features were selected because they form the mainstay of clinical diagnosis.
Relevant features for each of the classes were selected using maximum-relevance minimum-redundancy feature selection, allowing us to perform classifier-independent feature selection. The selected features were then incorporated in a cascaded classifier, which can focus on easier sub-tasks at each stage, leaving the more difficult classification tasks for later stages. Results show that distinct features are relevant for each of the benign classes; for example, the fraction of extra-vascular, extra-cellular space in a voxel is a clear discriminator for inflammation. Furthermore, the cascaded classifier outperforms both multi-class and one-shot classifiers in overall accuracy for discriminating confounders from cancer: 0.76 versus 0.71 and 0.62.
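The cascade idea, peel off one easier benign class per stage and call whatever survives every stage cancer, can be sketched as follows. The stage tests and feature names below are placeholders, not the paper's trained discriminators:

```python
def cascade_classify(x, stages):
    """Cascaded classification: each stage is a (label, test) pair whose
    test tries to claim the sample for one benign class; a sample that
    passes through every stage unclaimed is labeled cancer. Later stages
    therefore only see the harder cases the earlier stages left behind."""
    for label, test in stages:
        if test(x):
            return label
    return "cancer"

# Hypothetical per-class discriminators (thresholds and feature keys
# are illustrative; the paper learns these per stage from MRI features).
stages = [
    ("BPH", lambda x: x["t2_texture"] > 0.8),
    ("inflammation", lambda x: x["ev_space"] > 0.5),
]
```

Ordering stages from easiest to hardest sub-task is the design choice the abstract credits for the cascade outperforming multi-class and one-shot classifiers.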
Deep learning based classification of breast tumors with shear-wave elastography.
Zhang, Qi; Xiao, Yang; Dai, Wei; Suo, Jingfeng; Wang, Congzhi; Shi, Jun; Zheng, Hairong
2016-12-01
This study aims to build a deep learning (DL) architecture for automated extraction of learned-from-data image features from the shear-wave elastography (SWE), and to evaluate the DL architecture in differentiation between benign and malignant breast tumors. We construct a two-layer DL architecture for SWE feature extraction, comprised of the point-wise gated Boltzmann machine (PGBM) and the restricted Boltzmann machine (RBM). The PGBM contains task-relevant and task-irrelevant hidden units, and the task-relevant units are connected to the RBM. Experimental evaluation was performed with five-fold cross validation on a set of 227 SWE images, 135 of benign tumors and 92 of malignant tumors, from 121 patients. The features learned with our DL architecture were compared with the statistical features quantifying image intensity and texture. Results showed that the DL features achieved better classification performance with an accuracy of 93.4%, a sensitivity of 88.6%, a specificity of 97.1%, and an area under the receiver operating characteristic curve of 0.947. The DL-based method integrates feature learning with feature selection on SWE. It may be potentially used in clinical computer-aided diagnosis of breast cancer. Copyright © 2016 Elsevier B.V. All rights reserved.
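The point-wise gated Boltzmann machine used above is not available in mainstream libraries, but the general pattern of unsupervised feature learning feeding a classifier can be sketched with scikit-learn's `BernoulliRBM`. This is a plain two-layer RBM stack on synthetic stand-in data; the task-relevant gating of the paper is omitted:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic stand-in for flattened SWE image patches, scaled to [0, 1]
# as BernoulliRBM expects.
rng = np.random.default_rng(1)
X = rng.random((120, 64))
y = (X[:, :32].mean(axis=1) > 0.5).astype(int)  # toy benign/malignant label

# Two stacked RBMs learn features; a simple classifier consumes them.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X, y)

# The learned-from-data features are the output of the RBM stack.
features = model[:-1].transform(X)
```

Each RBM's `transform` returns hidden-unit activation probabilities, so the learned features lie in [0, 1] and can feed the next layer directly.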
Graña, M; Termenon, M; Savio, A; Gonzalez-Pinto, A; Echeveste, J; Pérez, J M; Besga, A
2011-09-20
The aim of this paper is to obtain discriminant features from two scalar measures of Diffusion Tensor Imaging (DTI) data, Fractional Anisotropy (FA) and Mean Diffusivity (MD), and to train and test classifiers able to discriminate Alzheimer's Disease (AD) patients from controls on the basis of features extracted from the FA or MD volumes. In this study, a support vector machine (SVM) classifier was trained and tested on FA and MD data. Feature selection is done by computing the Pearson's correlation between FA or MD values at each voxel site across subjects and the indicator variable specifying the subject class. Voxel sites with high absolute correlation are selected for feature extraction. Results are obtained over an ongoing study in Hospital de Santiago Apostol collecting anatomical T1-weighted MRI volumes and DTI data from healthy control subjects and AD patients. FA features and a linear SVM classifier achieve perfect accuracy, sensitivity, and specificity in several cross-validation studies, supporting the usefulness of DTI-derived features as an image marker for AD and the feasibility of building Computer Aided Diagnosis systems for AD based on them. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
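The voxel-selection step described above, ranking voxel sites by the absolute Pearson correlation between their FA/MD values and the class indicator, can be sketched in a few lines of numpy. The data here are synthetic:

```python
import numpy as np

def select_voxels_by_correlation(X, y, k):
    """Rank voxel features by |Pearson correlation| with the class
    indicator variable and keep the top k (sketch of the selection step).
    X: (subjects, voxels); y: (subjects,) class indicator."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    r = (Xc * yc[:, None]).sum(axis=0) / np.where(denom == 0, 1, denom)
    top = np.argsort(-np.abs(r))[:k]
    return top, r[top]

# Demo: voxel 7 carries the class signal, the rest are noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
X = rng.normal(size=(200, 50))
X[:, 7] += 3 * y
idx, r = select_voxels_by_correlation(X, y, 5)
```

The selected indices would then index into each subject's FA or MD volume to build the feature vectors passed to the SVM.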
Lee, Hansang; Hong, Helen; Kim, Junmo; Jung, Dae Chul
2018-04-01
To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) from abdominal contrast-enhanced computed tomography (CE CT) images. A dataset including 80 abdominal CT images of 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating the small renal masses (SRM) into AMLwvf and ccRCC using the combination of hand-crafted and deep features, and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from an ImageNet-pretrained deep learning model with the SRM image patches. In DF extraction, we proposed the texture image patches (TIP) to emphasize the texture information inside the mass in DFs and reduce the mass size variability. Finally, the two features were concatenated and the random forest (RF) classifier was trained on these concatenated features to classify the types of SRMs. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive values (PPV), negative predictive values (NPV), and area under receiver operating characteristics curve (AUC). In experiments, the combinations of four deep learning models, AlexNet, VGGNet, GoogleNet, and ResNet, and four input image patches, including original, masked, mass-size, and texture image patches, were compared and analyzed. In qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method.
In quantitative evaluation, we evaluated and compared the classification results, and observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet generally showed the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patches but also steady performance regardless of CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for the proposed HCF + DF with AlexNet and TIPs, which improved the accuracy by 6.6%p and 8.3%p compared to HCF-only and DF-only, respectively. The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of features for differentiating AMLwvf from ccRCC in abdominal CE CT images. © 2018 American Association of Physicists in Medicine.
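A minimal sketch of the feature-concatenation idea above: hand-crafted features joined with deep features, then a random forest evaluated by leave-one-out cross-validation. Everything here is synthetic; in the study the HCFs come from SRM contours and the DFs from a pretrained CNN:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 40
hcf = rng.normal(size=(n, 8))               # stand-in hand-crafted features
df = rng.normal(size=(n, 32))               # stand-in deep features
y = (hcf[:, 0] + df[:, 0] > 0).astype(int)  # toy AMLwvf/ccRCC label

X = np.hstack([hcf, df])                    # HCF + DF concatenation
scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, cv=LeaveOneOut())
acc = scores.mean()
```

Leave-one-out returns one 0/1 score per sample, so the mean is directly the cross-validated accuracy reported in studies of this size.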
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and thus such concatenation does not efficiently exploit the complementary properties among different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited, and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Detection of potential mosquito breeding sites based on community sourced geotagged images
NASA Astrophysics Data System (ADS)
Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna
2014-06-01
Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to make better and more informed data-driven decisions. Our work shows how geotagged images collected from the general population can be used to combat malaria and dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions like blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag-of-visual-words model. We train an SVM classifier on a histogram-based feature vector obtained after vector quantization of SIFT features to discriminate images containing a small stagnant water body like a puddle, open containers, tires, bushes, etc. from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using the feature-level fusion or max approach presented in the paper. The heat map thus generated can be used by concerned health authorities to take appropriate action and to promote civic awareness.
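The bag-of-visual-words pipeline described above (quantize local descriptors against a learned vocabulary, histogram the assignments, classify with an SVM) can be sketched as follows. SIFT extraction is replaced by synthetic 128-d descriptors drawn around class-specific prototypes; all data and parameters here are illustrative, not from the study:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histogram(descriptors, kmeans):
    """Quantize local descriptors against the visual vocabulary and
    return an L1-normalized word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1)

# Toy stand-in for SIFT: each "image" yields 20 descriptors drawn
# around one of its class's 4 prototype directions.
rng = np.random.default_rng(0)
protos = rng.normal(size=(2, 4, 128))  # 2 classes x 4 prototypes x 128-d

def make_image(cls):
    picks = rng.integers(0, 4, 20)
    return protos[cls, picks] + 0.1 * rng.normal(size=(20, 128))

images = [make_image(c) for c in (0, 1) for _ in range(15)]
labels = np.array([0] * 15 + [1] * 15)

# Build the vocabulary from all descriptors, then histogram per image.
vocab = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(images))
X = np.array([bovw_histogram(d, vocab) for d in images])
clf = SVC(kernel="linear").fit(X, labels)
train_acc = clf.score(X, labels)
```

In the real system the descriptor matrix per image would come from a SIFT detector run on the geotagged photo.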
Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui
2017-12-01
Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly required to reduce the diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant Basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are further classified using quadratic support vector machine (Q-SVM). The proposed system has achieved outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as with different extracted features. Basal Cell Carcinoma is effectively classified using Q-SVM with the proposed combined features.
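The quadratic SVM (Q-SVM) above is, in essence, an SVM with a degree-2 polynomial kernel. The synthetic sketch below shows why a quadratic boundary can separate classes that defeat a linear one (a ring around a blob in feature space); it is illustrative only, not the paper's lesion features:

```python
import numpy as np
from sklearn.svm import SVC

# Two concentric classes: inner radius ~0.5 (one class), outer ~2.0.
rng = np.random.default_rng(0)
r = np.concatenate([rng.normal(0.5, 0.1, 100), rng.normal(2.0, 0.1, 100)])
ang = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[r * np.cos(ang), r * np.sin(ang)]
y = np.array([0] * 100 + [1] * 100)

# Degree-2 polynomial kernel gives the classifier access to squared terms,
# so it can learn the circular boundary a linear kernel cannot.
q_acc = SVC(kernel="poly", degree=2, coef0=1.0).fit(X, y).score(X, y)
lin_acc = SVC(kernel="linear").fit(X, y).score(X, y)
```

The quadratic kernel implicitly expands each point into its monomials up to degree 2, which is what makes the circular decision surface linear in the expanded space.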
NASA Astrophysics Data System (ADS)
Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi
2016-10-01
The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize the material with its elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consisted of two parts: automatic feature extraction and classification. For feature extraction, an algorithm was introduced to extract discriminant features from the images and spectra of cervical cells in FE-SEM/EDX data. The system automatically extracted two types of features based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features were extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectral features were calculated based on peak heights and corrected areas under the peaks. A discriminant analysis technique was employed to predict the cervical precancerous stage into three classes: normal, low-grade intraepithelial squamous lesion (LSIL), and high-grade intraepithelial squamous lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity performances were 98.2%, 99.0%, and 98.0%, respectively.
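The gray level co-occurrence matrix technique mentioned above can be sketched in plain numpy for a single horizontal pixel offset; production implementations (e.g. scikit-image's `graycomatrix`) support multiple offsets and angles. The level count and statistics chosen here are illustrative:

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for the horizontal-neighbor offset, plus the classic
    Haralick-style contrast, energy, and homogeneity statistics."""
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int)
    q = q.clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                 # count co-occurring level pairs
    glcm /= glcm.sum()                  # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    return {
        "contrast": float(((i - j) ** 2 * glcm).sum()),
        "energy": float((glcm ** 2).sum()),
        "homogeneity": float((glcm / (1 + np.abs(i - j))).sum()),
    }

# A flat image has zero contrast; a checkerboard has high contrast.
f_flat = glcm_features(np.ones((8, 8)))
cb = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
f_cb = glcm_features(cb)
```

Such scalar statistics per image (often over several offsets) form the textural feature vector fed to the classifier.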
Vulnerability Analysis of HD Photo Image Viewer Applications
2007-09-01
...renamed to HD Photo in November of 2006, is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market. With massive efforts...associated state-of-the-art compression algorithm "specifically designed [for] all types of continuous tone photographic" images [HDPhotoFeatureSpec
A similarity-based data warehousing environment for medical images.
Teixeira, Jefferson William; Annibal, Luana Peixoto; Felipe, Joaquim Cezar; Ciferri, Ricardo Rodrigues; Ciferri, Cristina Dutra de Aguiar
2015-11-01
A core issue of the decision-making process in the medical field is to support the execution of analytical (OLAP) similarity queries over images in data warehousing environments. In this paper, we focus on this issue. We propose imageDWE, a non-conventional data warehousing environment that enables the storage of intrinsic features taken from medical images in a data warehouse and supports OLAP similarity queries over them. To achieve this goal, we introduce the concept of perceptual layer, which is an abstraction used to represent an image dataset according to a given feature descriptor in order to enable similarity search. Based on this concept, we propose the imageDW, an extended data warehouse with dimension tables specifically designed to support one or more perceptual layers. We also detail how to build an imageDW and how to load image data into it. Furthermore, we show how to process OLAP similarity queries composed of a conventional predicate and a similarity search predicate that encompasses the specification of one or more perceptual layers. Moreover, we introduce an index technique to improve the OLAP query processing over images. We carried out performance tests over a data warehouse environment that consolidated medical images from exams of several modalities. The results demonstrated the feasibility and efficiency of our proposed imageDWE to manage images and to process OLAP similarity queries. The results also demonstrated that the use of the proposed index technique yielded a substantial improvement in query processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tyagi, Neelam; Sutton, Elizabeth; Hunt, Margie; Zhang, Jing; Oh, Jung Hun; Apte, Aditya; Mechalakos, James; Wilgucki, Molly; Gelb, Emily; Mehrara, Babak; Matros, Evan; Ho, Alice
2017-02-01
Capsular contracture (CC) is a serious complication in patients receiving implant-based reconstruction for breast cancer. Currently, no objective methods are available for assessing CC. The goal of the present study was to identify image-based surrogates of CC using magnetic resonance imaging (MRI). We analyzed a retrospective data set of 50 patients who had undergone both a diagnostic MRI scan and a plastic surgeon's evaluation of the CC score (Baker's score) within a 6-month period after mastectomy and reconstructive surgery. The MRI scans were assessed for morphologic shape features of the implant and histogram features of the pectoralis muscle. The shape features, such as roundness, eccentricity, solidity, extent, and ratio length for the implant, were compared with the Baker score. For the pectoralis muscle, the muscle width and median, skewness, and kurtosis of the intensity were compared with the Baker score. Univariate analysis (UVA) using a Wilcoxon rank-sum test and multivariate analysis with the least absolute shrinkage and selection operator logistic regression was performed to determine significant differences in these features between the patient groups categorized according to their Baker's scores. UVA showed statistically significant differences between grade 1 and grade ≥2 for morphologic shape features and histogram features, except for volume and skewness. Only eccentricity, ratio length, and volume were borderline significant in differentiating grade ≤2 and grade ≥3. Features with P<.1 on UVA were used in the multivariate least absolute shrinkage and selection operator logistic regression analysis. Multivariate analysis showed a good level of predictive power for grade 1 versus grade ≥2 CC (area under the receiver operating characteristic curve 0.78, sensitivity 0.78, and specificity 0.82) and for grade ≤2 versus grade ≥3 CC (area under the receiver operating characteristic curve 0.75, sensitivity 0.75, and specificity 0.79). 
The morphologic shape features described on MR images were associated with the severity of CC. MRI has the potential to further improve the diagnostic ability of the Baker score in breast cancer patients who undergo implant reconstruction. Copyright © 2016 Elsevier Inc. All rights reserved.
Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.
Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua
2011-01-01
Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visual features. Therefore, this method is effective for the automated recognition of prostatic calculi.
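The Otsu thresholding step used above before lumen extraction picks the gray-level cut that maximizes the between-class variance of the intensity histogram. A compact numpy sketch, demonstrated on synthetic bimodal data:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the threshold maximizing between-class
    variance of the gray-level histogram."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 weight up to each cut
    mu = np.cumsum(p * centers)       # class-0 unnormalized mean
    mu_t = mu[-1]                     # global mean
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 \
        / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Two well-separated intensity populations: the threshold lands between them.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(10, 5, 500), rng.normal(200, 5, 500)])
t = otsu_threshold(x)
```

On histology images the same routine runs on the (possibly entropy-filtered) gray levels to produce the binary lumen mask.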
Segmentation of prostate biopsy needles in transrectal ultrasound images
NASA Astrophysics Data System (ADS)
Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt
2007-03-01
Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). Exact localization of the extracted tissue within the gland is desired for more specific diagnosis and provides better therapy planning. While the orientation and position of the needle within clinical TRUS images are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient-, or edge-detection-based segmentation methods fail. Therefore a multivariate statistical classifier is implemented. The independent feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features such as size and eccentricity, as well as imaging-system-dependent features like distance and orientation relative to the marker line. The object extraction is done by multi-step binarization of the region of interest. The ROI is automatically determined at the beginning of the segmentation, and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum likelihood estimation with the Mahalanobis distance as the discriminator. The technique presented here could be successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy needle localization in clinical prostate biopsy TRUS images.
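The classification step above (maximum likelihood estimation with the Mahalanobis distance as discriminator) amounts to a Gaussian classifier that assigns each candidate object to the class whose mean is closest in Mahalanobis distance. A sketch on synthetic feature vectors, not the paper's needle features:

```python
import numpy as np

class MahalanobisClassifier:
    """Per-class Gaussian ML classifier: pick the class with the
    smallest Mahalanobis distance to its estimated mean."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.inv_covs_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.means_[c] = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.inv_covs_[c] = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # Quadratic form (x - mu)^T C^{-1} (x - mu) per sample and class.
        d = np.stack([
            np.einsum("ij,jk,ik->i", X - self.means_[c],
                      self.inv_covs_[c], X - self.means_[c])
            for c in self.classes_], axis=1)
        return self.classes_[np.argmin(d, axis=1)]

# Demo on two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
acc = (MahalanobisClassifier().fit(X, y).predict(X) == y).mean()
```

Because the covariance is estimated per class, the distance is scale-invariant in each feature, which matches the scale-invariant classification claim in the abstract.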
Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L
2018-06-19
This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic, or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy but predicting the patients that will respond to chemotherapy is currently not possible.
• Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • Prediction model constructed that predicts volumetric response with 20% error suggesting that quantitative imaging holds promise to better select patients for specific treatments.
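The modelling setup described above (a linear regression fitted on 70% of the data, tested on the held-out 30%, scored by mean absolute prediction error) can be sketched with synthetic stand-ins for the CT texture features:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 157, 6
X = rng.normal(size=(n, p))                   # stand-in CT imaging features
beta = np.array([10.0, -5.0, 3.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(0, 5, n)            # % change in tumour volume (toy)

split = int(0.7 * n)                          # 70/30 train/test as in the study
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

A = np.c_[Xtr, np.ones(split)]                # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.c_[Xte, np.ones(len(Xte))] @ coef
mape = np.mean(np.abs(pred - yte))            # mean absolute prediction error
```

Since the response is a percentage change, the mean absolute error is directly in percentage points, matching how the study reports MAPE.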
Saund, Eric
2013-10-01
Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.
Abbasian Ardakani, Ali; Reiazi, Reza; Mohammadi, Afshin
2018-03-30
This study investigated the potential of a clinical decision support approach for the classification of metastatic and tumor-free cervical lymph nodes (LNs) in papillary thyroid carcinoma on the basis of radiologic and textural analysis through ultrasound (US) imaging. In this research, 170 metastatic and 170 tumor-free LNs were examined by the proposed clinical decision support method. To discover the difference between the groups, US imaging was used for the extraction of radiologic and textural features. The radiologic features in the B-mode scans included the echogenicity, margin, shape, and presence of microcalcification. To extract the textural features, a wavelet transform was applied. A support vector machine classifier was used to classify the LNs. In the training set data, a combination of radiologic and textural features represented the best performance with sensitivity, specificity, accuracy, and area under the curve (AUC) values of 97.14%, 98.57%, 97.86%, and 0.994, respectively, whereas the classification based on radiologic and textural features alone yielded lower performance, with AUCs of 0.964 and 0.922. On testing the data set, the proposed model could classify the tumor-free and metastatic LNs with an AUC of 0.952, which corresponded to sensitivity, specificity, and accuracy of 93.33%, 96.66%, and 95.00%. The clinical decision support method based on textural and radiologic features has the potential to characterize LNs via 2-dimensional US. Therefore, it can be used as a supplementary technique in daily clinical practice to improve radiologists' understanding of conventional US imaging for characterizing LNs. © 2018 by the American Institute of Ultrasound in Medicine.
Robot acting on moving bodies (RAMBO): Preliminary results
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David
1989-01-01
A robot system called RAMBO is being developed. It is equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
Action recognition in depth video from RGB perspective: A knowledge transfer manner
NASA Astrophysics Data System (ADS)
Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen
2018-03-01
Using different video modalities for human action recognition is becoming a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition in depth videos. More specifically, we take three steps to solve this problem. First, unlike a single image, a video is more complex because it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as one image, after which most image feature extraction methods can be applied to video. Second, since a video can be represented as an image, a standard CNN model can be used for training and testing on videos; in addition, the CNN model can also be used for feature extraction owing to its powerful representational ability. Third, because RGB videos and depth videos belong to two different domains, domain adaptation is applied to make the two feature domains more similar, so that features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and our method achieves more than 2% accuracy improvement using domain adaptation from RGB to depth action recognition.
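The dynamic image method referenced above collapses a whole video into a single image so that standard image CNNs can be applied. A commonly used approximation (assumed here; the paper may use a different variant) weights frame t of a T-frame clip by 2t - T - 1 and sums, so later frames contribute positively and earlier ones negatively:

```python
import numpy as np

def dynamic_image(frames):
    """Rank-pooling-style approximate dynamic image: weighted temporal
    sum with weights alpha_t = 2t - T - 1, rescaled to uint8."""
    T = len(frames)
    t = np.arange(1, T + 1, dtype=float)
    alpha = 2 * t - T - 1                      # later frames weighted up
    di = np.tensordot(alpha, np.asarray(frames, dtype=float), axes=(0, 0))
    di -= di.min()                             # normalize to [0, 255]
    if di.max() > 0:
        di /= di.max()
    return (255 * di).astype(np.uint8)

# Demo: a bright pixel sweeping left to right across 5 frames.
frames = [np.zeros((5, 5)) for _ in range(5)]
for i in range(5):
    frames[i][0, i] = 1.0
di = dynamic_image(frames)
```

Pixels activated late in the clip end up bright and pixels activated early end up dark, so the single image encodes the temporal ordering of the motion.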
A review of EO image information mining
NASA Astrophysics Data System (ADS)
Quartulli, Marco; Olaizola, Igor G.
2013-01-01
We analyze the state of the art of content-based retrieval in Earth observation image archives, focusing on complete systems showing promise for operational implementation. The different paradigms at the basis of the main system families are introduced. The approaches taken are considered, focusing in particular on the phases after primitive feature extraction. The solutions envisaged for the issues related to feature simplification and synthesis, indexing, and semantic labeling are reviewed. The methodologies for query specification and execution are evaluated. Conclusions are drawn on the state of published research in Earth observation (EO) mining.
Comprehensive Computational Pathological Image Analysis Predicts Lung Cancer Prognosis.
Luo, Xin; Zang, Xiao; Yang, Lin; Huang, Junzhou; Liang, Faming; Rodriguez-Canales, Jaime; Wistuba, Ignacio I; Gazdar, Adi; Xie, Yang; Xiao, Guanghua
2017-03-01
Pathological examination of histopathological slides is a routine clinical procedure for lung cancer diagnosis and prognosis. Although the classification of lung cancer has been updated to become more specific, only a small subset of the total morphological features is taken into consideration. The vast majority of the detailed morphological features of tumor tissues, particularly tumor cells' surrounding microenvironment, are not fully analyzed. The heterogeneity of tumor cells and close interactions between tumor cells and their microenvironments are closely related to tumor development and progression. The goal of this study is to develop morphological feature-based prediction models for the prognosis of patients with lung cancer. We developed objective and quantitative computational approaches to analyze the morphological features of pathological images for patients with NSCLC. Tissue pathological images were analyzed for 523 patients with adenocarcinoma (ADC) and 511 patients with squamous cell carcinoma (SCC) from The Cancer Genome Atlas lung cancer cohorts. The features extracted from the pathological images were used to develop statistical models that predict patients' survival outcomes in ADC and SCC, respectively. We extracted 943 morphological features from pathological images of hematoxylin and eosin-stained tissue and identified morphological features that are significantly associated with prognosis in ADC and SCC, respectively. Statistical models based on these extracted features stratified NSCLC patients into high-risk and low-risk groups. The models were developed from training sets and validated in independent testing sets: a predicted high-risk group versus a predicted low-risk group (for patients with ADC: hazard ratio = 2.34, 95% confidence interval: 1.12-4.91, p = 0.024; for patients with SCC: hazard ratio = 2.22, 95% confidence interval: 1.15-4.27, p = 0.017) after adjustment for age, sex, smoking status, and pathologic tumor stage. The results suggest that the quantitative morphological features of tumor pathological images predict prognosis in patients with lung cancer. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Blessy, S A Praylin Selva; Sulochana, C Helen
2015-01-01
Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of human brain and the presence of intensity inhomogeneities. To propose a method that effectively segments brain tumor from MR images and to evaluate the performance of unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with UOFC algorithm effectively segments brain tumor from MR images.
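The UOFC algorithm evaluated above is a specialized variant of fuzzy clustering. As an illustration of the family it belongs to, here is a minimal fuzzy c-means sketch in numpy on synthetic 1-D intensity data; the function name, parameters, and data are illustrative, not taken from the study.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X: (n, d) data, c: clusters, m: fuzzifier (>1).
    Returns (centers, U) where U[i, k] is the membership of sample i in cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # avoid division by zero
        # standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return centers, U
```

On well-separated intensity clusters the soft memberships approach hard assignments; on overlapping tumor/tissue intensities they stay graded, which is the property the segmentation pipeline above relies on.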
CT versus MR Techniques in the Detection of Cervical Artery Dissection.
Hanning, Uta; Sporns, Peter B; Schmiedel, Meilin; Ringelstein, Erich B; Heindel, Walter; Wiendl, Heinz; Niederstadt, Thomas; Dittrich, Ralf
2017-11-01
Spontaneous cervical artery dissection (sCAD) is an important etiology of juvenile stroke. The gold standard for the diagnosis of sCAD is conventional angiography. However, magnetic resonance imaging (MRI)/MR angiography (MRA) and computed tomography (CT)/CT angiography (CTA) are frequently used alternatives. New developments such as multislice CT/CTA have enabled routine acquisition of thinner sections with rapid imaging times. The goal of this study was to compare the capability of recently developed 128-slice CT/CTA with that of MRI/MRA to detect radiologic features of sCAD. Retrospective review of patients with suspected sCAD (n = 188) in a database of our stroke center (2008-2014) who underwent CT/CTA and MRI/MRA on initial clinical work-up. A control group of 26 patients was added. All images were evaluated for specific and sensitive radiological features of dissection by two experienced neuroradiologists, and imaging features were compared between the two modalities. Forty patients with 43 dissected arteries received both modalities (29 internal carotid arteries [ICAs] and 14 vertebral arteries [VAs]). All CADs were identified in CT/CTA and MRI/MRA. The features intimal flap, stenosis, and lumen irregularity appeared in both modalities. One high-grade stenosis was identified by CT/CTA that appeared occluded on MRI/MRA. Two MRI/MRA-confirmed pseudoaneurysms were missed by CT/CTA. None of the controls evidenced specific imaging signs of dissection. CT/CTA is a reliable and more readily available alternative to MRI/MRA for the diagnosis of sCAD. CT/CTA should be used to complement MRI/MRA in cases where MRI/MRA suggests occlusion. Copyright © 2017 by the American Society of Neuroimaging.
NASA Astrophysics Data System (ADS)
Huang, Lijuan; Fan, Ming; Li, Lihua; Zhang, Juan; Shao, Guoliang; Zheng, Bin
2016-03-01
Neoadjuvant chemotherapy (NACT) is being used increasingly in the management of patients with breast cancer for systemically reducing the size of primary tumor before surgery in order to improve survival. The clinical response of patients to NACT is correlated with reduced or abolished of their primary tumor, which is important for treatment in the next stage. Recently, the dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is used for evaluation of the response of patients to NACT. To measure this correlation, we extracted the dynamic features from the DCE- MRI and performed association analysis between these features and the clinical response to NACT. In this study, 59 patients are screened before NATC, of which 47 are complete or partial response, and 12 are no response. We segmented the breast areas depicted on each MR image by a computer-aided diagnosis (CAD) scheme, registered images acquired from the sequential MR image scan series, and calculated eighteen features extracted from DCE-MRI. We performed SVM with the 18 features for classification between patients of response and no response. Furthermore, 6 of the 18 features are selected to refine the classification by using Genetic Algorithm. The accuracy, sensitivity and specificity are 87%, 95.74% and 50%, respectively. The calculated area under a receiver operating characteristic (ROC) curve is 0.79+/-0.04. This study indicates that the features of DCE-MRI of breast cancer are associated with the response of NACT. Therefore, our method could be helpful for evaluation of NACT in treatment of breast cancer.
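The reported area under the ROC curve can be computed directly from classifier scores via the Mann-Whitney formulation, without tracing the curve. A minimal sketch; the scores and labels below are illustrative, not the study's data.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, int)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()    # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form is O(n_pos * n_neg), which is fine at the study's scale (59 patients); rank-based implementations scale better for large cohorts.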
Detection of hypertensive retinopathy using vessel measurements and textural features.
Agurto, Carla; Joshi, Vinayak; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
Features that indicate hypertensive retinopathy have been well described in the medical literature. This paper presents a new system to automatically classify subjects with hypertensive retinopathy (HR) using digital color fundus images. Our method consists of the following steps: 1) normalization and enhancement of the image; 2) determination of regions of interest based on automatic location of the optic disc; 3) segmentation of the retinal vasculature and measurement of vessel width and tortuosity; 4) extraction of color features; 5) classification of vessel segments as arteries or veins; 6) calculation of artery-vein ratios using the six widest (major) vessels for each category; 7) calculation of mean red intensity and saturation values for all arteries; 8) calculation of amplitude-modulation frequency-modulation (AM-FM) features for the entire image; and 9) classification of features into HR and non-HR using linear regression. This approach was tested on 74 digital color fundus photographs taken with TOPCON and CANON retinal cameras using leave-one-out cross validation. An area under the ROC curve (AUC) of 0.84 was achieved with sensitivity and specificity of 90% and 67%, respectively.
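The leave-one-out protocol used in the evaluation above can be sketched as follows. To keep the example self-contained, a nearest-centroid classifier stands in for the paper's linear-regression classifier; the function names and data are illustrative.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by the nearest class mean (a stand-in for the paper's
    linear-regression classifier)."""
    classes = np.unique(y_train)
    cents = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(cents - x, axis=1))]

def leave_one_out_accuracy(X, y):
    """Leave-one-out CV: each sample is predicted by a model trained on
    all the other samples, as in the 74-image evaluation described above."""
    n = len(y)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i          # hold out sample i
        hits += nearest_centroid_predict(X[mask], y[mask], X[i]) == y[i]
    return hits / n
```

Leave-one-out is a natural choice at this sample size (74 images): it uses almost all data for training in each fold, at the cost of n model fits.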
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
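The final line-extraction step above rests on the Hough transform. Here is a minimal accumulator-based sketch that recovers the (rho, theta) parameters of the strongest line in a binary edge image; it is not the authors' implementation, which adds object removal, line enhancement, and rectangle-based false-positive filtering around this core.

```python
import numpy as np

def hough_peak(edges, n_theta=180):
    """Minimal Hough transform: return (rho, theta) of the strongest line
    in a binary edge image (1 = edge pixel), with rho = x*cos(t) + y*sin(t)."""
    ys, xs = np.nonzero(edges)
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for t_idx, t in enumerate(thetas):
        # every edge pixel votes for one rho at this theta
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc, (rho, t_idx), 1)
    r, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t_idx]
```

The accumulator peak height equals the number of collinear edge pixels, which is why removing stars and galaxies first, as the paper does, matters: isolated bright objects otherwise scatter spurious votes.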
Molina, David; Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M
2017-01-01
Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration, including co-occurrence matrix (CM) features and run-length matrix (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Textural measures of three-dimensional brain tumor images are therefore robust neither under dynamic range nor under matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomic-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images.
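The robustness criterion above is the coefficient of variation across protocol configurations, with 10% as the cutoff. A short sketch; the entropy values are hypothetical, for illustration only.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation over mean. In the study above
    a measure counted as robust when its CV stayed under 10%."""
    v = np.asarray(values, float)
    return 100.0 * v.std(ddof=1) / v.mean()

# hypothetical entropy values of one tumor under three dynamic-range settings
entropy_runs = [4.10, 4.18, 4.05]
cv_entropy = coefficient_of_variation(entropy_runs)
```

The CV is scale-free, which is what makes it suitable for comparing features with very different numeric ranges (e.g. entropy versus run-length measures), though it is undefined for features with mean near zero.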
Progress in atherosclerotic plaque imaging
Soloperto, Giulia; Casciaro, Sergio
2012-01-01
Cardiovascular diseases are the primary cause of mortality in the industrialized world, and arterial obstruction, triggered by rupture-prone atherosclerotic plaques, leads to myocardial infarction and cerebral stroke. Vulnerable plaques do not necessarily occur with flow-limiting stenosis, thus conventional luminographic assessment of the pathology fails to identify unstable lesions. In this review we discuss the currently available imaging modalities used to investigate morphological features and biological characteristics of the atherosclerotic plaque. The different imaging modalities such as ultrasound, magnetic resonance imaging, computed tomography, nuclear imaging and their intravascular applications are illustrated, highlighting their specific diagnostic potential. Clinically available and upcoming methodologies are also reviewed along with the related challenges in their clinical translation, concerning the specific invasiveness, accuracy and cost-effectiveness of these methods. PMID:22937215
Discriminating Projections for Estimating Face Age in Wild Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokola, Ryan A; Bolme, David S; Ricanek, Karl
2014-01-01
We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.
A Novel Framework for Remote Sensing Image Scene Classification
NASA Astrophysics Data System (ADS)
Jiang, S.; Zhao, H.; Wu, W.; Tan, Q.
2018-04-01
High-resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more details of ground objects and their spatial distribution patterns than low-spatial-resolution images. Scene classification can bridge the gap between low-level features and high-level semantics, and can be applied in urban planning, target detection and other fields. This paper proposes a novel framework for HRRS image scene classification that combines a convolutional neural network (CNN) and XGBoost, using the CNN as a feature extractor and XGBoost as the classifier. The framework is evaluated on two different HRRS image datasets, the UC-Merced dataset and the NWPU-RESISC45 dataset, on which it achieved accuracies of 95.57% and 83.35%, respectively. These results show that the framework is effective for remote sensing image classification. Furthermore, we believe the framework will be practical for further HRRS scene classification, since it requires less training time.
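The two-stage idea above (deep features in, boosted ensemble out) can be sketched in a self-contained way. Since bundling a CNN and the real XGBoost library here is impractical, precomputed feature vectors and a tiny AdaBoost over decision stumps stand in for both stages; this is a named stand-in, not an XGBoost reimplementation.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Decision stump: predict +sign where X[:, feat] > thresh, else -sign."""
    return sign * np.where(X[:, feat] > thresh, 1, -1)

def adaboost_fit(X, y, n_rounds=10):
    """Tiny AdaBoost over stumps (labels in {-1, +1}): reweight samples so
    each round's stump focuses on the previous rounds' mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(n_rounds):
        best = None
        for feat in range(X.shape[1]):             # exhaustive stump search
            for thresh in np.unique(X[:, feat]):
                for sign in (1, -1):
                    err = w[stump_predict(X, feat, thresh, sign) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = min(max(err, 1e-12), 1.0 - 1e-12)    # guard the log
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(-alpha * y * stump_predict(X, feat, thresh, sign))
        w /= w.sum()
        model.append((alpha, feat, thresh, sign))
    return model

def adaboost_predict(model, X):
    score = sum(a * stump_predict(X, f, t, s) for a, f, t, s in model)
    return np.where(score > 0, 1, -1)
```

The design point the paper exploits is the split of labor: the feature extractor handles spatial structure, while the boosted ensemble only has to learn a decision rule over a fixed-length vector, which trains quickly.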
Stanciu, Stefan G; Xu, Shuoyu; Peng, Qiwen; Yan, Jie; Stanciu, George A; Welsch, Roy E; So, Peter T C; Csucs, Gabor; Yu, Hanry
2014-04-10
The accurate staging of liver fibrosis is of paramount importance to determine the state of disease progression, therapy responses, and to optimize disease treatment strategies. Non-linear optical microscopy techniques such as two-photon excitation fluorescence (TPEF) and second harmonic generation (SHG) can image the endogenous signals of tissue structures and can be used for fibrosis assessment on non-stained tissue samples. While image analysis of collagen in SHG images has been consistently addressed, the cellular and tissue information included in TPEF images, such as inflammatory and hepatic cell damage, equally important as the collagen deposition imaged by SHG, remains poorly exploited to date. We address this situation by experimenting with liver fibrosis quantification and scoring using a combined approach based on TPEF liver surface imaging of a Thioacetamide-induced rat model and a gradient-based Bag-of-Features (BoF) image classification strategy. We report the assessed performance results and discuss the influence of specific BoF parameters on the performance of the fibrosis scoring framework.
Competitive Advantage of PET/MRI
Jadvar, Hossein; Colletti, Patrick M.
2013-01-01
Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences as well as MR spectroscopy with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific, organ-specific, related to diseases of the children or in those patients undergoing repeated imaging for whom cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for use of dual modality probes. Upon clear definition of clinical utility, other important and practical issues related to business operational model, clinical workflow and reimbursement will also be resolved. PMID:23791129
Yasaka, Koichiro; Akai, Hiroyuki; Mackin, Dennis; Court, Laurence; Moros, Eduardo; Ohtomo, Kuni; Kiryu, Shigeru
2017-05-01
Quantitative computed tomography (CT) texture analyses of images with and without filtration are gaining attention as a way to capture the heterogeneity of tumors. The aim of this study was to investigate how quantitative texture parameters computed with image filtering vary among different CT scanners, using a phantom developed for radiomics studies. The phantom, consisting of 10 different cartridges with various textures, was scanned under 6 different scanning protocols using four CT scanners from four different vendors. CT texture analyses were performed on both unfiltered images and images filtered with a Laplacian of Gaussian spatial band-pass filter featuring fine, medium, and coarse textures. Forty-five regions of interest were placed for each cartridge (x) in a specific scan image set (y), and the average of the texture values (T(x,y)) was calculated. The interquartile range (IQR) of T(x,y) among the 6 scans was calculated for a specific cartridge (IQR(x)), while the IQR of T(x,y) among the 10 cartridges was calculated for a specific scan (IQR(y)); the median IQR(y) over the 6 scans served as the control IQR (IQRc). The median of the quotient IQR(x)/IQRc among the 10 cartridges was defined as the variability index (VI). The VI was relatively small for the mean in unfiltered images (0.011) and for standard deviation (0.020-0.044) and entropy (0.040-0.044) in filtered images. Skewness and kurtosis in filtered images featuring medium and coarse textures were relatively variable across different CT scanners, with VIs of 0.638-0.692 and 0.430-0.437, respectively. Quantitative CT texture parameters thus range from robust to highly variable among different scanners, and the behavior of each parameter should be taken into consideration.
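The variability index defined above can be written down directly from its definition; a sketch, assuming T is a (cartridges x scans) matrix of mean texture values.

```python
import numpy as np

def variability_index(T):
    """Variability index as defined in the study above.

    T[x, y] = mean texture value for cartridge x under scan protocol y.
    IQR(x): spread of one cartridge's value across the scans.
    IQRc:   median across scans of the spread across cartridges (control).
    VI = median over cartridges of IQR(x) / IQRc."""
    q75_x, q25_x = np.percentile(T, [75, 25], axis=1)
    iqr_x = q75_x - q25_x                      # per cartridge, across scans
    q75_y, q25_y = np.percentile(T, [75, 25], axis=0)
    iqr_c = np.median(q75_y - q25_y)           # control IQR
    return np.median(iqr_x / iqr_c)
```

Normalizing by the control IQR makes VI unitless: it measures scanner-induced spread relative to the genuine texture differences between cartridges, so VI near 0 means the parameter is scanner-robust.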
Planning to avoid trouble in the operating room: experts' formulation of the preoperative plan.
Zilbert, Nathan R; St-Martin, Laurent; Regehr, Glenn; Gallinger, Steven; Moulton, Carol-Anne
2015-01-01
The purpose of this study was to capture the preoperative plans of expert hepato-pancreato-biliary (HPB) surgeons with the goal of finding consistent aspects of the preoperative planning process. HPB surgeons were asked to think aloud when reviewing 4 preoperative computed tomography scans of patients with distal pancreatic tumors. The imaging features they identified and the planned actions they proposed were tabulated. Surgeons viewed the tabulated list of imaging features for each case and rated the relevance of each feature for their subsequent preoperative plan. Average rater intraclass correlation coefficients were calculated for each type of data collected (imaging features detected, planned actions reported, and relevance of each feature) to establish whether the surgeons were consistent with one another in their responses. Average rater intraclass correlation coefficient values greater than 0.7 were considered indicative of consistency. Division of General Surgery, University of Toronto. HPB surgeons affiliated with the University of Toronto. A total of 11 HPB surgeons thought aloud when reviewing 4 computed tomography scans. Surgeons were consistent in the imaging features they detected but inconsistent in the planned actions they reported. Of the HPB surgeons, 8 completed the assessment of feature relevance. For 3 of the 4 cases, the surgeons were consistent in rating the relevance of specific imaging features on their preoperative plans. These results suggest that HPB surgeons are consistent in some aspects of the preoperative planning process but not others. The findings further our understanding of the preoperative planning process and will guide future research on the best ways to incorporate the teaching and evaluation of preoperative planning into surgical training. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Spaceborne radar observations: A guide for Magellan radar-image analysis
NASA Technical Reports Server (NTRS)
Ford, J. P.; Blom, R. G.; Crisp, J. A.; Elachi, Charles; Farr, T. G.; Saunders, R. Stephen; Theilig, E. E.; Wall, S. D.; Yewell, S. B.
1989-01-01
Geologic analyses of spaceborne radar images of Earth are reviewed and summarized with respect to detecting, mapping, and interpreting impact craters, volcanic landforms, eolian and subsurface features, and tectonic landforms. Interpretations are illustrated mostly with Seasat synthetic aperture radar and shuttle-imaging-radar images. Analogies are drawn for the potential interpretation of radar images of Venus, with emphasis on the effects of variation in Magellan look angle with Venusian latitude. In each landform category, differences in feature perception and interpretive capability are related to variations in imaging geometry, spatial resolution, and wavelength of the imaging radar systems. Impact craters and other radially symmetrical features may show apparent bilateral symmetry parallel to the illumination vector at low look angles. The styles of eruption and the emplacement of major and minor volcanic constructs can be interpreted from morphological features observed in images. Radar responses that are governed by small-scale surface roughness may serve to distinguish flow types, but do not provide unambiguous information. Imaging of sand dunes is rigorously constrained by specific angular relations between the illumination vector and the orientation and angle of repose of the dune faces, but is independent of radar wavelength. With a single look angle, conditions that enable shallow subsurface imaging to occur do not provide the information necessary to determine whether the radar has recorded surface or subsurface features. The topographic linearity of many tectonic landforms is enhanced on images at regional and local scales, but the detection of structural detail is a strong function of illumination direction. Nontopographic tectonic lineaments may appear in response to contrasts in small-scale surface roughness or dielectric constant. The breakpoint for rough surfaces will vary by about 25 percent through the Magellan viewing geometries from low to high Venusian latitudes. Examples of anomalies and system artifacts that can affect image interpretation are described.
Imaging specific cellular glycan structures using glycosyltransferases via click chemistry.
Wu, Zhengliang L; Person, Anthony D; Anderson, Matthew; Burroughs, Barbara; Tatge, Timothy; Khatri, Kshitij; Zou, Yonglong; Wang, Lianchun; Geders, Todd; Zaia, Joseph; Sackstein, Robert
2018-02-01
Heparan sulfate (HS) is a polysaccharide fundamentally important for biological activities. T/Tn antigens are universal carbohydrate cancer markers. Here, we report the specific imaging of these carbohydrates using a mesenchymal stem cell line and human umbilical vein endothelial cells (HUVEC). The staining specificities were demonstrated by comparing imaging of different glycans and validated by either removal of target glycans, which results in loss of signal, or installation of target glycans, which results in gain of signal. As controls, representative key glycans including O-GlcNAc, lactosaminyl glycans and hyaluronan were also imaged. HS staining revealed novel architectural features of the extracellular matrix (ECM) of HUVECs. Results from T/Tn antigen staining suggest that O-GalNAcylation is a rate-limiting step for O-glycan synthesis. Overall, these highly specific approaches for HS and T/Tn antigen imaging should greatly facilitate the detection and functional characterization of these biologically important glycans. © The Author(s) 2017. Published by Oxford University Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-11-06
ImagingSIMS is an open source application for loading, processing, manipulating and visualizing secondary ion mass spectrometry (SIMS) data. At PNNL, a separate branch has been further developed to incorporate application-specific features for dynamic SIMS data sets. These include loading CAMECA IMS-1280, NanoSIMS and modified IMS-4f raw data, creating isotopic ratio images and stitching together images from adjacent interrogation regions. In addition to other modifications of the parent open source version, this version is equipped with a point-by-point image registration tool to assist with streamlining the image fusion process.
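The registration step that precedes tile stitching is not specified in detail above. One standard, minimal way to register adjacent tiles is phase correlation, sketched here under the assumption of a pure integer translation; this is a generic stand-in, not ImagingSIMS's actual registration tool.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation (dy, dx) such that
    b ~ np.roll(a, (dy, dx), axis=(0, 1)), via the peak of the
    normalized cross-power spectrum of the two images."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.maximum(np.abs(F), 1e-12)              # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                                # map wrap-around to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Because only the spectral phase is kept, the estimate is insensitive to global intensity differences between tiles, which is useful when adjacent SIMS interrogation regions were acquired under drifting conditions.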
Guo, Shengwen; Lai, Chunren; Wu, Congling; Cen, Guiyin
2017-01-01
Neuroimaging measurements derived from magnetic resonance imaging provide important information required for detecting changes related to the progression of mild cognitive impairment (MCI). Cortical features and changes play a crucial role in revealing unique anatomical patterns of brain regions, and further differentiate MCI patients from normal states. Four cortical features, namely, gray matter volume, cortical thickness, surface area, and mean curvature, were explored for discriminative analysis among three groups including the stable MCI (sMCI), the converted MCI (cMCI), and the normal control (NC) groups. In this study, 158 subjects (72 NC, 46 sMCI, and 40 cMCI) were selected from the Alzheimer's Disease Neuroimaging Initiative. A sparse-constrained regression model based on the l2,1-norm was introduced to reduce the feature dimensionality and retrieve essential features for the discrimination of the three groups by using a support vector machine (SVM). An optimized strategy of feature addition based on the weight of each feature was adopted for the SVM classifier in order to achieve the best classification performance. The baseline cortical features combined with the longitudinal measurements for 2 years of follow-up data yielded prominent classification results. In particular, the cortical thickness produced a classification with 98.84% accuracy, 97.5% sensitivity, and 100% specificity for the sMCI-cMCI comparison; 92.37% accuracy, 84.78% sensitivity, and 97.22% specificity for the cMCI-NC comparison; and 93.75% accuracy, 92.5% sensitivity, and 94.44% specificity for the sMCI-NC comparison. The best performances obtained by the SVM classifier using the essential features were 5-40% higher than those using all of the retained features. The feasibility of the cortical features for the recognition of anatomical patterns was demonstrated; thus, the proposed method has the potential to improve the clinical diagnosis of sub-types of MCI and predict the risk of its conversion to Alzheimer's disease.
Accuracy of ultrasonography and magnetic resonance imaging in the diagnosis of placenta accreta.
Riteau, Anne-Sophie; Tassin, Mikael; Chambon, Guillemette; Le Vaillant, Claudine; de Laveaucoupet, Jocelyne; Quéré, Marie-Pierre; Joubert, Madeleine; Prevot, Sophie; Philippe, Henri-Jean; Benachi, Alexandra
2014-01-01
To evaluate the accuracy of ultrasonography and magnetic resonance imaging (MRI) in the diagnosis of placenta accreta and to define the most relevant specific ultrasound and MRI features that may predict placental invasion. This study was approved by the institutional review board of the French College of Obstetricians and Gynecologists. We retrospectively reviewed the medical records of all patients referred for suspected placenta accreta to two university hospitals from 01/2001 to 05/2012. Our study population included 42 pregnant women who had been investigated by both ultrasonography and MRI. Ultrasound images and MRI were blindly reassessed for each case by 2 raters in order to score features that predict abnormal placental invasion. Sensitivity in the diagnosis of placenta accreta was 100% with ultrasound and 76.9% for MRI (P = 0.03). Specificity was 37.5% with ultrasonography and 50% for MRI (P = 0.6). The features of greatest sensitivity on ultrasonography were intraplacental lacunae and loss of the normal retroplacental clear space. Increased vascularization in the uterine serosa-bladder wall interface and vascularization perpendicular to the uterine wall had the best positive predictive value (92%). At MRI, uterine bulging had the best positive predictive value (85%) and its combination with the presence of dark intraplacental bands on T2-weighted images improved the predictive value to 90%. Ultrasound imaging is the mainstay of screening for placenta accreta. MRI appears to be complementary to ultrasonography, especially when there are few ultrasound signs.
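The sensitivity, specificity, and predictive values reported above all derive from the 2x2 confusion counts. A small sketch; the counts below are hypothetical, chosen only so the outputs match the ultrasound figures quoted in the abstract, and are not taken from the paper.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from the
    2x2 confusion counts (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# hypothetical counts that reproduce the quoted ultrasound performance
sens, spec, ppv = diagnostic_metrics(tp=26, fp=10, fn=0, tn=6)
```

Note the asymmetry this exposes: a screening test tuned for 100% sensitivity (no missed accreta) tolerates low specificity, which is exactly why the abstract positions MRI as the complementary, more specific modality.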
Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations
NASA Astrophysics Data System (ADS)
Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian
2018-04-01
Sparsity-regularized synthetic aperture radar (SAR) imaging frameworks have shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer is involved by exploiting the sparsity priors of some visual features in the underlying image. However, since the simple priors of low-level features are insufficient to describe the different semantic contents in the image, this type of regularizer is incapable of distinguishing between the target of interest and unconcerned background clutter. As a consequence, the features belonging to the target and the clutter are simultaneously affected in the generated image without regard to their underlying semantic labels. To address this problem, we propose a novel semantic-information-guided framework for target-oriented SAR image formation, which aims at enhancing the target scatterers of interest while suppressing the background clutter. First, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target-scene SAR image. In order to infer the semantic label for each pixel in an unsupervised way, we moreover introduce a novel high-level prior-driven regularizer and some semantic causal rules from prior knowledge. Finally, our regularized framework for image formation is derived as a simple iteratively reweighted $\ell_1$ minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of applying machine learning strategies to image formation, which can benefit subsequent decision-making tasks.
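The iteratively reweighted $\ell_1$ scheme mentioned above can be sketched in a much-simplified setting; the following toy denoiser (an illustrative assumption, not the authors' SAR formulation) alternates weight updates with coordinate-wise soft-thresholding:

```python
def reweighted_l1_denoise(y, lam=0.5, eps=1e-3, iters=5):
    """Toy iteratively reweighted l1: minimize, per coordinate,
    0.5 * (x - y)^2 + lam * w * |x| with weights w = 1 / (|x| + eps)
    refreshed each pass.  Each subproblem has the closed-form
    soft-threshold solution shrink(y, lam * w)."""
    x = list(y)
    for _ in range(iters):
        w = [1.0 / (abs(xi) + eps) for xi in x]
        x = [max(abs(yi) - lam * wi, 0.0) * (1.0 if yi >= 0 else -1.0)
             for yi, wi in zip(y, w)]
    return x
```

Small coefficients are driven to zero while large ones are barely penalized, which mirrors the clutter-versus-target intuition of the regularizer.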
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the potential of OCT for detecting such changes using a computer texture analysis method based on Haralick texture features, fractal dimension and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to increase the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Given that malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, which suggests the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is addressed in this work owing to the large quantity of experimental data (143 OCT images including tumors such as basal cell carcinoma (BCC), malignant melanoma (MM) and nevi). We achieved a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
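The box-counting estimate of fractal dimension used above has a compact generic form; a minimal sketch for a binary mask (illustrative only, not the authors' code):

```python
import math

def box_count(mask, size):
    """Count size x size boxes containing at least one foreground pixel."""
    n_rows, n_cols = len(mask), len(mask[0])
    count = 0
    for r in range(0, n_rows, size):
        for c in range(0, n_cols, size):
            if any(mask[i][j]
                   for i in range(r, min(r + size, n_rows))
                   for j in range(c, min(c + size, n_cols))):
                count += 1
    return count

def fractal_dimension(mask, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) against log(1 / s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(mask, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Sanity checks: a filled square yields dimension 2, a single pixel-wide line yields dimension 1.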
Bessas, D.; Winkler, M.; Sergueev, I.; ...
2015-09-03
We investigate the crystallinity and the lattice dynamics in elementally modulated Sb2Te3 films microscopically using high-energy synchrotron radiation diffraction combined with 121Sb nuclear inelastic scattering. The correlation length is found to be finite but less than 100. Moreover, the element-specific density of phonon states is extracted. A comparison with the element-specific density of phonon states in bulk Sb2Te3 confirms that the main features in the density of phonon states arise from the layered structure. The average speed of sound is almost the same as in bulk Sb2Te3, and the change in the acoustic cut-off energy is within the experimental detection limit. Therefore, we suggest that the lattice thermal conductivity in elementally modulated Sb2Te3 films should not be significantly changed from its bulk value.
Thermography based diagnosis of ruptured anterior cruciate ligament (ACL) in canines
NASA Astrophysics Data System (ADS)
Lama, Norsang; Umbaugh, Scott E.; Mishra, Deependra; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
Anterior cruciate ligament (ACL) rupture in canines is a common orthopedic injury in veterinary medicine. Veterinarians use both imaging and non-imaging methods to diagnose the disease. Common imaging methods such as radiography, computed tomography (CT scan) and magnetic resonance imaging (MRI) have some disadvantages: expensive setup, high dose of radiation, and time-consuming. In this paper, we present an alternative diagnostic method based on feature extraction and pattern classification (FEPC) to diagnose abnormal patterns in ACL thermograms. The proposed method was experimented with a total of 30 thermograms for each camera view (anterior, lateral and posterior) including 14 disease and 16 non-disease cases provided from Long Island Veterinary Specialists. The normal and abnormal patterns in thermograms are analyzed in two steps: feature extraction and pattern classification. Texture features based on gray level co-occurrence matrices (GLCM), histogram features and spectral features are extracted from the color normalized thermograms and the computed feature vectors are applied to Nearest Neighbor (NN) classifier, K-Nearest Neighbor (KNN) classifier and Support Vector Machine (SVM) classifier with leave-one-out validation method. The algorithm gives the best classification success rate of 86.67% with a sensitivity of 85.71% and a specificity of 87.5% in ACL rupture detection using NN classifier for the lateral view and Norm-RGB-Lum color normalization method. Our results show that the proposed method has the potential to detect ACL rupture in canines.
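Leave-one-out validation with a nearest-neighbour classifier, as used above, can be sketched generically (an illustration, not the study's code):

```python
def loo_nearest_neighbor(X, y):
    """Leave-one-out 1-NN: classify each feature vector by the label of
    its closest other sample (squared Euclidean distance); return the
    fraction classified correctly."""
    correct = 0
    for i, xi in enumerate(X):
        best_j, best_d = None, float("inf")
        for j, xj in enumerate(X):
            if i == j:
                continue  # leave the test sample out of the reference set
            d = sum((a - b) ** 2 for a, b in zip(xi, xj))
            if d < best_d:
                best_d, best_j = d, j
        correct += (y[best_j] == y[i])
    return correct / len(X)
```

Two well-separated clusters give an accuracy of 1.0; overlapping feature distributions drive it toward chance level.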
A Fine-Scale Functional Logic to Convergence from Retina to Thalamus.
Liang, Liang; Fratzl, Alex; Goldey, Glenn; Ramesh, Rohan N; Sugden, Arthur U; Morgan, Josh L; Chen, Chinfei; Andermann, Mark L
2018-05-31
Numerous well-defined classes of retinal ganglion cells innervate the thalamus to guide image-forming vision, yet the rules governing their convergence and divergence remain unknown. Using two-photon calcium imaging in awake mouse thalamus, we observed a functional arrangement of retinal ganglion cell axonal boutons in which coarse-scale retinotopic ordering gives way to fine-scale organization based on shared preferences for other visual features. Specifically, at the ∼6 μm scale, clusters of boutons from different axons often showed similar preferences for either one or multiple features, including axis and direction of motion, spatial frequency, and changes in luminance. Conversely, individual axons could "de-multiplex" information channels by participating in multiple, functionally distinct bouton clusters. Finally, ultrastructural analyses demonstrated that retinal axonal boutons in a local cluster often target the same dendritic domain. These data suggest that functionally specific convergence and divergence of retinal axons may impart diverse, robust, and often novel feature selectivity to visual thalamus. Copyright © 2018 Elsevier Inc. All rights reserved.
The value of specific MRI features in the evaluation of suspected placental invasion.
Lax, Allison; Prince, Martin R; Mennitt, Kevin W; Schwebach, J Reid; Budorick, Nancy E
2007-01-01
The objective of this study was to determine imaging features that may help predict the presence of placenta accreta, placenta increta or placenta percreta on prenatal MRI scanning. A retrospective review of the prenatal MR scans of 10 patients with a diagnosis of placenta accreta, placenta increta or placenta percreta made by pathologic and clinical reports and of 10 patients without placental invasion was performed. Two expert MRI readers were blinded to the patients' true diagnosis and were asked to score a total of 17 MRI features of the placenta and adjacent structures. The interrater reliability was assessed using kappa statistics. The features with a moderate kappa statistic or better (kappa > .40) were then compared with the true diagnosis for each observer. Seven of the scored features had an interobserver reliability of kappa > .40: placenta previa (kappa = .83); abnormal uterine bulging (kappa = .48); intraplacental hemorrhage (kappa = .51); heterogeneity of signal intensity on T2-weighted (T2W) imaging (kappa = .61); the presence of dark intraplacental bands on T2W imaging (kappa = .53); increased placental thickness (kappa = .69); and visualization of the myometrium beneath the placenta on T2W imaging (kappa = .44). Using Fisher's two-sided exact test, there was a statistically significant difference between the proportion of patients with placental invasion and those without placental invasion for three of the features: abnormal uterine bulging (Rater 1, P = .005; Rater 2, P = .011); heterogeneity of T2W imaging signal intensity (Rater 1, P = .006; Rater 2, P = .010); and presence of dark intraplacental bands on T2W imaging (Rater 1, P = .003; Rater 2, P = .033). MRI can be a useful adjunct to ultrasound in diagnosing placenta accreta prenatally. 
Three features that are seen on MRI in patients with placental invasion appear to be useful for diagnosis: uterine bulging; heterogeneous signal intensity within the placenta; and the presence of dark intraplacental bands on T2W imaging.
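The interrater reliabilities reported above follow the standard Cohen's kappa definition (observed agreement corrected for chance agreement); a minimal sketch for two raters' labels:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is agreement expected by chance from each
    rater's marginal label frequencies."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0, which is why the study treats kappa > .40 as moderate reliability.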
Content-based image retrieval by matching hierarchical attributed region adjacency graphs
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.
2004-05-01
Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available retrieval systems of general purpose, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context, or that they are only suitable for specific applications. For a general-purpose content-based comparison of medical images, local, i.e. regional, features that are collected on multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible, while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material built from simple geometric objects, all visually similar regions were matched accordingly, showing the framework's general applicability to content-based image retrieval of medical images.
Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery
NASA Astrophysics Data System (ADS)
Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.
2018-05-01
The developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high-resolution (e.g., less than a 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused to obtain an accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross validation, obtaining an overall accuracy of 0.945 and a Kappa value of 0.912. Finally, the relationship between detected weeds and their ground-truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. In addition, the importance of input features was evaluated, and it was found that the ratio of vegetation length and width was the most significant feature for the classification model. Overall, our approach can yield a satisfactory weed map, and we expect that an accurate and timely weed map obtained from UAV imagery will make it possible to realize site-specific weed management (SSWM) in early season crop fields, reducing the spraying of non-selective herbicides and costs.
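The abstract does not state the exact rule used to fuse the HT and OBIA masks; a pixel-wise logical OR (the union of the two detections) is one plausible sketch, shown here purely as an assumption:

```python
def fuse_masks(mask_a, mask_b):
    """Fuse two binary masks of equal shape by pixel-wise logical OR,
    i.e. a pixel is weed if either detector flags it."""
    return [[int(a or b) for a, b in zip(ra, rb)]
            for ra, rb in zip(mask_a, mask_b)]
```

An AND fusion (intersection) would instead trade recall for precision; which rule is appropriate depends on the relative error modes of the two detectors.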
Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A
2017-03-01
Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers have not been able to effectively improve the accuracy rate or meet the high requirements of biomedical image classification. The same holds true for artificial neural network models trained directly on limited biomedical images, or used as a black box to extract deep features based on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network for building a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, search for an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waiting times for training a perfect deep model, which are the main obstacles to training deep neural networks for biomedical image classification observed in recent works. With the utilization of a simple data augmentation method and a fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches.
We propose a robust automated end-to-end classifier for biomedical images based on a domain transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM
Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua
2011-01-01
Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and Otsu thresholding, and recognition was performed with PCA-SVM using the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364
Dictionary learning-based CT detection of pulmonary nodules
NASA Astrophysics Data System (ADS)
Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong
2016-10-01
Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. In terms of the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. The support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the lung image database consortium and image database resource initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.
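Classification by competing dictionaries can be sketched in a drastically simplified form: score a feature vector by its reconstruction residual under each class's atoms and pick the class that reconstructs it best. The single unit-norm atoms below are an illustrative assumption, not the learned dictionaries of the paper:

```python
def project_residual(signal, atom):
    """Squared residual after projecting signal onto one unit-norm atom."""
    coef = sum(s * a for s, a in zip(signal, atom))
    return sum((s - coef * a) ** 2 for s, a in zip(signal, atom))

def classify_by_dictionary(signal, dictionaries):
    """Assign the class label whose dictionary (set of unit-norm atoms)
    yields the smallest single-atom reconstruction residual."""
    residuals = {label: min(project_residual(signal, atom) for atom in atoms)
                 for label, atoms in dictionaries.items()}
    return min(residuals, key=residuals.get)
```

In the paper the sparse coefficients themselves feed an SVM; residual-based assignment is the simpler, classical variant of the same idea.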
Automated feature extraction in color retinal images by a model based approach.
Li, Huiqi; Chutatape, Opas
2004-02-01
Color retinal photography is an important tool to detect the evidence of various eye diseases. Novel methods to extract the main features in color retinal images have been developed in this paper. Principal component analysis is employed to locate the optic disk; a modified active shape model is proposed for the shape detection of the optic disk; a fundus coordinate system is established to provide a better description of the features in the retinal images; and an approach to detect exudates by combined region growing and edge detection is proposed. The success rates of disk localization, disk boundary detection, and fovea localization are 99%, 94%, and 100%, respectively. The sensitivity and specificity of exudate detection are 100% and 71%, correspondingly. The success of the proposed algorithms can be attributed to the utilization of model-based methods. The detection and analysis could be applied to automatic mass screening and diagnosis of retinal diseases.
Special object extraction from medieval books using superpixels and bag-of-features
NASA Astrophysics Data System (ADS)
Yang, Ying; Rushmeier, Holly
2017-01-01
We propose a method to extract special objects in images of medieval books, which generally represent, for example, figures and capital letters. Instead of working on the single-pixel level, we consider superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects by using a bag-of-features approach, where a superpixel category classifier is trained with the local features of the superpixels of the training images. With the trained classifier, we are able to assign the category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted after analyzing the categorization results. Experimental results demonstrate that, as compared to the state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.
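The bag-of-features step above amounts to quantizing local descriptors against a codebook and histogramming the assignments; a minimal sketch (the codebook is assumed already learned, e.g. by k-means):

```python
def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword (squared
    Euclidean distance) and return the L1-normalized histogram of
    codeword counts, i.e. the bag-of-features vector."""
    hist = [0] * len(codebook)
    for d in descriptors:
        dists = [sum((a - b) ** 2 for a, b in zip(d, c)) for c in codebook]
        hist[dists.index(min(dists))] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

In the paper such histograms, computed per superpixel, are what the category classifier is trained on.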
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
Morris, Jeffrey S
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high-dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies, and summarizes contributions that I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis.
While the specific methods presented are applied to two specific proteomic technologies, MALDI-TOF and 2D gel electrophoresis, these methods and the other principles discussed in the paper apply much more broadly to other expression proteomics technologies.
Using deep learning for detecting gender in adult chest radiographs
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2018-03-01
In this paper, we present a method for automatically identifying the gender of an imaged person using their frontal chest x-ray images. Our work is motivated by the need to determine missing gender information in some datasets. The proposed method employs the technique of convolutional neural network (CNN) based deep learning and transfer learning to overcome the challenge of developing handcrafted features in limited data. Specifically, the method consists of four main steps: pre-processing, CNN feature extractor, feature selection, and classifier. The method is tested on a combined dataset obtained from several sources with varying acquisition quality resulting in different pre-processing steps that are applied for each. For feature extraction, we tested and compared four CNN architectures, viz., AlexNet, VggNet, GoogLeNet, and ResNet. We applied a feature selection technique, since the feature length is larger than the number of images. Two popular classifiers: SVM and Random Forest, are used and compared. We evaluated the classification performance by cross-validation and used seven performance measures. The best performer is the VggNet-16 feature extractor with the SVM classifier, with accuracy of 86.6% and ROC Area being 0.932 for 5-fold cross validation. We also discuss several misclassified cases and describe future work for performance improvement.
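The abstract applies a feature selection technique without naming it; one simple option when features outnumber images is to rank feature columns by variance and keep the top k, sketched here purely as an illustration:

```python
def top_k_by_variance(features, k):
    """Rank the columns of a samples-by-features matrix (list of rows)
    by population variance and return the sorted indices of the top k.
    Constant (zero-variance) columns are discarded first."""
    n = len(features)
    idx_var = []
    for j in range(len(features[0])):
        col = [row[j] for row in features]
        m = sum(col) / n
        idx_var.append((sum((v - m) ** 2 for v in col) / n, j))
    idx_var.sort(reverse=True)
    return sorted(j for _, j in idx_var[:k])
```

Variance thresholding is unsupervised; supervised alternatives (e.g. ranking by class-separation statistics) use the labels and often perform better, at the cost of an extra validation split.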
NASA Astrophysics Data System (ADS)
van de Moortele, Tristan; Nemes, Andras; Wendt, Christine; Coletti, Filippo
2016-11-01
The morphological features of the airway tree directly affect the air flow features during breathing, which determines the gas exchange and inhaled particle transport. Lung disease, Chronic Obstructive Pulmonary Disease (COPD) in this study, affects the structural features of the lungs, which in turn negatively affects the air flow through the airways. Here bronchial tree air volume geometries are segmented from Computed Tomography (CT) scans of healthy and diseased subjects. Geometrical analysis of the airway centerlines and corresponding cross-sectional areas provide insight into the specific effects of COPD on the airway structure. These geometries are also used to 3D print anatomically accurate, patient specific flow models. Three-component, three-dimensional velocity fields within these models are acquired using Magnetic Resonance Imaging (MRI). The three-dimensional flow fields provide insight into the change in flow patterns and features. Additionally, particle trajectories are determined using the velocity fields, to identify the fate of therapeutic and harmful inhaled aerosols. Correlation between disease-specific and patient-specific anatomical features with dysfunctional airflow patterns can be achieved by combining geometrical and flow analysis.
Feature selection from hyperspectral imaging for guava fruit defects detection
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Tan, Sou Ching
2017-06-01
Advances in technology have made hyperspectral imaging commonly used for defect detection. In this research, a hyperspectral imaging system was set up in the lab to target guava fruit defect detection. Guava fruit was selected because, to our knowledge, fewer attempts have been made at guava defect detection based on hyperspectral imaging. A common fluorescent light source was used to represent uncontrolled lighting conditions in the lab, and analysis was carried out in a specific wavelength range due to the inefficiency of this particular light source. Based on the data, the reflectance intensity of this specific setup could be categorized into two groups. Sequential feature selection with linear discriminant (LD) and quadratic discriminant (QD) functions was used to select features that could potentially be used in defect detection. Besides the ordinary training method, the training dataset for the discriminant was separated into two parts to cater for the uncontrolled lighting condition, based on the brighter and dimmer areas. Four evaluation settings were compared: LD with the common training method, QD with the common training method, LD with the two-part training method, and QD with the two-part training method. These were evaluated using the F1-score on a total of 48 defect areas. Experiments showed that the F1-score of the linear discriminant with the compensated method reached 0.8, the highest score among all.
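The F1-score used for evaluation above is the harmonic mean of precision and recall; a minimal sketch from confusion-matrix counts:

```python
def f1_score(tp, fp, fn):
    """F1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For instance, hypothetical counts of 40 true positives, 10 false positives and 10 false negatives give an F1 of 0.8, the level reported for the best setting above.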
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
The diffusion and perfusion magnetic resonance (MR) images can provide functional information about tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with high possibility belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added in final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that, the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); the mean sensitivity and specificity of auto-segmentation was 0.87 (±0.04) and 0.98 (±0.01) respectively. High accuracy and efficiency can be achieved with the new method, which shows potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
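The Dice's similarity coefficient (DSC) reported above compares the auto-segmented GTV with the manual delineation; a minimal sketch for flattened binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))
```

DSC ranges from 0 (no overlap) to 1 (identical masks), so the mean of 0.88 quoted above indicates substantial agreement between the auto-segmentation and the oncologist's contours.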
Silvoniemi, Antti; Din, Mueez U; Suilamo, Sami; Shepherd, Tony; Minn, Heikki
2016-11-01
Delineation of gross tumour volume in 3D is a critical step in the radiotherapy (RT) treatment planning for oropharyngeal cancer (OPC). Static [18F]-FDG PET/CT imaging has been suggested as a method to improve the reproducibility of tumour delineation, but it suffers from low specificity. We undertook this pilot study in which dynamic features in time-activity curves (TACs) of [18F]-FDG PET/CT images were applied to help the discrimination of tumour from inflammation and adjacent normal tissue. Five patients with OPC underwent dynamic [18F]-FDG PET/CT imaging in treatment position. Voxel-by-voxel analysis was performed to evaluate seven dynamic features developed with the knowledge of differences in glucose metabolism in different tissue types and visual inspection of TACs. The Gaussian mixture model and K-means algorithms were used to evaluate the performance of the dynamic features in discriminating tumour voxels compared to the performance of standardized uptake values obtained from static imaging. Some dynamic features showed a trend towards discrimination of different metabolic areas, but a lack of consistency means that clinical application is not recommended based on these results alone. The impact of inflammatory tissue remains a problem for volume delineation in RT of OPC, but a simple dynamic imaging protocol proved practicable and enabled simple data analysis techniques that show promise for complementing the information in static uptake values.
Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P
2014-01-01
Computer vision-based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex, and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline comprising image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open-source software. Source code for the data analysis is written in R. The equations to calculate the image descriptors have also been provided.
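The statistical core of such a pipeline, extracting the leading principal component of shape features, can be sketched with a plain power iteration. The feature rows below are hypothetical; the paper's analysis is in R and covers many more descriptors.

```python
# Minimal PCA sketch: covariance matrix plus power iteration for the
# leading component. Toy 2-feature data; real rosette descriptors differ.

def covariance_matrix(rows):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / (n - 1)
             for j in range(d)] for i in range(d)]

def leading_component(cov, iters=100):
    """Power iteration: repeatedly apply cov and renormalize."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                 for i in range(d))
    return v, eigval
```

For perfectly correlated features the first component absorbs all the variance, the extreme case of the paper's finding that a handful of components suffice.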
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
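The surveyed pipeline (feature representation, then indexing, then searching) can be illustrated with a minimal two-stage retrieval sketch: a coarse hash bucket prunes candidates, then exact distances re-rank them. The one-bit-per-dimension sign hash and the anchors are illustrative choices, not a specific method from the review.

```python
# Two-stage retrieval sketch: hash index for candidate pruning,
# exact squared distance for re-ranking. All parameters are toy choices.

def sign_hash(vec, anchors):
    """One bit per dimension: is the feature above that anchor's threshold?"""
    return tuple(int(v > a) for v, a in zip(vec, anchors))

def build_index(features, anchors):
    index = {}
    for i, f in enumerate(features):
        index.setdefault(sign_hash(f, anchors), []).append(i)
    return index

def query(index, features, q, anchors):
    """Re-rank only the bucket that matches the query's hash code."""
    candidates = index.get(sign_hash(q, anchors), [])
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return sorted(candidates, key=lambda i: dist(features[i], q))
```

The point of the coarse stage is that the expensive distance computation touches only one bucket, which is what makes the pipeline scale.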
[Use of blue and green systems of image visualization in roentgenology].
Riuduger, Iu G
2004-01-01
The main features of the two image visualization systems, which arise from the specific properties of the intensifying screens and radiographic films used in each, are discussed. The kinetic development behaviour of modern orthochromatic general-purpose radiographic films was studied against that of traditional films, and differences related to the radiation hardness of some intensifying screens manufactured in Russia were investigated. Practical recommendations based on this analysis of the "green" system are offered, including reorienting X-ray examination rooms in Russia toward gadolinium screens and modern radiographic films.
Li, Jiansen; Song, Ying; Zhu, Zhen; Zhao, Jun
2017-05-01
The dual-dictionary learning (Dual-DL) method utilizes both a low-resolution dictionary and a high-resolution dictionary, co-trained for sparse coding and image updating, respectively. It can effectively exploit a priori knowledge of the typical structures, specific features, and local details of the training-set images. This prior knowledge helps to greatly improve reconstruction quality, and the method has been successfully applied in magnetic resonance (MR) image reconstruction. However, it relies heavily on the training sets, and its dictionaries are fixed and nonadaptive. In this research, we improve Dual-DL by using self-adaptive dictionaries. The low- and high-resolution dictionaries are updated along with the image updating stage to ensure their self-adaptivity. The updated dictionaries directly incorporate prior information from both the training sets and the test image, and both feature improved adaptability. Experimental results demonstrate that the proposed method can efficiently and significantly improve the quality and robustness of MR image reconstruction.
Acharya, U Rajendra; Sree, S Vinitha; Krishnan, M Muthu Rama; Molinari, Filippo; Zieleźnik, Witold; Bardales, Ricardo H; Witkowska, Agnieszka; Suri, Jasjit S
2014-02-01
Computer-aided diagnostic (CAD) techniques aid physicians in better diagnosis of diseases by extracting objective and accurate diagnostic information from medical data. Hashimoto thyroiditis is the most common type of inflammation of the thyroid gland. The inflammation changes the structure of the thyroid tissue, and these changes are reflected as echogenic changes on ultrasound images. In this work, we propose a novel CAD system (a class of systems called ThyroScan) that extracts textural features from a thyroid sonogram and uses them to aid in the detection of Hashimoto thyroiditis. In this paradigm, we extracted grayscale features based on stationary wavelet transform from 232 normal and 294 Hashimoto thyroiditis-affected thyroid ultrasound images obtained from a Polish population. Significant features were selected using a Student t test. The resulting feature vectors were used to build and evaluate the following 4 classifiers using a 10-fold stratified cross-validation technique: support vector machine, decision tree, fuzzy classifier, and K-nearest neighbor. Using 7 significant features that characterized the textural changes in the images, the fuzzy classifier had the highest classification accuracy of 84.6%, sensitivity of 82.8%, specificity of 87.0%, and a positive predictive value of 88.9%. The proposed ThyroScan CAD system uses novel features to noninvasively detect the presence of Hashimoto thyroiditis on ultrasound images. Compared to manual interpretations of ultrasound images, the CAD system offers a more objective interpretation of the nature of the thyroid. The preliminary results presented in this work indicate the possibility of using such a CAD system in a clinical setting after evaluating it with larger databases in multicenter clinical trials.
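The Student t-test feature-selection step used in the ThyroScan pipeline can be sketched as follows, with toy feature values standing in for the stationary-wavelet texture features.

```python
# Two-sample Student t statistic (pooled variance) and threshold-based
# feature selection. Data are toy values, not the study's measurements.

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / (sp * (1 / na + 1 / nb)) ** 0.5

def select_features(normal, diseased, threshold=2.0):
    """Keep feature indices whose |t| exceeds the threshold."""
    d = len(normal[0])
    col = lambda rows, j: [r[j] for r in rows]
    return [j for j in range(d)
            if abs(t_statistic(col(normal, j), col(diseased, j))) > threshold]
```

In practice one converts t to a p-value against the t distribution; the fixed threshold here keeps the sketch dependency-free.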
NASA Astrophysics Data System (ADS)
Rogers, L. D.; Valderrama Graff, P.; Bandfield, J. L.; Christensen, P. R.; Klug, S. L.; Deva, B.; Capages, C.
2007-12-01
The Mars Public Mapping Project is a web-based education and public outreach tool developed by the Mars Space Flight Facility at Arizona State University. This tool allows the general public to identify and map geologic features on Mars, utilizing Thermal Emission Imaging System (THEMIS) visible images, allowing public participation in authentic scientific research. In addition, participants are able to rate each image (based on a 1 to 5 star scale) to help build a catalog of some of the more appealing and interesting martian surface features. Once participants have identified observable features in an image, they are able to view a map of the global distribution of the many geologic features they just identified. This automatic feedback, through a global distribution map, allows participants to see how their answers compare to the answers of other participants. Participants check boxes "yes, no, or not sure" for each feature that is listed on the Mars Public Mapping Project web page, including surface geologic features such as gullies, sand dunes, dust devil tracks, wind streaks, lava flows, several types of craters, and layers. Each type of feature has a quick and easily accessible description and example image. When a participant moves their mouse over each example thumbnail image, a window pops up with a picture and a description of the feature. This provides a form of "on the job training" for the participants that can vary with their background level. For users who are more comfortable with Mars geology, there is also an advanced feature identification section accessible by a drop down menu. This includes additional features that may be identified, such as streamlined islands, valley networks, chaotic terrain, yardangs, and dark slope streaks. The Mars Public Mapping Project achieves several goals: 1) It engages the public in a manner that encourages active participation in scientific research and learning about geologic features and processes. 
2) It helps to build a mappable database that can be used by researchers (and the public in general) to quickly access image based data that contains particular feature types. 3) It builds a searchable database of images containing specific geologic features that the public deem to be visually appealing. Other education and public outreach programs at the Mars Space Flight Facility, such as the Rock Around the World and the Mars Student Imaging Project, have shown an increase in demand for programs that allow "kids of all ages" to participate in authentic scientific research. The Mars Public Mapping Project is a broadly accessible program that continues this theme by building a set of activities that is useful for both the public and scientists.
Agner, Shannon C; Soman, Salil; Libfeld, Edward; McDonald, Margie; Thomas, Kathleen; Englander, Sarah; Rosen, Mark A; Chin, Deanna; Nosher, John; Madabhushi, Anant
2011-06-01
Dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) of the breast has emerged as an adjunct imaging tool to conventional X-ray mammography due to its high detection sensitivity. Despite the increasing use of breast DCE-MRI, specificity in distinguishing malignant from benign breast lesions is low, and interobserver variability in lesion classification is high. The novel contribution of this paper is in the definition of a new DCE-MRI descriptor that we call textural kinetics, which attempts to capture spatiotemporal changes in breast lesion texture in order to distinguish malignant from benign lesions. We qualitatively and quantitatively demonstrated on 41 breast DCE-MRI studies that textural kinetic features outperform signal intensity kinetics and lesion morphology features in distinguishing benign from malignant lesions. A probabilistic boosting tree (PBT) classifier in conjunction with textural kinetic descriptors yielded an accuracy of 90%, sensitivity of 95%, specificity of 82%, and an area under the curve (AUC) of 0.92. Graph embedding, used for qualitative visualization of a low-dimensional representation of the data, showed the best separation between benign and malignant lesions when using textural kinetic features. The PBT classifier results and trends were also corroborated via a support vector machine classifier which showed that textural kinetic features outperformed the morphological, static texture, and signal intensity kinetics descriptors. When textural kinetic attributes were combined with morphologic descriptors, the resulting PBT classifier yielded 89% accuracy, 99% sensitivity, 76% specificity, and an AUC of 0.91.
MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.
Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing
2015-11-01
Learning from multiview data arises in many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, removing redundant features in multiview feature selection becomes important but very challenging. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lassos at the view level, we focus on sample-level performance (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is captured by learning a low-rank matrix of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso yield better multiview classification performance than the baselines. Moreover, pattern-specific weights are shown to be significant for learning about multiview data, compared with view-specific weights.
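The sparse-selection half of MRM-Lasso rests on the Lasso's soft-thresholding proximal operator. Below is a generic coordinate-descent Lasso on a toy design, not the paper's multiview, low-rank objective, just the building block it reuses.

```python
# Generic Lasso via coordinate descent. soft_threshold is the proximal
# operator of the L1 penalty; design matrix and lambda are toy values.

def soft_threshold(z, t):
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Partial residual: remove every feature's contribution except j's.
            resid = [y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j)
                     for i in range(n)]
            rho = sum(X[i][j] * resid[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / norm
    return w
```

Coefficients whose correlation with the residual falls below `lam` are driven exactly to zero, which is what makes Lasso a feature selector rather than a mere shrinker.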
NASA Astrophysics Data System (ADS)
Cabrera Fernandez, Delia; Salinas, Harry M.; Somfai, Gabor; Puliafito, Carmen A.
2006-03-01
Optical coherence tomography (OCT) is a rapidly emerging medical imaging technology. In ophthalmology, OCT is a powerful tool because it enables visualization of the cross-sectional structure of the retina and anterior eye with higher resolutions than any other non-invasive imaging modality. Furthermore, OCT image information can be quantitatively analyzed, enabling objective assessment of features such as macular edema and diabetic retinopathy. We present specific improvements in the quantitative analysis of OCT images by combining the diffusion equation with the free Schrödinger equation. In such a formulation, important features of the image can be extracted by extending the analysis from the real axis to the complex domain. Experimental results indicate that our proposed novel approach has good performance in speckle noise removal, enhancement and segmentation of the various cellular layers of the retina using the OCT system.
Yao, Xinwen; Gan, Yu; Chang, Ernest; Hibshoosh, Hanina; Feldman, Sheldon; Hendon, Christine
2017-03-01
Breast cancer is one of the most common cancers, and recognized as the third leading cause of mortality in women. Optical coherence tomography (OCT) enables three dimensional visualization of biological tissue with micrometer level resolution at high speed, and can play an important role in early diagnosis and treatment guidance of breast cancer. In particular, ultra-high resolution (UHR) OCT provides images with better histological correlation. This paper compared UHR OCT performance with standard OCT in breast cancer imaging qualitatively and quantitatively. Automatic tissue classification algorithms were used to automatically detect invasive ductal carcinoma in ex vivo human breast tissue. Human breast tissues, including non-neoplastic/normal tissues from breast reduction and tumor samples from mastectomy specimens, were excised from patients at Columbia University Medical Center. The tissue specimens were imaged by two spectral domain OCT systems at different wavelengths: a home-built ultra-high resolution (UHR) OCT system at 800 nm (measured as 2.72 μm axial and 5.52 μm lateral) and a commercial OCT system at 1,300 nm with standard resolution (measured as 6.5 μm axial and 15 μm lateral), and their imaging performances were analyzed qualitatively. Using regional features derived from OCT images produced by the two systems, we developed an automated classification algorithm based on relevance vector machine (RVM) to differentiate hollow-structured adipose tissue against solid tissue. We further developed B-scan based features for RVM to classify invasive ductal carcinoma (IDC) against normal fibrous stroma tissue among OCT datasets produced by the two systems. For adipose classification, 32 UHR OCT B-scans from 9 normal specimens, and 28 standard OCT B-scans from 6 normal and 4 IDC specimens were employed. For IDC classification, 152 UHR OCT B-scans from 6 normal and 13 IDC specimens, and 104 standard OCT B-scans from 5 normal and 8 IDC specimens were employed. 
We have demonstrated that the UHR OCT system can produce images with better feature delineation than the 1,300 nm OCT system. UHR OCT images of a variety of tissue types found in human breast tissue were presented. With a limited number of datasets, we showed that both OCT systems can achieve a good accuracy in identifying adipose tissue. Classification in UHR OCT images achieved higher sensitivity (94%) and specificity (93%) for adipose tissue than the sensitivity (91%) and specificity (76%) in 1,300 nm OCT images. In IDC classification, similarly, we achieved better results with UHR OCT images, featuring an overall accuracy of 84%, sensitivity of 89% and specificity of 71% in this preliminary study. In this study, we provided UHR OCT images of different normal and malignant breast tissue types, and qualitatively and quantitatively studied the texture and optical features from OCT images of human breast tissue at different resolutions. We developed an automated approach to differentiate adipose tissue, fibrous stroma, and IDC within human breast tissues. Our work may open the door toward automatic intraoperative OCT evaluation of early-stage breast cancer. Lasers Surg. Med. 49:258-269, 2017. © 2017 Wiley Periodicals, Inc.
Gender classification under extended operating conditions
NASA Astrophysics Data System (ADS)
Rude, Howard N.; Rizki, Mateen
2014-06-01
Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.
Optical imaging probes in oncology
Martelli, Cristina; Dico, Alessia Lo; Diceglie, Cecilia; Lucignani, Giovanni; Ottobrini, Luisa
2016-01-01
Cancer is a complex disease, characterized by alteration of different physiological molecular processes and cellular features. Keeping this in mind, the possibility of early identification and detection of specific tumor biomarkers by non-invasive approaches could improve early diagnosis and patient management. Different molecular imaging procedures provide powerful tools for detection and non-invasive characterization of oncological lesions. Clinical studies are mainly based on the use of computed tomography, nuclear-based imaging techniques and magnetic resonance imaging. Preclinical imaging in small animal models entails the use of dedicated instruments, and beyond the already cited imaging techniques, it includes also optical imaging studies. Optical imaging strategies are based on the use of luminescent or fluorescent reporter genes or injectable fluorescent or luminescent probes that provide the possibility to study tumor features even by means of fluorescence and luminescence imaging. Currently, most of these probes are used only in animal models, but the possibility of applying some of them also in the clinics is under evaluation. The importance of tumor imaging, the ease of use of optical imaging instruments, the commercial availability of a wide range of probes as well as the continuous description of newly developed probes, demonstrate the significance of these applications. The aim of this review is providing a complete description of the possible optical imaging procedures available for the non-invasive assessment of tumor features in oncological murine models. In particular, the characteristics of both commercially available and newly developed probes will be outlined and discussed. PMID:27145373
Manchester visual query language
NASA Astrophysics Data System (ADS)
Oakley, John P.; Davis, Darryl N.; Shann, Richard T.
1993-04-01
We report a database language for visual retrieval which allows queries on image feature information which has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high scoring bins. The query could be directed towards one particular image or an entire image database; in the latter case the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands which are used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E
2011-01-01
The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature which perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve / macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images obtained as judged by a quality estimation algorithm. The detection algorithm features microaneurysm and exudate detection followed by feature extraction on the detected population to describe the overall retina image. Labeled images of retinas ground-truthed to disease states are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high confidence optic nerve detections and good quality imagery, the system achieves a sensitivity and specificity of 94.8% and 78.7% with area-under-curve of 95.3%. Analysis of the effect of constraining quality and the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states is included.
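The step of summarizing a detected lesion population into a retina-level descriptor for the supervised classifier might be sketched like this; the chosen statistics (count, mean area, maximum contrast) are hypothetical, not the paper's feature set.

```python
# Hypothetical image-level descriptor built from per-lesion detections.
# Each lesion is an (area, contrast) pair; the summary choices are assumptions.

def image_descriptor(lesions):
    """Fixed-length vector regardless of how many lesions were detected."""
    if not lesions:
        return [0, 0.0, 0.0]      # healthy-looking image: no detections
    areas = [a for a, _ in lesions]
    contrasts = [c for _, c in lesions]
    return [len(lesions), sum(areas) / len(areas), max(contrasts)]
```

Fixing the descriptor length is the key trick: it lets a standard supervised learner consume images with wildly varying detection counts.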
Evaluation of deformable image registration and a motion model in CT images with limited features.
Liu, F; Hu, Y; Zhang, Q; Kincaid, R; Goodman, K A; Mageras, G S
2012-05-07
Deformable image registration (DIR) is increasingly used in radiotherapy applications and provides the basis for a previously described model of patient-specific respiratory motion. We examine the accuracy of a DIR algorithm and a motion model with respiration-correlated CT (RCCT) images of a software phantom with known displacement fields, a physical deformable abdominal phantom with implanted fiducials in the liver, and small liver structures in patient images. The motion model is derived from a principal component analysis that relates volumetric deformations to the motion of the diaphragm or fiducials in the RCCT. Patient data analysis compares DIR with rigid registration as ground truth: the mean ± standard deviation 3D discrepancy of liver structure centroid positions is 2.0 ± 2.2 mm. DIR discrepancy in the software phantom is 3.8 ± 2.0 mm in lung and 3.7 ± 1.8 mm in abdomen; discrepancies near the chest wall are larger than indicated by image feature matching. The markers' 3D discrepancy in the physical phantom is 3.6 ± 2.8 mm. The results indicate that visible features in the images are important for guiding the DIR algorithm. Motion model accuracy is comparable to DIR, indicating that two principal components are sufficient to describe DIR-derived deformation in these datasets.
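A surrogate-driven motion model of the general kind described can be sketched with per-voxel linear regression against diaphragm position; this stands in for the paper's principal component analysis and is purely illustrative.

```python
# Sketch: regress each voxel's displacement on a diaphragm surrogate,
# then predict a deformation field for a new diaphragm position.
# The linear per-voxel model is an assumption, not the paper's PCA model.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def motion_model(diaphragm, fields):
    """fields[t][v]: displacement of voxel v at phase t -> per-voxel (a, b)."""
    nvox = len(fields[0])
    return [fit_linear(diaphragm, [f[v] for f in fields]) for v in range(nvox)]

def predict(model, diaphragm_pos):
    """Deformation field at an unseen diaphragm position."""
    return [a * diaphragm_pos + b for a, b in model]
```

The appeal of such surrogate models is interpolation: one scalar measurement (diaphragm height) drives a full volumetric deformation estimate.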
Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery
Chaddad, Ahmad; Desrosiers, Christian; Bouridane, Ahmed; Toews, Matthew; Hassan, Lama; Tanougast, Camel
2016-01-01
Purpose This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. Materials and Methods In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. Results Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. Conclusions These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images. PMID:26901134
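The GLCM family of texture features used in this study can be illustrated with a minimal co-occurrence matrix and one Haralick-style score (contrast). The pixel offset and the tiny image are toy choices.

```python
# Gray level co-occurrence matrix for horizontally adjacent pixels,
# plus the classic contrast feature. Image and offset are illustrative.

def glcm(image, levels):
    """image: 2D list of integer gray levels in [0, levels)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # offset (0, 1): right neighbour
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[c / total for c in r] for r in m]  # normalize to probabilities

def contrast(p):
    """Haralick contrast: weights co-occurrences by squared level difference."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
```

Full GLCM texture analysis repeats this over several offsets and angles and adds further statistics (energy, homogeneity, correlation), which is the feature set the abstract refers to.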
3D space positioning and image feature extraction for workpiece
NASA Astrophysics Data System (ADS)
Ye, Bing; Hu, Yi
2008-03-01
An optical system for measuring 3D parameters of a specific area of a workpiece is presented and discussed in this paper. A number of CCD image sensors are employed to construct the 3D coordinate system for the measured area. One CCD image sensor monitors the target and locks onto the measured workpiece when it enters the field of view; the other sensors, placed symmetrically with the line-laser beam scanners, measure the appearance of the workpiece and its characteristic parameters. Target image segmentation and image feature extraction algorithms are established to lock onto the target: based on the geometric similarity of object characteristics, rapid locking can be realized. As the laser line scans the tested workpiece, images are captured at equal time intervals and the overlapping images are processed to complete image reconstruction and obtain the 3D image information. From the 3D coordinate reconstruction model, the 3D characteristic parameters of the tested workpiece are obtained. Experimental results are provided in the paper.
Simultaneous binary hash and feature learning for image retrieval
NASA Astrophysics Data System (ADS)
Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.
2016-05-01
Content-based image retrieval systems have many applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task; the main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to a hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.
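The retrieval side of such a hashing scheme reduces to binarizing the learned descriptor and ranking by Hamming distance; the descriptor values below are stand-ins for network outputs, and the sign threshold is one common binarization choice.

```python
# Binary-hash retrieval sketch: sign-binarize real-valued descriptors,
# then rank database items by Hamming distance to the query code.

def binarize(descriptor):
    """One bit per dimension via a sign threshold at zero."""
    return tuple(1 if x >= 0 else 0 for x in descriptor)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def retrieve(db_codes, query_code, top_k=2):
    """Indices of the top_k database codes closest in Hamming distance."""
    order = sorted(range(len(db_codes)),
                   key=lambda i: hamming(db_codes[i], query_code))
    return order[:top_k]
```

Hamming distance over packed bit codes is computable with XOR and popcount, which is why binary hashes meet the short-response-time demand the abstract emphasizes.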
The potential of multiparametric MRI of the breast
Pinker, Katja; Helbich, Thomas H
2017-01-01
MRI is an essential tool in breast imaging, with multiple established indications. Dynamic contrast-enhanced MRI (DCE-MRI) is the backbone of any breast MRI protocol and has an excellent sensitivity and good specificity for breast cancer diagnosis. DCE-MRI provides high-resolution morphological information, as well as some functional information about neoangiogenesis as a tumour-specific feature. To overcome limitations in specificity, several other functional MRI parameters have been investigated and the application of these combined parameters is defined as multiparametric MRI (mpMRI) of the breast. MpMRI of the breast can be performed at different field strengths (1.5–7 T) and includes both established (diffusion-weighted imaging, MR spectroscopic imaging) and novel MRI parameters (sodium imaging, chemical exchange saturation transfer imaging, blood oxygen level-dependent MRI), as well as hybrid imaging with positron emission tomography (PET)/MRI and different radiotracers. Available data suggest that multiparametric imaging using different functional MRI and PET parameters can provide detailed information about the underlying oncogenic processes of cancer development and progression and can provide additional specificity. This article will review the current and emerging functional parameters for mpMRI of the breast for improved diagnostic accuracy in breast cancer. PMID:27805423
Computer-assisted diagnosis of melanoma.
Fuller, Collin; Cellura, A Paul; Hibler, Brian P; Burris, Katy
2016-03-01
The computer-assisted diagnosis of melanoma is an exciting area of research where imaging techniques are combined with diagnostic algorithms in an attempt to improve detection and outcomes for patients with skin lesions suspicious for malignancy. Once an image has been acquired, it undergoes a processing pathway which includes preprocessing, enhancement, segmentation, feature extraction, feature selection, change detection, and ultimately classification. Practicality for everyday clinical use remains a vital question. A successful model must obtain results that are on par or outperform experienced dermatologists, keep costs at a minimum, be user-friendly, and be time efficient with high sensitivity and specificity. ©2015 Frontline Medical Communications.
Computer-aided interpretation approach for optical tomographic images
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.
2010-11-01
A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm based on a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike previous studies, this allows multiple image features, such as the minimum and maximum values of the absorption coefficient, to be combined for identifying affected and unaffected joints. Classification performance of the proposed method was evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground-truth benchmarks for determining the performance of the image interpretations. Using data from 100 finger joints, the findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared to the single-parameter classifications employed in previous studies. Maximum performance is reached when combining the minimum/maximum ratio of the absorption coefficient and the image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
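As a minimal, hypothetical sketch of the evaluation metrics named above (sensitivity, specificity, Youden index, and mutual information for a binary affected/unaffected labeling), not the paper's actual implementation:

```python
import numpy as np

def evaluate_classifier(y_true, y_pred):
    """Sensitivity, specificity, Youden index, and mutual information
    for a binary classification (1 = affected, 0 = unaffected)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    youden = sens + spec - 1.0
    # Mutual information (in bits) between true labels and predictions
    n = len(y_true)
    mi = 0.0
    for t in (False, True):
        for p in (False, True):
            joint = np.sum((y_true == t) & (y_pred == p)) / n
            if joint > 0:
                pt = np.sum(y_true == t) / n
                pp = np.sum(y_pred == p) / n
                mi += joint * np.log2(joint / (pt * pp))
    return sens, spec, youden, mi
```

A perfect classifier yields a Youden index of 1; chance-level prediction yields 0 and zero mutual information.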
NASA Astrophysics Data System (ADS)
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-09-01
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n = 0 to 4), along with the noise-reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, and variance filters and the Gaussian and Kuwahara filters.
Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.
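The Dice similarity coefficient used above can be computed directly from two binary masks; a minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 1 indicates perfect overlap with the manual segmentation; 0 indicates no overlap.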
Akin, Oguz; Franiel, Tobias; Goldman, Debra A.; Udo, Kazuma; Touijer, Karim A.; Reuter, Victor E.; Hricak, Hedvig
2012-01-01
Purpose: To describe the anatomic features of the central zone of the prostate on T2-weighted and diffusion-weighted (DW) magnetic resonance (MR) images and evaluate the diagnostic performance of MR imaging in detection of central zone involvement by prostate cancer. Materials and Methods: The institutional review board waived informed consent and approved this retrospective, HIPAA-compliant study of 211 patients who underwent T2-weighted and DW MR imaging of the prostate before radical prostatectomy. Whole-mount step-section pathologic findings were the reference standard. Two radiologists independently recorded the visibility, MR signal intensity, size, and symmetry of the central zone and scored the likelihood of central zone involvement by cancer on T2-weighted MR images and on T2-weighted MR images plus apparent diffusion coefficient (ADC) maps generated from the DW MR images. Descriptive summary statistics were calculated for central zone imaging features. Sensitivity, specificity, and area under the curve were used to evaluate reader performance in detecting central zone involvement. Results: For readers 1 and 2, the central zone was visible, at least partially, in 177 (84%) and 170 (81%) of 211 patients, respectively. The most common imaging appearance of the central zone was symmetric, homogeneous low signal intensity. Cancers involving the central zone had higher prostate-specific antigen values, Gleason scores, and rates of extracapsular extension and seminal vesicle invasion compared with cancers not involving the central zone (P < .05). Area under the curve, sensitivity, and specificity in detecting central zone involvement were 0.70, 0.30, and 0.96 for reader 1 and 0.65, 0.35, and 0.93 for reader 2, and these values did not differ significantly between T2-weighted imaging and T2-weighted imaging plus ADC maps. Conclusion: The central zone was visualized in most patients. 
Cancers involving the central zone were associated with more aggressive disease than those without central zone involvement. © RSNA, 2012 PMID:22357889
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that leads to higher task performance than that reached using any task-oblivious partitioning framework or the existing supervised partitioning frameworks, which are few in number. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated with a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
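The correlation-clustering objective described above sums a linear discriminant value over superpixel pairs assigned to the same region; it can be evaluated for a candidate partition as sketched below. This is an illustration only: the real method learns the edge weights with an S-SVM and maximizes this score over partitions, which is a hard combinatorial problem.

```python
def partition_score(edges, weights, labels):
    """Correlation-clustering objective: sum the discriminant value w_ij of
    every superpixel pair (i, j) placed in the same region. Positive weights
    reward merging similar superpixels; negative weights penalize it."""
    score = 0.0
    for (i, j), w in zip(edges, weights):
        if labels[i] == labels[j]:
            score += w
    return score
```

Here `edges` lists adjacent superpixel pairs, `weights` their learned similarity values, and `labels` a candidate region assignment per superpixel (all names are illustrative).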
An update of commercial infrared sensing and imaging instruments
NASA Technical Reports Server (NTRS)
Kaplan, Herbert
1989-01-01
A classification of infrared sensing instruments by type and application, listing commercially available instruments, from single point thermal probes to on-line control sensors, to high speed, high resolution imaging systems is given. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques and newly introduced features of existing instruments is given.
Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M.
2017-01-01
Purpose Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Materials and methods Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration, including co-occurrence matrix (CM) features and run-length matrix (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. Results No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Conclusion Textural measures of three-dimensional brain tumor images are robust under neither dynamic range nor matrix size changes. Standards should be harmonized before textural features are used as imaging biomarkers in radiomics-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images. PMID:28586353
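The robustness criterion above, coefficient of variation under 10% across acquisition settings, is straightforward to compute; a minimal sketch (function names are illustrative):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = sample standard deviation / |mean|, as a percentage, over the
    values a textural measure takes across different protocol settings."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / abs(values.mean())

def is_robust(values, threshold=10.0):
    """A feature is deemed robust if its CV stays under the threshold."""
    return coefficient_of_variation(values) < threshold
```

For example, an entropy value that barely moves across resolutions has a CV near zero and passes; a feature halving or doubling across settings fails.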
Integrated circuit layer image segmentation
NASA Astrophysics Data System (ADS)
Masalskis, Giedrius; Petrauskas, Romas
2010-09-01
In this paper we present IC layer image segmentation techniques created specifically for precise metal layer feature extraction. During our research we used many samples of real-life de-processed IC metal layer images obtained using an optical light microscope. We have created sequences of various image processing filters that provide segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.
Xu, Yingying; Lin, Lanfen; Hu, Hongjie; Wang, Dan; Zhu, Wenchao; Wang, Jian; Han, Xian-Hua; Chen, Yen-Wei
2018-01-01
The bag of visual words (BoVW) model is a powerful tool for feature representation that can integrate various handcrafted features like intensity, texture, and spatial information. In this paper, we propose a novel BoVW-based method that incorporates texture and spatial information for the content-based image retrieval to assist radiologists in clinical diagnosis. This paper presents a texture-specific BoVW method to represent focal liver lesions (FLLs). Pixels in the region of interest (ROI) are classified into nine texture categories using the rotation-invariant uniform local binary pattern method. The BoVW-based features are calculated for each texture category. In addition, a spatial cone matching (SCM)-based representation strategy is proposed to describe the spatial information of the visual words in the ROI. In a pilot study, eight radiologists with different clinical experience performed diagnoses for 20 cases with and without the top six retrieved results. A total of 132 multiphase computed tomography volumes including five pathological types were collected. The texture-specific BoVW was compared to other BoVW-based methods using the constructed dataset of FLLs. The results show that our proposed model outperforms the other three BoVW methods in discriminating different lesions. The SCM method, which adds spatial information to the orderless BoVW model, impacted the retrieval performance. In the pilot trial, the average diagnosis accuracy of the radiologists was improved from 66 to 80% using the retrieval system. The preliminary results indicate that the texture-specific features and the SCM-based BoVW features can effectively characterize various liver lesions. The retrieval system has the potential to improve the diagnostic accuracy and the confidence of the radiologists.
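A minimal NumPy sketch of the rotation-invariant uniform LBP operator underlying the texture categorization above (8 neighbours, radius 1). Mapping the resulting labels onto the paper's nine texture categories is specific to the paper and not reproduced here; this shows only the standard riu2 labeling (0-8 for uniform patterns, 9 for non-uniform ones).

```python
import numpy as np

def lbp_riu2(image):
    """Rotation-invariant uniform LBP (8 neighbours, radius 1) on a 2-D
    grayscale array. Uniform patterns (<= 2 circular bit transitions) are
    labelled by their number of '1' bits (0-8); non-uniform patterns share
    label 9. Output shape is (h-2, w-2) because the border is dropped."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # 8 neighbour offsets in circular order around the center pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center
                     for dy, dx in offs])
    ones = bits.sum(axis=0)
    # Count circular 0/1 transitions by closing the ring of bits
    ring = np.concatenate([bits, bits[:1]], axis=0).astype(int)
    transitions = np.abs(np.diff(ring, axis=0)).sum(axis=0)
    return np.where(transitions <= 2, ones, 9)
```

Per-pixel labels like these are then histogrammed per texture category to build the BoVW representation described in the abstract.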
Functional MRI registration with tissue-specific patch-based functional correlation tensors.
Zhou, Yujia; Zhang, Han; Zhang, Lichi; Cao, Xiaohuan; Yang, Ru; Feng, Qianjin; Yap, Pew-Thian; Shen, Dinggang
2018-06-01
Population studies of brain function with resting-state functional magnetic resonance imaging (rs-fMRI) rely on accurate intersubject registration of functional areas. This is typically achieved through registration using high-resolution structural images with more spatial details and better tissue contrast. However, accumulating evidence has suggested that such strategy cannot align functional regions well because functional areas are not necessarily consistent with anatomical structures. To alleviate this problem, a number of registration algorithms based directly on rs-fMRI data have been developed, most of which utilize functional connectivity (FC) features for registration. However, most of these methods usually extract functional features only from the thin and highly curved cortical grey matter (GM), posing great challenges to accurate estimation of whole-brain deformation fields. In this article, we demonstrate that additional useful functional features can also be extracted from the whole brain, not restricted to the GM, particularly the white-matter (WM), for improving the overall functional registration. Specifically, we quantify local anisotropic correlation patterns of the blood oxygenation level-dependent (BOLD) signals using tissue-specific patch-based functional correlation tensors (ts-PFCTs) in both GM and WM. Functional registration is then performed by integrating the features from different tissues using the multi-channel large deformation diffeomorphic metric mapping (mLDDMM) algorithm. Experimental results show that our method achieves superior functional registration performance, compared with conventional registration methods. © 2018 Wiley Periodicals, Inc.
TU-G-201-00: Imaging Equipment Specification and Selection in Radiation Oncology Departments
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This session will update therapeutic physicists on technological advancements and radiation oncology features of commercial CT, MRI, and PET/CT imaging systems. Also described are physicists' roles in every stage of equipment selection, purchasing, and operation, including defining specifications, evaluating vendors, making recommendations, and the optimal and safe use of imaging equipment in the radiation oncology environment. The first presentation defines important terminology of CT and PET/CT, followed by a review of the latest innovations, such as metal artifact reduction, statistical iterative reconstruction, radiation dose management, tissue classification by dual energy CT and spectral CT, improvements in spatial resolution and sensitivity in PET, and the potential of PET/MR. We will also discuss important technical specifications and items in CT and PET/CT purchasing quotes and their impacts. The second presentation will focus on key components in the request for proposal for an MRI simulator and how to evaluate vendor proposals. MRI safety issues in radiation oncology, including MRI scanner zones (4-zone design), will be discussed. Basic MR terminology, important functionalities, and advanced features relevant to radiation therapy will be discussed. In the third presentation, the justification of imaging systems for radiation oncology, considerations in room design and construction in a radiation oncology department, shared use with diagnostic radiology, staffing needs and training, and clinical/research use cases and implementation will be discussed. The emphasis will be on understanding and bridging the differences between diagnostic and radiation oncology installations, building consensus amongst stakeholders for purchase and use, and integrating imaging technologies into the radiation oncology environment.
Learning Objectives: Learn the latest innovations of major imaging systems relevant to radiation therapy. Be able to describe important technical specifications of CT, MRI, and PET/CT. Understand the process of budget request, equipment justification, comparisons of technical specifications, site visits, vendor selection, and contract development.
NASA Astrophysics Data System (ADS)
Bramhe, V. S.; Ghosh, S. K.; Garg, P. K.
2018-04-01
With rapid globalization, the extent of built-up areas is continuously increasing. Extracting features for classifying built-up areas that are more robust and abstract has been a leading research topic for many years. Various studies have been carried out in which spatial information along with spectral features has been utilized to enhance classification accuracy. Still, these feature extraction techniques require a large number of user-specific parameters and are generally application specific. Recently introduced Deep Learning (DL) techniques, on the other hand, require fewer parameters to represent more abstract aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring of areas, Sentinel-2 imagery has been used in this study for built-up area extraction. Pre-trained Convolutional Neural Networks (ConvNets), i.e. Inception v3 and VGGNet, are employed for transfer learning. Because these networks are trained on the generic images of the ImageNet dataset, which have very different characteristics from satellite images, the network weights are fine-tuned using data derived from Sentinel-2 images. To compare the accuracies with existing shallow networks, two state-of-the-art classifiers, a Gaussian Support Vector Machine (SVM) and a Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give 84.31 % and 82.86 % overall accuracy, respectively, while the fine-tuned VGGNet gives 89.43 % and Inception-v3 gives 92.10 %. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min
2015-01-01
This paper focuses on improving the diagnostic accuracy for focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions comprised 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features, composed of 5 first-order statistics, 18 gray-level co-occurrence matrix features, 18 Laws' features, and echogenicity, were extracted. A total of 29 key features selected by principal component analysis were used as the set of inputs for a feed-forward neural network. For each lesion, diagnostic performance was evaluated using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The experimental results indicate that the proposed method performs well, with a high diagnostic accuracy of over 96% among all focal liver lesion groups (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy increased slightly when echogenicity was included in the optimal feature set. These results indicate that it is possible for the proposed method to be applied clinically.
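The dimensionality reduction step above, principal component analysis selecting key features from the 42-feature set, can be sketched via the SVD of the centered data matrix; this is a generic illustration, not the authors' exact selection procedure:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project a (samples x features) matrix onto its leading principal
    components, as a stand-in for the key-feature selection step."""
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)           # center each feature
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The reduced matrix (e.g., 42 features down to 29 components) would then feed the feed-forward neural network.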
A neuromorphic approach to satellite image understanding
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Perakakis, Manolis
2014-05-01
Remote sensing satellite imagery provides high-altitude, top-view aspects of large geographic regions, and as such the depicted features are not always easily recognizable. Nevertheless, geoscientists familiar with remote sensing data gradually gain experience and enhance their satellite image interpretation skills. The aim of this study is to devise a novel computational neuro-centered classification approach for feature extraction and image understanding. Object recognition through image processing is related to a series of known image/feature-based attributes including size, shape, association, texture, etc. The objective of the study is to weight these attribute values towards the enhancement of feature recognition. The key concern of the cognitive experimentation is to define the point at which a user recognizes a feature as it varies in terms of the above-mentioned attributes, and to relate that point to the corresponding attribute values. Towards this end, we have set up an experimental methodology that utilizes cognitive data from brain signals (EEG) and eye gaze data (eye tracking) of subjects watching satellite images of varying attributes; this allows the collection of rich real-time data to be used for designing the image classifier. Since the data are already labeled by users (using an input device), a first step is to compare the performance of various machine-learning algorithms on the collected data. In the long run, the aim of this work is to investigate the automatic classification of unlabeled images (unsupervised learning) based purely on image attributes. The outcome of this innovative process is twofold: First, given the abundance of remote sensing image datasets, we may define the essential image specifications needed to collect the appropriate data for each application and improve processing and resource efficiency. E.g.
for a fault extraction application at a given scale, a medium-resolution 4-band image may be more effective than costly, multispectral, very-high-resolution imagery. Second, we attempt to relate the understanding of experienced versus non-experienced users in order to indirectly assess the possible limits of purely computational systems; in other words, to obtain the conceptual limits of computation versus human cognition concerning feature recognition from satellite imagery. Preliminary results of this pilot study show relations between the collected data and the differentiation of the image attributes, which indicates that our methodology can lead to important results.
A contour-based shape descriptor for biomedical image classification and retrieval
NASA Astrophysics Data System (ADS)
You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-12-01
Contours, object blobs, and specific feature points are utilized to represent object shapes and to extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all the individual descriptors in the test, and when combined with the CSD (color structure descriptor) showed better performance (78.9%) than the shape descriptor alone.
On computer vision in wireless sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Ko, Teresa H.
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
Rahman, Mahabubur; Watabe, Hiroshi
2018-05-01
Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and challenges related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to addressing these challenges, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach facilitates on-the-fly access to all of MIRA's features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development.
MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform to accelerate advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Accuracy of Ultrasonography and Magnetic Resonance Imaging in the Diagnosis of Placenta Accreta
Riteau, Anne-Sophie; Tassin, Mikael; Chambon, Guillemette; Le Vaillant, Claudine; de Laveaucoupet, Jocelyne; Quéré, Marie-Pierre; Joubert, Madeleine; Prevot, Sophie; Philippe, Henri-Jean; Benachi, Alexandra
2014-01-01
Purpose To evaluate the accuracy of ultrasonography and magnetic resonance imaging (MRI) in the diagnosis of placenta accreta and to define the most relevant specific ultrasound and MRI features that may predict placental invasion. Material and Methods This study was approved by the institutional review board of the French College of Obstetricians and Gynecologists. We retrospectively reviewed the medical records of all patients referred for suspected placenta accreta to two university hospitals from 01/2001 to 05/2012. Our study population included 42 pregnant women who had been investigated by both ultrasonography and MRI. Ultrasound images and MRI were blindly reassessed for each case by 2 raters in order to score features that predict abnormal placental invasion. Results Sensitivity in the diagnosis of placenta accreta was 100% with ultrasound and 76.9% for MRI (P = 0.03). Specificity was 37.5% with ultrasonography and 50% for MRI (P = 0.6). The features of greatest sensitivity on ultrasonography were intraplacental lacunae and loss of the normal retroplacental clear space. Increased vascularization in the uterine serosa-bladder wall interface and vascularization perpendicular to the uterine wall had the best positive predictive value (92%). At MRI, uterine bulging had the best positive predictive value (85%) and its combination with the presence of dark intraplacental bands on T2-weighted images improved the predictive value to 90%. Conclusion Ultrasound imaging is the mainstay of screening for placenta accreta. MRI appears to be complementary to ultrasonography, especially when there are few ultrasound signs. PMID:24733409
Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu
2017-09-01
The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area being the best suited for automated analysis. Here, we present a novel approach that uses support vector machines (SVMs) to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers are trained on and applied to different subsets of the entire training image dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage in the classifier training. Specifically, color and bag-of-words representations of local dense scale-invariant feature transform features are the descriptors for ruling out irrelevant regions, and color and wavelet-based features are the descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training, which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for smartphone-based image analysis.
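The two-stage cascade logic above can be sketched as follows. Nearest-centroid classifiers stand in for the binary SVMs, binary labels {0, 1} are assumed, and the rule for deferring to the second stage is one plausible choice; this illustrates the cascade structure rather than the authors' implementation.

```python
import numpy as np

class NearestCentroid:
    """Tiny binary classifier used as a stand-in for an SVM."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

class TwoStageCascade:
    """Stage 1: k classifiers trained on k subsets; misclassified training
    instances are collected. Stage 2: one classifier trained on that hard
    set, consulted when the stage-1 classifiers disagree."""
    def __init__(self, k=3):
        self.k = k
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.stage1_, hard_idx = [], []
        for f in np.array_split(np.arange(len(X)), self.k):
            clf = NearestCentroid().fit(X[f], y[f])
            self.stage1_.append(clf)
            hard_idx.extend(f[clf.predict(X[f]) != y[f]])
        self.stage2_ = (NearestCentroid().fit(X[hard_idx], y[hard_idx])
                        if hard_idx else None)
        return self
    def predict(self, X):
        X = np.asarray(X, float)
        votes = np.stack([clf.predict(X) for clf in self.stage1_])
        maj = np.where(votes.mean(axis=0) >= 0.5, 1, 0)  # assumes labels {0,1}
        if self.stage2_ is not None:
            disagree = votes.min(axis=0) != votes.max(axis=0)
            maj[disagree] = self.stage2_.predict(X[disagree])
        return maj
```

In the paper's setting, each sample would be a superpixel feature vector (color, SIFT bag-of-words, wavelet features) rather than the toy points used here.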
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. In fact, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.
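As a rough illustration of the parts-based decomposition underlying the method, here is plain NMF via the Lee-Seung multiplicative updates; the paper's feature detector additionally includes a manifold regularization term (and color information), which this sketch omits:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor a nonnegative matrix V (n x m) as W @ H with W (n x r),
    H (r x m) nonnegative, using Lee-Seung multiplicative updates.
    Columns of W act as learned 'parts' of the input."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update basis (parts)
    return W, H
```

The multiplicative form guarantees W and H stay nonnegative, which is what gives NMF its parts-based, additive character.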
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: image segmentation is used to extract morphological features, the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
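The two spectral indexes named above reduce to simple band arithmetic plus a threshold. In this sketch the EVI coefficients are the standard MODIS ones, while the cutoffs `water_t` and `veg_t` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index: high over water."""
    return (green - nir) / (green + nir + 1e-9)

def evi(nir, red, blue):
    """Enhanced vegetation index with the standard MODIS coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def masks(green, red, blue, nir, water_t=0.3, veg_t=0.4):
    """Binary water and vegetation masks from per-band reflectance arrays."""
    return ndwi(green, nir) > water_t, evi(nir, red, blue) > veg_t
```

The framework then refines these raw masks (fragmentation index for water, shadow removal via HSV), which the sketch leaves out.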
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
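The two pixel-level baselines that MSSF was compared against can be sketched as follows; this is a minimal reading in which PCA fusion weights the two bands by the leading eigenvector of their 2x2 covariance, a common formulation rather than the exact one used in the study:

```python
import numpy as np

def average_fusion(a, b):
    """Pixel-wise mean of two co-registered bands."""
    return 0.5 * (a + b)

def pca_fusion(a, b):
    """Weight each band by the leading eigenvector of the 2x2
    covariance of the two bands (classic pixel-level PCA fusion)."""
    X = np.stack([a.ravel(), b.ravel()])
    vals, vecs = np.linalg.eigh(np.cov(X))
    w = np.abs(vecs[:, -1])      # leading eigenvector, sign-normalized
    w = w / w.sum()
    return w[0] * a + w[1] * b
```

When the two bands are identical, the PCA weights collapse to 0.5/0.5 and both fusions return the input unchanged, a handy sanity check.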
Thermographic image analysis as a pre-screening tool for the detection of canine bone cancer
NASA Astrophysics Data System (ADS)
Subedi, Samrat; Umbaugh, Scott E.; Fu, Jiyuan; Marino, Dominic J.; Loughin, Catherine A.; Sackman, Joseph
2014-09-01
Canine bone cancer is a common type of cancer that grows fast and may be fatal. It usually appears in the limbs, where it is called "appendicular bone cancer." Diagnostic imaging methods such as X-rays, computed tomography (CT scan), and magnetic resonance imaging (MRI) are more common methods in bone cancer detection than invasive physical examination such as biopsy. These imaging methods have some disadvantages, including high expense, high doses of radiation, and the need to keep the patient (canine) motionless during the imaging procedures. This study investigates the possibility of using thermographic images as a pre-screening tool for the diagnosis of bone cancer in dogs. Experiments were performed with thermographic images from 40 dogs with bone cancer, using color normalization based on temperature data provided by the Long Island Veterinary Specialists. The images were first divided into four groups according to body parts (Elbow/Knee, Full Limb, Shoulder/Hip and Wrist). Each of the groups was then further divided into three sub-groups according to views (Anterior, Lateral and Posterior). Thermographic patterns of normal and abnormal dogs were analyzed using feature extraction and pattern classification tools. Texture, spectral, and histogram features were extracted from the thermograms and used for pattern classification. The best classification success rate in canine bone cancer detection is 90%, with a sensitivity of 100% and a specificity of 80%, produced by the anterior view of the full-limb region with the nearest-neighbor classification method and the normRGB-lum color normalization method. Our results show that it is possible to use thermographic imaging as a pre-screening tool for detection of canine bone cancer.
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine which is most suitable for the detection of green sweet peppers. The images were captured using CCD cameras and infrared cameras and processed using Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CieLab, YIQ, YUV, HSI and HSV color space models were selected for image processing, whereas grayscale was used for infrared images. For color images, the HSV color space model yielded the highest detection rate for green sweet peppers, followed by the HSI model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit in an image at a specific threshold value. Fruits that overlapped or were covered by leaves were detected more reliably with the HSV color space model, since the reflection from fruits produced a higher histogram response than the reflection from leaves. The IR 80 optical filter failed to distinguish fruits in the images because the filter blocks useful feature information. Computation of the 3D coordinates of recognized green sweet peppers was also conducted, in which the Halcon image processing software provided the location and orientation of the fruits accurately. Examination of depth accuracy along the Z axis showed that a camera-to-fruit distance of 500 to 600 mm allowed the depth to be computed precisely when the distance between the two cameras was maintained at 100 mm.
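A minimal sketch of the kind of HSV thresholding described, using the standard-library `colorsys` conversion; the hue window and the saturation/value floors below are illustrative assumptions, not thresholds from the study:

```python
import colorsys

def is_green_pepper_pixel(r, g, b,
                          hue_range=(0.18, 0.45),
                          min_sat=0.25, min_val=0.15):
    """Classify one RGB pixel (floats in [0, 1]) as a candidate green-fruit
    pixel by thresholding in HSV. Hue is on colorsys's 0-1 scale, where
    pure green sits at 1/3; the window and floors are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat and v >= min_val
```

Thresholding hue with separate saturation/value floors is what lets the rule reject gray or dark pixels that a raw green-channel test would accept.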
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is that many subjects have missing data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also with the single-task classification method (using only MRI, or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966
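The first step of MLPD, decomposing the problem into one task per combination of available data sources, amounts to a simple grouping; in this sketch the record layout (`id`, `modalities` with `None` marking a missing source) is an assumed, hypothetical format:

```python
def split_into_tasks(subjects):
    """Group subjects by which data sources they have: each distinct
    combination of available modalities becomes one classification task,
    as in the MLPD/iMSF setting (e.g. MRI-only vs. MRI+PET)."""
    tasks = {}
    for subj in subjects:
        key = frozenset(m for m, data in subj["modalities"].items()
                        if data is not None)
        tasks.setdefault(key, []).append(subj["id"])
    return tasks
```

MLPD then trains one classifier per group while tying the groups together through the shared-feature mean-difference constraint, which is where the linear program comes in.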
Advanced imaging techniques for the study of plant growth and development.
Sozzani, Rosangela; Busch, Wolfgang; Spalding, Edgar P; Benfey, Philip N
2014-05-01
A variety of imaging methodologies are being used to collect data for quantitative studies of plant growth and development from living plants. Multi-level data, from macroscopic to molecular, and from weeks to seconds, can be acquired. Furthermore, advances in parallelized and automated image acquisition enable the throughput to capture images from large populations of plants under specific growth conditions. Image-processing capabilities allow for 3D or 4D reconstruction of image data and automated quantification of biological features. These advances facilitate the integration of imaging data with genome-wide molecular data to enable systems-level modeling. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Scott, Richard; Khan, Faisal M.; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo
2015-03-01
Immunofluorescent (IF) image analysis of tissue pathology has proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architectures in discrete gland rings. However, while biomarker and glandular morphometric features have been combined as separate predictors in multivariate models, there is a lack of integrative features for biomarkers co-localized within specific morphological sub-types; for example the evaluation of androgen receptor (AR) expression within Gleason 3 glands only. In this work we propose a novel framework employing multiple techniques to generate integrated metrics of morphology and biomarker expression. We demonstrate the utility of the approaches in predicting clinical disease progression in images from 326 prostate biopsies and 373 prostatectomies. Our proposed integrative approaches yield significant improvements over existing IF image feature metrics. This work presents some of the first algorithms for generating innovative characteristics in tissue diagnostics that integrate co-localized morphometry and protein biomarker expression.
Learning optimal features for visual pattern recognition
NASA Astrophysics Data System (ADS)
Labusch, Kai; Siewert, Udo; Martinetz, Thomas; Barth, Erhardt
2007-02-01
The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. Here we (i) discuss how the properties of different models of image coding, i.e. sparseness, decorrelation, and statistical independence, are related to each other; (ii) propose to evaluate the different models by verifiable performance measures; and (iii) analyse the classification performance on images of handwritten digits (MNIST database). We first employ the SPARSENET algorithm (Olshausen, 1998) to derive a local filter basis (on 13 × 13 pixel windows). We then filter the images in the database (28 × 28 pixel images of digits) and reduce the dimensionality of the resulting feature space by selecting the locally maximal filter responses. We then train a support vector machine on a training set to classify the digits and report results obtained on a separate test set. Currently, the best state-of-the-art result on the MNIST database has an error rate of 0.4%. This result, however, has been obtained by using explicit knowledge that is specific to the data (an elastic distortion model for digits). We obtain an error rate of 0.55%, which is second best but does not use explicit data-specific knowledge. In particular, it outperforms by far all methods that do not use data-specific knowledge.
NASA Technical Reports Server (NTRS)
Belton, M. J. S.; Aksnes, K.; Davies, M. E.; Hartmann, W. K.; Millis, R. L.; Owen, T. C.; Reilly, T. H.; Sagan, C.; Suomi, V. E.; Collins, S. A., Jr.
1972-01-01
A recommended imaging system is outlined for use aboard the Outer Planet Grand Tour Explorer. The system features the high angular resolution capacity necessary to accommodate large encounter distances, and to satisfy the demand for a reasonable amount of time coverage. Specifications for all components within the system are provided in detail.
Mane, Vijay Mahadeo; Jadhav, D V
2017-05-24
Diabetic retinopathy (DR) is the most common diabetic eye disease. Doctors use various test methods to detect DR, but the limited availability of these methods and the need for domain experts pose a challenge for automatic DR detection. To meet this objective, a variety of algorithms has been developed in the literature. In this paper, we propose a system consisting of a novel sparking process and a holoentropy-based decision tree for the automatic classification of DR images, to further improve effectiveness. The sparking process algorithm is developed for the automatic segmentation of blood vessels through the estimation of an optimal threshold. The holoentropy-enabled decision tree is newly developed for the automatic classification of retinal images into normal or abnormal using hybrid features, which preserve disease-level patterns beyond the signal level of the features. The effectiveness of the proposed system is analyzed on the standard fundus image databases DIARETDB0 and DIARETDB1 in terms of sensitivity, specificity and accuracy. The proposed system yields sensitivity, specificity and accuracy values of 96.72%, 97.01% and 96.45%, respectively. The experimental results reveal that the proposed technique outperforms the existing algorithms.
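The paper's "sparking" threshold estimator is not detailed in the abstract; as a baseline illustration of optimal-threshold segmentation of the same flavor, here is a textbook Otsu threshold (maximize between-class variance over a histogram), which the sparking process would replace:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the intensity threshold maximizing between-class variance.
    Baseline only: the paper's 'sparking' estimator is a different method."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0   # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```

Pixels above the returned threshold form the foreground mask (here, candidate vessel pixels in a suitably preprocessed channel).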
NASA Astrophysics Data System (ADS)
Zhuo, Shuangmu; Yan, Jie; Kang, Yuzhan; Xu, Shuoyu; Peng, Qiwen; So, Peter T. C.; Yu, Hanry
2014-07-01
Various structural features on the liver surface reflect functional changes in the liver. The visualization of these surface features with molecular specificity is of particular relevance to understanding the physiology and diseases of the liver. Using multi-photon microscopy (MPM), we have developed a label-free, three-dimensional quantitative and sensitive method to visualize various structural features of liver surface in living rat. MPM could quantitatively image the microstructural features of liver surface with respect to the sinuosity of collagen fiber, the elastic fiber structure, the ratio between elastin and collagen, collagen content, and the metabolic state of the hepatocytes that are correlative with the pathophysiologically induced changes in the regions of interest. This study highlights the potential of this technique as a useful tool for pathophysiological studies and possible diagnosis of the liver diseases with further development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuo, Shuangmu, E-mail: shuangmuzhuo@gmail.com, E-mail: hanry-yu@nuhs.edu.sg; Institute of Laser and Optoelectronics Technology, Fujian Normal University, Fuzhou 350007; Yan, Jie
2014-07-14
Various structural features on the liver surface reflect functional changes in the liver. The visualization of these surface features with molecular specificity is of particular relevance to understanding the physiology and diseases of the liver. Using multi-photon microscopy (MPM), we have developed a label-free, three-dimensional quantitative and sensitive method to visualize various structural features of liver surface in living rat. MPM could quantitatively image the microstructural features of liver surface with respect to the sinuosity of collagen fiber, the elastic fiber structure, the ratio between elastin and collagen, collagen content, and the metabolic state of the hepatocytes that are correlative with the pathophysiologically induced changes in the regions of interest. This study highlights the potential of this technique as a useful tool for pathophysiological studies and possible diagnosis of the liver diseases with further development.
Fabrication and optical characterization of imaging fiber-based nanoarrays.
Tam, Jenny M; Song, Linan; Walt, David R
2005-09-15
In this paper, we present a technique for fabricating arrays with a density at least 90 times higher than previously published. Specifically, we discuss the fabrication of two imaging fiber-based nanoarrays, one with 700 nm features and another with 300 nm features. With up to 4.5×10^6 array elements/mm^2, these nanoarrays have an ultra-high packing density. A straightforward etching protocol is used to create nanowells into which beads can be deposited. These beads comprise the sensing elements of the nanoarray. Deposition of the nanobeads into the nanowells using two techniques is described. The surface characteristics of the etched arrays are examined with atomic force microscopy and scanning electron microscopy. Fluorescence microscopy was used to observe the arrays. The 300 nm array features and the 500 nm center-to-center distance approach the minimum feature sizes viewable using conventional light microscopy.
McDermott, Edel; Mullen, Georgina; Moloney, Jenny; Keegan, Denise; Byrne, Kathryn; Doherty, Glen A; Cullen, Garret; Malone, Kevin; Mulcahy, Hugh E
2015-02-01
Body image refers to a person's sense of their physical appearance and body function. A negative body image self-evaluation may result in psychosocial dysfunction. Crohn's disease and ulcerative colitis are associated with disabling features, and body image dissatisfaction is a concern for many patients with inflammatory bowel disease (IBD). However, no study has assessed body image and its comorbidities in patients with IBD using validated instruments. Our aim was to explore body image dissatisfaction in patients with IBD and assess its relationship with biological and psychosocial variables. We studied 330 patients (median age, 36 yr; range, 18-83; 169 men) using quantitative and qualitative methods. Patients completed a self-administered questionnaire that included a modified Hopwood Body Image Scale, the Cash Body Image Disturbance Questionnaire, and other validated instruments. Clinical and disease activity data were also collected. Body image dissatisfaction was associated with disease activity (P < 0.001) and steroid treatment (P = 0.03) but not with immunotherapy (P = 0.57) or biological (P = 0.55) therapy. Body image dissatisfaction was also associated with low levels of general (P < 0.001) and IBD-specific (P < 0.001) quality of life, self-esteem (P < 0.001), and sexual satisfaction (P < 0.001), and with high levels of anxiety (P < 0.001) and depression (P < 0.001). Qualitative analysis indicated that patients were concerned about both physical and psychosocial consequences of body image dissatisfaction, including steroid side effects and impaired work and social activities. Body image dissatisfaction is common in patients with IBD, relates to specific clinical variables and is associated with significant psychological dysfunction. Its measurement is warranted as part of a comprehensive patient-centered IBD assessment.
Marečková, Klára; Weinbrand, Zohar; Chakravarty, M Mallar; Lawrence, Claire; Aleong, Rosanne; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2011-11-01
Sex identification of a face is essential for social cognition. Still, perceptual cues indicating the sex of a face, and mechanisms underlying their development, remain poorly understood. Previously, our group described objective age- and sex-related differences in faces of healthy male and female adolescents (12-18 years of age), as derived from magnetic resonance images (MRIs) of the adolescents' heads. In this study, we presented these adolescent faces to 60 female raters to determine which facial features most reliably predicted subjective sex identification. Identification accuracy correlated highly with specific MRI-derived facial features (e.g. broader forehead, chin, jaw, and nose). Facial features that most reliably cued male identity were associated with plasma levels of testosterone (above and beyond age). Perceptible sex differences in face shape are thus associated with specific facial features whose emergence may be, in part, driven by testosterone. Copyright © 2011 Elsevier Inc. All rights reserved.
Automatic classification of endoscopic images for premalignant conditions of the esophagus
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Gambaretto, Gloria; Grisan, Enrico
2016-03-01
Barrett's esophagus (BE) is a precancerous complication of gastroesophageal reflux disease in which normal stratified squamous epithelium lining the esophagus is replaced by intestinal metaplastic columnar epithelium. Repeated endoscopies and multiple biopsies are often necessary to establish the presence of intestinal metaplasia. Narrow Band Imaging (NBI) is an imaging technique commonly used with endoscopies that enhances the contrast of vascular pattern on the mucosa. We present a computer-based method for the automatic normal/metaplastic classification of endoscopic NBI images. Superpixel segmentation is used to identify and cluster pixels belonging to uniform regions. From each uniform clustered region of pixels, eight features maximizing differences among normal and metaplastic epithelium are extracted for the classification step. For each superpixel, the three mean intensities of each color channel are firstly selected as features. Three added features are the mean intensities for each superpixel after separately applying to the red-channel image three different morphological filters (top-hat filtering, entropy filtering and range filtering). The last two features require the computation of the Grey-Level Co-Occurrence Matrix (GLCM), and are reflective of the contrast and the homogeneity of each superpixel. The classification step is performed using an ensemble of 50 classification trees, with a 10-fold cross-validation scheme by training the classifier at each step on a random 70% of the images and testing on the remaining 30% of the dataset. Sensitivity and Specificity are respectively of 79.2% and 87.3%, with an overall accuracy of 83.9%.
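The last two GLCM-derived features (contrast and homogeneity) can be sketched as below for a single superpixel patch; this minimal version quantizes to a few gray levels and uses only a horizontal offset of one pixel, both of which are simplifying assumptions rather than the paper's exact settings:

```python
import numpy as np

def glcm_features(patch, levels=8):
    """Contrast and homogeneity from a gray-level co-occurrence matrix
    built with a horizontal (0-degree, distance-1) offset.
    `patch` holds floats in [0, 1]."""
    q = np.minimum((patch * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    for row in q:
        for a, b in zip(row[:-1], row[1:]):                   # horizontal pairs
            glcm[a, b] += 1
    glcm /= max(glcm.sum(), 1)                                # normalize
    i, j = np.indices((levels, levels))
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return contrast, homogeneity
```

A flat patch gives zero contrast and homogeneity 1, while strongly alternating texture drives contrast up, which is what lets these two features separate smooth from textured mucosal regions.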
Duan, Xiaohui; Ban, Xiaohua; Zhang, Xiang; Hu, Huijun; Li, Guozhao; Wang, Dongye; Wang, Charles Qian; Zhang, Fang; Shen, Jun
2016-12-01
To determine MR imaging features and staging accuracy of neuroendocrine carcinomas (NECs) of the uterine cervix with pathological correlations. Twenty-six patients with histologically proven NECs, 60 patients with squamous cell carcinomas (SCCs), and 30 patients with adenocarcinomas of the uterine cervix were included. The clinical data, pathological findings, and MRI findings were reviewed retrospectively. MRI features of cervical NECs, SCCs, and adenocarcinomas were compared, and MRI staging of cervical NECs was compared with the pathological staging. Cervical NECs showed a higher tendency toward a homogeneous signal intensity on T2-weighted imaging and a homogeneous enhancement pattern, as well as a lower ADC value of tumour and a higher incidence of lymphadenopathy, compared with SCCs and adenocarcinomas (P < 0.05). An ADC value cutoff of 0.90 × 10^-3 mm^2/s was robust for differentiation between cervical NECs and other cervical cancers, with a sensitivity of 63.3% and a specificity of 95%. In 21 patients who underwent radical hysterectomy and lymphadenectomy, the overall accuracy of tumour staging by MR imaging was 85.7% with reference to pathology staging. Homogeneous lesion texture and low ADC value are likely suggestive features of cervical NECs and MR imaging is reliable for the staging of cervical NECs. • Cervical NECs show a tendency of lesion homogeneity and lymphadenopathy • Low ADC values are found in cervical NECs • MRI is an accurate imaging modality for the cervical NEC staging.
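The reported ADC cutoff behaves as a simple decision rule (call NEC when ADC falls below 0.90 × 10^-3 mm^2/s); a sketch of evaluating such a rule's sensitivity and specificity on hypothetical data:

```python
def adc_rule(adc_values, labels, cutoff=0.90):
    """Call NEC (label 1) when ADC (in units of 10^-3 mm^2/s) is below
    the cutoff; return (sensitivity, specificity) of that rule.
    The adc/label values fed in here are hypothetical, not study data."""
    tp = sum(1 for a, y in zip(adc_values, labels) if a < cutoff and y == 1)
    fn = sum(1 for a, y in zip(adc_values, labels) if a >= cutoff and y == 1)
    tn = sum(1 for a, y in zip(adc_values, labels) if a >= cutoff and y == 0)
    fp = sum(1 for a, y in zip(adc_values, labels) if a < cutoff and y == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

Sweeping `cutoff` over the observed ADC range and plotting sensitivity against 1-specificity is the usual way such a threshold (and its 63.3%/95% operating point) is chosen.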
AE (Acoustic Emission) for Flip-Chip CGA/FCBGA Defect Detection
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
C-mode scanning acoustic microscopy (C-SAM) is a nondestructive inspection technique that uses ultrasound to show the internal features of a specimen. A very high or ultra-high-frequency ultrasound passes through a specimen to produce a visible acoustic microimage (AMI) of its inner features. As ultrasound travels into a specimen, the wave is absorbed, scattered, or reflected. The response is highly sensitive to the elastic properties of the materials and is especially sensitive to air gaps. This specific characteristic makes AMI the preferred method for finding "air gaps" such as delamination, cracks, voids, and porosity. C-SAM analysis, which is a type of AMI, was widely used in the past for evaluation of plastic microelectronic circuits, especially for detecting delamination of direct die bonding. With the introduction of flip-chip die attachment in a package, its use has been expanded to nondestructive characterization of the flip-chip solder bumps and underfill. Figure 1.1 compares visual and C-SAM inspection approaches for defect detection, especially for solder joint interconnections and hidden defects. C-SAM is specifically useful for package features like internal cracks and delamination. C-SAM not only allows for the visualization of interior features, it can also produce images on a layer-by-layer basis. Visual inspection, however, is superior to C-SAM only for exposed features, including solder dewetting, microcracks, and contamination. Ideally, a combination of various inspection techniques - visual, optical and SEM microscopy, C-SAM, and X-ray - needs to be performed in order to assure quality at the part, package, and system levels. This report presents evaluations performed on various advanced packages/assemblies, especially the flip-chip die version of ball grid array/column grid array (BGA/CGA), using C-SAM equipment. Both external and in-house equipment were used for evaluation.
The outside facility provided images of the key features that could be detected using the most advanced C-SAM equipment with a skilled operator. The investigation continued using in-house equipment, with its limitations. For comparison, representative X-rays of the assemblies were also gathered to show the key defect detection features of these non-destructive techniques. The key images gathered and compared are as follows. The 2D X-ray and C-SAM images of a plastic LGA assembly were compared, showing features that could be detected by either NDE technique; for this specific case, X-ray was the clear winner. Flip-chip CGA and FCBGA assemblies with and without heat sinks were evaluated by C-SAM; only the FCCGA package without a heat sink could be fully analyzed for underfill and bump quality, and cross-sectional microscopy did not reveal the peripheral delamination features detected by C-SAM. A number of fine-pitch PBGA assemblies were analyzed by C-SAM; even though the internal features of the package assemblies could be detected, C-SAM was unable to detect solder joint failure at either the package or board level. Twenty touch-ups with a soldering iron at a 700°F tip temperature, each about 5 seconds in duration, did not induce defects detectable in C-SAM images; other techniques need to be considered to induce known defects for characterization. Given NASA's emphasis on the use of microelectronic packages and assemblies and on quality assurance for workmanship defect detection, understanding the key features of the various inspection systems that detect defects in the early stages of package and assembly is critical to developing approaches that will minimize future failures. Additional specific, tailored non-destructive inspection approaches could enable low-risk insertion of these advanced electronic packages having hidden and fine features.
Investigation of Hall Effect Thruster Channel Wall Erosion Mechanisms
2016-08-02
pretest height and laser image, c, d) post-test height and laser image. On all the pre-roughened samples, a cell-pattern developed from the random...7.8: Pre- and post-test sample microscopy: Fused silica sample SA6 (loaded), 20x, center of exposed surface, a, b) pretest height and laser image, c, d...stress on the surface features developed during plasma erosion. The experiment is also designed specifically to test the SRH. A test fixture is
Object-oriented feature-tracking algorithms for SAR images of the marginal ice zone
NASA Technical Reports Server (NTRS)
Daida, Jason; Samadani, Ramin; Vesecky, John F.
1990-01-01
An unsupervised method that chooses and applies the most appropriate tracking algorithm from among several sea-ice tracking algorithms is reported. In contrast to current unsupervised methods, this method chooses and applies an algorithm by partially examining a sequential image pair to draw inferences about what was examined. Based on these inferences, the reported method subsequently chooses which algorithm to apply to specific areas of the image pair where that algorithm should work best.
Joint Feature Selection and Classification for Multilabel Learning.
Huang, Jun; Li, Guorong; Huang, Qingming; Wu, Xindong
2018-03-01
Multilabel learning deals with examples having multiple class labels simultaneously. It has been applied to a variety of applications, such as text categorization and image annotation. A large number of algorithms have been proposed for multilabel learning, most of which concentrate on multilabel classification problems, and only a few of them are feature selection algorithms. Current multilabel classification models are mainly built on a single data representation composed of all the features, which are shared by all the class labels. Since each class label might be decided by some specific features of its own, and the problems of classification and feature selection are often addressed independently, in this paper we propose a novel method, named JFSC, which can perform joint feature selection and classification for multilabel learning. Different from many existing methods, JFSC learns both shared features and label-specific features by considering pairwise label correlations, and simultaneously builds the multilabel classifier on the learned low-dimensional data representations. A comparative study with state-of-the-art approaches demonstrates competitive performance of the proposed method in both classification and feature selection for multilabel learning.
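The shared versus label-specific split at the heart of JFSC can be illustrated with a toy scoring scheme. This is not the JFSC optimization itself (which solves a joint sparse objective with pairwise label correlations); it is a hedged sketch in which features are scored per label by absolute correlation, the top-k per label are taken as label-specific, and features selected for every label are called shared.

```python
import numpy as np

def label_specific_features(X, Y, k=2):
    """Toy illustration of shared vs. label-specific feature selection.

    X: (n_samples, n_features), Y: (n_samples, n_labels) binary.
    Returns a dict label -> top-k feature indices, plus the shared set.
    """
    n_labels = Y.shape[1]
    per_label = {}
    for j in range(n_labels):
        y = Y[:, j].astype(float)
        # absolute Pearson correlation of each feature with label j
        scores = np.abs([np.corrcoef(X[:, f], y)[0, 1]
                         for f in range(X.shape[1])])
        per_label[j] = set(np.argsort(scores)[::-1][:k])
    shared = set.intersection(*per_label.values())
    return per_label, shared

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = np.column_stack([
    (X[:, 0] + X[:, 1] > 0).astype(int),   # label 0 driven by features 0, 1
    (X[:, 0] + X[:, 2] > 0).astype(int),   # label 1 driven by features 0, 2
])
per_label, shared = label_specific_features(X, Y, k=2)
```

On this synthetic data, feature 0 comes out as shared while features 1 and 2 are specific to their respective labels, which is the distinction JFSC formalizes in its learned representation.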
Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio
2018-02-01
Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed on a voxel- and patient-level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.
1986-01-14
Range: 12.9 million kilometers (8.0 million miles) P-29468C This false-color Voyager photograph of Uranus shows a discrete cloud seen as a bright streak near the planet's limb. The cloud visible here is the most prominent feature seen in a series of Voyager images designed to track atmospheric motions. The occasional donut-shaped features, including one at the bottom, are shadows cast by dust on the camera optics. The picture is a highly processed composite of three images. The processing necessary to bring out the faint features on the planet also brings out these camera blemishes. The three separate images used were shot through violet, blue, and orange filters. Each color image showed the cloud to a different degree; because they were not exposed at the same time, the images were processed to provide a good spatial match. In a true-color image, the cloud would be barely discernible; the false color helps to bring out additional details. The different colors imply variations in vertical structure, but as yet it is not possible to be specific about such differences. One possibility is that the uranian atmosphere may contain smog-like constituents, in which case some color differences may represent differences in how these molecules are distributed.
Zheng, Yingyan; Xiao, Zebin; Zhang, Hua; She, Dejun; Lin, Xuehua; Lin, Yu; Cao, Dairong
2018-04-01
To evaluate the discriminative value of conventional magnetic resonance imaging between benign and malignant palatal tumors. Conventional magnetic resonance imaging features of 130 patients with palatal tumors confirmed by histopathologic examination were retrospectively reviewed. Clinical data and imaging findings were assessed between benign and malignant tumors and between benign and low-grade malignant salivary gland tumors. The variables that were significant in differentiating benign from malignant lesions were further identified using logistic regression analysis. Moreover, imaging features of each common palatal histologic entity were statistically analyzed against the rest of the tumors to define their typical imaging features. Older age, partially defined and ill-defined margins, and absence of a capsule were highly suggestive of malignant palatal tumors, especially ill-defined margins (β = 6.400). Prediction of malignant palatal tumors achieved a sensitivity of 92.8% and a specificity of 85.6%. In addition, irregular shape, ill-defined margins, lack of a capsule, perineural spread, and invasion of surrounding structures were more often associated with low-grade malignant salivary gland tumors. Conventional magnetic resonance imaging is useful for differentiating benign from malignant palatal tumors as well as benign salivary gland tumors from low-grade salivary gland malignancies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Landmark-based deep multi-instance learning for brain disease diagnosis.
Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang
2018-01-01
In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.
Jaya, T; Dheeba, J; Singh, N Albert
2015-12-01
Diabetic retinopathy is a major cause of vision loss in diabetic patients. Currently, there is a need for making decisions using intelligent computer algorithms when screening a large volume of data. This paper presents an expert decision-making system designed using a fuzzy support vector machine (FSVM) classifier to detect hard exudates in fundus images. The optic discs in the colour fundus images are segmented to avoid false alarms using morphological operations and the circular Hough transform. To discriminate between exudate and non-exudate pixels, colour and texture features are extracted from the images. These features are given as input to the FSVM classifier. The classifier analysed 200 retinal images collected from diabetic retinopathy screening programmes. The tests made on the retinal images show that the proposed detection system has better discriminating power than the conventional support vector machine. With the best combination of FSVM and feature sets, the area under the receiver operating characteristic curve reached 0.9606, which corresponds to a sensitivity of 94.1% with a specificity of 90.0%. The results suggest that detecting hard exudates using FSVM contributes to computer-assisted detection of diabetic retinopathy and can serve as a decision support system for ophthalmologists.
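The "fuzzy" part of an FSVM lies in per-sample membership weights that down-weight likely outliers before SVM training. One common membership function (distance to the class mean, normalized by the class radius) is sketched below; the paper's exact membership definition and feature set are not reproduced here, so treat this as an assumed illustration:

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-6):
    """Assign each training sample a fuzzy membership in (0, 1]: samples far
    from their class mean (likely noise/outliers) get lower weight.
    This is one common FSVM membership function; the paper's exact choice
    may differ."""
    m = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        r = d.max() + delta                  # class radius
        m[idx] = 1.0 - d / r                 # 1 at the center, -> 0 at the edge
    return m

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],   # class 0 (last an outlier)
              [10.0, 10.0], [10.1, 10.0]])          # class 1
y = np.array([0, 0, 0, 1, 1])
w = fuzzy_memberships(X, y)
```

These weights would typically scale the per-sample misclassification penalty (e.g., an SVM's C) so that noisy pixels influence the decision boundary less than confident ones.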
Masks in Imaging Flow Cytometry
Dominical, Venina; Samsel, Leigh; McCoy, J. Philip
2016-01-01
Data analysis in imaging flow cytometry incorporates elements of flow cytometry together with other aspects of morphological analysis of images. A crucial early step in this analysis is the creation of a mask to distinguish the portion of the image upon which further examination of specified features can be performed. Default masks are provided by the manufacturer of the imaging flow cytometer, but additional custom masks can be created by the individual user for specific applications. Flawed or inaccurate masks can have a substantial negative impact on the overall analysis of a sample, thus great care must be taken to ensure the accuracy of masks. Here we discuss various types of masks and cite examples of their use. Furthermore, we provide our insight into how to approach selecting and assessing the optimal mask for a specific analysis. PMID:27461256
Nano-Optics for Chemical and Materials Characterization
NASA Astrophysics Data System (ADS)
Beversluis, Michael; Stranick, Stephan
2007-03-01
Light microscopy can provide non-destructive, real-time, three-dimensional imaging with chemically-specific contrast, but diffraction frequently limits the resolution to roughly 200 nm. Recently, structured illumination techniques have allowed fluorescence imaging to reach 50 nm resolution [1]. Since these fluorescence techniques were developed for use in microbiology, a key challenge is to take the resolution-enhancing features and apply them to contrast mechanisms like vibrational spectroscopy (e.g., Raman and CARS microscopy) that provide morphological and chemically specific imaging. We are developing a new hybrid technique that combines the resolution enhancement of structured illumination microscopy with scanning techniques that can record hyperspectral images with 100 nm spatial resolution. We will show such superresolving images of semiconductor nanostructures and discuss the advantages and requirements for this technique. Reference: 1. M. G. L. Gustafsson, P. Natl. Acad. Sci. USA 102, 13081-13086 (2005).
NASA Astrophysics Data System (ADS)
Le, Minh Hung; Chen, Jingyu; Wang, Liang; Wang, Zhiwei; Liu, Wenyu; Cheng, Kwang-Ting (Tim); Yang, Xin
2017-08-01
Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to ‘see’ the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. 
To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
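The loss structure described above (a classification loss plus a similarity term that pushes the ADC and T2WI branches toward consistent features for the same lesion) can be written compactly. The numpy sketch below is a simplified rendering of that form, not the authors' exact formulation; the weight `lam` is an assumed hyperparameter:

```python
import numpy as np

def fused_loss(f_adc, f_t2, logits, labels, lam=0.5):
    """Classification loss + similarity loss for two-modality fusion.
    f_adc, f_t2 : (batch, d) features from the two CNN branches.
    logits      : (batch, n_classes) fused predictions.
    labels      : (batch,) integer class labels.
    lam         : weight of the similarity term (assumed hyperparameter)."""
    # cross-entropy on the fused prediction (numerically stable softmax)
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(labels)), labels].mean()
    # similarity loss: mean squared distance between modality features,
    # encouraging consistent features from ADC and T2WI
    sim = np.mean(np.sum((f_adc - f_t2) ** 2, axis=1))
    return ce + lam * sim

f1 = np.zeros((2, 4))
f2 = np.ones((2, 4))
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 1])
loss = fused_loss(f1, f2, logits, labels)
```

During training, minimizing the similarity term mutually guides the feature learning of both branches, which is the intuition the authors give for the improved fusion.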
Localized Chemical Remodeling for Live Cell Imaging of Protein-Specific Glycoform.
Hui, Jingjing; Bao, Lei; Li, Siqiao; Zhang, Yi; Feng, Yimei; Ding, Lin; Ju, Huangxian
2017-07-03
Live cell imaging of protein-specific glycoforms is important for the elucidation of glycosylation mechanisms and identification of disease states. The currently used metabolic oligosaccharide engineering (MOE) technology routinely permits global chemical remodeling (GCM) of a carbohydrate site of interest, but can exert unnecessary whole-cell scale perturbation and generate unpredictable metabolic efficiency issues. A localized chemical remodeling (LCM) strategy for efficient and reliable access to protein-specific glycoform information is reported. The proof-of-concept protocol developed for MUC1-specific terminal galactose/N-acetylgalactosamine (Gal/GalNAc) combines affinity binding, off-on switchable catalytic activity, and proximity catalysis to create a reactive handle for bioorthogonal labeling and imaging. Noteworthy assay features associated with LCM as compared with MOE include minimum target cell perturbation, a short reaction timeframe, effectiveness as a molecular ruler, and quantitative analysis capability. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it; Seregni, Matteo; Fattori, Giovanni
Purpose: This study applied automatic feature detection on cine–magnetic resonance imaging (MRI) liver images in order to provide a prospective comparison between MRI-guided and surrogate-based tracking methods for motion-compensated liver radiation therapy. Methods and Materials: In a population of 30 subjects (5 volunteers plus 25 patients), 2 oblique sagittal slices were acquired across the liver at high temporal resolution. An algorithm based on scale invariant feature transform (SIFT) was used to extract and track multiple features throughout the image sequence. The position of abdominal markers was also measured directly from the image series, and the internal motion of each feature was quantified through multiparametric analysis. Surrogate-based tumor tracking with a state-of-the-art external/internal correlation model was simulated. The geometrical tracking error was measured, and its correlation with external motion parameters was investigated. Finally, the potential gain in tracking accuracy relying on MRI guidance was quantified as a function of the maximum allowed tracking error. Results: An average of 45 features was extracted for each subject across the whole liver. The multiparametric motion analysis reported relevant inter- and intrasubject variability, highlighting the value of patient-specific and spatially-distributed measurements. Surrogate-based tracking errors (relative to the motion amplitude) were in the range of 7% to 23% (1.02-3.57 mm) and were significantly influenced by external motion parameters. The gain of MRI guidance compared to surrogate-based motion tracking was larger than 30% in 50% of the subjects when considering a 1.5-mm tracking error tolerance. Conclusions: Automatic feature detection applied to cine-MRI allows detailed liver motion description to be obtained.
Such information was used to quantify the performance of surrogate-based tracking methods and to provide a prospective comparison with respect to MRI-guided radiation therapy, which could support the definition of patient-specific optimal treatment strategies.
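The surrogate-based tracking being benchmarked follows a generic pattern: fit a correlation model between external surrogate motion and internal feature motion on a training window, predict internal motion afterwards, and score the residual as tracking error. Below is a minimal linear stand-in with synthetic 1-D traces (amplitudes and noise are assumptions; the study's actual model is a state-of-the-art external/internal correlation model):

```python
import numpy as np

# Synthetic 1-D traces: internal liver motion correlates with an external
# abdominal surrogate, plus noise (illustration only; real motion is richer).
t = np.linspace(0, 30, 600)                       # 30 s sampled at 20 Hz
external = 5.0 * np.sin(2 * np.pi * 0.25 * t)     # surrogate marker (mm)
rng = np.random.default_rng(1)
internal = 2.2 * external + 1.0 + rng.normal(0, 0.4, t.size)  # feature (mm)

# Fit a linear correlation model on the first half, test on the second half.
half = t.size // 2
a, b = np.polyfit(external[:half], internal[:half], 1)
pred = a * external[half:] + b                    # surrogate-based prediction
err = np.abs(pred - internal[half:])              # geometrical tracking error
mean_err = err.mean()
```

In the study, the analogous residuals amounted to 7% to 23% of the motion amplitude; MRI guidance removes the correlation-model step entirely, which is where its accuracy gain comes from.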
Salient object detection method based on multiple semantic features
NASA Astrophysics Data System (ADS)
Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei
2018-04-01
Existing salient object detection models can only detect the approximate location of a salient object, or they highlight the background. To resolve this problem, a salient object detection method based on image semantic features was proposed. First, three novel saliency features are presented in this paper: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, the multiple saliency features were trained with random detection windows. Third, a Naive Bayesian model was used to combine these features for saliency detection. Results on public datasets showed that our method performs well: the location of the salient object can be pinpointed, and the salient object can be accurately detected and marked by a specific window.
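Under a Naive Bayes combination with equal priors, scoring a detection window reduces to summing per-feature log-likelihood ratios. The sketch below follows that scheme; the feature names EF/CF/LF come from the abstract, while the Gaussian class-conditional parameters are assumptions standing in for values learned from training windows:

```python
import math

def naive_bayes_salient(window_feats, params):
    """Score a detection window as salient vs. background.
    window_feats : dict feature name -> value (e.g. 'EF', 'CF', 'LF').
    params       : dict feature name -> ((mu_sal, sd_sal), (mu_bg, sd_bg)),
                   Gaussian class-conditional parameters (assumed here)."""
    def log_gauss(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
    score = 0.0
    for name, x in window_feats.items():
        (mu_s, sd_s), (mu_b, sd_b) = params[name]
        # Naive Bayes: features assumed conditionally independent, so the
        # joint log-likelihood ratio is the sum of per-feature ratios.
        score += log_gauss(x, mu_s, sd_s) - log_gauss(x, mu_b, sd_b)
    return score  # > 0 favours "salient" under equal priors

params = {'EF': ((0.8, 0.1), (0.2, 0.1)),
          'CF': ((0.7, 0.15), (0.3, 0.15)),
          'LF': ((0.6, 0.2), (0.4, 0.2))}
s_pos = naive_bayes_salient({'EF': 0.75, 'CF': 0.65, 'LF': 0.55}, params)
s_neg = naive_bayes_salient({'EF': 0.25, 'CF': 0.35, 'LF': 0.45}, params)
```

Sliding this scorer over random detection windows and keeping the highest-scoring window is one simple way to "mark the salient object by a specific window" as the abstract describes.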
MR imaging guidance for minimally invasive procedures
NASA Astrophysics Data System (ADS)
Wong, Terence Z.; Kettenbach, Joachim; Silverman, Stuart G.; Schwartz, Richard B.; Morrison, Paul R.; Kacher, Daniel F.; Jolesz, Ferenc A.
1998-04-01
Image guidance is one of the major challenges common to all minimally invasive procedures, including biopsy, thermal ablation, endoscopy, and laparoscopy. It is essential for (1) identifying the target lesion, (2) planning the minimally invasive approach, and (3) monitoring the therapy as it progresses. MRI is an ideal imaging modality for this purpose, providing high soft tissue contrast and multiplanar imaging capability with no ionizing radiation. An interventional/surgical MRI suite has been developed at Brigham and Women's Hospital which provides multiplanar imaging guidance during surgery, biopsy, and thermal ablation procedures. The 0.5T MRI system (General Electric Signa SP) features open vertical access, allowing intraoperative imaging to be performed. An integrated navigational system permits near real-time control of imaging planes and provides interactive guidance for positioning various diagnostic and therapeutic probes. MR imaging can also be used to monitor cryotherapy as well as high-temperature thermal ablation procedures using RF, laser, microwave, or focused ultrasound. Design features of the interventional MRI system will be discussed, and techniques will be described for interactive image acquisition and tracking of interventional instruments. Applications for interactive and near-real-time imaging will be presented, as well as examples of specific procedures performed using MRI guidance.
Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David
1989-01-01
Interaction with tumbling objects will become more common as human activities in space expand. When attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions, or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
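The symmetric nearest neighbor (SNN) filter used for image enhancement can be stated simply: for each pixel, compare each pair of neighbors that are symmetric about the center, keep from each pair the value closer to the center pixel, and average the picks. A small serial numpy version follows (the original ran as a parallel algorithm on the CM-2; border handling here is a simplification):

```python
import numpy as np

def snn_filter(img, radius=1):
    """Symmetric nearest neighbor filter: edge-preserving smoothing.
    For each opposing neighbor pair (symmetric about the center pixel),
    keep the member closer in value to the center, then average the picks."""
    img = img.astype(float)
    out = img.copy()                       # borders are left unfiltered
    h, w = img.shape
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) > (0, 0)]       # one representative per pair
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            c = img[y, x]
            picks = []
            for dy, dx in offsets:
                a, b = img[y + dy, x + dx], img[y - dy, x - dx]
                picks.append(a if abs(a - c) <= abs(b - c) else b)
            out[y, x] = np.mean(picks)
    return out

# A step edge survives while an isolated impulse is suppressed:
img = np.tile([0, 0, 0, 100, 100, 100], (6, 1)).astype(float)
img[3, 1] = 80                             # impulse noise on the dark side
smoothed = snn_filter(img)
```

Because each symmetric pair contributes its center-like member, pixels on one side of a step edge never average across the edge, which is why SNN smooths noise without blurring the features the corner extractor depends on.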
Wu, Yu-Tzu; Nash, Paul; Barnes, Linda E; Minett, Thais; Matthews, Fiona E; Jones, Andy; Brayne, Carol
2014-10-22
An association between depressive symptoms and features of the built environment has been reported in the literature. A remaining research challenge is the development of methods to efficiently capture pertinent environmental features in relevant study settings. Visual streetscape images have been used to replace traditional physical audits and directly observe the built environment of communities. The aim of this work is to examine the inter-method reliability of the two audit methods for assessing community environments, with a specific focus on physical features related to mental health. Forty-eight postcodes in urban and rural areas of Cambridgeshire, England were randomly selected from an alphabetical list of streets hosted on a UK property website. The assessment was conducted in July and August 2012 by both physical and visual image audits based on the items in the Residential Environment Assessment Tool (REAT), an observational instrument targeting the micro-scale environmental features related to mental health in UK postcodes. The assessor used the images of Google Street View and virtually "walked through" the streets to conduct the property and street level assessments. Gwet's AC1 coefficients and Bland-Altman plots were used to compare the concordance of the two audits. The results of conducting the REAT by visual image audits generally correspond to direct observations. More variation was found in property level items regarding physical incivilities, with broad limits of agreement, which importantly led to most of the variation in the overall REAT score. Postcodes in urban areas had lower consistency between the two methods than rural areas. Google Street View has the potential to assess environmental features related to mental health with fair reliability and provides a less resource-intensive method of assessing community environments than physical audits.
Radiologic and histopathologic review of rare benign and malignant breast diseases
Dağıstan, Emine; Kızıldağ, Betül; Gürel, Safiye; Barut, Yüksel; Paşaoğlu, Esra
2017-01-01
High social awareness of breast diseases and the rise in breast imaging facilities have led to an increase in the detection of even rare benign and malignant breast lesions. Breast lesions are associated with a broad spectrum of imaging characteristics, and each radiologic imaging technique reflects different characteristics of them. We aimed to increase familiarity of the radiologist with these uncommon lesions as well as correlate histopathologic findings with the radiologic imaging features of the tumors. Histopathologic examination is necessary in the evaluation of such breast lesions, particularly when radiologic images are not definitive for a specific diagnosis. PMID:28508760
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs. post-contrast T1, and (3) identify pre- vs. post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
An automated distinction of DICOM images for lung cancer CAD system
NASA Astrophysics Data System (ADS)
Suzuki, H.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2009-02-01
Automated distinction of medical images is an important preprocessing step in Computer-Aided Diagnosis (CAD) systems. CAD systems have been developed using medical image sets with specific scan conditions and body parts. However, varied examinations are performed at medical sites. The specification of the examination is contained in the DICOM textual meta information. Most DICOM textual meta information can be considered reliable; however, the body part information cannot always be considered reliable. In this paper, we describe an automated distinction of DICOM images as a preprocessing step for a lung cancer CAD system. Our approach uses DICOM textual meta information and low-cost image processing. Firstly, the textual meta information of the DICOM image, such as the scan conditions, is examined. Secondly, the body part shown in the DICOM image is identified by image processing. The identification of body parts is based on anatomical structure, which is represented by features of three regions: body tissue, bone, and air. The method is effective for the practical use of a lung cancer CAD system in medical sites.
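The two-step flow (trust the textual meta information, then verify the unreliable body-part tag from image content) can be sketched as below. The DICOM attribute names follow the standard, but the dict stand-in for a parsed header, the thresholds, and the air/tissue/bone fractions are all assumptions for illustration, not the paper's actual criteria:

```python
def classify_series(meta, region_fractions):
    """Two-step distinction sketch for a lung-cancer CAD intake filter.

    meta             : dict of DICOM textual meta information (a stand-in
                       for a parsed header; tag names follow the standard).
    region_fractions : dict of voxel fractions {'air', 'soft', 'bone'}
                       derived from CT Hounsfield-unit thresholding.
    Returns 'chest', 'not chest', or 'reject'. Thresholds are assumptions."""
    # Step 1: reliable textual meta information (modality, scan conditions).
    if meta.get('Modality') != 'CT':
        return 'reject'
    if float(meta.get('SliceThickness', 99)) > 10.0:
        return 'reject'                    # too coarse for nodule detection
    # Step 2: verify the unreliable BodyPartExamined tag from image content.
    # A chest CT contains a large lung-air fraction inside the body outline
    # and a moderate bone fraction (ribs, spine).
    looks_like_chest = (region_fractions['air'] > 0.2
                        and region_fractions['bone'] < 0.3)
    return 'chest' if looks_like_chest else 'not chest'

r1 = classify_series({'Modality': 'CT', 'SliceThickness': '1.25'},
                     {'air': 0.35, 'soft': 0.55, 'bone': 0.10})
r2 = classify_series({'Modality': 'MR'},
                     {'air': 0.0, 'soft': 1.0, 'bone': 0.0})
```

Only series that pass both gates would be forwarded to the lung cancer CAD pipeline; everything else is routed away cheaply, which is the point of doing the distinction as preprocessing.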
Tu, Haohua; Boppart, Stephen A.
2015-01-01
Clinical translation of coherent anti-Stokes Raman scattering (CARS) microscopy is of great interest because of the advantages of noninvasive label-free imaging, high sensitivity, and chemical specificity. For this to happen, we identify and review the technical barriers that must be overcome. Prior investigations have developed advanced techniques (features), each of which can be used to effectively overcome one particular technical barrier. However, the implementation of one or a small number of these advanced features in previous attempts at clinical translation has often introduced more tradeoffs than benefits. In this review, we outline a strategy that would integrate multiple advanced features to overcome all the technical barriers simultaneously, effectively reduce tradeoffs, and synergistically optimize CARS microscopy for clinical translation. The operation of the envisioned system incorporates coherent Raman micro-spectroscopy for identifying vibrational biomolecular markers of disease and single-frequency (or hyperspectral) Raman imaging of these specific biomarkers for real-time in vivo diagnostics and monitoring. An optimal scheme for clinical CARS micro-spectroscopy of thin ex vivo tissues is also outlined. PMID:23674234
Kim, Hae Young; Park, Ji Hoon; Lee, Yoon Jin; Lee, Sung Soo; Jeon, Jong-June; Lee, Kyoung Ho
2018-04-01
Purpose To perform a systematic review and meta-analysis to identify computed tomographic (CT) features for differentiating complicated appendicitis in patients suspected of having appendicitis and to summarize their diagnostic accuracy. Materials and Methods Studies on diagnostic accuracy of CT features for differentiating complicated appendicitis (perforated or gangrenous appendicitis) in patients suspected of having appendicitis were searched in Ovid-MEDLINE, EMBASE, and the Cochrane Library. Overlapping descriptors used in different studies to denote the same image finding were subsumed under a single CT feature. Pooled diagnostic accuracy of the CT features was calculated by using a bivariate random effects model. CT features with pooled diagnostic odds ratios with 95% confidence intervals not including 1 were considered as informative. Results Twenty-three studies were included, and 184 overlapping descriptors for various CT findings were subsumed under 14 features. Of these, 10 features were informative for complicated appendicitis. There was a general tendency for these features to show relatively high specificity but low sensitivity. Extraluminal appendicolith, abscess, appendiceal wall enhancement defect, extraluminal air, ileus, periappendiceal fluid collection, ascites, intraluminal air, and intraluminal appendicolith showed pooled specificity greater than 70% (range, 74%-100%), but sensitivity was limited (range, 14%-59%). Periappendiceal fat stranding was the only feature that showed high sensitivity (94%; 95% confidence interval: 86%, 98%) but low specificity (40%; 95% confidence interval: 23%, 60%). Conclusion Ten informative CT features for differentiating complicated appendicitis were identified in this study, nine of which showed high specificity, but low sensitivity. © RSNA, 2017 Online supplemental material is available for this article.
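The informativeness criterion used above (a pooled diagnostic odds ratio whose 95% confidence interval excludes 1) is easiest to see on a single 2x2 table; the bivariate random-effects pooling itself is more involved. Below is a worked single-study version with hypothetical counts for one high-specificity, low-sensitivity feature:

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN)/(FP*FN) with a 95% CI from the log-odds standard error.
    A 0.5 continuity correction is applied if any cell is zero."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    se = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se)
    hi = math.exp(math.log(dor) + 1.96 * se)
    return dor, (lo, hi)

# Hypothetical counts for one CT feature in one study:
# sensitivity 30/100 = 30% (low), specificity 190/200 = 95% (high).
dor, (lo, hi) = diagnostic_odds_ratio(tp=30, fp=10, fn=70, tn=190)
informative = lo > 1.0     # CI excludes 1 -> feature counts as informative
```

Even with low sensitivity, a feature like this is "informative" by the review's criterion because its high specificity keeps the odds ratio, and the lower CI bound, well above 1.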
Veterinary software application for comparison of thermograms for pathology evaluation
NASA Astrophysics Data System (ADS)
Pant, Gita; Umbaugh, Scott E.; Dahal, Rohini; Lama, Norsang; Marino, Dominic J.; Sackman, Joseph
2017-09-01
The bilateral symmetry property in mammals allows for the detection of pathology by comparison of opposing sides. For any pathological disorder, thermal patterns differ from those of the normal body part. A software application for veterinary clinics has been under development that takes as input two thermograms of body parts on both sides, one normal and the other unknown, compares them based on extracted features and appropriate similarity and difference measures, and outputs the likelihood of pathology. Here thermographic image data from 19 °C to 40 °C were linearly remapped to create images with 256 gray level values. Features were extracted from these images, including histogram, texture, and spectral features. The comparison metrics used are the vector inner product, Tanimoto, Euclidean, city block, Minkowski, and maximum value metrics. Previous research with the anterior cruciate ligament (ACL) pathology in dogs suggested that any thermogram variation below a threshold of 40% of the Euclidean distance is normal and above 40% is abnormal. Here the 40% threshold was applied to a new ACL image set and achieved a sensitivity of 75%, an improvement from the 55% sensitivity of the previous work. With the new data set it was determined that using a threshold of 20% provided a much improved 92% sensitivity. However, further research is required to determine the corresponding specificity. Additionally, it was found that the anterior view provided better results than the lateral view, and that better results were obtained with all three feature sets than with just the histogram and texture sets. Further experiments are ongoing with larger image datasets and additional pathologies, new features, and comparison metric evaluation to determine more accurate threshold values for separating normal and abnormal images.
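The threshold-based comparison described above can be sketched as follows. The feature vectors, the normalization of the Euclidean distance, and the threshold interpretation are all illustrative assumptions, not the authors' exact implementation:

```python
import math

def euclidean_difference(normal, unknown):
    """Percent difference between two feature vectors,
    normalized by the magnitude of the 'normal' side."""
    dist = math.dist(normal, unknown)
    norm = math.sqrt(sum(x * x for x in normal))
    return 100.0 * dist / norm

def classify(normal, unknown, threshold=40.0):
    """Flag the unknown side as abnormal if it deviates from the
    contralateral (normal) side by more than the threshold percent."""
    return "abnormal" if euclidean_difference(normal, unknown) > threshold else "normal"

# Hypothetical histogram/texture feature vectors for opposing body parts
left = [0.82, 0.41, 0.19, 0.65]
right = [0.80, 0.43, 0.21, 0.63]
print(classify(left, right))                  # small asymmetry -> "normal"
print(classify(left, [0.3, 0.9, 0.6, 0.1]))   # large asymmetry -> "abnormal"
```

Lowering the threshold (e.g., to 20%) trades specificity for sensitivity, which is exactly the tradeoff the abstract leaves open for future work.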
Detection of Fundus Lesions Using Classifier Selection
NASA Astrophysics Data System (ADS)
Nagayoshi, Hiroto; Hiramatsu, Yoshitaka; Sako, Hiroshi; Himaga, Mitsutoshi; Kato, Satoshi
A system for detecting fundus lesions caused by diabetic retinopathy from fundus images is being developed. The system can screen the images in advance in order to reduce the inspection workload on doctors. One of the difficulties that must be addressed in completing this system is how to remove false positives (which tend to arise near blood vessels) without decreasing the detection rate of lesions in other areas. To overcome this difficulty, we developed classifier selection according to the position of a candidate lesion, and we introduced new features that can distinguish true lesions from false positives. A system incorporating classifier selection and these new features was tested in experiments using 55 fundus images with some lesions and 223 images without lesions. The results of the experiments confirm the effectiveness of the proposed system, namely, degrees of sensitivity and specificity of 98% and 81%, respectively.
NASA Astrophysics Data System (ADS)
Qi, K.; Qingfeng, G.
2017-12-01
With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the task is difficult with HRS images because of the complex background and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. The learnt multiscale deep features are then explored to generate visual words, and the spatial arrangement of visual words is captured through adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L
2008-01-01
This paper presents a system to support medical diagnosis and detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture, which can be efficiently extracted from medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures, and an optimum subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on radial basis functions for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.
Imaging of non-neoplastic duodenal diseases. A pictorial review with emphasis on MDCT.
Juanpere, Sergi; Valls, Laia; Serra, Isabel; Osorio, Margarita; Gelabert, Arantxa; Maroto, Albert; Pedraza, Salvador
2018-04-01
A wide spectrum of abnormalities can affect the duodenum, ranging from congenital anomalies to traumatic and inflammatory entities. The location of the duodenum and its close relationship with other organs make it easy to miss or misinterpret duodenal abnormalities on cross-sectional imaging. Endoscopy has largely supplanted fluoroscopy for the assessment of the duodenal lumen. Cross-sectional imaging modalities, especially multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI), enable comprehensive assessment of the duodenum and surrounding viscera. Although overlapping imaging findings can make it difficult to differentiate between some lesions, characteristic features may suggest a specific diagnosis in some cases. Familiarity with pathologic conditions that can affect the duodenum and with the optimal MDCT and MRI techniques for studying them can help ensure diagnostic accuracy in duodenal diseases. The goal of this pictorial review is to illustrate the most common non-malignant duodenal processes. Special emphasis is placed on MDCT features and their endoscopic correlation as well as on avoiding the most common pitfalls in the evaluation of the duodenum. • Cross-sectional imaging modalities enable comprehensive assessment of duodenum diseases. • Causes of duodenal obstruction include intraluminal masses, inflammation and hematomas. • Distinguishing between tumour and groove pancreatitis can be challenging by cross-sectional imaging. • Infectious diseases of the duodenum are difficult to diagnose, as the findings are not specific. • The most common cause of nonvariceal upper gastrointestinal bleeding is peptic ulcer disease.
NASA Astrophysics Data System (ADS)
Liebel, L.; Körner, M.
2016-06-01
In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations such as segmentation or feature extraction can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
Mineral mapping and applications of imaging spectroscopy
Clark, R.N.; Boardman, J.; Mustard, J.; Kruse, F.; Ong, C.; Pieters, C.; Swayze, G.A.
2006-01-01
Spectroscopy is a tool that has been used for decades to identify, understand, and quantify solid, liquid, or gaseous materials, especially in the laboratory. In disciplines ranging from astronomy to chemistry, spectroscopic measurements are used to detect absorption and emission features due to specific chemical bonds, and detailed analyses are used to determine the abundance and physical state of the detected absorbing/emitting species. Spectroscopic measurements have a long history in the study of the Earth and planets. Up to the 1990s remote spectroscopic measurements of Earth and planets were dominated by multispectral imaging experiments that collect high-quality images in a few, usually broad, spectral bands or with point spectrometers that obtained good spectral resolution but at only a few spatial positions. However, a new generation of sensors is now available that combines imaging with spectroscopy to create the new discipline of imaging spectroscopy. Imaging spectrometers acquire data with enough spectral range, resolution, and sampling at every pixel in a raster image so that individual absorption features can be identified and spatially mapped (Goetz et al., 1985).
Besga, Ariadna; Chyzhyk, Darya; González-Ortega, Itxaso; Savio, Alexandre; Ayerdi, Borja; Echeveste, Jon; Graña, Manuel; González-Pinto, Ana
2016-01-01
Late Onset Bipolar Disorder (LOBD) is the arousal of Bipolar Disorder (BD) at old age (>60) without any previous history of disorders. LOBD is often difficult to distinguish from degenerative dementias, such as Alzheimer Disease (AD), due to comorbidities and common cognitive symptoms. Moreover, LOBD prevalence is increasing due to population aging. Biomarkers extracted from blood plasma are not discriminant because both pathologies share pathophysiological features related to neuroinflammation; we therefore look for anatomical features highly correlated with blood biomarkers that allow accurate diagnosis prediction. This may shed some light on the basic biological mechanisms leading to one or the other disease. Moreover, accurate diagnosis is needed to select the best personalized treatment. We look for white matter features which are correlated with blood plasma biomarkers (inflammatory and neurotrophic) discriminating LOBD from AD. A sample of healthy controls (HC) (n=19), AD patients (n=35), and BD patients (n=24) was recruited at the Alava University Hospital. Plasma biomarkers were obtained at recruitment time. Diffusion weighted imaging (DWI) magnetic resonance imaging (MRI) data are obtained for each subject. DWI is preprocessed to obtain diffusion tensor imaging (DTI) data, which is reduced to fractional anisotropy (FA) data. In the selection phase, eigenanatomy finds FA eigenvolumes maximally correlated with plasma biomarkers by partial sparse canonical correlation analysis (PSCCAN). In the analysis phase, we take the eigenvolume projection coefficients as the classification features, carrying out cross-validation of support vector machines (SVM) to obtain the discrimination power of each biomarker's effects. The Johns Hopkins University white matter atlas is used to provide anatomical localizations of the detected feature clusters. 
Classification results show that one specific biomarker of oxidative stress (malondialdehyde, MDA) gives the best classification performance (accuracy 85%, F-score 86%, sensitivity and specificity 87%) in the discrimination of AD and LOBD. Discriminating features appear to be localized in the posterior limb of the internal capsule and the superior corona radiata. It is feasible to support differential diagnosis between LOBD and AD by means of predictive classifiers based on eigenanatomy features computed from FA imaging correlated with plasma biomarkers. In addition, white matter eigenanatomy localizations offer some new avenues to assess the differential pathophysiology of LOBD and AD.
Quantitative imaging as cancer biomarker
NASA Astrophysics Data System (ADS)
Mankoff, David A.
2015-03-01
The ability to assay tumor biologic features and the impact of drugs on tumor biology is fundamental to drug development. Advances in our ability to measure genomics, gene expression, protein expression, and cellular biology have led to a host of new targets for anticancer drug therapy. In translating new drugs into clinical trials and clinical practice, these same assays serve to identify patients most likely to benefit from specific anticancer treatments. As cancer therapy becomes more individualized and targeted, there is an increasing need to characterize tumors and identify therapeutic targets to select therapy most likely to be successful in treating the individual patient's cancer. Thus far assays to identify cancer therapeutic targets or anticancer drug pharmacodynamics have been based upon in vitro assay of tissue or blood samples. Advances in molecular imaging, particularly PET, have led to the ability to perform quantitative non-invasive molecular assays. Imaging has traditionally relied on structural and anatomic features to detect cancer and determine its extent. More recently, imaging has expanded to include the ability to image regional biochemistry and molecular biology, often termed molecular imaging. Molecular imaging can be considered an in vivo assay technique, capable of measuring regional tumor biology without perturbing it. This makes molecular imaging a unique tool for cancer drug development, complementary to traditional assay methods, and a potentially powerful method for guiding targeted therapy in clinical trials and clinical practice. The ability to quantify, in absolute measures, regional in vivo biologic parameters strongly supports the use of molecular imaging as a tool to guide therapy. 
This review summarizes current and future applications of quantitative molecular imaging as a biomarker for cancer therapy, including the use of imaging to (1) identify patients whose tumors express a specific therapeutic target; (2) determine whether the drug reaches the target; (3) identify an early response to treatment; and (4) predict the impact of therapy on long-term outcomes such as survival. The manuscript reviews basic concepts important in the application of molecular imaging to cancer drug therapy, in general, and will discuss specific examples of studies in humans, and highlight future directions, including ongoing multi-center clinical trials using molecular imaging as a cancer biomarker.
Sheikhzadeh, Fahime; Ward, Rabab K; Carraro, Anita; Chen, Zhao Yang; van Niekerk, Dirk; Miller, Dianne; Ehlen, Tom; MacAulay, Calum E; Follen, Michele; Lane, Pierre M; Guillaud, Martial
2015-10-24
Cervical cancer remains a major health problem, especially in developing countries. Colposcopic examination is used to detect high-grade lesions in patients with a history of abnormal pap smears. New technologies are needed to improve the sensitivity and specificity of this technique. We propose to test the potential of fluorescence confocal microscopy to identify high-grade lesions. We examined the quantification of ex vivo confocal fluorescence microscopy to differentiate among normal cervical tissue, low-grade Cervical Intraepithelial Neoplasia (CIN), and high-grade CIN. We sought to (1) quantify nuclear morphology and tissue architecture features by analyzing images of cervical biopsies; and (2) determine the accuracy of high-grade CIN detection via confocal microscopy relative to the accuracy of detection by colposcopic impression. Forty-six biopsies obtained from colposcopically normal and abnormal cervical sites were evaluated. Confocal images were acquired at different depths from the epithelial surface and histological images were analyzed using in-house software. The features calculated from the confocal images compared well with those features obtained from the histological images and histopathological reviews of the specimens (obtained by a gynecologic pathologist). The correlations between two of these features (the nuclear-cytoplasmic ratio and the average of three nearest Delaunay-neighbors distance) and the grade of dysplasia were higher than that of colposcopic impression. The sensitivity of detecting high-grade dysplasia by analysing images collected at the surface of the epithelium, and at 15 and 30 μm below the epithelial surface were respectively 100, 100, and 92 %. Quantitative analysis of confocal fluorescence images showed its capacity for discriminating high-grade CIN lesions vs. low-grade CIN lesions and normal tissues, at different depth of imaging. 
This approach could be used to help clinicians identify high-grade CIN in clinical settings.
Williams, Phillip A; Djordjevic, Bojana; Ayroud, Yasmine; Islam, Shahidul; Gravel, Denis; Robertson, Susan J; Parra-Herran, Carlos
2014-12-01
To identify morphometric features unique to flat epithelial atypia associated with cancer using digital image analysis. Cases with diagnosis of flat epithelial atypia were retrieved and divided into 2 groups: flat epithelial atypia associated with invasive or in situ carcinoma (n = 31) and those without malignancy (n = 27). Slides were digitally scanned. Nuclear features were analyzed on representative images at 20x magnification using digital morphometric software. Parameters related to nuclear shape and size (diameter, perimeter) were similar in both groups. However, cases with malignancy had significantly higher densitometric green (p = 0.02), red (p = 0.03), and grey (p = 0.02) scale levels as compared to cases without cancer. A mean grey densitometric level > 119.45 had 71% sensitivity and 70.4% specificity in detecting cases with concomitant carcinoma. Morphometry of features related to nuclear staining appears to be useful in predicting risk of concurrent malignancy in patients with flat epithelial atypia, when added to a comprehensive histopathologic evaluation.
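The single-cutoff rule reported above can be expressed generically: given per-case densitometric values and labels, sensitivity and specificity at a cutoff follow from the 2x2 counts. The values below are hypothetical, not the study's data:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity/specificity of the rule 'positive if value > cutoff'.

    labels: True for cases with concomitant malignancy, False otherwise.
    """
    tp = sum(v > cutoff and l for v, l in zip(values, labels))
    fn = sum(v <= cutoff and l for v, l in zip(values, labels))
    tn = sum(v <= cutoff and not l for v, l in zip(values, labels))
    fp = sum(v > cutoff and not l for v, l in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical mean grey densitometric levels per case
values = [125.0, 130.2, 118.0, 122.4, 110.5, 115.9, 121.0, 117.2]
labels = [True, True, True, True, False, False, False, False]
sens, spec = sens_spec(values, labels, cutoff=119.45)
```

Sweeping the cutoff over the observed values and picking the point that balances the two rates is how a threshold such as 119.45 would typically be chosen.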
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Although color information is widely used in histopathology work, to date there have been few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. Regarding the reasons behind the varying performance of color spaces, our analysis via mutual information estimation demonstrates that color components in the H and E model are less dependent, and thus most feature discriminative power is concentrated in one channel instead of spreading out among channels as in other color spaces.
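The stain-dependent decomposition idea can be sketched with Beer-Lambert optical density followed by unmixing against assumed stain vectors. The stain matrix below uses the commonly cited Ruifrok-Johnston style estimates, not necessarily the values used by the authors; real pipelines calibrate these per slide and scanner:

```python
import numpy as np

# Assumed H&E stain absorbance vectors (Ruifrok-Johnston style estimates)
STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual channel
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

def he_decompose(rgb):
    """Map an RGB image (H, W, 3) with values in (0, 255] to per-stain
    concentration channels via Beer-Lambert optical density."""
    od = -np.log(np.clip(rgb, 1, 255) / 255.0)        # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)  # unmix: od = conc @ STAINS
    return conc.reshape(rgb.shape)

# Round trip on a synthetic pure-hematoxylin pixel (concentration [1, 0, 0])
pure_h = (255.0 * np.exp(-STAINS[0])).reshape(1, 1, 3)
conc = he_decompose(pure_h)
```

After decomposition, features are computed per concentration channel rather than per raw RGB channel, which is the property the mutual-information analysis in the abstract exploits.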
Prioritizing Scientific Data for Transmission
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Anderson, Robert; Estlin, Tara; DeCoste, Dennis; Gaines, Daniel; Mazzoni, Dominic; Fisher, Forest; Judd, Michele
2004-01-01
A software system has been developed for prioritizing newly acquired geological data onboard a planetary rover. The system has been designed to enable efficient use of limited communication resources by transmitting the data likely to have the most scientific value. This software operates onboard a rover by analyzing collected data, identifying potential scientific targets, and then using that information to prioritize data for transmission to Earth. Currently, the system is focused on the analysis of acquired images, although the general techniques are applicable to a wide range of data modalities. Image prioritization is performed in two main steps. In the first step, the software detects features of interest from each image. In its current application, the system is focused on visual properties of rocks. Thus, rocks are located in each image and rock properties, such as shape, texture, and albedo, are extracted from the identified rocks. In the second step, the features extracted from a group of images are used to prioritize the images using three different methods: (1) identification of key target signatures (finding specific rock features the scientist has identified as important), (2) novelty detection (finding rocks we haven't seen before), and (3) representative rock sampling (finding the most average sample of each rock type). These methods use techniques such as K-means unsupervised clustering and a discrimination-based kernel classifier to rank images based on their interest level.
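The third method above (representative sampling) can be sketched with a toy version: cluster rock feature vectors with K-means and rank each sample by its distance to its cluster centroid, so the most "average" member of each rock type comes first. The feature values are hypothetical:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain K-means; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):   # guard against empty clusters
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

def representative_order(features, k=2):
    """Rank sample indices: most representative cluster members first."""
    centroids, labels = kmeans(features, k)
    dist = np.linalg.norm(features - centroids[labels], axis=1)
    return np.argsort(dist)

# Hypothetical (shape, texture, albedo) vectors for six rocks
rocks = np.array([[0.10, 0.20, 0.90], [0.12, 0.22, 0.88], [0.11, 0.21, 0.91],
                  [0.80, 0.70, 0.20], [0.82, 0.71, 0.18], [0.50, 0.50, 0.50]])
order = representative_order(rocks, k=2)
```

Reversing the same ranking (farthest from any centroid first) gives a crude form of the second method, novelty detection.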
Texture analysis of pulmonary parenchyma in normal and emphysematous lung
NASA Astrophysics Data System (ADS)
Uppaluri, Renuka; Mitsa, Theophano; Hoffman, Eric A.; McLennan, Geoffrey; Sonka, Milan
1996-04-01
Tissue characterization using texture analysis is gaining increasing importance in medical imaging. We present a completely automated method for discriminating between normal and emphysematous regions in CT images. This method involves extracting seventeen features based on statistical, hybrid, and fractal texture models. The best subset of features is derived from the training set using the divergence technique. A minimum distance classifier is used to classify the samples into one of the two classes: normal and emphysema. Sensitivity, specificity, and accuracy values achieved were 80% or greater in most cases, suggesting that texture analysis holds great promise in identifying emphysema.
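A minimum distance classifier assigns each sample to the class whose mean feature vector is nearest. A generic sketch, with hypothetical texture feature values rather than the study's seventeen features:

```python
import numpy as np

class MinimumDistanceClassifier:
    """Assigns each sample to the class with the nearest mean vector."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from every sample to every class mean
        d = np.linalg.norm(np.asarray(X, dtype=float)[:, None] - self.means_[None], axis=2)
        return [self.classes_[i] for i in d.argmin(axis=1)]

# Hypothetical 2-D texture features per region of interest
X_train = [[0.2, 1.1], [0.3, 1.0], [0.8, 2.0], [0.9, 2.2]]
y_train = ["normal", "normal", "emphysema", "emphysema"]
clf = MinimumDistanceClassifier().fit(X_train, y_train)
print(clf.predict([[0.25, 1.05], [0.85, 2.1]]))  # -> ['normal', 'emphysema']
```

The classifier has no per-class covariance model, which is what makes it fast enough for fully automated region labeling.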
MR imaging of spinal infection.
Tins, Bernhard J; Cassar-Pullicino, Victor N
2004-09-01
Magnetic resonance (MR) imaging plays a pivotal role in the diagnosis and management of spinal infection, enjoying a high sensitivity and specificity. A thorough understanding of spinal anatomy and the physicochemical pathological processes associated with infection is a desirable prerequisite allowing accurate interpretation of the disease process. Apart from confirmation of the disease, MR imaging is also best suited to excluding multifocal spinal involvement and the detection/exclusion of complications. It plays an essential role in the decision-making process concerning conservative versus surgical treatment and is also the best imaging method to monitor the effect of treatment. The MR features of infection confidently exclude tumor, degeneration, and so forth as the underlying process; differentiate pyogenic from granulomatous infections in most cases; and can suggest the rarer specific infective organisms. Copyright 2004 Thieme Medical Publishers, Inc.
Functional mesoporous silica nanoparticles for bio-imaging applications.
Cha, Bong Geun; Kim, Jaeyun
2018-03-22
Biomedical investigations using mesoporous silica nanoparticles (MSNs) have received significant attention because of their unique properties including controllable mesoporous structure, high specific surface area, large pore volume, and tunable particle size. These unique features make MSNs suitable for simultaneous diagnosis and therapy with unique advantages to encapsulate and load a variety of therapeutic agents, deliver these agents to the desired location, and release the drugs in a controlled manner. Among various clinical areas, nanomaterials-based bio-imaging techniques have advanced rapidly with the development of diverse functional nanoparticles. Due to the unique features of MSNs, an imaging agent supported by MSNs can be a promising system for developing targeted bio-imaging contrast agents with high structural stability and enhanced functionality that enable imaging of various modalities. Here, we review the recent achievements on the development of functional MSNs for bio-imaging applications, including optical imaging, magnetic resonance imaging (MRI), positron emission tomography (PET), computed tomography (CT), ultrasound imaging, and multimodal imaging for early diagnosis. With further improvement in noninvasive bio-imaging techniques, the MSN-supported imaging agent systems are expected to contribute to clinical applications in the future. This article is categorized under: Diagnostic Tools > In vivo Nanodiagnostics and Imaging Nanotechnology Approaches to Biology > Nanoscale Systems in Biology. © 2018 Wiley Periodicals, Inc.
Micro-anatomical quantitative optical imaging: toward automated assessment of breast tissues.
Dobbs, Jessica L; Mueller, Jenna L; Krishnamurthy, Savitri; Shin, Dongsuk; Kuerer, Henry; Yang, Wei; Ramanujam, Nirmala; Richards-Kortum, Rebecca
2015-08-20
Pathologists currently diagnose breast lesions through histologic assessment, which requires fixation and tissue preparation. The diagnostic criteria used to classify breast lesions are qualitative and subjective, and inter-observer discordance has been shown to be a significant challenge in the diagnosis of selected breast lesions, particularly for borderline proliferative lesions. Thus, there is an opportunity to develop tools to rapidly visualize and quantitatively interpret breast tissue morphology for a variety of clinical applications. Toward this end, we acquired images of freshly excised breast tissue specimens from a total of 34 patients using confocal fluorescence microscopy and proflavine as a topical stain. We developed computerized algorithms to segment and quantify nuclear and ductal parameters that characterize breast architectural features. A total of 33 parameters were evaluated and used as input to develop a decision tree model to classify benign and malignant breast tissue. Benign features were classified in tissue specimens acquired from 30 patients and malignant features were classified in specimens from 22 patients. The decision tree model that achieved the highest accuracy for distinguishing between benign and malignant breast features used the following parameters: standard deviation of inter-nuclear distance and number of duct lumens. The model achieved 81 % sensitivity and 93 % specificity, corresponding to an area under the curve of 0.93 and an overall accuracy of 90 %. The model classified IDC and DCIS with 92 % and 96 % accuracy, respectively. The cross-validated model achieved 75 % sensitivity and 93 % specificity and an overall accuracy of 88 %. 
These results suggest that proflavine staining and confocal fluorescence microscopy combined with image analysis strategies to segment morphological features could potentially be used to quantitatively diagnose freshly obtained breast tissue at the point of care without the need for tissue preparation.
Pattern Recognition Approaches for Breast Cancer DCE-MRI Classification: A Systematic Review.
Fusco, Roberta; Sansone, Mario; Filice, Salvatore; Carone, Guglielmo; Amato, Daniela Maria; Sansone, Carlo; Petrillo, Antonella
2016-01-01
We performed a systematic review of several pattern analysis approaches for classifying breast lesions using dynamic, morphological, and textural features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Several machine learning approaches, namely artificial neural networks (ANN), support vector machines (SVM), linear discriminant analysis (LDA), tree-based classifiers (TC), and Bayesian classifiers (BC), and features used for classification are described. The findings of a systematic review of 26 studies are presented. The sensitivity and specificity are respectively 91 and 83 % for ANN, 85 and 82 % for SVM, 96 and 85 % for LDA, 92 and 87 % for TC, and 82 and 85 % for BC. The sensitivity and specificity are respectively 82 and 74 % for dynamic features, 93 and 60 % for morphological features, 88 and 81 % for textural features, 95 and 86 % for a combination of dynamic and morphological features, and 88 and 84 % for a combination of dynamic, morphological, and other features. LDA and TC have the best performance. A combination of dynamic and morphological features gives the best performance.
Integrating prior information into microwave tomography Part 1: Impact of detail on image quality.
Kurrant, Douglas; Baran, Anastasia; LoVetri, Joe; Fear, Elise
2017-12-01
The authors investigate the impact that incremental increases in the level of detail of patient-specific prior information have on image quality and the convergence behavior of an inversion algorithm in the context of near-field microwave breast imaging. A methodology is presented that uses image quality measures to characterize the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The approach permits key aspects that impact the quality of reconstruction of these structures to be identified and quantified. This provides insight into opportunities to improve image reconstruction performance. Patient-specific information is acquired using radar-based methods that form a regional map of the breast. This map is then incorporated into a microwave tomography algorithm. Previous investigations have demonstrated the effectiveness of this approach to improve image quality when applied to data generated with two-dimensional (2D) numerical models. The present study extends this work by generating prior information that is customized to vary the degree of structural detail to facilitate the investigation of the role of prior information in image formation. Numerical 2D breast models constructed from magnetic resonance (MR) scans, and reconstructions formed with a three-dimensional (3D) numerical breast model are used to assess if trends observed for the 2D results can be extended to 3D scenarios. For the blind reconstruction scenario (i.e., no prior information), the breast surface is not accurately identified and internal structures are not clearly resolved. A substantial improvement in image quality is achieved by incorporating the skin surface map and constraining the imaging domain to the breast. Internal features within the breast appear in the reconstructed image. 
However, it is challenging to discriminate between adipose and glandular regions and there are inaccuracies in both the structural properties of the glandular region and the dielectric properties reconstructed within this structure. Using a regional map with a skin layer only marginally improves this situation. Increasing the structural detail in the prior information to include internal features leads to reconstructions for which the interface that delineates the fat and gland regions can be inferred. Different features within the glandular region corresponding to tissues with varying relative permittivity values, such as a lesion embedded within glandular structure, emerge in the reconstructed images. Including knowledge of the breast surface and skin layer leads to a substantial improvement in image quality compared to the blind case, but the images have limited diagnostic utility for applications such as tumor response tracking. The diagnostic utility of the reconstruction technique is improved considerably when patient-specific structural information is used. This qualitative observation is supported quantitatively with image metrics. © 2017 American Association of Physicists in Medicine.
Satheesha, T. Y.; Prasad, M. N. Giri; Dhruve, Kashyap D.
2017-01-01
Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life threatening when it grows beyond the dermis of the skin. Hence, depth is an important factor in diagnosing melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On the basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieving accurate results. Apart from melanoma and in-situ melanoma, the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluation, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets are considered. Different feature set combinations are considered and performance is evaluated. A significant performance improvement is reported after the inclusion of estimated depth and 3-D features. Good classification scores of sensitivity = 96% and specificity = 97% on the PH2 data set, and sensitivity = 98% and specificity = 99% on the ATLAS data set, are achieved. Experiments conducted to estimate tumor depth from the 3-D lesion reconstruction are presented. The experimental results prove that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images. PMID:28512610
Breast cancer Ki67 expression preoperative discrimination by DCE-MRI radiomics features
NASA Astrophysics Data System (ADS)
Ma, Wenjuan; Ji, Yu; Qin, Zhuanping; Guo, Xinpeng; Jian, Xiqi; Liu, Peifang
2018-02-01
To investigate whether quantitative radiomics features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) are associated with Ki67 expression of breast cancer. In this institutional review board approved retrospective study, we collected 377 cases of Chinese women who were diagnosed with invasive breast cancer in 2015. This cohort included 53 cases with low Ki67 expression (Ki67 proliferation index less than 14%) and 324 cases with high Ki67 expression (Ki67 proliferation index more than 14%). A binary classification of low- vs. high-Ki67 expression was performed. A set of 52 quantitative radiomics features, including morphological, gray-scale statistic, and texture features, were extracted from the segmented lesion area. Three of the most common machine learning classification methods, including Naive Bayes, k-Nearest Neighbor and support vector machine with Gaussian kernel, were employed for the classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive feature set for the classifiers. Classification performance was evaluated by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity. The model that used the Naive Bayes classification method achieved better performance than the other two methods, yielding 0.773 AUC value, 0.757 accuracy, 0.777 sensitivity and 0.769 specificity. Our study showed that quantitative radiomics imaging features of breast tumors extracted from DCE-MRI are associated with breast cancer Ki67 expression. Future larger studies are needed in order to further evaluate the findings.
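The pipeline described above (feature selection followed by a Naive Bayes classifier) can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the authors' code: the data are synthetic, and a simple univariate correlation ranking replaces LASSO for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a radiomics matrix: 100 lesions x 52 features,
# where only the first 5 features carry signal about the binary Ki67 label.
n, p = 100, 52
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, p))
X[:, :5] += y[:, None] * 1.5          # informative features

# Univariate ranking as a lightweight stand-in for LASSO selection:
# keep the k features most correlated with the label.
k = 5
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
selected = np.argsort(corr)[-k:]
Xs = X[:, selected]

# Gaussian Naive Bayes: per-class mean/variance, log-posterior scoring.
def fit_gnb(X, y):
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    scores = []
    for c in (0, 1):
        mu, var, prior = params[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(ll + np.log(prior))
    return np.argmax(scores, axis=0)

params = fit_gnb(Xs, y)
acc = (predict_gnb(params, Xs) == y).mean()
```

In practice the LASSO step would be solved with a dedicated coordinate-descent solver and performance reported as cross-validated AUC rather than training accuracy.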
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
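For reference, the global transform that the LWT reproduces block-by-block is the standard separable 2-D wavelet decomposition. A minimal one-level Haar version (illustrative only, not the LWT architecture itself):

```python
import numpy as np

def haar2d(img):
    """One level of a separable 2-D Haar wavelet transform.

    Returns the four subbands (LL, LH, HL, HH); the image sides must be even.
    """
    s = np.sqrt(2.0)
    # row transform
    lo = (img[:, 0::2] + img[:, 1::2]) / s
    hi = (img[:, 0::2] - img[:, 1::2]) / s
    # column transform
    ll = (lo[0::2, :] + lo[1::2, :]) / s
    lh = (lo[0::2, :] - lo[1::2, :]) / s
    hl = (hi[0::2, :] + hi[1::2, :]) / s
    hh = (hi[0::2, :] - hi[1::2, :]) / s
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    s = np.sqrt(2.0)
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = (ll + lh) / s, (ll - lh) / s
    hi[0::2, :], hi[1::2, :] = (hl + hh) / s, (hl - hh) / s
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = (lo + hi) / s, (lo - hi) / s
    return img

img = np.random.default_rng(1).random((8, 8))
rec = ihaar2d(*haar2d(img))
```

The LWT's contribution is producing these same coefficients from small image blocks in push-broom order, so only a few image lines need to be buffered at any time.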
Quantitative diagnosis of tongue cancer from histological images in an animal model
NASA Astrophysics Data System (ADS)
Lu, Guolan; Qin, Xulei; Wang, Dongsheng; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Chen, Zhuo G.; Fei, Baowei
2016-03-01
We developed a chemically-induced oral cancer animal model and a computer aided method for tongue cancer diagnosis. The animal model allows us to monitor the progress of the lesions over time. Tongue tissue dissected from mice was sent for histological processing. Representative areas of hematoxylin and eosin stained tissue from tongue sections were captured for classifying tumor and non-tumor tissue. The image set used in this paper consisted of 214 color images (114 tumor and 100 normal tissue samples). A total of 738 color, texture, morphometry and topology features were extracted from the histological images. The combination of image features from epithelium tissue and its constituent nuclei and cytoplasm has been demonstrated to improve the classification results. With ten-iteration nested cross-validation, the method achieved an average sensitivity of 96.5% and a specificity of 99% for tongue cancer detection. The next step of this research is to apply this approach to human tissue for computer aided diagnosis of tongue cancer.
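The evaluation scheme (cross-validation repeated over several shuffles of a feature matrix) can be sketched as follows; the synthetic data and the nearest-centroid classifier are illustrative stand-ins for the histology features and classifier actually used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the histology feature matrix: 60 samples, 10 features,
# two classes separated along the first two features.
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 10))
X[y == 1, :2] += 2.0

def nearest_centroid(Xtr, ytr, Xte):
    """Predict by distance to the per-class mean feature vector."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = np.linalg.norm(Xte - c0, axis=1)
    d1 = np.linalg.norm(Xte - c1, axis=1)
    return (d1 < d0).astype(int)

def cv_accuracy(X, y, folds=5, seed=0):
    """One round of k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(len(y))
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        pred = nearest_centroid(X[train], y[train], X[test])
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Repeat the whole cross-validation ten times with different shuffles.
scores = [cv_accuracy(X, y, folds=5, seed=s) for s in range(10)]
mean_acc = float(np.mean(scores))
```

A fully nested scheme would additionally perform feature selection inside each training fold so that no test information leaks into the selection step.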
Diagnostic features of Alzheimer's disease extracted from PET sinograms
NASA Astrophysics Data System (ADS)
Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.
2002-01-01
Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task, due to the poor signal to noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results, when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with sensitivity of 94% and specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992), using regional metabolic activity.
Anatomy and imaging of the normal meninges.
Patel, Neel; Kirmi, Olga
2009-12-01
The meninges are an important connective tissue envelope investing the brain. Their function is to provide a protective covering for the brain and to participate in the formation of the blood-brain barrier. Understanding their anatomy is fundamental to understanding the location and spread of pathologies in relation to the meningeal layers. It also provides an insight into the characteristics of such pathologies when imaging them. This review aims to describe the anatomy of the meninges and to demonstrate the imaging findings of specific features.
Multiscale Image Processing of Solar Image Data
NASA Astrophysics Data System (ADS)
Young, C.; Myers, D. C.
2001-12-01
It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
Molina, D.; Pérez-Beteta, J.; Martínez-González, A.; Velásquez, C.; Martino, J.; Luque, B.; Revert, A.; Herruzo, I.; Arana, E.; Pérez-García, V. M.
2017-01-01
Abstract Introduction: Textural analysis refers to a variety of mathematical methods used to quantify the spatial variations in grey levels within images. In brain tumors, textural features have great potential as imaging biomarkers, having been shown to correlate with survival, tumor grade, tumor type, etc. However, these measures should be reproducible under dynamic range and matrix size changes for their clinical use. Our aim is to study this robustness in brain tumors with 3D magnetic resonance imaging, not previously reported in the literature. Materials and methods: 3D T1-weighted images of 20 patients with glioblastoma (64.80 ± 9.12 years old) obtained from a 3T scanner were analyzed. Tumors were segmented using an in-house semi-automatic 3D procedure. A set of 16 3D textural features of the most common types (co-occurrence and run-length matrices) was selected, providing regional (run-length based measures) and local (co-occurrence matrices) information on the tumor heterogeneity. Feature robustness was assessed by means of the coefficient of variation (CV) under dynamic range (16, 32 and 64 grey levels) and/or matrix size (256x256 and 432x432) changes. Results: None of the textural features considered were robust under dynamic range changes. The co-occurrence matrix feature Entropy was the only textural feature robust (CV < 10%) under spatial resolution changes. Conclusions: In general, textural measures of three-dimensional brain tumor images are robust neither under dynamic range nor under matrix size changes. Thus, it becomes mandatory to fix standards for image rescaling after acquisition, before the textural features are computed, if they are to be used as imaging biomarkers. For T1-weighted images, a dynamic range of 16 grey levels and a matrix size of 256x256 (and isotropic voxels) is found to provide reliable and comparable results and is feasible with current MRI scanners.
The implications of this work go beyond the specific tumor type and MRI sequence studied here and pose the need for standardization in textural feature calculation of oncological images. FUNDING: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
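The robustness criterion above is easy to reproduce: recompute a texture feature at each dynamic range and take the coefficient of variation. The sketch below uses histogram entropy on a synthetic patch as a stand-in for the paper's GLCM features; the CV < 10% threshold is the one quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))            # synthetic stand-in for a tumor ROI

def hist_entropy(img, levels):
    """Shannon entropy (bits) of the grey-level histogram after re-quantization."""
    q = np.floor(img / (img.max() + 1e-12) * levels).clip(0, levels - 1)
    p = np.bincount(q.astype(int).ravel(), minlength=levels) / q.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Recompute the feature at the three dynamic ranges studied (16/32/64 levels).
values = np.array([hist_entropy(img, g) for g in (16, 32, 64)])

# Robustness criterion from the paper: coefficient of variation below 10%.
cv = 100.0 * values.std() / values.mean()
robust = cv < 10.0
```

For a near-uniform patch the entropy grows by about one bit per doubling of the grey levels, so the CV comfortably exceeds the 10% threshold, consistent with the finding that features are not robust under dynamic-range changes.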
Biomarker-specific conjugated nanopolyplexes for the active coloring of stem-like cancer cells
NASA Astrophysics Data System (ADS)
Hong, Yoochan; Lee, Eugene; Choi, Jihye; Haam, Seungjoo; Suh, Jin-Suck; Yang, Jaemoon
2016-06-01
Stem-like cancer cells possess intrinsic features, and their CD44 receptors regulate the redox balance that allows cancer cells to survive under stress conditions. Thus, we have fabricated biomarker-specific conjugated polyplexes using CD44-targetable hyaluronic acid and redox-sensitive polyaniline, based on a nanoemulsion method. For the most sensitive recognition of the cellular redox state at the single-nanoparticle scale, a nano-scattering spectrum imaging analyzer system was introduced. The conjugated polyplexes showed a specific targeting ability toward CD44-expressing cancer cells as well as a dramatic change in color, which depended on the redox potential in the light-scattered images. Therefore, these polyaniline-based conjugated polyplexes, together with analytical processes that include light-scattering imaging and measurements of scattering spectra, clearly establish a systematic method for the detection and monitoring of cancer microenvironments.
The role of magnetic resonance imaging and ultrasound in patients with adnexal masses.
Sohaib, S A; Mills, T D; Sahdev, A; Webb, J A W; Vantrappen, P O; Jacobs, I J; Reznek, R H
2005-03-01
To evaluate the accuracy of ultrasonography (US) and magnetic resonance imaging (MRI) in characterizing adnexal masses, and to determine which patients may benefit from MRI. We prospectively studied 72 women (mean age 53 years, range 19 to 86 years) with clinically suspected adnexal masses. A single experienced sonographer performed transabdominal and transvaginal greyscale spectral and colour Doppler examinations. MRI was carried out on a 1.5T system using T1, T2 and fat-suppressed T1-weighted sequences before and after intravenous injection of gadolinium. The adnexal masses were categorized as benign or malignant without knowledge of clinical details, according to the imaging features which were compared with the surgical and pathological findings. For characterizing lesions as malignant, the sensitivity, specificity and accuracy of MRI were 96.6%, 83.7% and 88.9%, respectively, and of US were 100%, 39.5% and 63.9%, respectively. MRI was more specific (p<0.05) than US. Both MRI and US correctly diagnosed 17 (24%) cases with benign and 28 (39%) cases with malignant masses. MRI correctly diagnosed 19 (26%) cases with benign lesion(s), which on US were thought to be malignant. The age, menopausal status and CA-125 levels in these women made benign disease likely, but US features were suggestive of malignancy (large masses and solid-cystic lesions with nodules). MRI is more specific and accurate than US and Doppler assessment for characterizing adnexal masses. Women who clinically have a relatively low risk of malignancy but who have complex sonographic features may benefit from MRI.
Classification of melanoma lesions using sparse coded features and random forests
NASA Astrophysics Data System (ADS)
Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré
2016-03-01
Malignant melanoma is the most dangerous type of skin cancer, yet it is also the most treatable kind, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires a set of parameters to be tuned, and is specific to a given dataset; and (ii) the performance of each process depends on the previous one, so errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, with a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3% respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
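Sparse coding with a small sparsity level, as used above, can be illustrated with a greedy matching-pursuit encoder over a random dictionary. The dictionary size (800 atoms) and sparsity level (2) mirror the reported settings, but the dictionary and signal below are synthetic, and matching pursuit is only one simple stand-in for the encoder the authors may have used.

```python
import numpy as np

rng = np.random.default_rng(4)

def sparse_code(x, D, sparsity=2):
    """Greedy matching pursuit: approximate x with `sparsity` atoms of D.

    D has unit-norm columns (the dictionary atoms); returns the sparse
    coefficient vector.
    """
    r = x.copy()
    code = np.zeros(D.shape[1])
    for _ in range(sparsity):
        proj = D.T @ r                     # correlations with all atoms
        k = np.argmax(np.abs(proj))        # best-matching atom
        code[k] += proj[k]
        r = r - proj[k] * D[:, k]          # remove its contribution
    return code

# Random unit-norm dictionary: 64-dim descriptors, 800 atoms.
D = rng.normal(size=(64, 800))
D /= np.linalg.norm(D, axis=0)

# Encode a signal that is an exact 2-atom combination.
x = 3.0 * D[:, 10] - 2.0 * D[:, 500]
code = sparse_code(x, D, sparsity=2)
recon = D @ code
```

The resulting sparse codes (one per descriptor) would then be pooled over the image and fed to the Random Forests classifier.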
Computer-aided diagnostic approach of dermoscopy images acquiring relevant features
NASA Astrophysics Data System (ADS)
Castillejos-Fernández, H.; Franco-Arcega, A.; López-Ortega, O.
2016-09-01
In skin cancer detection, automated analysis of borders, colors, and structures of a lesion relies upon an accurate segmentation process, which is an important first step in any Computer-Aided Diagnosis (CAD) system. However, irregular and disperse lesion borders, low contrast, artifacts in the images and the variety of colors within the region of interest make the problem difficult. In this paper, we propose an efficient approach for automatic classification which considers specific lesion features. First, for the selection of the lesion skin we employ the segmentation algorithm W-FCM.1 Then, in the feature extraction stage we consider several aspects: the area of the lesion, which is calculated by correlating axes, and the specific value of asymmetry along both axes. For color analysis we employ an ensemble of clusterers including K-Means, Fuzzy K-Means and Kohonen maps, all of which estimate the presence of one or more colors defined in the ABCD rule and the values for each of the segmented colors. Another aspect to consider is the type of structures that appear in the lesion; those are defined by using the well-known GLCM method. During the classification stage we compare several methods in order to decide whether the lesion is benign or malignant. An important contribution of the current approach to the segmentation-classification problem resides in the use of information from all color channels together, as well as the measure of each color in the lesion and the axes correlation. The segmentation and classification performance has been measured using sensitivity, specificity, accuracy and the AUC metric over a set of dermoscopy images from the ISDIS data set.
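As an illustration of the color-analysis stage, a plain K-Means clusterer (one member of the ensemble named above) can estimate dominant lesion colors. The pixel data and the two reference browns below are invented for the example; the deterministic initialization is a simplification of what a production clusterer would use.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm: returns (centers, labels).

    Deterministic init spread over the data order; a real implementation
    would use k-means++ or multiple restarts.
    """
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Synthetic "lesion" pixels drawn around two ABCD-rule colours
# (dark brown and light brown, RGB in [0, 1]) -- illustrative values only.
rng = np.random.default_rng(5)
brown_dark = np.array([0.35, 0.2, 0.1])
brown_light = np.array([0.7, 0.5, 0.3])
pixels = np.vstack([
    brown_dark + 0.03 * rng.normal(size=(200, 3)),
    brown_light + 0.03 * rng.normal(size=(200, 3)),
])
centers, labels = kmeans(pixels, k=2)
```

In the ensemble described above, Fuzzy K-Means and Kohonen maps would vote alongside this clusterer on which ABCD colors are present and in what proportion.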
Sivakamasundari, J; Kavitha, G; Sujatha, C M; Ramakrishnan, S
2014-01-01
Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. A real-time mass screening system for DR is vital for timely diagnosis and periodic screening to prevent patients from severe visual loss. Human retinal fundus images are widely used for automated segmentation of blood vessels and diagnosis of various blood vessel disorders. In this work, an attempt has been made to perform hardware synthesis of Kirsch template based edge detection for segmentation of blood vessels. This method is implemented using LabVIEW software and is synthesized on a field programmable gate array (FPGA) board to yield results in real-time applications. The segmentation of blood vessels using Kirsch based edge detection is compared with other edge detection methods such as Sobel, Prewitt and Canny. Texture features such as energy, entropy, contrast, mean and homogeneity, and a structural feature, namely the ratio of vessel to vessel-free area, are obtained from the segmented images. The performance of segmentation is analysed in terms of sensitivity, specificity and accuracy. It is observed from the results that the Kirsch based edge detection technique segmented the edges of blood vessels better than the other edge detection techniques. The ratio of vessel to vessel-free area classified the normal and DR affected retinal images more significantly than the other texture based features. The FPGA based hardware synthesis of the Kirsch edge detection method is able to differentiate normal and diseased images with high specificity (93%). This automated retinal blood vessel segmentation system could be used in computer-assisted diagnosis for diabetic retinopathy screening in real-time applications.
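Kirsch edge detection, the core operation synthesized above, takes the maximum response over eight rotated compass kernels. A software sketch in NumPy on a synthetic test image (the hardware version would pipeline the same arithmetic):

```python
import numpy as np

# The eight Kirsch compass kernels are rotations of this template.
g = np.array([[5, 5, 5],
              [-3, 0, -3],
              [-3, -3, -3]])

def rotate45(k):
    """Rotate the outer ring of a 3x3 kernel by one position (45 degrees)."""
    ring = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
            k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
    ring = ring[-1:] + ring[:-1]
    out = k.copy()
    (out[0, 0], out[0, 1], out[0, 2], out[1, 2],
     out[2, 2], out[2, 1], out[2, 0], out[1, 0]) = ring
    return out

kernels = [g]
for _ in range(7):
    kernels.append(rotate45(kernels[-1]))

def kirsch(img):
    """Maximum response over the eight compass directions (valid region only)."""
    h, w = img.shape
    resp = np.zeros((len(kernels), h - 2, w - 2))
    for i, k in enumerate(kernels):
        for dy in range(3):
            for dx in range(3):
                resp[i] += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return resp.max(axis=0)

# Synthetic vessel-like step: bright vertical stripe on dark background.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = kirsch(img) > 0
```

Because each kernel's coefficients sum to zero, flat regions give zero response and only the step boundary is marked; this is also what makes the operator attractive for fixed-point FPGA implementation.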
Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li
2014-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphological and kinetic features from DCE-MRI to improve the discrimination power of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme including feature subset selection and various classification schemes was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features for predicting the type of lesion. Various measurements including cross validation and receiver operating characteristics were used to quantify the diagnostic performance of each feature as well as their combination. Seven features were all found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable of age, one morphological variable of slope, three texture features of entropy, inverse difference and information correlation, one kinetic feature of SER and one DWI feature of apparent diffusion coefficient (ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively.
Multi-sided variables which characterize the morphological, kinetic, pathological properties and DWI measurement of ADC can dramatically improve the discriminatory power of breast lesions.
Alizadeh, Mahdi; Conklin, Chris J; Middleton, Devon M; Shah, Pallav; Saksena, Sona; Krisa, Laura; Finsterbusch, Jürgen; Faro, Scott H; Mulcahey, M J; Mohamed, Feroze B
2018-04-01
Ghost artifacts are a major contributor to degradation of spinal cord diffusion tensor images. A multi-stage post-processing pipeline was designed, implemented and validated to automatically remove ghost artifacts arising from reduced field-of-view diffusion tensor imaging (DTI) of the pediatric spinal cord. A total of 12 pediatric subjects were studied, including 7 healthy subjects (mean age = 11.34 years) with no evidence of spinal cord injury or pathology and 5 patients (mean age = 10.96 years) with cervical spinal cord injury. Ghost/true cords, labeled as regions of interest (ROIs) in non-diffusion-weighted b0 images, were segmented automatically using mathematical morphological processing. Initially, 21 texture features were extracted from each segmented ROI, including 5 first-order features based on the histogram of the image (mean, variance, skewness, kurtosis and entropy) and 16 second-order feature vector elements, incorporating four statistical measures (contrast, correlation, homogeneity and energy) calculated from co-occurrence matrices in the directions of 0°, 45°, 90° and 135°. Next, ten features with a high value of mutual information (MI) relative to the pre-defined target class and within the feature set were selected as the final features, which were input to a trained classifier (adaptive neuro-fuzzy inference system) to separate the true cord from the ghost cord. The implemented pipeline was successfully able to separate the ghost artifacts from true cord structures. The results obtained from the classifier showed a sensitivity of 91%, specificity of 79%, and accuracy of 84% in separating the true cord from ghost artifacts. The results show that the proposed method is promising for the automatic detection of ghost cords present in DTI images of the spinal cord. This step is crucial towards the development of accurate, automatic DTI spinal cord post-processing pipelines. Copyright © 2017 Elsevier Inc. All rights reserved.
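The MI-based selection step ranks features by their mutual information with the target class. A minimal discrete-MI estimator on synthetic feature vectors (the bin count and data are arbitrary choices for the sketch):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (in bits) between a discretized feature and binary labels."""
    # discretize the feature into `bins` levels using histogram edges
    xd = np.digitize(x, np.histogram(x, bins=bins)[1][1:-1])
    joint = np.zeros((bins, 2))
    for xi, yi in zip(xd, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(6)
y = rng.integers(0, 2, 400)
informative = y + 0.3 * rng.normal(size=400)   # tracks the class
noise = rng.normal(size=400)                   # ignores the class

mi_inf = mutual_information(informative, y)
mi_noise = mutual_information(noise, y)
```

Ranking all 21 texture features by such a score and keeping the top ten approximates the selection step, before the retained features are passed to the ANFIS classifier.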
An explorative childhood pneumonia analysis based on ultrasonic imaging texture features
NASA Astrophysics Data System (ADS)
Zenteno, Omar; Diaz, Kristians; Lavarello, Roberto; Zimic, Mirko; Correa, Malena; Mayta, Holger; Anticona, Cynthia; Pajuelo, Monica; Oberhelman, Richard; Checkley, William; Gilman, Robert H.; Figueroa, Dante; Castañeda, Benjamín.
2015-12-01
According to the World Health Organization, pneumonia is the respiratory disease with the highest pediatric mortality rate, accounting for 15% of all deaths of children under 5 years old worldwide. The diagnosis of pneumonia is commonly made by clinical criteria with support from ancillary studies and laboratory findings. Chest imaging is commonly done with chest X-rays and occasionally with a chest CT scan. Lung ultrasound is a promising alternative for chest imaging; however, interpretation is subjective and requires adequate training. In the present work, a two-class classification algorithm based on four Gray-level co-occurrence matrix texture features (i.e., Contrast, Correlation, Energy and Homogeneity) extracted from lung ultrasound images from children aged between six months and five years is presented. Ultrasound data were collected using an L14-5/38 linear transducer. The data consisted of 22 positive- and 68 negative-diagnosed B-mode cine-loops selected by a medical expert and captured in the facilities of the Instituto Nacional de Salud del Niño (Lima, Peru), for a total of 90 videos obtained from twelve children diagnosed with pneumonia. The classification capacity of each feature was explored independently and the optimal threshold was selected by receiver operating characteristic (ROC) curve analysis. In addition, a principal component analysis was performed to evaluate the combined performance of all the features. Contrast and correlation turned out to be the two most significant features. The classification performance of these two features combined by principal components was then evaluated. The results revealed 82% sensitivity, 76% specificity, 78% accuracy and 0.85 area under the ROC curve.
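The four GLCM features named above (contrast, correlation, energy, homogeneity) follow directly from a normalized co-occurrence matrix. A compact NumPy version on toy quantized patches (not the study's ultrasound data):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Symmetric, normalized grey-level co-occurrence matrix."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h):
        for xx in range(w):
            y2, x2 = yy + dy, xx + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[yy, xx], img[y2, x2]] += 1
    m = m + m.T                    # make symmetric
    return m / m.sum()

def haralick(p):
    """Contrast, correlation, energy and homogeneity of a GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    mu_i = (i * p).sum()
    mu_j = (j * p).sum()
    si = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (si * sj + 1e-12)
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    return contrast, correlation, energy, homogeneity

# Quantized toy patches: a smooth gradient vs. salt-and-pepper noise.
rng = np.random.default_rng(7)
smooth = np.tile(np.arange(16) // 2, (16, 1))          # 8 grey levels
noisy = rng.integers(0, 8, size=(16, 16))
f_smooth = haralick(glcm(smooth, 1, 0, 8))
f_noisy = haralick(glcm(noisy, 1, 0, 8))
```

As expected, the smooth patch scores low contrast and high homogeneity while the noisy patch does the opposite; in the study the same four numbers are computed per ultrasound frame and thresholded via the ROC analysis.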
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Rossi, P; Jani, A
Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed using the manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our segmentation and the manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation (gold standard).
This segmentation technique could be a useful tool for image-guided interventions in prostate-cancer diagnosis and treatment. This research is supported in part by DOD PCRP Award W81XWH-13-1-0269, and National Cancer Institute (NCI) Grant CA114313.
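The Dice overlap coefficient used to validate the segmentation is simply twice the mask intersection over the summed mask sizes. A sketch on toy masks (the "automatic" contour is just the manual one shifted by one voxel):

```python
import numpy as np

def dice(a, b):
    """Dice overlap coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "prostate" masks on a 32x32 slice.
manual = np.zeros((32, 32), dtype=bool)
manual[8:24, 8:24] = True
auto = np.roll(manual, 1, axis=0)      # one-voxel misalignment

d = dice(manual, auto)
```

For the 16x16 square shifted by one row, the overlap is 15 of 16 rows, giving a Dice score of 0.9375; perfect agreement gives 1.0.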
A new Hessian - based approach for segmentation of CT porous media images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Kirill, Gerke
2017-04-01
Hessian matrix based methods are widely used in image analysis for feature detection, e.g., detection of blobs, corners and edges. The Hessian matrix of the image is the matrix of second-order derivatives around a selected voxel. The most significant features give the highest values of the Hessian transform, and the lowest values are located at smoother parts of the image. The majority of conventional segmentation techniques can segment out cracks, fractures and other inhomogeneities in soils and rocks only if the rest of the image is significantly "oversegmented". To avoid this disadvantage, we propose to enhance the greyscale values of voxels belonging to such specific inhomogeneities on X-ray microtomography scans. We have developed and implemented in code a two-step approach to attack the aforementioned problem. During the first step we apply a filter that enhances the image and makes outstanding features more sharply defined. During the second step we apply Hessian filter based segmentation. The values of voxels on the image to be segmented are calculated in conjunction with the values of other voxels within a prescribed region. The contribution from each voxel within such a region is computed by weighting according to the local Hessian matrix value. We call this approach Hessian windowed segmentation. It has been tested on different porous media X-ray microtomography images, including soil, sandstones, carbonates and shales. We also compared this new method against other widely used methods such as kriging, Markov random field, converging active contours and region growing. We show that our approach is more accurate in regions containing special features such as small cracks, fractures, elongated inhomogeneities and other features with low contrast relative to the background solid phase. Moreover, Hessian windowed segmentation outperforms some of these methods in computational efficiency.
We further test our segmentation technique by computing permeability of segmented images and comparing them against laboratory based measurements. This work was partially supported by RFBR grant 15-34-20989 (X-ray tomography and image fusion) and RSF grant 14-17-00658 (image segmentation and pore-scale modelling).
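The core of the approach is the per-voxel Hessian and its eigenvalues. A 2-D NumPy sketch, where a synthetic dark Gaussian blob stands in for a pore cross-section (the windowed weighting step described above is omitted here):

```python
import numpy as np

def hessian_response(img):
    """Per-pixel eigenvalues of the 2-D Hessian (second derivatives)."""
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 matrix [[hxx, hxy], [hxy, hyy]].
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    return tr / 2.0 - disc, tr / 2.0 + disc   # lambda1 <= lambda2

# Dark blob (e.g. a pore) on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
img = 1.0 - np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0))

l1, l2 = hessian_response(img)
# A dark blob gives two large positive eigenvalues at its centre.
blobness = np.where((l1 > 0) & (l2 > 0), l1 * l2, 0.0)
peak = np.unravel_index(np.argmax(blobness), blobness.shape)
```

Elongated features such as cracks would instead show one large and one near-zero eigenvalue, which is how the eigenvalue pattern distinguishes blobs from fracture-like structures.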
Training of polyp staging systems using mixed imaging modalities.
Wimmer, Georg; Gadermayr, Michael; Kwitt, Roland; Häfner, Michael; Tamaki, Toru; Yoshida, Shigeto; Tanaka, Shinji; Merhof, Dorit; Uhl, Andreas
2018-05-04
In medical image data sets, the number of images is usually quite small. The small number of training samples does not allow classifiers to be properly trained, which leads to massive overfitting to the training data. In this work, we investigate whether increasing the number of training samples by merging datasets from different imaging modalities can effectively improve predictive performance. Further, we investigate whether the extracted features from the employed image representations differ between imaging modalities and whether domain adaptation helps to overcome these differences. We employ twelve feature extraction methods to differentiate between non-neoplastic and neoplastic lesions. Experiments are performed using four different classifier training strategies, each with a different combination of training data. The specifically designed setup for these experiments enables a fair comparison between the four training strategies. Combining high-definition with high-magnification training data, and chromoscopic with non-chromoscopic training data, partly improved the results. The use of domain adaptation had only a small effect on the results compared to using non-adapted training data. Merging datasets from different imaging modalities turned out to be partially beneficial for combining high-definition endoscopic data with high-magnification endoscopic data and for combining chromoscopic with non-chromoscopic data. NBI and chromoendoscopy, on the other hand, are mostly too different with respect to the extracted features to combine images of these two modalities for classifier training. Copyright © 2018 Elsevier Ltd. All rights reserved.
Recurrent neural networks for breast lesion classification based on DCE-MRIs
NASA Astrophysics Data System (ADS)
Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen
2018-02-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of the clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC of 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
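The temporal-modelling step can be sketched as a single LSTM forward pass over one feature vector per DCE time point. This is an illustrative numpy implementation with placeholder dimensions and random weights, not the study's trained VGGNet/LSTM; in practice each `x` would be a CNN feature vector and the final hidden state would feed a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_forward(x_seq, W, U, b, h0, c0):
    """Single-layer LSTM over a sequence of feature vectors.

    x_seq: (T, d_in) -- e.g. one image-feature vector per DCE-MRI time point.
    W: (4H, d_in), U: (4H, H), b: (4H,) -- stacked gates [input, forget, cell, output].
    Returns the final hidden state, which a linear classifier could score."""
    h, c = h0, c0
    H = h0.shape[0]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = W @ x + U @ h + b
        i, f = sig(z[:H]), sig(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sig(z[3 * H:])
        c = f * c + i * g        # update cell state
        h = o * np.tanh(c)       # update hidden state
    return h

T, d_in, H = 5, 8, 4            # 5 time points, 8-dim features, hidden size 4
x_seq = rng.normal(size=(T, d_in))
W = rng.normal(scale=0.1, size=(4 * H, d_in))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h_final = lstm_forward(x_seq, W, U, b, np.zeros(H), np.zeros(H))
```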
Automated diagnosis of dry eye using infrared thermography images
NASA Astrophysics Data System (ADS)
Acharya, U. Rajendra; Tan, Jen Hong; Koh, Joel E. W.; Sudarshan, Vidya K.; Yeo, Sharon; Too, Cheah Loon; Chua, Chua Kuang; Ng, E. Y. K.; Tong, Louis
2015-07-01
Dry Eye (DE) is a condition of either decreased tear production or increased tear film evaporation. Prolonged DE damages the cornea, causing corneal scarring, thinning and perforation. There is no single uniform diagnostic test available to date; combinations of diagnostic tests must be performed to diagnose DE. The current diagnostic methods are subjective, uncomfortable and invasive. Hence, in this paper, we have developed an efficient, fast and non-invasive technique for the automated identification of normal and DE classes using infrared thermography images. Features are extracted using a nonlinear method called Higher Order Spectra (HOS). Features are ranked using a t-test ranking strategy. These ranked features are fed to various classifiers, namely K-Nearest Neighbor (KNN), Naïve Bayesian Classifier (NBC), Decision Tree (DT), Probabilistic Neural Network (PNN), and Support Vector Machine (SVM), to select the best classifier using the minimum number of features. Our proposed system is able to identify the DE and normal classes automatically with a classification accuracy of 99.8%, sensitivity of 99.8%, and specificity of 99.8% for the left eye using the PNN and KNN classifiers, and a classification accuracy of 99.8%, sensitivity of 99.9%, and specificity of 99.4% for the right eye using the SVM classifier with a polynomial kernel of order 2.
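The t-test ranking step described here is a standard filter method: score each feature by its two-sample t-statistic between the two classes and keep the top-ranked ones. A minimal sketch on synthetic data (not the study's HOS features):

```python
import numpy as np

def t_rank(X_a, X_b):
    """Rank features by the absolute two-sample t-statistic (Welch form);
    the best class-separating feature comes first."""
    m1, m2 = X_a.mean(axis=0), X_b.mean(axis=0)
    v1, v2 = X_a.var(axis=0, ddof=1), X_b.var(axis=0, ddof=1)
    t = (m1 - m2) / np.sqrt(v1 / len(X_a) + v2 / len(X_b))
    return np.argsort(-np.abs(t))

rng = np.random.default_rng(1)
# Synthetic feature matrices: only feature 2 differs between the classes.
X_dry = rng.normal(size=(30, 5))
X_dry[:, 2] += 3.0
X_normal = rng.normal(size=(30, 5))
order = t_rank(X_dry, X_normal)
```

The ranked feature subsets would then be fed to the candidate classifiers, adding one feature at a time until performance saturates.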
Optimum ArFi laser bandwidth for 10nm node logic imaging performance
NASA Astrophysics Data System (ADS)
Alagna, Paolo; Zurita, Omar; Timoshkov, Vadim; Wong, Patrick; Rechtsteiner, Gregory; Baselmans, Jan; Mailfert, Julien
2015-03-01
Lithography process window (PW) and CD uniformity (CDU) requirements are being challenged by scaling across all device types. Aggressive PW and yield specifications put tight requirements on scanner performance, especially on focus budgets, resulting in complicated systems for focus control. In this study, an imec N10 Logic-type test vehicle was used to investigate the impact of E95 bandwidth on six different Metal 1 Logic features. The imaging metrics that track the impact of light-source E95 bandwidth on the performance of hot spots are: process window (PW), line width roughness (LWR), and local critical dimension uniformity (LCDU). In the first section of this study, the impact of increasing E95 bandwidth was investigated to observe the lithographic process control response of the specified logic features. In the second section, a preliminary assessment of the impact of lower E95 bandwidth was performed. The impact of lower E95 bandwidth on local intensity variability was monitored through the CDU of line-end features and the LWR power spectral density (PSD) of line/space patterns. The investigation found that the imec N10 test vehicle (with OPC optimized for a standard E95 bandwidth of 300 fm) showed pattern-specific responses for features exposed at 200 fm, suggesting areas of potential interest for further investigation.
pH induced contrast in viscoelasticity imaging of biopolymers
Yapp, R D; Insana, M F
2009-01-01
Understanding contrast mechanisms and identifying discriminating features is at the heart of diagnostic imaging development. This report focuses on how pH influences the viscoelastic properties of biopolymers, to better understand the effects of extracellular pH on breast tumour elasticity imaging. Extracellular pH is known to decrease by as much as 1 pH unit in breast tumours, creating a dangerous environment that increases cellular mutation rates and therapeutic resistance. We used a gelatin hydrogel phantom to isolate the effects of pH on a polymer network with similarities to the extracellular matrix in breast stroma. Using compressive unconfined creep and stress-relaxation measurements, we systematically measured the viscoelastic features sensitive to pH by way of time-domain models and complex modulus analysis. These results are used to determine the sensitivity of quasi-static ultrasonic elasticity imaging to pH. We found a strong elastic response of the polymer network to pH, such that the matrix stiffness decreased as pH was reduced; however, the viscous response of the medium to pH was negligible. While physiological features of breast stroma such as proteoglycans and vascular networks are not included in our hydrogel model, the observations in this study provide insight into viscoelastic features specific to pH changes in the collagenous stromal network. These observations suggest that the large contrast common in breast tumours with desmoplasia may be reduced under acidic conditions, and that viscoelastic features are unlikely to improve discriminability. PMID:19174599
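Creep measurements like these are commonly interpreted through lumped viscoelastic models. As a generic sketch (a Kelvin-Voigt solid with made-up moduli, not the paper's actual model fits), the elastic modulus E, which the study finds pH-sensitive, and the viscosity η, which it finds nearly pH-insensitive, separate cleanly in the creep response:

```python
import numpy as np

def kv_creep_strain(t, stress, E, eta):
    """Creep strain of a Kelvin-Voigt solid under constant stress:
    eps(t) = (stress / E) * (1 - exp(-t / tau)), with retardation time
    tau = eta / E. E sets the long-time (elastic) plateau; eta only sets
    how quickly the plateau is reached."""
    tau = eta / E
    return (stress / E) * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 10.0, 101)
# Illustrative values only: a softer (lower-E) gel, as at reduced pH,
# creeps to a larger equilibrium strain under the same stress.
eps_neutral = kv_creep_strain(t, stress=1.0, E=10.0, eta=5.0)
eps_acidic = kv_creep_strain(t, stress=1.0, E=5.0, eta=5.0)
```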
Natural image classification driven by human brain activity
NASA Astrophysics Data System (ADS)
Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao
2016-03-01
Natural image classification has been a hot topic in the computer vision and pattern recognition research field. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, the existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals, collected using the fMRI technique while human subjects were viewing natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features with the guidance of fMRI signals from brain regions with active responses to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse regression-based feature selection method is adapted to select the image features that best predict fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments on classifying images from four categories, viewed by two subjects, demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
Visual scan-path analysis with feature space transient fixation moments
NASA Astrophysics Data System (ADS)
Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong
2003-05-01
The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space has been attempted through the concept of visual similarity and non-linear low dimensional embedding, which defines a mapping from the image space into a low dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain specific knowledge. Based on this, this paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature space representation of visual search through the use of TFM. We demonstrate the practical values of this concept for characterizing the dynamics of eye movements in goal directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.
NASA Astrophysics Data System (ADS)
Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish
2018-02-01
The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant for MRI data, where signal intensity values lack a tissue-specific, quantitative meaning and depend on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting; specifically, for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: the Multivariate Coefficient of Variation and the Instability Score. Our results demonstrated that Haralick features were the most reproducible across all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as performing highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but only for certain parameter combinations. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
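A simplified, per-feature analogue of the cross-site reproducibility measures used here (the paper's Multivariate Coefficient of Variation is the multivariate generalisation) compares how much each feature's site-level mean varies across sites:

```python
import numpy as np

def cross_site_cv(site_means):
    """Per-feature coefficient of variation of site-level feature means:
    a feature is 'reproducible' when its mean barely varies across sites."""
    mu = site_means.mean(axis=0)
    sd = site_means.std(axis=0, ddof=1)
    return sd / np.abs(mu)

rng = np.random.default_rng(2)
# Toy setup: 4 sites x 3 features; feature 0 is stable across sites,
# feature 2 drifts strongly from site to site (values are illustrative).
site_means = np.full((4, 3), 10.0)
site_means[:, 0] += rng.normal(scale=0.01, size=4)
site_means[:, 2] += rng.normal(scale=3.0, size=4)
cv = cross_site_cv(site_means)
```

Features whose CV exceeds a chosen tolerance would then be excluded from cross-site classifier training.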
Sakalauskienė, K; Valiukevičienė, S; Raišutis, R; Linkevičiūtė, G
2018-05-23
Cutaneous melanoma is a melanocytic skin tumour with a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. Thickness of melanoma is one of the most important biomarkers for stage of disease, prognosis and surgery planning. In this study, we hypothesized that the analysis of spectrophotometric (SIAscope) images can provide information about skin tumour thickness. The intensity of blood displacement, "erythematous blush", collagen holes, intensity of collagen, and dermal and epidermal melanin were estimated in SIAgraphs. Tumour thicknesses were evaluated non-invasively in ultrasound images before excision. The diagnosis and Breslow index of each tumour were evaluated during routine histological examination. Logistic regression analysis of two thickness groups of melanocytic tumours (≤1 mm, n = 72 and >1 mm, n = 30), using six SIAscopic features, achieved areas under the ROC curves of 0.9 and 0.96, respectively. Overall, the sensitivity and specificity of SIAscopy observed in this study are 81.4% and 86.4%, respectively. The features of SIAgraphs individually are not sufficiently specific for the diagnosis of melanoma of different thickness. Promising results were observed for differentiation of melanocytic skin tumours using all six SIAscopic features, which correspond to the distribution, location and concentration of skin chromophores. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Ship Detection in Optical Satellite Image Based on RX Method and PCAnet
NASA Astrophysics Data System (ADS)
Shao, Xiu; Li, Huali; Lin, Hui; Kang, Xudong; Lu, Ting
2017-12-01
In this paper, we present a novel method for ship detection in optical satellite images based on the Reed-Xiaoli (RX) method and the principal component analysis network (PCAnet). The proposed method consists of the following three steps. First, the spatially adjacent pixels in the optical image are arranged into a vector, transforming the optical image into a 3D cube image. Through this process, the contextual information of spatially adjacent pixels can be integrated to magnify the discrimination between ships and background. Second, the RX anomaly detection method is adopted to preliminarily extract ship candidates from the produced 3D cube image. Finally, real ships are confirmed among the ship candidates by applying the PCAnet and a support vector machine (SVM). Specifically, the PCAnet is a simple deep learning network exploited to perform feature extraction, and the SVM is applied to achieve feature pooling and decision making. Experimental results demonstrate that our approach is effective in discriminating between ships and false alarms, and has good ship detection performance.
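The global RX detector at the heart of the candidate-extraction step scores each pixel vector by its squared Mahalanobis distance from the background statistics; anomalous (ship-like) vectors score high. A minimal sketch on synthetic pixel vectors:

```python
import numpy as np

def rx_scores(pixels):
    """Global RX (Reed-Xiaoli) anomaly score: squared Mahalanobis distance
    of each pixel vector from the scene mean, using the scene covariance."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

rng = np.random.default_rng(3)
# 500 background pixel vectors (4-dim, e.g. a stacked spatial neighbourhood)
# plus one bright ship-like outlier at index 0.
X = rng.normal(size=(500, 4))
X[0] = 8.0
scores = rx_scores(X)
```

Thresholding `scores` would yield the candidate mask that the PCAnet/SVM stage then verifies.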
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, the whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSIs pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for breast histopathological images. Specifically, the method presents a local statistical feature capturing the morphology and distribution of nuclei, and employs the Gabor feature to describe texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
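The locality-sensitive hashing used to speed up the search can be sketched with the classic random-hyperplane scheme (one common LSH family; the paper's exact hash family and parameters are not specified here):

```python
import numpy as np

rng = np.random.default_rng(4)

def lsh_hash(vecs, planes):
    """Random-hyperplane LSH: each feature vector maps to a bit string, and
    vectors with small cosine distance collide (share bits) with high
    probability, so only matching buckets need be searched."""
    bits = vecs @ planes.T > 0
    return [''.join('1' if b else '0' for b in row) for row in bits]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

planes = rng.normal(size=(16, 32))      # 16 hyperplanes over 32-d features
q = rng.normal(size=32)                 # query-region feature vector
near = q + 0.01 * rng.normal(size=32)   # near-duplicate region
far = rng.normal(size=32)               # unrelated region
h_q, h_near, h_far = lsh_hash(np.stack([q, near, far]), planes)
```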
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
2014-06-01
Purpose: The aim of this study was to explore characteristics derived from 18F-fluorodeoxyglucose (18F-FDG) PET images and assess their capacity for staging of esophageal squamous cell carcinoma (ESCC). Methods: 26 patients with newly diagnosed ESCC who underwent 18F-FDG PET scans were included in this study. Different image-derived indices, including the standardized uptake value (SUV), gross tumor length, texture features, and a shape feature, were considered. Taking the histopathologic examination as the gold standard, the capacities of the extracted indices for staging of ESCC were assessed by the Kruskal-Wallis test and the Mann-Whitney test. Specificity and sensitivity for each of the studied parameters were derived using receiver-operating characteristic curves. Results: 18F-FDG SUVmax and SUVmean showed statistically significant discriminative capability for AJCC and TNM stages. Texture features such as ENT and CORR were significant factors for N stage (p=0.040, p=0.029). Both FDG PET longitudinal length and the shape feature Eccentricity (EC) (p≤0.010) provided more powerful stratification of the primary ESCC AJCC and TNM stages than SUV and texture features. Receiver-operating characteristic curve analysis showed that tumor textural analysis can discriminate M stage with higher sensitivity than SUV measurement, but with lower sensitivity for T and N stages. Conclusion: The 18F-FDG image-derived characteristics of SUV, textural features and shape feature allow for good stratification of AJCC and TNM stage in ESCC patients.
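Texture indices such as the ENT (entropy) feature named above come from grey-level co-occurrence matrices (GLCMs). A minimal sketch of GLCM entropy on a quantised patch, with illustrative quantisation only (the study's GLCM parameters are not given):

```python
import numpy as np

def glcm_entropy(img, levels=4):
    """Entropy of a horizontal co-occurrence matrix: low for homogeneous
    patches, higher for heterogeneous (textured) uptake patterns."""
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count horizontally adjacent level pairs
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

rng = np.random.default_rng(5)
uniform = np.full((16, 16), 0.5)   # homogeneous patch -> single GLCM cell
noisy = rng.random((16, 16))       # heterogeneous patch -> spread-out GLCM
```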
Thyroid nodule ultrasound: technical advances and future horizons.
McQueen, Andrew S; Bhatia, Kunwar S S
2015-04-01
Thyroid nodules are extremely common and the vast majority are non-malignant; therefore the accurate discrimination of a benign lesion from malignancy is challenging. Ultrasound (US) characterisation has become the key component of many thyroid nodule guidelines and is primarily based on the detection of key features by high-resolution US. The thyroid imager should be familiar with the strengths and limitations of this modality and understand the technical factors that create and alter the imaging characteristics. Specific advances in high-resolution US are discussed with reference to individual features of thyroid cancer and benign disease. Potential roles for three-dimensional thyroid ultrasound and computer-aided diagnosis are also considered. The second section provides an overview of current evidence regarding thyroid ultrasound elastography (USE). USE is a novel imaging technique that quantifies tissue elasticity (stiffness) non-invasively and has potential utility because cancers cause tissue stiffening. In recent years, there has been much research into the value of thyroid USE for distinguishing benign and malignant nodules. Preliminary findings from multiple pilot studies and meta-analyses are promising and suggest that USE can augment the anatomical detail provided by high-resolution US. However, a definite role remains controversial and is discussed. • High-resolution US characterises thyroid nodules by demonstration of specific anatomical features • Technical advances heavily influence the key US features of thyroid nodules • Most papillary carcinomas appear stiffer than benign thyroid nodules on US elastography (USE) • Thyroid USE is controversial because of variation in the reported accuracies for malignancy • Combined grey-scale US/USE may lower the FNAC rate in benign nodules.
Zhang, Kai; Long, Erping; Cui, Jiangtao; Zhu, Mingmin; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni
2017-01-01
Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located in an automated manner in the original image using two successive applications of Canny edge detection and the Hough transform, then cropped, resized to a fixed size, and used to form pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and implement automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted by the CNN, we investigate the features combined with a support vector machine (SVM) and a softmax classifier and compare these with traditional representative methods. The qualitative and quantitative experimental results demonstrate that our proposed method offers exceptional mean accuracy, sensitivity and specificity: classification (97.07%, 97.28%, and 96.83%) and three-degree grading of area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%) and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed a potential automatic diagnostic software for ophthalmologists and patients in clinical applications to implement the validated model. PMID:28306716
Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks
Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni
2015-01-01
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values; and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
Koh, Joel E W; Acharya, U Rajendra; Hagiwara, Yuki; Raghavendra, U; Tan, Jen Hong; Sree, S Vinitha; Bhandary, Sulatha V; Rao, A Krishna; Sivaprasad, Sobha; Chua, Kuang Chua; Laude, Augustinus; Tong, Louis
2017-05-01
Vision is paramount to humans to lead an active personal and professional life. The prevalence of ocular diseases is rising, and diseases such as glaucoma, Diabetic Retinopathy (DR) and Age-related Macular Degeneration (AMD) are the leading causes of blindness in developed countries. Identifying these diseases in mass screening programmes is time-consuming and labor-intensive, and the diagnosis can be subjective. The use of an automated computer-aided diagnosis system will reduce the time taken for analysis and will also reduce inter-observer subjective variability in image interpretation. In this work, we propose one such system for the automatic classification of normal from abnormal (DR, AMD, glaucoma) images. We had a total of 404 normal and 1082 abnormal fundus images in our database. As the first step, 2D Continuous Wavelet Transform (CWT) decomposition was performed on the fundus images of the two classes. Subsequently, energy features and various entropies, namely Yager, Renyi, Kapoor, Shannon, and Fuzzy, were extracted from the decomposed images. Then, an adaptive synthetic sampling approach was applied to balance the normal and abnormal datasets. Next, the extracted features were ranked according to their significance using Particle Swarm Optimization (PSO). Thereupon, the ranked and selected features were used to train the random forest classifier using stratified 10-fold cross validation. Overall, the proposed system presented a performance rate of 92.48%, and a sensitivity and specificity of 89.37% and 95.58% respectively, using 15 features. This novel system shows promise in detecting abnormal fundus images and hence could be a valuable adjunct eye health screening tool that could be employed in polyclinics, thereby reducing the workload of specialists at hospitals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach combining image enhancement with the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning-based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step, where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy, with a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
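The coefficient-of-variation-over-scales idea can be sketched as follows: blur the image at several Gaussian scales and measure, per pixel, how much the blurred values disagree. The sketch below uses assumed scales and a simple border-renormalized separable blur; it is an illustration of the principle, not the authors' pipeline.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing with a truncated, border-renormalized
    1-D kernel (so a constant image stays exactly constant)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()

    def conv(m):
        num = np.convolve(m, k, mode='same')
        den = np.convolve(np.ones_like(m), k, mode='same')
        return num / den

    out = np.apply_along_axis(conv, 0, img.astype(float))
    return np.apply_along_axis(conv, 1, out)

def scale_space_cov(img, sigmas=(1.0, 2.0, 4.0)):
    """Pixel-wise coefficient of variation across Gaussian scales: near zero
    where the image is locally flat (background), higher near structure
    such as cytoplasm boundaries."""
    stack = np.stack([gaussian_blur(img, s) for s in sigmas])
    return stack.std(axis=0) / (np.abs(stack.mean(axis=0)) + 1e-8)

img = np.full((32, 32), 0.2)
img[12:20, 12:20] = 1.0        # a bright "cell" on a flat background
cov = scale_space_cov(img)
```

Thresholding `cov` (after the enhancement step described in the abstract) would give the initial cytoplasm/background separation.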
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.
Automated characterization of diabetic foot using nonlinear features extracted from thermograms
NASA Astrophysics Data System (ADS)
Adam, Muhammad; Ng, Eddie Y. K.; Oh, Shu Lih; Heng, Marabelle L.; Hagiwara, Yuki; Tan, Jen Hong; Tong, Jasper W. K.; Acharya, U. Rajendra
2018-03-01
Diabetic foot is a major complication of diabetes mellitus (DM). Blood circulation to the foot decreases due to DM, and hence the temperature of the plantar foot is reduced. Thermography is a non-invasive imaging method employed to view thermal patterns using an infrared (IR) camera. It allows qualitative and visual documentation of temperature fluctuation in vascular tissues, but it is difficult to diagnose these temperature changes manually. Thus, a computer-assisted diagnosis (CAD) system may help to accurately detect diabetic foot and prevent traumatic outcomes such as ulcerations and lower extremity amputation. In this study, plantar foot thermograms of 33 healthy persons and 33 individuals with type 2 diabetes were taken. These foot images are decomposed using discrete wavelet transform (DWT) and higher order spectra (HOS) techniques. Various texture and entropy features are extracted from the decomposed images. These combined (DWT + HOS) features are ranked using t-values and classified using a support vector machine (SVM) classifier. Our proposed methodology achieved a maximum accuracy of 89.39%, sensitivity of 81.81% and specificity of 96.97% using only five features. The performance of the proposed thermography-based CAD system can help clinicians obtain a second opinion on their diagnosis of diabetic foot.
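The t-value feature-ranking step described above can be illustrated with a minimal sketch (synthetic data and invented feature indices; not the authors' code):

```python
import numpy as np

def t_values(X_a, X_b):
    """Welch t-statistic per feature between two groups (higher |t| = more discriminative)."""
    m_a, m_b = X_a.mean(axis=0), X_b.mean(axis=0)
    v_a, v_b = X_a.var(axis=0, ddof=1), X_b.var(axis=0, ddof=1)
    n_a, n_b = len(X_a), len(X_b)
    return (m_a - m_b) / np.sqrt(v_a / n_a + v_b / n_b)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, (33, 6))   # 33 subjects x 6 toy features
diabetic = rng.normal(0.0, 1.0, (33, 6))
diabetic[:, 2] += 2.0                     # make feature 2 genuinely discriminative
t = np.abs(t_values(healthy, diabetic))
ranking = np.argsort(t)[::-1]             # most discriminative features first
print(ranking)
```

The top-ranked features would then be fed to a classifier such as an SVM.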
Infrared moving small target detection based on saliency extraction and image sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie
2016-10-01
Moving small target detection in infrared images is a crucial technique of infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit features of the Fourier spectrum image and the magnitude spectrum of the Fourier transform to roughly extract saliency regions, and use threshold segmentation to separate the regions that look salient from the background, which yields a binary image. Second, a new patch-image model and an over-complete dictionary are introduced to the detection system, converting infrared small target detection into a problem of reconstructing and optimizing patch-image information based on sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch-image, which contains salient region information, and then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets. The coefficients of a target image patch are sparse. Finally, for image sequences, the Euclidean distance is used to reduce the false alarm ratio and increase the detection accuracy of moving small targets in infrared images, exploiting the target position correlation between frames.
Image Recognition and Feature Detection in Solar Physics
NASA Astrophysics Data System (ADS)
Martens, Petrus C.
2012-05-01
The Solar Dynamics Observatory (SDO) data repository will dwarf the archives of all previous solar physics missions put together. NASA recognized early on that the traditional methods of analyzing the data -- solar scientists and grad students in particular analyzing the images by hand -- would simply not work, and tasked our Feature Finding Team (FFT) with developing automated feature recognition modules for solar events and phenomena likely to be observed by SDO. Having these metadata available on-line will enable solar scientists to conduct statistical studies involving large sets of events that would be impossible now with traditional means. We have followed a two-track approach in our project: we have been developing some existing task-specific solar feature finding modules to be pipeline-ready for the stream of SDO data, plus we are designing a few new modules. Secondly, we took it upon ourselves to develop an entirely new “trainable” module capable of identifying different types of solar phenomena starting from a limited number of user-provided examples. Both approaches are now reaching fruition, and I will show examples and movies with results from several of our feature finding modules. In the second part of my presentation I will focus on our “trainable” module, which is the most innovative in character. First, there is a strong similarity between solar and medical X-ray images with regard to their texture, which has allowed us to apply some advances made in medical image recognition. Second, we have found that there is a strong similarity between the way our trainable module works and the way our brain recognizes images. The brain can quickly recognize similar images from key characteristics, just as our code does. We conclude that our approach represents the beginning of a more human-like procedure for computer image recognition.
Mass Spectrometry Imaging, an Emerging Technology in Neuropsychopharmacology
Shariatgorji, Mohammadreza; Svenningsson, Per; Andrén, Per E
2014-01-01
Mass spectrometry imaging is a powerful tool for directly determining the distribution of proteins, peptides, lipids, neurotransmitters, metabolites and drugs in neural tissue sections in situ. Molecule-specific imaging can be achieved using various ionization techniques that are suited to different applications but which all yield data with high mass accuracies and spatial resolutions. The ability to simultaneously obtain images showing the distributions of chemical species ranging from metal ions to macromolecules makes it possible to explore the chemical organization of a sample and to correlate the results obtained with specific anatomical features. The imaging of biomolecules has provided new insights into multiple neurological diseases, including Parkinson's and Alzheimer's disease. Mass spectrometry imaging can also be used in conjunction with other imaging techniques in order to identify correlations between changes in the distribution of important chemical species and other changes in the properties of the tissue. Here we review the applications of mass spectrometry imaging in neuroscience research and discuss its potential. The results presented demonstrate that mass spectrometry imaging is a useful experimental method with diverse applications in neuroscience. PMID:23966069
Automated classification of immunostaining patterns in breast tissue from the human protein atlas.
Swamidoss, Issac Niwas; Kårsnäs, Andreas; Uhlmann, Virginie; Ponnusamy, Palanisamy; Kampf, Caroline; Simonsson, Martin; Wählby, Carolina; Strand, Robin
2013-01-01
The Human Protein Atlas (HPA) is an effort to map the location of all human proteins (http://www.proteinatlas.org/). It contains a large number of histological images of sections from human tissue. Tissue microarrays (TMA) are imaged by a slide-scanning microscope, and each image represents a thin slice of a tissue core with a dark brown antibody-specific stain and a blue counterstain. When generating antibodies for protein profiling of the human proteome, an important step in the quality control is to compare staining patterns of different antibodies directed towards the same protein. This comparison is an ultimate control that the antibody recognizes the right protein. In this paper, we propose and evaluate different approaches for classifying sub-cellular antibody staining patterns in breast tissue samples. The proposed methods include the computation of various features, including gray level co-occurrence matrix (GLCM) features, complex wavelet co-occurrence matrix (CWCM) features, and weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM)-inspired features. The extracted features are used as input to two different multivariate classifiers (a support vector machine (SVM) and a linear discriminant analysis (LDA) classifier). Before extracting features, we use color deconvolution to separate different tissue components, such as the brown-stained positive regions and the blue cellular regions, in the immunostained TMA images of breast tissue. We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. Both human experts and the proposed automated methods have difficulties discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm.
Methods for quantification of staining patterns in histopathology have many applications, ranging from antibody quality control to tumor grading.
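A minimal, self-contained illustration of GLCM texture features of the kind used above (a toy 4-level image and a single pixel offset; not the HPA pipeline):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4, symmetric=True, normed=True):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    if symmetric:
        M += M.T
    if normed:
        M /= M.sum()
    return M

def glcm_features(M):
    """A few classic Haralick-style statistics of the co-occurrence matrix."""
    i, j = np.indices(M.shape)
    return {
        "contrast": float((M * (i - j) ** 2).sum()),
        "homogeneity": float((M / (1.0 + np.abs(i - j))).sum()),
        "energy": float((M ** 2).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
g = glcm(img)
f = glcm_features(g)
print(f)
```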
Computer-aided diagnosis of liver tumors on computed tomography images.
Chang, Chin-Chen; Chen, Hong-Hao; Chang, Yeun-Chung; Yang, Ming-Yang; Lo, Chung-Ming; Ko, Wei-Chun; Lee, Yee-Fan; Liu, Kao-Lang; Chang, Ruey-Feng
2017-07-01
Liver cancer is the tenth most common cancer in the USA, and its incidence has been increasing for several decades. Early detection, diagnosis, and treatment of the disease are very important. Computed tomography (CT) is one of the most common and robust imaging techniques for the detection of liver cancer. CT scanners can provide multiple-phase sequential scans of the whole liver. In this study, we proposed a computer-aided diagnosis (CAD) system to diagnose liver cancer using features of tumors obtained from multiphase CT images. A total of 71 histologically proven liver tumors, including 49 benign and 22 malignant lesions, were evaluated with the proposed CAD system to assess its performance. Tumors were identified by the user and then segmented using a region growing algorithm. After tumor segmentation, three kinds of features were obtained for each tumor: texture, shape, and kinetic curve. Texture was quantified using three-dimensional (3-D) texture data of the tumor based on the grey level co-occurrence matrix (GLCM). Compactness, margin, and an elliptic model were used to describe the 3-D shape of the tumor. The kinetic curve was established from each phase of the tumor and represented as variations in density between phases. Backward elimination was used to select the best combination of features, and binary logistic regression analysis was used to classify the tumors with leave-one-out cross validation. The accuracy and sensitivity for texture were 71.82% and 68.18%, respectively, which were better than those for shape and kinetic curve at matched specificity. Combining all of the features achieved the highest accuracy (58/71, 81.69%), sensitivity (18/22, 81.82%), and specificity (40/49, 81.63%). The Az value for combining all features was 0.8713. Combining texture, shape, and kinetic curve features may be able to differentiate benign from malignant tumors in the liver using our proposed CAD system. Copyright © 2017 Elsevier B.V. All rights reserved.
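The leave-one-out cross-validation used above can be sketched as follows; the nearest-class-mean classifier here is a deliberately simple stand-in for the paper's logistic regression, and the data are synthetic:

```python
import numpy as np

def nearest_mean_predict(X_train, y_train, x):
    """Classify x by the nearest class mean (a simple stand-in classifier)."""
    classes = sorted(set(y_train))
    means = [X_train[np.array(y_train) == c].mean(axis=0) for c in classes]
    dists = [np.linalg.norm(x - m) for m in means]
    return classes[int(np.argmin(dists))]

def leave_one_out_accuracy(X, y):
    """Each sample is held out once; the model is fit on the rest and tested on it."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        y_rest = [y[j] for j in range(len(y)) if j != i]
        hits += nearest_mean_predict(X[mask], y_rest, X[i]) == y[i]
    return hits / len(X)

rng = np.random.default_rng(2)
# two well-separated toy classes ("benign" vs "malignant" feature vectors)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(4, 1, (20, 3))])
y = [0] * 20 + [1] * 20
acc = leave_one_out_accuracy(X, y)
print(acc)
```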
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja
2012-03-01
Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving this issue could provide criminal investigators with the specific time a fingerprint trace was left on a surface, and would therefore enable them to link potential suspects to the time a crime took place, as well as to reconstruct the sequence of events or eliminate irrelevant fingerprints to ensure privacy constraints. Transferring imaging techniques from different application areas, such as 3D image acquisition, surface measurement and chemical analysis, to the domain of lifting latent biometric fingerprint traces is an upcoming trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data, rather than classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we suggested using a feature called binary pixel, which is a novel approach in the working field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and retrieves a characteristic logarithmic aging tendency for both 2D-intensity and 3D-topographic images from the sensor. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to achieve an increase in the accuracy and reliability of a potential future age determination scheme.
Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach, which might combine promising features into a joint age determination scheme in the future. We furthermore demonstrate the feasibility of the introduced approach by fusing, as an example, the binary pixel features based on the 2D-intensity and 3D-topographic images of the mentioned CWL sensor. We conclude that a formula-based age determination approach requires very precise image data, which cannot be achieved at the moment, whereas a machine-learning-based classification approach seems feasible, provided an adequate number of features can be supplied.
Modeling first impressions from highly variable facial images.
Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom
2014-08-12
First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
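The attribute-based linear modelling described above can be sketched with ordinary least squares on synthetic data (the neural-network stage is omitted and all quantities are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_faces, n_attrs = 200, 10
attrs = rng.normal(0, 1, (n_faces, n_attrs))            # measured facial attributes
true_w = rng.normal(0, 1, n_attrs)
ratings = attrs @ true_w + rng.normal(0, 0.5, n_faces)  # trait impressions with rater noise

# fit a linear model on half the faces, evaluate on the unseen half
X_fit, X_new = attrs[:100], attrs[100:]
y_fit, y_new = ratings[:100], ratings[100:]
w, *_ = np.linalg.lstsq(np.c_[X_fit, np.ones(100)], y_fit, rcond=None)
pred = np.c_[X_new, np.ones(100)] @ w
# proportion of variance in impressions of previously unseen faces explained
r2 = 1 - np.sum((y_new - pred) ** 2) / np.sum((y_new - y_new.mean()) ** 2)
print(round(r2, 3))
```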
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector representing the stereoscopic image in terms of visual comfort. In the second stage, this feature vector is fused into a single visual comfort score by a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
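The saliency-weighted disparity statistics of the first stage can be illustrated with a toy example (synthetic maps and an invented feature layout; not the authors' feature set):

```python
import numpy as np

def weighted_disparity_stats(disparity, saliency):
    """Disparity statistics weighted by a normalised saliency map, so that
    attended regions dominate the comfort-related feature vector."""
    w = saliency / saliency.sum()
    mean = float((w * disparity).sum())
    var = float((w * (disparity - mean) ** 2).sum())
    return np.array([mean, np.sqrt(var), disparity.max(), disparity.min()])

disparity = np.zeros((8, 8))
disparity[2:5, 2:5] = 3.0     # a region of large (potentially uncomfortable) disparity
saliency = np.ones((8, 8))
saliency[2:5, 2:5] = 10.0     # ...that is also visually salient
feat = weighted_disparity_stats(disparity, saliency)
plain_mean = disparity.mean()
# the salient high-disparity region pulls the weighted mean well above the plain mean
print(feat[0], plain_mean)
```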
Cellucci, Tania; Tyrrell, Pascal N; Twilt, Marinka; Sheikh, Shehla; Benseler, Susanne M
2014-03-01
To identify distinct clusters of children with inflammatory brain diseases based on clinical, laboratory, and imaging features at presentation, to assess which features contribute strongly to the development of clusters, and to compare additional features between the identified clusters. A single-center cohort study was performed with children who had been diagnosed as having an inflammatory brain disease between June 1, 1989 and December 31, 2010. Demographic, clinical, laboratory, neuroimaging, and histologic data at diagnosis were collected. K-means cluster analysis was performed to identify clusters of patients based on their presenting features. Associations between the clusters and patient variables, such as diagnoses, were determined. A total of 147 children (50% female; median age 8.8 years) were identified: 105 with primary central nervous system (CNS) vasculitis, 11 with secondary CNS vasculitis, 8 with neuronal antibody syndromes, 6 with postinfectious syndromes, and 17 with other inflammatory brain diseases. Three distinct clusters were identified. Paresis and speech deficits were the most common presenting features in cluster 1. Children in cluster 2 were likely to present with behavior changes, cognitive dysfunction, and seizures, while those in cluster 3 experienced ataxia, vision abnormalities, and seizures. Lesions seen on T2/fluid-attenuated inversion recovery sequences of magnetic resonance imaging were common in all clusters, but unilateral ischemic lesions were more prominent in cluster 1. The clusters were associated with specific diagnoses and diagnostic test results. Children with inflammatory brain diseases presented with distinct phenotypical patterns that are associated with specific diagnoses. This information may inform the development of a diagnostic classification of childhood inflammatory brain diseases and suggest that specific pathways of diagnostic evaluation are warranted. Copyright © 2014 by the American College of Rheumatology.
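The k-means clustering step can be sketched with plain Lloyd's algorithm on synthetic presenting-feature vectors (initialisation is simplified for the demo; real analyses would use k-means++ or similar):

```python
import numpy as np

def kmeans(X, init_centroids, iters=20):
    """Lloyd's algorithm: assign each sample to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    centroids = init_centroids.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centroids)):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(4)
# three well-separated synthetic groups of 5-dimensional presenting-feature vectors
X = np.vstack([rng.normal(c, 0.2, (30, 5)) for c in (0.0, 2.0, 4.0)])
# seed with one sample from each presumed group (illustrative shortcut)
labels, centroids = kmeans(X, X[[0, 30, 60]])
print(labels)
```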
Research on vehicle detection based on background feature analysis in SAR images
NASA Astrophysics Data System (ADS)
Zhang, Bochuan; Tang, Bo; Zhang, Cong; Hu, Ruiguang; Yun, Hongquan; Xiao, Liping
2017-10-01
For detecting ground vehicles in low-resolution SAR images, a method is proposed that first determines the region containing the vehicles and then detects the targets within that specific region. The experimental results show that this method not only reduces the target detection area, but also reduces the influence of terrain clutter on detection, which greatly improves the reliability of target detection.
Classification of wet aged related macular degeneration using optical coherence tomographic images
NASA Astrophysics Data System (ADS)
Haq, Anam; Mir, Fouwad Jamil; Yasin, Ubaid Ullah; Khan, Shoab A.
2013-12-01
Wet age-related macular degeneration (AMD) is one form of age-related macular degeneration. In order to detect wet AMD, we look for pigment epithelium detachment (PED) and fluid-filled regions caused by choroidal neovascularization (CNV). This form of AMD can cause vision loss if not treated in time. In this article we propose an automated system for the detection of wet AMD in optical coherence tomography (OCT) images. The proposed system extracts PED and CNV from OCT images using segmentation and morphological operations, and then a detailed feature set is extracted. These features are then passed on to a classifier for classification. Finally, performance measures such as accuracy, sensitivity and specificity are calculated, and the classifier delivering the maximum performance is selected. Our system achieves its highest performance using an SVM, compared with the other classifiers evaluated.
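The performance measures quoted throughout these abstracts follow directly from confusion counts; a small illustrative sketch (toy labels, not study data):

```python
def performance(y_true, y_pred, positive=1):
    """Accuracy, sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# 1 = wet AMD, 0 = normal (invented labels for illustration)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
p = performance(y_true, y_pred)
print(p)
```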
Variations in algorithm implementation among quantitative texture analysis software packages
NASA Astrophysics Data System (ADS)
Foy, Joseph J.; Mitta, Prerana; Nowosatka, Lauren R.; Mendel, Kayla R.; Li, Hui; Giger, Maryellen L.; Al-Hallaq, Hania; Armato, Samuel G.
2018-02-01
Open-source texture analysis software allows for the advancement of radiomics research. Variations in texture features, however, result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in the magnetic resonance images (MRI) of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. Relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for max, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While relative texture change discrepancies existed across packages, these differences were not significant when consistent parameters were used.
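Absolute agreement between packages can be quantified with ICC(2,1); a minimal sketch on synthetic feature values (not the study's data) shows why a systematic per-package bias lowers agreement even when values are otherwise consistent:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure, absolute-agreement ICC(2,1).
    ratings: n targets (rows) x k raters/packages (columns)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-target mean square
    msc = ss_cols / (k - 1)                 # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(5)
truth = rng.normal(0, 1, 20)                # 20 ROIs, one "true" feature value each
agree = np.column_stack([truth + rng.normal(0, 0.05, 20) for _ in range(4)])
biased = agree.copy()
biased[:, 0] += 2.0                         # one "package" systematically offset
print(icc_2_1(agree), icc_2_1(biased))
```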
Imaging brain tumour microstructure.
Nilsson, Markus; Englund, Elisabet; Szczepankiewicz, Filip; van Westen, Danielle; Sundgren, Pia C
2018-05-08
Imaging is an indispensable tool for brain tumour diagnosis, surgical planning, and follow-up. Definite diagnosis, however, often demands histopathological analysis of microscopic features of tissue samples, which have to be obtained by invasive means. A non-invasive alternative may be to probe corresponding microscopic tissue characteristics by MRI, or so called 'microstructure imaging'. The promise of microstructure imaging is one of 'virtual biopsy' with the goal to offset the need for invasive procedures in favour of imaging that can guide pre-surgical planning and can be repeated longitudinally to monitor and predict treatment response. The exploration of such methods is motivated by the striking link between parameters from MRI and tumour histology, for example the correlation between the apparent diffusion coefficient and cellularity. Recent microstructure imaging techniques probe even more subtle and specific features, providing parameters associated to cell shape, size, permeability, and volume distributions. However, the range of scenarios in which these techniques provide reliable imaging biomarkers that can be used to test medical hypotheses or support clinical decisions is yet unknown. Accurate microstructure imaging may moreover require acquisitions that go beyond conventional data acquisition strategies. This review covers a wide range of candidate microstructure imaging methods based on diffusion MRI and relaxometry, and explores advantages, challenges, and potential pitfalls in brain tumour microstructure imaging. Copyright © 2018. Published by Elsevier Inc.
Smart, Otis; Burrell, Lauren
2014-01-01
Pattern classification for intracranial electroencephalogram (iEEG) and functional magnetic resonance imaging (fMRI) signals has furthered epilepsy research toward understanding the origin of epileptic seizures and localizing dysfunctional brain tissue for treatment. Prior research has demonstrated that implicitly selecting features with a genetic programming (GP) algorithm more effectively determined the proper features to discern biomarker and non-biomarker interictal iEEG and fMRI activity than conventional feature selection approaches. However, for both the iEEG and fMRI modalities, it is still uncertain whether the stochastic properties of indirect feature selection with a GP yield (a) consistent results within a patient data set and (b) features that are specific or universal across multiple patient data sets. We examined the reproducibility of implicitly selecting features to classify interictal activity using a GP algorithm by performing several selection trials and subsequent frequent itemset mining (FIM) for separate iEEG and fMRI epilepsy patient data. We observed within-subject consistency and across-subject variability, with some small similarity among selected features, indicating a clear need for patient-specific features and a possible need for patient-specific feature selection and/or classification. For the fMRI, using nearest-neighbor classification and 30 GP generations, we obtained over 60% median sensitivity and over 60% median selectivity. For the iEEG, using nearest-neighbor classification and 30 GP generations, we obtained over 65% median sensitivity and over 65% median selectivity for all but one patient. PMID:25580059
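The frequent itemset mining (FIM) step over repeated GP selection trials can be sketched with brute-force subset counting (toy trials and invented feature indices; not the authors' pipeline):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count every itemset appearing in at least min_support transactions
    (brute force over subsets; fine for the small feature sets one trial selects)."""
    counts = {}
    for t in transactions:
        for r in range(1, len(t) + 1):
            for combo in combinations(sorted(t), r):
                counts[combo] = counts.get(combo, 0) + 1
    return {s: c for s, c in counts.items() if c >= min_support}

# each "transaction" = the feature indices one GP selection trial picked
trials = [{1, 4, 7}, {1, 4, 9}, {1, 4, 7}, {2, 4, 7}, {1, 4, 7}]
freq = frequent_itemsets(trials, min_support=4)
print(freq)
```

Itemsets that recur across trials (here, features 1 and 4 together) indicate features selected consistently despite the GP's stochasticity.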
Recognizing Materials using Perceptually Inspired Features
Sharan, Lavanya; Liu, Ce; Rosenholtz, Ruth; Adelson, Edward H.
2013-01-01
Our world consists not only of objects and scenes but also of materials of various kinds. Being able to recognize the materials that surround us (e.g., plastic, glass, concrete) is important for humans as well as for computer vision systems. Unfortunately, materials have received little attention in the visual recognition literature, and very few computer vision systems have been designed specifically to recognize materials. In this paper, we present a system for recognizing material categories from single images. We propose a set of low and mid-level image features that are based on studies of human material recognition, and we combine these features using an SVM classifier. Our system outperforms a state-of-the-art system [Varma and Zisserman, 2009] on a challenging database of real-world material categories [Sharan et al., 2009]. When the performance of our system is compared directly to that of human observers, humans outperform our system quite easily. However, when we account for the local nature of our image features and the surface properties they measure (e.g., color, texture, local shape), our system rivals human performance. We suggest that future progress in material recognition will come from: (1) a deeper understanding of the role of non-local surface properties (e.g., extended highlights, object identity); and (2) efforts to model such non-local surface properties in images. PMID:23914070
Kostopoulos, Spiros A; Asvestas, Pantelis A; Kalatzis, Ioannis K; Sakellaropoulos, George C; Sakkis, Theofilos H; Cavouras, Dionisis A; Glotsos, Dimitris T
2017-09-01
The aim of this study was to propose features that evaluate pictorial differences between melanocytic nevus (mole) and melanoma lesions by computer-based analysis of plain photography images, and to design a cross-platform, tunable decision support system to discriminate moles from melanomas with high accuracy in different publicly available image databases. Digital plain photography images of verified mole and melanoma lesions were downloaded from (i) Edinburgh University Hospital, UK (Dermofit, 330 moles/70 melanomas, under signed agreement), (ii) five different centers (Multicenter, 63 moles/25 melanomas, publicly available), and (iii) Groningen University, Netherlands (Groningen, 100 moles/70 melanomas, publicly available). Images were processed to outline the lesion border and isolate the lesion from the surrounding background. Fourteen features were generated from each lesion, evaluating texture (4), structure (5), shape (4) and color (1). Features were subjected to statistical analysis to determine differences in pictorial properties between moles and melanomas. The probabilistic neural network (PNN) classifier, exhaustive-search feature selection, the leave-one-out (LOO) method, and the external cross-validation (ECV) method were used to design the PR-system for discriminating between moles and melanomas. Statistical analysis revealed that melanomas, as compared to moles, were of lower intensity, of less homogeneous surface, had more dark pixels with intensities spanning larger spectra of gray values, contained more objects of different sizes and gray levels, had more asymmetrical shapes and irregular outlines, had abrupt intensity transitions from lesion to background tissue, and had more distinct colors.
The PR-system designed on the Dermofit images scored, using ECV, 94.1% overall accuracy, 82.9% sensitivity and 96.5% specificity on the Dermofit images; 92.0%, 88.0% and 93.7%, respectively, on the Multicenter images; and 76.2%, 73.9% and 77.8%, respectively, on the Groningen images. The PR-system designed from the Dermofit image database could thus be fine-tuned to classify with good accuracy plain photography mole/melanoma images of other databases employing different image capturing equipment and protocols. Copyright © 2017 Elsevier B.V. All rights reserved.
Imaging outcomes for trials of remyelination in multiple sclerosis
Mallik, Shahrukh; Samson, Rebecca S; Wheeler-Kingshott, Claudia A M; Miller, David H
2014-01-01
Trials of potential neuroreparative agents are becoming more important in the spectrum of multiple sclerosis research. Appropriate imaging outcomes are required that are feasible from a time and practicality point of view, as well as being sensitive and specific to myelin, while also being reproducible and clinically meaningful. Conventional MRI sequences have limited specificity for myelination. We evaluate the imaging modalities which are potentially more specific to myelin content in vivo, such as magnetisation transfer ratio (MTR), restricted proton fraction f (from quantitative magnetisation transfer measurements), myelin water fraction and diffusion tensor imaging (DTI) metrics, in addition to positron emission tomography (PET) imaging. Although most imaging applications to date have focused on the brain, we also consider measures with the potential to detect remyelination in the spinal cord and in the optic nerve. At present, MTR and DTI measures probably offer the most realistic and feasible outcome measures for such trials, especially in the brain. However, no one measure currently demonstrates sufficiently high sensitivity or specificity to myelin, or correlation with clinical features, and it should be useful to employ more than one outcome to maximise understanding and interpretation of findings with these sequences. PET may be less feasible for current and near-future trials, but is a promising technique because of its specificity. In the optic nerve, visual evoked potentials can indicate demyelination and should be correlated with an imaging outcome (such as optic nerve MTR), as well as clinical measures. PMID:24769473
A general framework to learn surrogate relevance criterion for atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-09-01
Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image-based surrogate relevance criteria that better mimic the behavior of segmentation-based oracle geometric relevance. The validity of the general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors of the given targets and large for irrelevant, remote ones. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion sets, compared to benchmark surrogates.
Cohen, Rachel; Newton-John, Toby; Slater, Amy
2017-12-01
The present study aimed to identify the specific social networking sites (SNS) features that relate to body image concerns in young women. A total of 259 women aged 18-29 years completed questionnaire measures of SNS use (Facebook and Instagram) and body image concerns. It was found that appearance-focused SNS use, rather than overall SNS use, was related to body image concerns in young women. Specifically, greater engagement in photo activities on Facebook, but not general Facebook use, was associated with greater thin-ideal internalisation and body surveillance. Similarly, following appearance-focused accounts on Instagram was associated with thin-ideal internalisation, body surveillance, and drive for thinness, whereas following appearance-neutral accounts was not associated with any body image outcomes. Implications for future SNS research, as well as for body image and disordered eating interventions for young women, are discussed. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Lin, Wen-Yen; Chou, Wen-Cheng; Chang, Po-Cheng; Chou, Chung-Chuan; Wen, Ming-Shien; Ho, Ming-Yun; Lee, Wen-Chen; Hsieh, Ming-Jer; Lin, Chung-Chih; Tsai, Tsai-Hsuan; Lee, Ming-Yih
2018-03-01
Seismocardiogram (SCG) or mechanocardiography is a noninvasive cardiac diagnostic method; however, previous studies used only a single sensor to detect cardiac mechanical activities, which cannot identify location-specific feature points in a cardiac cycle corresponding to the four valvular auscultation locations. In this study, a multichannel SCG spectrum measurement system was proposed and examined for cardiac activity monitoring, to overcome problems such as position dependency, time delay, and signal attenuation that occur in traditional single-channel SCG systems. ECG and multichannel SCG signals were simultaneously recorded in 25 healthy subjects. Cardiac echocardiography was conducted at the same time. SCG traces were analyzed and compared with echocardiographic images for feature point identification. Fifteen feature points were identified in the corresponding SCG traces. Among them, six feature points were identified: left ventricular lateral wall contraction peak velocity, septal wall contraction peak velocity, transaortic peak flow, transpulmonary peak flow, transmitral ventricular relaxation flow, and transmitral atrial contraction flow. These feature points were not observed in previous studies because single-channel SCG could not detect the location-specific signals from other locations due to time delay and signal attenuation. As a result, the multichannel SCG spectrum measurement system can record the corresponding cardiac mechanical activities with location-specific SCG signals, and six new feature points were identified with the system. This new modality may help clinical diagnoses of valvular heart diseases and heart failure in the future.
Unsupervised feature learning for autonomous rock image classification
NASA Astrophysics Data System (ADS)
Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond
2017-09-01
Autonomous rock image classification can enhance the capability of robots for geological detection and increase the scientific return, both in investigations on Earth and in planetary surface exploration on Mars. Because rock texture images are usually inhomogeneous and hand-crafted features are not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that they can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.
Feature-based attention: it is all bottom-up priming.
Theeuwes, Jan
2013-10-19
Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming.
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Considerable effort has been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, while conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner, which shares a similar topological architecture with a convolutional neural network, is presented to solve these problems, with application to face recognition. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear structure. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
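The filter-learning step can be sketched as k-means over flattened image patches, followed by convolution, a tanh nonlinearity, and pooling. A minimal Python/NumPy illustration (global max-pooling here stands in for the paper's spatial pyramid second-order pooling; patch size and data are arbitrary):

```python
import numpy as np

def kmeans_filters(patches, k, iters=20, seed=0):
    """Learn k convolution filters as k-means centroids of flattened,
    zero-mean image patches (plain Lloyd iterations)."""
    rng = np.random.default_rng(seed)
    patches = patches - patches.mean(axis=1, keepdims=True)  # remove patch DC
    centroids = patches[rng.choice(len(patches), k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = patches[labels == j].mean(axis=0)
    return centroids

def encode(img, centroids, patch=3):
    """Convolve with each learned filter, apply tanh, then max-pool per map."""
    h, w = img.shape
    out = []
    for f in centroids:
        f2 = f.reshape(patch, patch)
        resp = np.array([[np.sum(img[i:i + patch, j:j + patch] * f2)
                          for j in range(w - patch + 1)]
                         for i in range(h - patch + 1)])
        out.append(np.tanh(resp).max())  # global max-pool stand-in
    return np.array(out)
```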
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update the 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi
2014-02-01
This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
Understanding Magnetic Resonance Imaging of Knee Cartilage Repair: A Focus on Clinical Relevance.
Hayashi, Daichi; Li, Xinning; Murakami, Akira M; Roemer, Frank W; Trattnig, Siegfried; Guermazi, Ali
2017-06-01
The aims of this review article are (a) to describe the principles of morphologic and compositional magnetic resonance imaging (MRI) techniques relevant for the imaging of knee cartilage repair surgery and their application to longitudinal studies and (b) to illustrate the clinical relevance of pre- and postsurgical MRI with correlation to intraoperative images. First, MRI sequences that can be applied for imaging of cartilage repair tissue in the knee are described, focusing on comparison of 2D and 3D fast spin echo and gradient recalled echo sequences. Imaging features of cartilage repair tissue are then discussed, including conventional (morphologic) MRI and compositional MRI techniques. More specifically, imaging techniques for specific cartilage repair surgery techniques as described above, as well as MRI-based semiquantitative scoring systems for the knee cartilage repair tissue-MR Observation of Cartilage Repair Tissue and Cartilage Repair OA Knee Score-are explained. Then, currently available surgical techniques are reviewed, including marrow stimulation, osteochondral autograft, osteochondral allograft, particulate cartilage allograft, autologous chondrocyte implantation, and others. Finally, ongoing research efforts and future direction of cartilage repair tissue imaging are discussed.
Wavelet analysis enables system-independent texture analysis of optical coherence tomography images.
Lingley-Papadopoulos, Colleen A; Loew, Murray H; Zara, Jason M
2009-01-01
Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.
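The idea of system-independent texture features can be illustrated with a one-level Haar wavelet decomposition whose detail-band energies are normalised so that overall system gain cancels. A minimal NumPy sketch (a real implementation would use a wavelet library such as PyWavelets and deeper decompositions; the normalisation below is one simple way to suppress a system-dependent factor, not the authors' exact feature set):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar wavelet transform (image sides assumed even).
    Returns the LL approximation and LH, HL, HH detail sub-bands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise detail
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def texture_features(img):
    """Relative detail-band energies; ratios cancel a global intensity gain."""
    LL, LH, HL, HH = haar2d(np.asarray(img))
    e = np.array([(b ** 2).mean() for b in (LH, HL, HH)])
    return e / (e.sum() + 1e-12)
```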
Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.
2016-01-01
Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From-Motion
NASA Astrophysics Data System (ADS)
Wang, X.; Zhan, Z. Q.; Heipke, C.
2017-05-01
Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by and improves upon methods employing random k-d forests for this task. Specifically, we first derive features from the images, and then a random k-d forest is used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps, and thus the images belonging to a common block are calculated as input to a structure-from-motion (SfM) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing time efficiency. Orientations and 3D reconstructions were successfully conducted from our overlap graphs by SfM. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPUs, respectively.
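The overlap computation reduces to nearest-neighbour queries in descriptor space. The sketch below uses brute-force matching in place of the randomised k-d forest (the forest is what makes the real method fast); the thresholds and descriptors are illustrative:

```python
import numpy as np

def overlap_score(fa, fb, thresh=0.5):
    """Fraction of descriptors in fa whose nearest neighbour in fb lies
    within thresh (brute-force stand-in for the k-d forest lookup)."""
    d = np.sqrt(((fa[:, None, :] - fb[None, :, :]) ** 2).sum(-1))
    return float((d.min(axis=1) < thresh).mean())

def overlap_graph(feats, thresh=0.5, min_score=0.3):
    """Edges between images whose mutual descriptor overlap is high enough."""
    edges = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            s = min(overlap_score(feats[i], feats[j], thresh),
                    overlap_score(feats[j], feats[i], thresh))
            if s >= min_score:
                edges.append((i, j, s))
    return edges
```

Images joined by an edge are then passed as candidate pairs to the SfM pipeline.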
A new standard of visual data representation for imaging mass spectrometry.
O'Rourke, Matthew B; Padula, Matthew P
2017-03-01
MALDI imaging MS (IMS) is principally used for cancer diagnostics. In our own experience with publishing IMS data, we have been requested to modify our protocols with respect to the areas of the tissue that are imaged in order to comply with the wider literature. In light of this, we have determined that current methodologies lack effective controls and can potentially introduce bias by only imaging specific areas of the targeted tissue. EXPERIMENTAL DESIGN: A previously imaged sample was selected and then cropped in different ways to show the potential effect of imaging only targeted areas. By using a model sample, we were able to show how selective imaging of samples can misrepresent tissue features, and how changing the areas that are acquired, according to our new standard, introduces an effective internal control. Current IMS sampling convention relies on the assumption that sample preparation has been performed correctly. This prevents users from checking whether molecules have moved beyond the borders of the tissue due to delocalization; consequently, products of improper sample preparation could be interpreted as biological features that are of critical importance when encountered in a visual diagnostic. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Celluloid angels: a research study of nurses in feature films 1900-2007.
Stanley, David J
2008-10-01
This paper is a report of a study examining how nursing and nurses are portrayed in feature films made between 1900 and 2007 with a nurse as their main or a principal character and a story-line related specifically to nursing. Nurses and the nursing profession are frequently portrayed negatively or stereotypically in the media, with nurses often being portrayed as feminine and caring but not as leaders or professionals capable of autonomous practice. A mixed method approach was used to examine feature films made in the Western world. Over 36,000 feature film synopses were reviewed (via CINAHL, ProQuest and relevant movie-specific literature) for the keywords 'nurse'/'nursing'. Identified films were analysed quantitatively to determine their country of production, genre, plot(s) and other relevant data, and qualitatively to identify the emergence of themes related to the image of nurses/nursing in films. For the period from 1900 to 2007, 280 relevant feature films were identified. Most films were made in the United States of America or the United Kingdom, although in recent years films have been increasingly produced in other countries. Early films portrayed nurses as self-sacrificial heroines, sex objects and romantics. More recent films increasingly portray them as strong, self-confident professionals. Nurse-related films offer a unique insight into the image of nurses and how they have been portrayed. Nurses need to be aware of the impact the film industry has on how nurses and nursing are perceived and represented in feature films.
Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam
2016-01-01
Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945
Chen, Yinsheng; Li, Zeju; Wu, Guoqing; Yu, Jinhua; Wang, Yuanyuan; Lv, Xiaofei; Ju, Xue; Chen, Zhongping
2018-07-01
Due to the totally different therapeutic regimens needed for primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM), accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. A convolutional neural network was used to segment tumors automatically. A modified scale-invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel arrangement information from segmented tumors. A Fisher vector was proposed to normalize the dimension of the SIFT features. An improved genetic algorithm (GA) was used to extract SIFT features with PCNSL/GBM discrimination ability. The dataset was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation, based on 20 cases of PCNSL and 44 cases of GBM, was employed to build and validate the differentiation model. Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA. The proposed method produces PCNSL vs. GBM differentiation with an area under the curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (and independent validation cohort, respectively). Owing to the local voxel arrangement characterization provided by SIFT features, the proposed method achieved more competitive PCNSL/GBM differentiation performance using conventional MRI than methods based on advanced MRI.
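The leave-one-out protocol used to validate such models generalises to any classifier. A minimal sketch with a nearest-centroid classifier standing in for the SVM (the features and labels below are toy data, not the study's):

```python
import numpy as np

def nearest_centroid_predict(Xtr, ytr, x):
    """Assign x to the class with the closest training-set centroid."""
    classes = np.unique(ytr)
    centroids = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    return classes[np.argmin(((centroids - x) ** 2).sum(axis=1))]

def loo_accuracy(X, y):
    """Leave-one-out: train on n-1 samples, test on the held-out one."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += nearest_centroid_predict(X[mask], y[mask], X[i]) == y[i]
    return hits / len(X)
```

In practice one would use an SVM implementation (e.g. scikit-learn's `SVC` with `LeaveOneOut`) in place of the stand-in classifier.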
Rodriguez Gutierrez, D; Awwad, A; Meijer, L; Manita, M; Jaspan, T; Dineen, R A; Grundy, R G; Auer, D P
2014-05-01
Qualitative radiologic MR imaging review affords limited differentiation among types of pediatric posterior fossa brain tumors and cannot detect histologic or molecular subtypes, which could help to stratify treatment. This study aimed to improve current posterior fossa discrimination of histologic tumor type by using support vector machine classifiers on quantitative MR imaging features. This retrospective study included preoperative MRI in 40 children with posterior fossa tumors (17 medulloblastomas, 16 pilocytic astrocytomas, and 7 ependymomas). Shape, histogram, and textural features were computed from contrast-enhanced T2WI and T1WI and diffusivity (ADC) maps. Combinations of features were used to train tumor-type-specific classifiers for the medulloblastoma, pilocytic astrocytoma, and ependymoma types separately and as a joint posterior fossa classifier. A tumor-subtype classifier was also produced for classic medulloblastoma. The performance of different classifiers was assessed and compared by using randomly selected subsets of training and test data. ADC histogram features (25th and 75th percentiles and skewness) yielded the best classification of tumor type (on average >95.8% of medulloblastomas, >96.9% of pilocytic astrocytomas, and >94.3% of ependymomas by using 8 training samples). The resulting joint posterior fossa classifier correctly assigned >91.4% of the posterior fossa tumors. For subtype classification, 89.4% of classic medulloblastomas were correctly classified on the basis of ADC texture features extracted from the Gray-Level Co-Occurrence Matrix. Support vector machine-based classifiers using ADC histogram features yielded very good discrimination among pediatric posterior fossa tumor types, and ADC textural features show promise for further subtype discrimination. These findings suggest an added diagnostic value of quantitative feature analysis of diffusion MR imaging in pediatric neuro-oncology. © 2014 by American Journal of Neuroradiology.
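The best-performing ADC histogram features here (25th and 75th percentiles and skewness) are straightforward to compute from a region of an ADC map. A NumPy sketch (the input values are arbitrary):

```python
import numpy as np

def adc_histogram_features(adc):
    """25th/75th percentile and skewness of the voxel values in an ADC region."""
    v = np.asarray(adc, dtype=float).ravel()
    p25, p75 = np.percentile(v, [25, 75])
    m, s = v.mean(), v.std()
    skew = ((v - m) ** 3).mean() / (s ** 3 + 1e-12)  # Fisher skewness
    return np.array([p25, p75, skew])
```

These three numbers per tumor region would then form the input vector to the support vector machine classifier.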
Patient-Specific Deep Architectural Model for ECG Classification
Luo, Kan; Cuschieri, Alfred
2017-01-01
Heartbeat classification is a crucial step for arrhythmia diagnosis during electrocardiographic (ECG) analysis. The new scenario of wireless body sensor network- (WBSN-) enabled ECG monitoring puts forward a higher-level demand for this traditional ECG analysis task. Previously reported methods mainly addressed this requirement with shallow-structured classifiers and expert-designed features. In this study, a modified frequency slice wavelet transform (MFSWT) was first employed to produce the time-frequency image for the heartbeat signal. Then a deep learning (DL) method was applied for heartbeat classification. Here, we propose a novel model incorporating automatic feature abstraction and a deep neural network (DNN) classifier. Features were automatically abstracted by the stacked denoising auto-encoder (SDA) from the transferred time-frequency image. The DNN classifier was constructed from an encoder layer of the SDA and a softmax layer. In addition, a deterministic patient-specific heartbeat classifier was achieved by fine-tuning on heartbeat samples, which included a small subset of individual samples. The performance of the proposed model was evaluated on the MIT-BIH arrhythmia database. Results showed that an overall accuracy of 97.5% was achieved using the proposed model, confirming that the proposed DNN model is a powerful tool for heartbeat pattern recognition. PMID:29065597
Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration.
Behrisch, Michael; Bach, Benjamin; Hund, Michael; Delz, Michael; Von Ruden, Laura; Fekete, Jean-Daniel; Schreck, Tobias
2017-01-01
In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analysis exist, there is no evidence of how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors (27 existing ones and three new descriptors that we designed specifically for Magnostics) with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios: exploring a large collection of matrices and analyzing temporal networks.
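To make the idea of a matrix-view feature descriptor concrete, here is a toy "block response" descriptor. It is hypothetical and far simpler than the 30 descriptors evaluated in the paper: it just scores how much of the view is covered by fully filled k x k windows.

```python
import numpy as np

def block_response(M, k=2):
    """Toy matrix-view descriptor: fraction of k x k windows that are
    entirely nonzero, a crude proxy for block/cluster patterns.
    (Illustrative only; real descriptors are image features.)"""
    n = M.shape[0]
    hits = sum(
        np.all(M[i:i + k, j:j + k] > 0)
        for i in range(n - k + 1)
        for j in range(n - k + 1)
    )
    return hits / float((n - k + 1) ** 2)
```

A block-diagonal adjacency matrix scores high, while a diagonal (identity-like) matrix scores zero, mirroring the block-versus-line distinction discussed above.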
NASA Astrophysics Data System (ADS)
Shia, Wei-Chung; Chen, Dar-Ren; Huang, Yu-Len; Wu, Hwa-Koon; Kuo, Shou-Jen
2015-10-01
The aim of this study was to evaluate the effectiveness of advanced ultrasound (US) imaging of vascular flow and morphological features in the prediction of a pathologic complete response (pCR) and a partial response (PR) to neoadjuvant chemotherapy for T2 breast cancer. Twenty-nine consecutive patients with T2 breast cancer treated with six courses of anthracycline-based neoadjuvant chemotherapy were enrolled. Three-dimensional (3D) power Doppler US with high-definition flow (HDF) technology was used to investigate the blood flow in and morphological features of the tumors. Six vascularity quantization features, three morphological features, and two vascular direction features were selected and extracted from the US images. A support vector machine was used to evaluate the changes in vascularity after neoadjuvant chemotherapy, and pCR and PR were predicted on the basis of these changes. The most accurate prediction of pCR was achieved after the first chemotherapy cycle, with an accuracy of 93.1% and a specificity of 85.5%, while that of a PR was achieved after the second cycle, with an accuracy of 79.31% and a specificity of 72.22%. Vascularity data can be useful to predict the effects of neoadjuvant chemotherapy. Determination of changes in vascularity after neoadjuvant chemotherapy using 3D power Doppler US with HDF can generate accurate predictions of the patient response, facilitating early decision-making.
Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.
2016-01-01
Abstract. Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
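Of the features QFEA extracts, crypt circularity has a standard closed form, 4*pi*A / P^2, equal to 1 for a perfect circle. A minimal sketch under that common definition, which may differ in detail from the authors' implementation:

```python
import math

def circularity(area, perimeter):
    """Shape circularity 4*pi*A / P^2: 1.0 for a perfect circle,
    smaller for elongated or irregular crypts (common definition,
    assumed here rather than taken from the paper)."""
    return 4.0 * math.pi * area / perimeter ** 2
```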
NASA Technical Reports Server (NTRS)
Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.
2010-01-01
Introduction: Intracranial pressure (ICP) elevation has been inferred or documented in a number of space crewmembers. Recent advances in noninvasive imaging technology offer new possibilities for ICP assessment. Most International Space Station (ISS) partner agencies have adopted a battery of occupational health monitoring tests including magnetic resonance imaging (MRI) pre- and postflight, and high-resolution sonography of the orbital structures in all mission phases, including during flight. We hypothesize that joint consideration of data from the two techniques has the potential to improve the quality and continuity of crewmember monitoring and care. Methods: Specially designed MRI and sonographic protocols were used to image the eyes and optic nerves (ON), including the meningeal sheaths. Specific crewmembers' multi-modality imaging data were analyzed to identify points of mutual validation as well as unique features of a complementary nature. Results and Conclusion: MRI and high-resolution sonography are both tomographic methods; however, images obtained by the two modalities are based on different physical phenomena and use different acquisition principles. Consideration of the images acquired by these two modalities allows cross-validation of findings related to the volume and fluid content of the ON subarachnoid space, the shape of the globe, and other anatomical features of the orbit. Each of the imaging modalities also has unique advantages, making them complementary techniques.
Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S
2016-04-01
Psoriasis is an autoimmune skin disease with red and scaly plaques on the skin, affecting about 125 million people worldwide. Currently, dermatologists use visual and haptic methods to diagnose disease severity. This does not help them in stratification and risk assessment of the lesion stage and grade. Further, current methods add complexity during the monitoring and follow-up phase. The current diagnostic tools lead to subjectivity in decision making and are unreliable and laborious. This paper presents a first comparative performance study of its kind using a principal component analysis (PCA) based CADx system for psoriasis risk stratification and image classification utilizing: (i) 11 higher order spectra (HOS) features, (ii) 60 texture features, and (iii) 86 color feature sets, and their seven combinations. In aggregate, 540 image samples (270 healthy and 270 diseased) from 30 psoriasis patients of Indian ethnic origin are used in our database. Machine learning using PCA is used for dominant feature selection, which is then fed to a support vector machine (SVM) classifier to obtain optimized performance. Three different protocols are implemented using the three kinds of feature sets. A reliability index of the CADx is computed. Among all feature combinations, the CADx system shows optimal performance of 100% accuracy, 100% sensitivity and specificity when all three sets of features are combined. Further, our experimental results with increasing data size show that all feature combinations yield a high reliability index throughout the PCA cutoffs, except the color feature set and the combination of color and texture feature sets. HOS features are powerful in psoriasis disease classification and stratification. Although all three feature sets (HOS, texture, and color) perform competitively on their own, the machine learning system performs best when they are combined. The system is fully automated, reliable and accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
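The PCA-for-dominant-features step can be sketched generically in numpy. This is an illustration only: a nearest-centroid rule stands in for the paper's SVM, and the synthetic data in the usage below is invented.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA by SVD of the mean-centred data matrix; returns the mean
    and the top principal directions (rows of Vt)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_transform(X, mu, components):
    return (X - mu) @ components.T

def nearest_centroid_predict(Z_train, y_train, Z_test):
    """Stand-in for the SVM in the paper: classify by the nearer
    class centroid in the reduced PCA space."""
    centroids = np.array([Z_train[y_train == k].mean(axis=0) for k in (0, 1)])
    d = ((Z_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

In the study, the PCA-reduced features are fed to an SVM rather than the centroid rule used here.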
Neuroimaging findings in cryptogenic stroke patients with and without patent foramen ovale.
Thaler, David E; Ruthazer, Robin; Di Angelantonio, Emanuele; Di Tullio, Marco R; Donovan, Jennifer S; Elkind, Mitchell S V; Griffith, John; Homma, Shunichi; Jaigobin, Cheryl; Mas, Jean-Louis; Mattle, Heinrich P; Michel, Patrik; Mono, Marie-Luise; Nedeltchev, Krassen; Papetti, Federica; Serena, Joaquín; Weimar, Christian; Kent, David M
2013-03-01
Patent foramen ovale (PFO) and cryptogenic stroke are commonly associated but some PFOs are incidental. Specific radiological findings associated with PFO may be more likely to indicate a PFO-related cause. We examined whether specific radiological findings are associated with PFO among subjects with cryptogenic stroke and known PFO status. We analyzed the Risk of Paradoxical Embolism (RoPE) Study database of subjects with cryptogenic stroke and known PFO status for associations between PFO and: (1) index stroke seen on imaging, (2) index stroke size, (3) index stroke location, (4) multiple index strokes, and (5) prior stroke on baseline imaging. We also compared imaging with purported high-risk echocardiographic features. Subjects (N=2680) were significantly more likely to have a PFO if their index stroke was large (odds ratio [OR], 1.36; P=0.0025), seen on index imaging (OR, 1.53; P=0.003), and superficially located (OR, 1.54; P<0.0001). A prior stroke on baseline imaging was associated with not having a PFO (OR, 0.66; P<0.0001). Finding multiple index strokes was unrelated to PFO status (OR, 1.21; P=0.161). No echocardiographic variables were related to PFO status. This is the largest study to report the radiological characteristics of patients with cryptogenic stroke and known PFO status. Strokes that were large, radiologically apparent, superficially located, or unassociated with prior radiological infarcts were more likely to be PFO-associated than were unapparent, smaller, or deep strokes, and those accompanied by chronic infarcts. There was no association between PFO and multiple acute strokes, or between specific echocardiographic PFO features and neuroimaging findings.
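The reported associations are odds ratios from 2 x 2 tables. For reference, a minimal computation with a Wald confidence interval; the counts in the usage below are illustrative, not RoPE data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2 x 2 table (a, b = outcome +/- in the
    exposed group; c, d = outcome +/- in the unexposed group),
    with a Wald 95% CI computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```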
NASA Astrophysics Data System (ADS)
Hao, Hongxia; Zhou, Zhiguo; Li, Shulong; Maquilan, Genevieve; Folkert, Michael R.; Iyengar, Puneeth; Westover, Kenneth D.; Albuquerque, Kevin; Liu, Fang; Choy, Hak; Timmerman, Robert; Yang, Lin; Wang, Jing
2018-05-01
Distant failure is the main cause of human cancer-related mortality. To develop a model for predicting distant failure in non-small cell lung cancer (NSCLC) and cervix cancer (CC) patients, a shell feature, consisting of outer voxels around the tumor boundary, was constructed using pre-treatment positron emission tomography (PET) images from 48 NSCLC patients who received stereotactic body radiation therapy and 52 CC patients who underwent external beam radiation therapy and concurrent chemotherapy followed by high-dose-rate intracavitary brachytherapy. The hypothesis behind this feature is that non-invasive and invasive tumors may have different morphologic patterns in the tumor periphery, in turn reflecting the differences in radiological presentations in the PET images. The utility of the shell was evaluated by the support vector machine classifier in comparison with intensity, geometry, gray level co-occurrence matrix-based texture, neighborhood gray tone difference matrix-based texture, and a combination of these four features. The results were assessed in terms of accuracy, sensitivity, specificity, and AUC. Collectively, the shell feature showed better predictive performance than all the other features for distant failure prediction in both NSCLC and CC cohorts.
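The shell is built from voxels just outside the tumor boundary. A hedged sketch of that extraction with plain numpy morphology (the paper does not specify this exact construction; note that np.roll wraps at array edges, so the mask is assumed to sit away from the volume border):

```python
import numpy as np

def binary_shell(mask, width=1):
    """Outer shell of `width` voxels around a binary tumor mask:
    dilate the mask axis-by-axis (4/6-connectivity) and subtract it,
    keeping only voxels just outside the segmentation. Illustrative
    reconstruction, not the authors' code."""
    dilated = mask.copy()
    for _ in range(width):
        grown = dilated.copy()
        for axis in range(mask.ndim):
            grown |= np.roll(dilated, 1, axis) | np.roll(dilated, -1, axis)
        dilated = grown
    return dilated & ~mask
```

Intensity statistics or histograms over the returned voxels would then form the shell feature vector fed to the support vector machine.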
Improved Feature Matching for Mobile Devices with IMU.
Masiero, Andrea; Vettore, Antonio
2016-08-05
Thanks to the recent diffusion of low-cost high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step in order to successfully complete the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase in correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
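The key observation that makes an INS useful here is that with the inter-view rotation R known, the epipolar constraint x2' [t]x R x1 = 0 becomes linear in the translation t, so the translation direction can be recovered as a null vector of a small linear system. A minimal sketch under that assumption (not the authors' two-step implementation; inputs are normalized homogeneous correspondences):

```python
import numpy as np

def estimate_translation(x1, x2, R):
    """Given the rotation R between two views (e.g., from the INS)
    and normalized homogeneous correspondences x1, x2 (3 x N),
    the constraint x2^T [t]_x R x1 = 0 reads t . (R x1 x x2) = 0,
    so t (up to scale) is the null vector of an N x 3 system."""
    y = R @ x1                     # rotate the first-view rays
    A = np.cross(y.T, x2.T)        # row i = y_i x x2_i
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]
    return t / np.linalg.norm(t)
```

The recovered direction is defined only up to sign and scale, as usual for the essential matrix.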
Graph-Based Object Class Discovery
NASA Astrophysics Data System (ADS)
Xia, Shengping; Hancock, Edwin R.
We are interested in the problem of discovering the set of object classes present in a database of images using a weakly supervised graph-based framework. Rather than making use of the "Bag-of-Features (BoF)" approach widely used in current work on object recognition, we represent each image by a graph using a group of selected local invariant features. Using local feature matching and iterative Procrustes alignment, we perform graph matching and compute a similarity measure. Borrowing the idea of query expansion, we develop a similarity propagation based graph clustering (SPGC) method. Using this method, class-specific clusters of the graphs can be obtained. Such a cluster can be generally represented by using a higher level graph model whose vertices are the clustered graphs, and the edge weights are determined by the pairwise similarity measure. Experiments are performed on a dataset in which the number of images increases from 1 to 50K and the number of objects increases from 1 to over 500. Some objects have been discovered with total recall and a precision of 1 in a single cluster.
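A drastically simplified stand-in for similarity-based clustering, for intuition only: threshold the pairwise graph-similarity matrix and take connected components. The real SPGC method grows clusters iteratively in a query-expansion style rather than with a single global threshold.

```python
def similarity_clusters(S, tau):
    """Toy stand-in for similarity-propagation clustering: treat
    pairs with similarity >= tau as linked and return connected
    components as cluster labels (S is a square similarity matrix)."""
    n = len(S)
    labels = [-1] * n
    cur = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cur
        while stack:
            u = stack.pop()
            for v in range(n):
                if labels[v] == -1 and S[u][v] >= tau:
                    labels[v] = cur
                    stack.append(v)
        cur += 1
    return labels
```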
Brain tissue analysis using texture features based on optical coherence tomography images
NASA Astrophysics Data System (ADS)
Lenz, Marcel; Krug, Robin; Dillmann, Christopher; Gerhardt, Nils C.; Welp, Hubert; Schmieder, Kirsten; Hofmann, Martin R.
2018-02-01
Brain tissue differentiation is in high demand in neurosurgery, e.g., during tumor resection. Exact navigation during the surgery is essential in order to guarantee the best possible quality of life afterwards. So far, no suitable method has been found that fully meets these demands. With optical coherence tomography (OCT), fast three-dimensional images can be obtained in vivo and contactless with a resolution of 1-15 μm. With these specifications, OCT is a promising tool to support neurosurgery. Here, we investigate ex vivo samples of meningioma, healthy white matter and healthy gray matter in a preliminary study towards in vivo brain tumor removal assistance. Raw OCT images already display structural variations for different tissue types, especially meningioma. However, in order to achieve neurosurgical guidance directly during resection, an automated differentiation approach is desired. For this reason, we employ different texture-feature-based algorithms, perform a Principal Component Analysis afterwards, and then train a Support Vector Machine classifier. In the future we will try different combinations of texture features and perform in vivo measurements in order to validate our findings.
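One common family of texture features in such pipelines rests on the gray-level co-occurrence matrix (GLCM). A compact numpy sketch of the matrix and two classic Haralick-style statistics (illustrative, not the authors' pipeline; the image is assumed integer-valued in [0, levels)):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, the
    basis of Haralick-style texture features used for tissue typing.
    img: integer image with values in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_contrast(P):
    i, j = np.indices(P.shape)
    return np.sum(P * (i - j) ** 2)

def glcm_energy(P):
    return np.sum(P ** 2)
```

A feature vector of such statistics per image patch would then go through PCA and into the SVM classifier, as described above.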
NASA Astrophysics Data System (ADS)
Dubey, Kavita; Srivastava, Vishal; Singh Mehta, Dalip
2018-04-01
Early identification of fungal infection on the human scalp is crucial for avoiding hair loss. The diagnosis of fungal infection on the human scalp is based on a visual assessment by trained experts or doctors. Optical coherence tomography (OCT) has the ability to capture fungal infection information from the human scalp with high resolution. In this study, we present a fully automated, non-contact, non-invasive optical method for rapid detection of fungal infections based on features extracted from A-line and B-scan images of OCT. A multilevel ensemble machine model is designed to perform automated classification, and it outperforms the best individual classifier based on the features extracted from OCT images. In this study, 60 samples (30 fungal, 30 normal) were imaged by OCT and eight features were extracted. The classification algorithm had an average sensitivity, specificity and accuracy of 92.30, 90.90 and 91.66%, respectively, for identifying fungal and normal human scalps. This remarkable classifying ability makes the proposed model readily applicable to classifying the human scalp.
Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.
2003-01-01
Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision‐making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes. 
Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.
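One concrete ingredient of such spectral-feature analysis is continuum removal, which normalizes an absorption feature before its diagnostic shape is compared against library spectra. A minimal sketch of the generic technique (not Tetracorder code; shoulder indices are chosen by the caller):

```python
import numpy as np

def continuum_removed(wavelengths, reflectance, left, right):
    """Divide a reflectance spectrum by the straight-line continuum
    drawn between two band shoulders (indices left/right), isolating
    the absorption feature; band depth at the band centre is
    1 - the continuum-removed value."""
    wl, wr = wavelengths[left], wavelengths[right]
    rl, rr = reflectance[left], reflectance[right]
    continuum = rl + (rr - rl) * (wavelengths - wl) / (wr - wl)
    return reflectance / continuum
```

The position, depth, and shape of the normalized feature are the kinds of diagnostic quantities an expert-system rule can then test.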
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
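The albedo-cancelling idea behind image ratio features is simple to state: under a Lambertian model I = albedo x shading, a pixelwise ratio of the expression frame to the neutral frame removes the per-pixel albedo. A hedged sketch of that core idea (the paper's actual feature construction is more elaborate):

```python
import numpy as np

def image_ratio(expr, neutral, eps=1e-6):
    """Ratio image: under a Lambertian model I = albedo * shading,
    dividing an expression frame by the neutral frame cancels the
    per-pixel albedo, leaving only deformation-induced shading
    change. eps guards against division by zero."""
    return expr / (neutral + eps)
```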
Workflow opportunities using JPEG 2000
NASA Astrophysics Data System (ADS)
Foshee, Scott
2002-11-01
JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences were crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. 
The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
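The probability-summation benchmark used above is commonly modelled as Minkowski pooling over detector families: k identical patches lower the contrast threshold by a factor of k^(1/beta). A one-line sketch of that standard prediction (beta is a free parameter, often taken near 3-4; the paper's exact formulation may differ):

```python
def ps_threshold(c_single, k, beta=3.5):
    """Probability-summation (Minkowski pooling) prediction: the
    contrast threshold for k identical patches, given the single-
    patch threshold c_single and pooling exponent beta."""
    return c_single / k ** (1.0 / beta)
```

Thresholds measured above this prediction indicate inhibitory interactions; thresholds below it indicate facilitation, which is how the plaid results above are interpreted.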
Final Cassini RADAR Observation of Titan's Magic Island Region and Ligeia Mare
NASA Astrophysics Data System (ADS)
Hofgartner, J. D.; Hayes, A.; Lunine, J. I.; Stiles, B. W.; Malaska, M. J.; Wall, S. D.
2017-12-01
Cassini arrived in the Saturn system shortly after the Oct. 2002 northern winter solstice and the mission will end shortly after the May 2017 northern summer solstice. A main objective of the Cassini Solstice mission is to study seasonal and temporal changes and at Titan this includes changes of the hydrocarbon lakes/seas. Titan's Magic Islands are transient bright features in the north polar sea, Ligeia Mare that were observed to be temporal changes in Cassini RADAR images. The Magic Islands were discovered in a July 2013 image as anomalously bright features that were not present in four previous observations from Feb. 2007 - May 2013. The region of the Magic Islands was again anomalously bright in an Aug. 2014 image and the total areal extent of the anomalously bright region had increased by more than a factor of three. The transient features were not, however, observed in a Jan. 2015 image. Thus in seven observations spanning much of the Cassini mission the bright features were observed to appear, increase in extent, and then disappear. They are referred to as Titan's Magic Islands because of their appearing/disappearing behavior and resemblance in appearance to islands. These transient bright features are not actually islands. The transients were concluded to be most consistent with waves, floating solids, suspended solids, and bubbles. Tides, sea level changes, and seafloor changes are unlikely to be the primary cause of these temporal changes. Whether these temporal changes are also seasonal changes was unclear. The final Cassini RADAR imaging observation of Titan in Apr. 2017 included the region of the Magic Islands. The transient bright features were not present during this observation. The geometry of the observation was such that, had the transients been present, their brightness may have ruled out some of the remaining hypotheses. Their absence however, is less constraining but consistent with their transient nature. 
Waves, floating solids, suspended solids, and bubbles remain the most likely hypotheses. Other regions of Ligeia Mare were also imaged in the Apr. 2017 observation and no transient features were observed elsewhere in the sea. The specific process responsible for these transient features and the role of seasonal changes in their appearance and disappearance remains an open research question.
NASA Astrophysics Data System (ADS)
Jaferzadeh, Keyvan; Moon, Inkyu
2016-12-01
The classification of erythrocytes plays an important role in the field of hematological diagnosis, specifically blood disorders. Since the biconcave shape of the red blood cell (RBC) is altered during the different stages of hematological disorders, we believe that three-dimensional (3-D) morphological features of the erythrocyte provide better classification results than conventional two-dimensional (2-D) features. Therefore, we introduce a set of 3-D features related to the morphological and chemical properties of the RBC profile and evaluate the discrimination power of these features against 2-D features with a neural network classifier. The 3-D features include erythrocyte surface area, volume, average cell thickness, sphericity index, sphericity coefficient and functionality factor, MCH and MCHSD, and two newly introduced features extracted from the ring section of the RBC at the single-cell level. In contrast, the 2-D features are RBC projected surface area, perimeter, radius, elongation, and projected surface area to perimeter ratio. All features are obtained from images visualized by off-axis digital holographic microscopy with a numerical reconstruction algorithm, and four categories of RBC are considered: biconcave (doughnut shape), flat-disc, stomatocyte, and echinospherocyte. Our experimental results demonstrate that the 3-D features can be more useful in RBC classification than the 2-D features. Finally, we choose the best feature set from the 2-D and 3-D features by a sequential forward feature selection technique, which yields better discrimination results. We believe that the final feature set, evaluated with a neural network classification strategy, can improve RBC classification accuracy.
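Among the 3-D features, sphericity has a standard closed form: the surface area of a volume-equivalent sphere divided by the cell's actual surface area. A sketch under that common definition, which may differ in detail from the authors' sphericity index:

```python
import math

def sphericity(volume, surface_area):
    """Sphericity pi^(1/3) * (6V)^(2/3) / A: 1.0 for a perfect
    sphere, lower for flat or biconcave cells (common definition,
    assumed here rather than taken from the paper)."""
    return (math.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / surface_area
```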
Patient-Specific Simulation of Cardiac Blood Flow From High-Resolution Computed Tomography.
Lantz, Jonas; Henriksson, Lilian; Persson, Anders; Karlsson, Matts; Ebbers, Tino
2016-12-01
Cardiac hemodynamics can be computed from medical imaging data, and results could potentially aid in cardiac diagnosis and treatment optimization. However, simulations are often based on simplified geometries, ignoring features such as papillary muscles and trabeculae due to their complex shape, limitations in image acquisitions, and challenges in computational modeling. This severely hampers the use of computational fluid dynamics in clinical practice. The overall aim of this study was to develop a novel numerical framework that incorporated these geometrical features. The model included the left atrium, ventricle, ascending aorta, and heart valves. The framework used image registration to obtain patient-specific wall motion, automatic remeshing to handle topological changes due to the complex trabeculae motion, and a fast interpolation routine to obtain intermediate meshes during the simulations. Velocity fields and residence time were evaluated, and they indicated that papillary muscles and trabeculae strongly interacted with the blood, which could not be observed in a simplified model. The framework resulted in a model with outstanding geometrical detail, demonstrating the feasibility as well as the importance of a framework that is capable of simulating blood flow in physiologically realistic hearts.
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas B.; Jain, Manu; Cordova, Miguel A.; Kose, Kivanc; Rajadhyaksha, Milind; Halpern, Allan C.; Farkas, Daniel L.
2017-02-01
Motivation and background: Melanoma, the fastest growing cancer worldwide, kills more than one person every hour in the United States. Determining the depth and distribution of dermal melanin and hemoglobin adds physio-morphologic information to the current diagnostic standard, cellular morphology, to further develop noninvasive methods to discriminate between melanoma and benign skin conditions. Purpose: To compare the performance of a multimode dermoscopy system (SkinSpect), which is designed to quantify and map in vivo melanin and hemoglobin in skin in three dimensions, and to validate this with histopathology and three-dimensional reflectance confocal microscopy (RCM) imaging. Methods: SkinSpect and RCM images of suspect lesions and nearby normal skin were captured sequentially and compared with histopathology reports. RCM imaging allows noninvasive observation of nuclear, cellular and structural detail in 1-5 μm-thin optical sections in skin, and detection of pigmented skin lesions with a sensitivity of 90-95% and a specificity of 70-80%. The multimode imaging dermoscope combines polarization (cross and parallel), autofluorescence and hyperspectral imaging to noninvasively map the distribution of melanin, collagen and hemoglobin oxygenation in pigmented skin lesions. Results: We compared in vivo features of ten melanocytic lesions extracted by SkinSpect and RCM imaging, and correlated them to histopathologic results. We present results of two melanoma cases (in situ and invasive), and compare these with in vivo features from eight benign lesions. Melanin distribution at different depths and hemodynamics, including abnormal vascularity, detected by both SkinSpect and RCM, are discussed. Conclusion: Diagnostic features such as dermal melanin and hemoglobin concentration provided by SkinSpect skin analysis for melanoma and normal pigmented lesions can be compared and validated using results from RCM and histopathology.
Araki, Tetsuro; Sholl, Lynette M.; Gerbaudo, Victor H.; Hatabu, Hiroto; Nishino, Mizuki
2014-01-01
OBJECTIVE The purpose of this article is to investigate the imaging characteristics of pathologically proven thymic hyperplasia and to identify features that can differentiate true hyperplasia from lymphoid hyperplasia. MATERIALS AND METHODS Thirty-one patients (nine men and 22 women; age range, 20–68 years) with pathologically confirmed thymic hyperplasia (18 true and 13 lymphoid) who underwent preoperative CT (n = 27), PET/CT (n = 5), or MRI (n = 6) were studied. The length and thickness of each thymic lobe and the transverse and anterior-posterior diameters and attenuation of the thymus were measured on CT. Thymic morphologic features and heterogeneity on CT and chemical shift on MRI were evaluated. Maximum standardized uptake values were measured on PET. Imaging features between true and lymphoid hyperplasia were compared. RESULTS No significant differences were observed between true and lymphoid hyperplasia in terms of thymic length, thickness, diameters, morphologic features, and other qualitative features (p > 0.16). The length, thickness, and diameters of thymic hyperplasia were significantly larger than the mean values of normal glands in the corresponding age group (p < 0.001). CT attenuation of lymphoid hyperplasia was significantly higher than that of true hyperplasia among 15 patients with contrast-enhanced CT (median, 47.9 vs 31.4 HU; Wilcoxon p = 0.03). The receiver operating characteristic analysis yielded greater than 41.2 HU as the optimal threshold for differentiating lymphoid hyperplasia from true hyperplasia, with 83% sensitivity and 89% specificity. A decrease of signal intensity on opposed-phase images was present in all four cases with in- and opposed-phase imaging. The mean maximum standardized uptake value was 2.66. 
CONCLUSION CT attenuation of the thymus was significantly higher in lymphoid hyperplasia than in true hyperplasia, with an optimal threshold of greater than 41.2 HU in this cohort of patients with pathologically confirmed thymic hyperplasia. PMID:24555583
Multi-Stage System for Automatic Target Recognition
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver
2010-01-01
A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient but may yield too many false positives; a detailed analysis of every region can yield better results but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier-domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs.
The verification stage then transforms the regions of interest into feature space and eliminates false positives using an artificial neural network classifier. The multi-stage design allows the detection sensitivity and the identification specificity to be tuned individually in each stage, making it easier to optimize ATR operation for a specific goal. Test results show that the system substantially reduced the false positive rate when tested on sonar and video image datasets.
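The detect-then-verify pipeline described above can be illustrated with a minimal sketch. This is not the paper's implementation: a plain FFT matched filter stands in for the OT-MACH correlation filter, and a normalized-correlation score stands in for the trained neural-network verifier; the function names and thresholds are invented for the example.

```python
import numpy as np

def detect_rois(image, template, threshold=0.5):
    """Stage 1: frequency-domain correlation to flag candidate ROIs."""
    # Correlate via FFT (a plain matched filter standing in for the
    # OT-MACH filter) and threshold deliberately low so every true
    # positive survives to the verification stage.
    F = np.fft.fft2(image)
    H = np.fft.fft2(template, s=image.shape)
    corr = np.real(np.fft.ifft2(F * np.conj(H)))
    corr /= np.abs(corr).max()
    return np.argwhere(corr >= threshold)  # candidate top-left offsets

def verify_rois(image, rois, template, min_score=0.8):
    """Stage 2: re-score each ROI to eliminate false positives."""
    th, tw = template.shape
    keep = []
    for y, x in rois:
        patch = image[y:y + th, x:x + tw]
        if patch.shape != template.shape:
            continue  # ROI runs off the image edge
        # Normalized dot product as a stand-in for the NN verifier score.
        denom = np.linalg.norm(patch) * np.linalg.norm(template)
        if denom > 0 and float(patch.ravel() @ template.ravel()) / denom >= min_score:
            keep.append((int(y), int(x)))
    return keep
```

The design point is the asymmetric thresholds: stage 1 is deliberately permissive so no true target is lost, while stage 2 applies a strict check only to the few surviving candidates.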
Yu, Huan; Caldwell, Curtis; Mah, Katherine; Mozeg, Daniel
2009-03-01
Coregistered fluoro-deoxy-glucose (FDG) positron emission tomography/computed tomography (PET/CT) has shown potential to improve the accuracy of radiation targeting of head and neck cancer (HNC) when compared to the use of CT simulation alone. The objective of this study was to identify textural features useful in distinguishing tumor from normal tissue in the head and neck via quantitative texture analysis of coregistered 18F-FDG PET and CT images. Abnormal and typical normal tissues were manually segmented from PET/CT images of 20 patients with HNC and 20 patients with lung cancer. Texture features, including some derived from spatial grey-level dependence matrices (SGLDM) and neighborhood gray-tone-difference matrices (NGTDM), were selected for characterization of these segmented regions of interest (ROIs). Both K-nearest-neighbor (KNN) and decision-tree-based (DT-based) KNN classifiers were employed to discriminate images of abnormal and normal tissues. The area under the receiver operating characteristic (ROC) curve (AZ) was used to evaluate the discrimination performance of features in comparison to an expert observer. The leave-one-out and bootstrap techniques were used to validate the results. The AZ of the DT-based KNN classifier was 0.95. Sensitivity and specificity for normal and abnormal tissue classification were 89% and 99%, respectively. In summary, NGTDM features such as PET Coarseness, PET Contrast, and CT Coarseness extracted from FDG PET/CT images provided good discrimination performance. The clinical use of such features may lead to improvement in the accuracy of radiation targeting of HNC.
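Of the NGTDM features found discriminative above, coarseness is the simplest to write down. The sketch below follows the standard Amadasun-King definition rather than the exact implementation used in the study; the gray-level count and epsilon are arbitrary choices for the example.

```python
import numpy as np

def ngtdm_coarseness(img, levels=8, eps=1e-6):
    # Quantize the image to a small number of gray tones.
    img = np.asarray(img, dtype=float)
    q = np.clip(np.floor(img / (img.max() + eps) * levels).astype(int),
                0, levels - 1)
    h, w = q.shape
    s = np.zeros(levels)  # NGTDM entries: summed |tone - neighborhood mean|
    n = np.zeros(levels)  # pixel counts per tone (interior pixels only)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            i = q[y, x]
            nbr_mean = (q[y - 1:y + 2, x - 1:x + 2].sum() - i) / 8.0
            s[i] += abs(i - nbr_mean)
            n[i] += 1
    p = n / n.sum()  # tone occurrence probabilities
    # Amadasun-King coarseness: large for smooth regions, small for busy ones.
    return 1.0 / (eps + float(p @ s))
```

A uniform region scores far higher than a rapidly varying one, which is why coarseness computed on PET and CT separately can help separate tumor from normal tissue textures.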
Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns
Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang
2014-01-01
Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total), automated detection of frames with an abnormality can save physicians' time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from those without any abnormality, exploiting the fact that most abnormalities in endoscopy images have textures clearly distinguishable from normal textures by an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of the Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of local binary patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
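A minimal version of the local-binary-pattern half of the feature computation might look like the following. It is a basic 3x3 LBP with a 256-bin code histogram, not the paper's full texton pipeline (which combines LBPs with responses of the LM filter bank); the function name is invented for the example.

```python
import numpy as np

def lbp_histogram(img):
    # 3x3 local binary pattern: threshold the 8 neighbors of each interior
    # pixel at the center value, pack the results into an 8-bit code, and
    # histogram the codes over the image block.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalized texture descriptor for the block
```

Blocks from frames with abnormal mucosa produce histograms concentrated on different codes than normal mucosa, which is what the downstream classifier exploits.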
Monocular precrash vehicle detection: features and classifiers.
Sun, Zehang; Bebis, George; Miller, Ronald
2006-07-01
Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.
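The Gabor-feature extraction used in the hypothesis verification step can be sketched roughly as follows: build Gabor kernels at several orientations, filter the candidate region, and summarize each response with simple statistics. The kernel size, orientations, and mean/std summary are illustrative assumptions, not the authors' exact parameters; in their system the resulting vectors feed an SVM.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    # Real part of a Gabor filter: a Gaussian-windowed cosine grating
    # oriented at angle theta with wavelength lam.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Feature vector: mean and std of the response magnitude at each
    # orientation (one common Gabor-feature recipe).
    feats = []
    for t in thetas:
        k = gabor_kernel(theta=t)
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img)
                                   * np.fft.fft2(k, s=img.shape)))
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

Vehicles have strong horizontal and vertical edge structure, so orientation-selective Gabor responses tend to separate vehicle patches from background clutter.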
Computer-aided diagnosis with textural features for breast lesions in sonograms.
Chen, Dar-Ren; Huang, Yu-Len; Lin, Sheng-Hsiung
2011-04-01
Computer-aided diagnosis (CAD) systems provide a beneficial second reference and enhance diagnostic accuracy. This paper aimed to develop and evaluate a CAD system using texture analysis for the classification of breast tumors in ultrasound images. The ultrasound (US) dataset evaluated in this study comprised 1020 sonograms of region-of-interest (ROI) subimages from 255 patients. Two-view sonograms (longitudinal and transverse views) and four different rectangular regions were utilized to analyze each tumor. Six practical textural features from the US images were used to classify breast tumors as benign or malignant. However, these textural features form a high-dimensional vector, which is unfavorable for differentiating breast tumors in practice. Principal component analysis (PCA) was used to reduce the dimension of the textural feature vector, and an image retrieval technique was then performed to differentiate between benign and malignant tumors. In the experiments, all cases were sampled with k-fold cross-validation (k=10) to evaluate performance with the receiver operating characteristic (ROC) curve. The area (A(Z)) under the ROC curve for the proposed CAD system with the specific textural features was 0.925±0.019. The classification ability for breast tumors with textural information is satisfactory. The system differentiates benign from malignant breast tumors with good results and is therefore clinically useful for providing a second opinion.
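The PCA dimension-reduction step described above is standard and can be sketched in a few lines; this is a generic implementation via SVD of the mean-centered feature matrix, not the authors' code.

```python
import numpy as np

def pca_reduce(X, n_components):
    # Project feature vectors (rows of X) onto the top principal
    # components, i.e. the directions of greatest variance.
    Xc = X - X.mean(axis=0)           # center each feature
    # SVD of the centered data: rows of Vt are the principal axes,
    # ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T   # reduced-dimension features
```

After reduction, the first component carries the most variance, so a high-dimensional texture vector can be compressed to a few numbers before the retrieval/classification step.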
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
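The decision-fusion idea, combining per-class scores from the HSI spectral branch and the VIS spatial branch, can be sketched minimally as a weighted soft vote. The equal default weight and the function name are assumptions for illustration; the paper's actual fusion rule may differ.

```python
import numpy as np

def fuse_decisions(prob_hsi, prob_vis, w=0.5):
    # Weighted soft (probability-level) fusion of two classifiers'
    # per-class scores, followed by an argmax over the fused scores.
    fused = w * np.asarray(prob_hsi) + (1 - w) * np.asarray(prob_vis)
    return fused.argmax(axis=1)  # fused class label per sample
```

Fusing at the score level rather than the label level lets a confident prediction from one sensor outweigh a marginal one from the other.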
MRI of lower extremity impingement and friction syndromes in children
Aydıngöz, Üstün; Özdemir, Zeynep Maraş; Güneş, Altan; Ergen, Fatma Bilge
2016-01-01
Although generally more common in adults, lower extremity impingement and friction syndromes are also observed in the pediatric age group. Encompassing femoroacetabular impingement, iliopsoas impingement, subspine impingement, and ischiofemoral impingement around the hip; patellar tendon–lateral femoral condyle friction syndrome; iliotibial band friction syndrome; and medial synovial plica syndrome in the knee as well as talocalcaneal impingement on the hindfoot, these syndromes frequently cause pain and may mimic other, and occasionally more ominous, conditions in children. Magnetic resonance imaging (MRI) plays a key role in the diagnosis of musculoskeletal impingement and friction syndromes. Iliopsoas, subspine, and ischiofemoral impingements have been recently described, while some features of femoroacetabular and talocalcaneal impingements have recently gained increased relevance in the pediatric population. Fellowship-trained pediatric radiologists and radiologists with imaging workloads of exclusively or overwhelmingly pediatric patients (particularly those without a structured musculoskeletal imaging program as part of their imaging training) specifically need to be aware of these rare syndromes that mostly have quite characteristic imaging findings. This review highlights MRI features of lower extremity impingement and friction syndromes in children and provides updated pertinent pathophysiologic and clinical data. PMID:27538047
Using Optical Coherence Tomography to Evaluate Skin Sun Damage and Precancer
Korde, Vrushali R.; Bonnema, Garret T.; Xu, Wei; Krishnamurthy, Chetankumar; Ranger-Moore, James; Saboda, Kathylynn; Slayton, Lisa D.; Salasche, Stuart J.; Warneke, James A.; Alberts, David S.; Barton, Jennifer K.
2008-01-01
Background and Objectives: Optical coherence tomography (OCT) is a depth-resolved imaging modality that may aid in identifying sun-damaged skin and the precancerous condition actinic keratosis (AK). Study Design/Materials and Methods: OCT images were acquired from 112 patients at two sun-protected and two sun-exposed sites, with a subsequent biopsy at each site. Each site received a dermatological evaluation, a histological diagnosis, and a solar elastosis (SE) score. OCT images were examined visually and statistically analyzed. Results: Characteristic OCT image features were identified for sun-protected, undiseased, sun-damaged, and AK skin. A statistically significant difference (P < 0.0001) between the average attenuation values of skin with minimal and severe solar elastosis was observed. Significant differences (P < 0.0001) were also found between undiseased skin and AK using a gradient analysis. Using image features, AK could be distinguished from undiseased skin with 86% sensitivity and 83% specificity. Conclusion: OCT has the potential to guide biopsies and provide non-invasive measures of skin sun damage and disease state, possibly increasing the efficiency of chemopreventive agent trials. PMID:17960754
Classification of document page images based on visual similarity of layout structures
NASA Astrophysics Data System (ADS)
Shin, Christian K.; Doermann, David S.
1999-12-01
Searching for documents by their type or genre is a natural way to enhance the effectiveness of document retrieval. The layout of a document contains a significant amount of information that can be used to classify a document's type in the absence of domain-specific models. A document type or genre can be defined by the user based primarily on layout structure. Our classification approach is based on 'visual similarity' of layout structure, building a supervised classifier given examples of the class. We use image features such as the percentages of text and non-text (graphics, image, table, and ruling) content regions, column structures, variations in the point size of fonts, the density of content area, and various statistics on features of connected components, which can be derived from class samples without class knowledge. In order to obtain class labels for training samples, we conducted a user relevance test in which subjects ranked UW-I document images with respect to 12 representative images. We implemented our classification scheme using OC1, a decision tree classifier, and report our findings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua, C.
This session will update therapeutic physicists on technological advancements and radiation oncology features of commercial CT, MRI, and PET/CT imaging systems. Also described are physicists' roles in every stage of equipment selection, purchasing, and operation, including defining specifications, evaluating vendors, making recommendations, and optimal and safe use of imaging equipment in a radiation oncology environment. The first presentation defines important terminology of CT and PET/CT followed by a review of the latest innovations, such as metal artifact reduction, statistical iterative reconstruction, radiation dose management, tissue classification by dual-energy CT and spectral CT, improvement in spatial resolution and sensitivity in PET, and potentials of PET/MR. We will also discuss important technical specifications and items in CT and PET/CT purchasing quotes and their impacts. The second presentation will focus on key components in the request for proposal for an MRI simulator and how to evaluate vendor proposals. MRI safety issues in radiation oncology, including MRI scanner zones (4-zone design), will be discussed. Basic MR terminologies, important functionalities, and advanced features relevant to radiation therapy will be discussed. In the third presentation, justification of imaging systems for radiation oncology, considerations in room design and construction in a radiation oncology department, shared use with diagnostic radiology, staffing needs and training, and clinical/research use cases and implementation will be discussed. The emphasis will be on understanding and bridging the differences between diagnostic and radiation oncology installations, building consensus amongst stakeholders for purchase and use, and integrating imaging technologies into the radiation oncology environment.
Learning Objectives: (1) Learn the latest innovations of major imaging systems relevant to radiation therapy; (2) be able to describe important technical specifications of CT, MRI, and PET/CT; (3) understand the process of budget request, equipment justification, comparisons of technical specifications, site visits, vendor selection, and contract development.
TU-G-201-02: An MRI Simulator From Proposal to Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Y.
2015-06-15
Sandhu, Simrenjeet; Rudnisky, Chris; Arora, Sourabh; Kassam, Faazil; Douglas, Gordon; Edwards, Marianne C; Verstraten, Karin; Wong, Beatrice; Damji, Karim F
2018-03-01
Clinicians can feel confident that compressed three-dimensional digital (3DD) and two-dimensional digital (2DD) imaging for evaluating important features of glaucomatous disc damage is comparable to the previous gold standard of stereoscopic slide film photography, supporting the use of digital imaging for teleglaucoma applications. The aim was to compare the sensitivity and specificity of 3DD and 2DD photography with stereo slide film in detecting glaucomatous optic nerve head features. This prospective, multireader validation study imaged glaucomatous, suspicious, or normal optic nerves, compressed the images at a 16:1 ratio into 3DD and 2DD formats (1024×1280 pixels), and compared both to stereo slide film. The primary outcome was vertical cup-to-disc ratio (VCDR); secondary outcomes, including disc haemorrhage and notching, were also evaluated. Each format was graded randomly by four glaucoma specialists. A protocol was implemented for harmonising data, including consensus-based interpretation as needed. There were 192 eyes imaged with each format. The mean VCDR for slide, 3DD, and 2DD was 0.59±0.20, 0.60±0.18, and 0.62±0.17, respectively. The agreement of VCDR for 3DD versus film was κ=0.781 and for 2DD versus film was κ=0.69. Sensitivity (95.2%), specificity (95.2%), and area under the curve (AUC; 0.953) of 3DD imaging to detect notching were better (p=0.03) than for 2DD (90.5%; 88.6%; AUC=0.895). Similarly, sensitivity (77.8%), specificity (98.9%), and AUC (0.883) of 3DD to detect disc haemorrhage were better (p=0.049) than for 2DD (44.4%; 99.5%; AUC=0.72). There was no difference between 3DD and 2DD imaging in detecting disc tilt (p=0.7), peripapillary atrophy (p=0.16), grey crescent (p=0.1), or pallor (p=0.43), although 3DD detected sloping better (p=0.013). Both 3DD and 2DD imaging demonstrate excellent reproducibility in comparison to stereo slide film when experts evaluate VCDR, notching, and disc haemorrhage.
3DD in this study was slightly more accurate than 2DD for evaluating disc haemorrhage, notching and sloping.
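The agreement statistics reported for VCDR (e.g., κ=0.781 for 3DD versus film) are kappa values, which measure inter-rater agreement beyond chance. As a reminder of what that computes, here is a minimal, generic Cohen's kappa implementation (not the study's statistical code):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    cats = np.unique(np.concatenate([r1, r2]))
    p_obs = (r1 == r2).mean()  # observed agreement rate
    # Chance agreement: product of each rater's marginal category rates.
    p_chance = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)
    return (p_obs - p_chance) / (1 - p_chance)
```

A kappa of 1 means perfect agreement; values near 0.7-0.8, as reported above, are conventionally read as substantial agreement.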
A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.
Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan
2017-12-01
A method was developed to automatically recognize the anatomical site and image acquisition view in 2D X-ray images used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization in image processing tasks including 2D-2D image registration, 2D image contrast enhancement, and independent treatment site confirmation. X-ray images for 180 patients across six disease sites (brain, head and neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two orthogonal-view images per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
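The coarse-to-fine recognition structure, classify the general site first and then route to a site-specific classifier, can be sketched as below. A nearest-centroid classifier stands in for the paper's PCA-plus-SVM node classifier to keep the example dependency-free; the class labels and data are toy values.

```python
import numpy as np

class NearestCentroid:
    # Stand-in for the per-node classifier (the paper used PCA + SVM;
    # nearest-centroid keeps this sketch dependency-free).
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def hierarchical_predict(x, coarse_clf, fine_clfs):
    # Stage 1: recognize the general site; stage 2: route the sample to
    # that site's specific classifier for the fine-grained label.
    coarse = coarse_clf.predict(x[None, :])[0]
    fine = fine_clfs[coarse].predict(x[None, :])[0]
    return coarse, fine
```

Routing through a coarse node first means each fine classifier only has to separate a few confusable classes, which is the usual motivation for hierarchical recognition.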
Multiscale Analysis of Solar Image Data
NASA Astrophysics Data System (ADS)
Young, C. A.; Myers, D. C.
2001-12-01
It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO, and TRACE have shown us the Sun with amazing clarity but have also cursed us with a larger amount of higher-complexity data than previous missions. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising: these methods have been used to model the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-D wavelet transform and related transforms with EIT, LASCO, and TRACE images. This work was supported by NASA contract NAS5-00220.
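As a concrete example of the multiscale decompositions discussed, a single level of the 2-D Haar wavelet transform splits an image into one coarse approximation band and three detail bands. The Haar wavelet is the simplest member of the family and is used here purely for illustration; the work above explores the 2-D wavelet transform and related transforms more broadly.

```python
import numpy as np

def haar2d(img):
    # Single-level 2-D Haar transform on an even-sized image: average and
    # difference each 2x2 block into approximation and detail bands.
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0  # approximation (coarse scale)
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Applying the transform recursively to the approximation band yields the scale hierarchy that lets features (loops, filaments, ejecta fronts) be analyzed quantitatively at the scale where they live.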
Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.
2016-02-01
Skin diseases are typically associated with underlying biochemical and structural changes compared with normal tissue, which alter the optical properties of skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes cannot selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering a color spatial frequency domain (SFD) image at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging; the flexible configuration of the system allows better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm, and 655 nm illumination at a spatial frequency of 0.6 mm⁻¹. The SFD reflectance images at 470 nm, 530 nm, and 655 nm were assigned to the blue (B), green (G), and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm⁻¹ revealed properties that were not seen in standard color images: structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insight into skin lesions and may better assist clinical diagnosis.
Emerging Technology Update Intravascular Photoacoustic Imaging of Vulnerable Atherosclerotic Plaque.
Wu, Min; Fw van der Steen, Antonius; Regar, Evelyn; van Soest, Gijs
2016-10-01
The identification of vulnerable atherosclerotic plaques in the coronary arteries is emerging as an important tool for guiding atherosclerosis diagnosis and interventions. Assessment of plaque vulnerability requires knowledge of both the structure and composition of the plaque. Intravascular photoacoustic (IVPA) imaging is able to show the morphology and composition of atherosclerotic plaque. With imminent improvements in IVPA imaging, it is becoming possible to assess human coronary artery disease in vivo. Although some challenges remain, IVPA imaging is on its way to being a powerful tool for visualising coronary atherosclerotic features that have been specifically associated with plaque vulnerability and clinical syndromes, and thus such imaging might become valuable for clinical risk assessment in the catheterisation laboratory.