Sample records for origin imaging features

  1. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    PubMed Central

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and how to combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
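
    As an illustrative aside (not the authors' implementation), the Python sketch below shows the general pattern the abstract describes: extract multi-scale morphological features from each source image and combine them with a per-pixel weight. Plain grey-scale opening/closing from scipy.ndimage stands in for the opening- and closing-based toggle operators, and a local-energy weight stands in for the fuzzy measure; all names and the placeholder data are assumptions.

    ```python
    # Hedged sketch of multi-scale morphological feature fusion for an
    # infrared/visible pair; not the paper's toggle-operator or fuzzy-measure construction.
    import numpy as np
    from scipy import ndimage

    def multiscale_features(img, scales=(3, 7, 11)):
        """Bright-minus-dark detail features at several structuring-element sizes."""
        feats = []
        for s in scales:
            opened = ndimage.grey_opening(img, size=(s, s))
            closed = ndimage.grey_closing(img, size=(s, s))
            feats.append((img - opened) - (closed - img))
        return np.sum(feats, axis=0)

    def fuse(ir, vis):
        f_ir, f_vis = multiscale_features(ir), multiscale_features(vis)
        # local energy acts as a crude importance weight (fuzzy measure in the paper)
        w_ir = ndimage.uniform_filter(np.abs(f_ir), size=9)
        w_vis = ndimage.uniform_filter(np.abs(f_vis), size=9)
        w = w_ir / (w_ir + w_vis + 1e-8)
        fused_detail = w * f_ir + (1 - w) * f_vis
        base = 0.5 * (ir + vis)                     # simple base layer
        return np.clip(base + fused_detail, 0, 255)

    ir = np.random.rand(128, 128) * 255             # placeholders for real images
    vis = np.random.rand(128, 128) * 255
    fused = fuse(ir, vis)
    ```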

  2. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.

    PubMed

    Bai, Xiangzhi

    2015-07-15

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and how to combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.

  3. Texture Analysis and Cartographic Feature Extraction.

    DTIC Science & Technology

    1985-01-01

    Investigations into using various image descriptors as well as developing interactive feature extraction software on the Digital Image Analysis Laboratory...system. Originator-supplied keywords: Ad-Hoc image descriptor; Bayes classifier; Bhattacharyya distance; Clustering; Digital Image Analysis Laboratory

  4. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACM) have been extensively applied to image segmentation; however, conventional region-based active contour models only utilize global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistic features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we draw upon an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistic features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method can make up for the inadequacy of the original method and obtain desirable results in segmenting infrared images.

  5. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

    The aim of this study was to develop a novel feature refinement MR reconstruction method from highly undersampled multichannel acquisitions to improve image quality and preserve more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve the details of the image in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor, which recognize different types of features, are required to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore the image details more accurately than the original methods, which is also verified by the lower values of the root mean square error and high frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve the reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    NASA Astrophysics Data System (ADS)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  7. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, what an unsupervised segmentation algorithm can segment are only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  8. TU-F-CAMPUS-J-05: Effect of Uncorrelated Noise Texture On Computed Tomography Quantitative Image Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, J; Budzevich, M; Moros, E

    Purpose: To investigate the relationship between quantitative image features (i.e. radiomics) and statistical fluctuations (i.e. electronic noise) in clinical Computed Tomography (CT) using the standardized American College of Radiology (ACR) CT accreditation phantom and patient images. Methods: Three levels of uncorrelated Gaussian noise were added to CT images of phantom and patients (20) acquired in static mode and respiratory tracking mode. We calculated the noise-power spectrum (NPS) of the original CT images of the phantom, and of the phantom images with added Gaussian noise with means of 50, 80, and 120 HU. Concurrently, on patient images (original and noise-added images), image features were calculated: 14 shape, 19 intensity (1st order statistics from intensity volume histograms), 18 GLCM features (2nd order statistics from grey level co-occurrence matrices) and 11 RLM features (2nd order statistics from run-length matrices). These features provide the underlying structural information of the images. GLCM (size 128x128) was calculated with a step size of 1 voxel in 13 directions and averaged. RLM feature calculation was performed in 13 directions with grey levels binned into 128 levels. Results: Adding the electronic noise to the images modified the quality of the NPS, shifting the noise from mostly correlated to mostly uncorrelated voxels. The dramatic increase in noise texture did not affect image structure/contours significantly for patient images. However, it did affect the image features and textures significantly as demonstrated by GLCM differences. Conclusion: Image features are sensitive to acquisition factors (simulated by adding uncorrelated Gaussian noise). We speculate that image features will be more difficult to detect in the presence of electronic noise (an uncorrelated noise contributor) or, for that matter, any other highly correlated image noise. This work focuses on the effect of electronic, uncorrelated, noise and future work shall examine the influence of changes in quantum noise on the features. J. Oliver was supported by NSF FGLSAMP BD award HRD #1139850 and the McKnight Doctoral Fellowship.
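
    As a rough illustration of the experiment described above (not the study's actual pipeline), the sketch below adds uncorrelated Gaussian noise to an image and recomputes a few grey-level co-occurrence (GLCM) texture features with scikit-image, using 128 grey levels as in the abstract but a single 2-D direction instead of 13 3-D directions; the data are random placeholders.

    ```python
    # Hedged sketch: GLCM texture features before and after adding uncorrelated Gaussian
    # noise (graycomatrix/graycoprops are greycomatrix/greycoprops in older scikit-image).
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(img, levels=128):
        q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                            symmetric=True, normed=True)
        return {p: graycoprops(glcm, p)[0, 0]
                for p in ("contrast", "homogeneity", "energy", "correlation")}

    ct = np.random.rand(256, 256) * 1000                  # placeholder CT slice
    noisy = ct + np.random.normal(0, 80, ct.shape)        # ~80 HU uncorrelated noise
    print(glcm_features(ct))
    print(glcm_features(noisy - noisy.min()))             # shift so quantisation stays non-negative
    ```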

  9. Modified-BRISQUE as no reference image quality assessment for structural MR images.

    PubMed

    Chow, Li Sze; Rajagopal, Heshalini

    2017-11-01

    An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced from any new hardware or software in MRI. A highly competitive No Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures the image quality by using the locally normalized luminance coefficients, which were used to calculate the image features. The modified-BRISQUE model trained a new regression model using MR image features and Difference Mean Opinion Score (DMOS) from 775 MR images. Two types of benchmarks, objective and subjective assessments, were used as performance evaluators for both the original and modified-BRISQUE models. There was a high correlation between the modified-BRISQUE and both benchmarks, and these correlations were higher than those for the original BRISQUE. There was a significant percentage improvement in their correlation values. The modified-BRISQUE was statistically better than the original BRISQUE. The modified-BRISQUE model can accurately measure the image quality of MR images. It is a practical NR-IQA model for MR images without using reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
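
    For context, the core of BRISQUE is the map of locally normalized luminance (MSCN) coefficients from which its features are computed. The sketch below illustrates that normalization step only; the generalized-Gaussian fitting and the regression against opinion scores used in the paper are omitted, and the input is a random placeholder.

    ```python
    # Hedged sketch of MSCN (mean-subtracted contrast-normalized) coefficients.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn(img, sigma=7 / 6, c=1.0):
        img = img.astype(float)
        mu = gaussian_filter(img, sigma)
        var = gaussian_filter(img * img, sigma) - mu * mu
        return (img - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

    slice_ = np.random.rand(256, 256) * 255       # placeholder MR slice
    coeffs = mscn(slice_)
    features = [coeffs.mean(), coeffs.var()]      # first entries of a BRISQUE-style feature vector
    ```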

  10. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.

  11. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    Recently, MR simulators have gained popularity because they avoid the unnecessary radiation exposure associated with CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
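
    A toy, hedged sketch of the general idea (patch-based random-forest regression from MR to CT intensities, scored with PSNR) is shown below; it uses random 2-D placeholders instead of co-registered training volumes and omits the feature selection described in the abstract.

    ```python
    # Hedged sketch: predict pseudo-CT intensities from MR patches with a random forest
    # and evaluate with PSNR against the original CT.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_extraction.image import extract_patches_2d

    def patch_matrix(img, patch=(5, 5), n=2000, seed=0):
        return extract_patches_2d(img, patch, max_patches=n, random_state=seed).reshape(n, -1)

    mr_train = np.random.rand(128, 128)                    # placeholder co-registered slices
    ct_train = np.random.rand(128, 128) * 2000

    X = patch_matrix(mr_train)                             # MR patch signatures
    y = patch_matrix(ct_train)[:, 12]                      # centre voxel of each 5x5 CT patch
    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    mr_test, ct_test = np.random.rand(128, 128), np.random.rand(128, 128) * 2000
    pred = rf.predict(patch_matrix(mr_test))
    truth = patch_matrix(ct_test)[:, 12]
    psnr = 10 * np.log10(truth.max() ** 2 / np.mean((pred - truth) ** 2))
    ```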

  12. Enhancement of morphological and vascular features in OCT images using a modified Bayesian residual transform

    PubMed Central

    Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2018-01-01

    A novel image processing algorithm based on a modified Bayesian residual transform (MBRT) was developed for the enhancement of morphological and vascular features in optical coherence tomography (OCT) and OCT angiography (OCTA) images. The MBRT algorithm decomposes the original OCT image into multiple residual images, where each image presents information at a unique scale. Scale selective residual adaptation is used subsequently to enhance morphological features of interest, such as blood vessels and tissue layers, and to suppress irrelevant image features such as noise and motion artefacts. The performance of the proposed MBRT algorithm was tested on a series of cross-sectional and enface OCT and OCTA images of retina and brain tissue that were acquired in-vivo. Results show that the MBRT reduces speckle noise and motion-related imaging artefacts locally, thus improving significantly the contrast and visibility of morphological features in the OCT and OCTA images. PMID:29760996

  13. A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer.

    PubMed

    Neofytou, Marios S; Tanos, Vasilis; Pattichis, Marios S; Pattichis, Constantinos S; Kyriacou, Efthyvoulos C; Koutsouris, Dimitris D

    2007-11-29

    In the development of tissue classification methods, classifiers rely on significant differences between texture features extracted from normal and abnormal regions. Yet, significant differences can arise due to variations in the image acquisition method. For endoscopic imaging of the endometrium, we propose a standardized image acquisition protocol to eliminate significant statistical differences due to variations in: (i) the distance from the tissue (panoramic vs close up), (ii) difference in viewing angles and (iii) color correction. We investigate texture feature variability for a variety of targets encountered in clinical endoscopy. All images were captured at clinically optimum illumination and focus using 720 x 576 pixels and 24 bits color for: (i) a variety of testing targets from a color palette with a known color distribution, (ii) different viewing angles, (iii) two different distances from a calf endometrium and from a chicken cavity. Also, human images from the endometrium were captured and analysed. For texture feature analysis, three different sets were considered: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). All images were gamma corrected and the extracted texture feature values were compared against the texture feature values extracted from the uncorrected images. Statistical tests were applied to compare images from different viewing conditions so as to determine any significant differences. For the proposed acquisition procedure, results indicate that there is no significant difference in texture features between the panoramic and close up views and between angles. For a calibrated target image, gamma correction provided an acquired image that was a significantly better approximation to the original target image. In turn, this implies that the texture features extracted from the corrected images provided for better approximations to the original images. Within the proposed protocol, for human ROIs, we have found that there is a large number of texture features that showed significant differences between normal and abnormal endometrium. This study provides a standardized protocol for avoiding any significant texture feature differences that may arise due to variability in the acquisition procedure or the lack of color correction. After applying the protocol, we have found that significant differences in texture features will only be due to the fact that the features were extracted from different types of tissue (normal vs abnormal).
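
    As a small, hedged illustration of two steps mentioned above, the sketch below applies a gamma correction and computes first-order statistical texture features (the "SF" set); the SGLDM/GLDS features and the statistical testing are not reproduced, and the frame is a random placeholder.

    ```python
    # Hedged sketch: gamma correction plus first-order statistical texture features.
    import numpy as np

    def gamma_correct(img, gamma=2.2):
        """Map an 8-bit image through an inverse-gamma curve."""
        return (255.0 * (img / 255.0) ** (1.0 / gamma)).astype(np.uint8)

    def statistical_features(roi):
        roi = roi.astype(float)
        return {"mean": roi.mean(), "variance": roi.var(),
                "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
                "kurtosis": ((roi - roi.mean()) ** 4).mean() / roi.std() ** 4 - 3}

    frame = (np.random.rand(576, 720) * 255).astype(np.uint8)   # placeholder 720x576 endoscopy frame
    print(statistical_features(gamma_correct(frame)))
    ```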

  14. An improved feature extraction algorithm based on KAZE for multi-spectral image

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, which is based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using an adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
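
    A baseline KAZE detect-and-match pipeline with OpenCV is sketched below for orientation; the paper's adjusted-cosine similarity and other modifications are not part of stock OpenCV and are not shown, and the two synthetic test images are assumptions.

    ```python
    # Hedged sketch: stock OpenCV KAZE keypoints, L2 brute-force matching, ratio test.
    import cv2
    import numpy as np

    img1 = np.zeros((240, 240), np.uint8)                 # synthetic stand-ins for two bands
    cv2.circle(img1, (80, 80), 30, 255, -1)
    cv2.rectangle(img1, (130, 140), (200, 200), 180, -1)
    img1 = cv2.GaussianBlur(img1, (5, 5), 0)
    img2 = cv2.warpAffine(img1, np.float32([[1, 0, 8], [0, 1, -5]]), (240, 240))  # shifted copy

    kaze = cv2.KAZE_create()
    kp1, des1 = kaze.detectAndCompute(img1, None)
    kp2, des2 = kaze.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]
    print(len(good), "matches from", len(kp1), "and", len(kp2), "keypoints")
    ```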

  15. Identification of important image features for pork and turkey ham classification using colour and wavelet texture features and genetic selection.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy

    2010-04-01

    A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.
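
    The sketch below is a rough stand-in for the feature pipeline described above: simple greyscale statistics plus wavelet sub-band energies feed a linear discriminant classifier. The genetic feature selection is omitted (all features are used), and the slice images and labels are random placeholders.

    ```python
    # Hedged sketch: wavelet texture features and linear discriminant classification.
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def ham_features(grey_slice):
        coeffs = pywt.wavedec2(grey_slice, "db2", level=2)       # 2-level wavelet decomposition
        feats = [grey_slice.mean(), grey_slice.std()]             # simple greyscale statistics
        for detail in coeffs[1:]:
            feats += [np.abs(d).mean() for d in detail]           # wavelet texture energies
        return feats

    slices = [np.random.rand(200, 200) for _ in range(40)]        # placeholder ham-slice images
    labels = np.repeat([0, 1], 20)                                # two ham grades
    X = np.array([ham_features(s) for s in slices])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print(clf.score(X, labels))
    ```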

  16. A New Approach to Automated Labeling of Internal Features of Hardwood Logs Using CT Images

    Treesearch

    Daniel L. Schmoldt; Pei Li; A. Lynn Abbott

    1996-01-01

    The feasibility of automatically identifying internal features of hardwood logs using CT imagery has been established previously. Features of primary interest are bark, knots, voids, decay, and clear wood. Our previous approach filtered original CT images, applied histogram segmentation, grew volumes to extract 3-D regions, and applied a rule base with Dempster-...

  17. Noise-gating to Clean Astrophysical Image Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, C. E.

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.

  18. Noise-gating to Clean Astrophysical Image Data

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.

    2017-04-01

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
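
    As a heavily simplified, single-image illustration of the idea (not the published locally adaptive algorithm), the sketch below gates Fourier components whose amplitude falls below a crude noise floor; the threshold, the noise estimate, and the placeholder data are assumptions.

    ```python
    # Hedged sketch: suppress Fourier components below an estimated noise floor.
    import numpy as np

    def noise_gate(img, gamma=3.0):
        F = np.fft.fft2(img)
        amp = np.abs(F)
        noise_floor = np.median(amp)              # crude per-image noise estimate
        gate = amp > gamma * noise_floor          # keep only coherent (strong) components
        return np.real(np.fft.ifft2(F * gate))

    noisy = np.random.poisson(20, (256, 256)).astype(float)   # placeholder shot-noise image
    cleaned = noise_gate(noisy)
    ```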

  19. Quantitative evaluation of high-resolution features in images of negatively stained Tobacco Mosaic Virus.

    PubMed

    Chang, C F; Williams, R C; Grano, D A; Downing, K H; Glaeser, R M

    1983-01-01

    This study investigates the causes of the apparent differences between the optical diffraction pattern of a micrograph of a Tobacco Mosaic Virus (TMV) particle, the optical diffraction pattern of a ten-fold photographically averaged image, and the computed diffraction pattern of the original micrograph. Peak intensities along the layer lines in the transform of the averaged image appear to be quite unlike those in the diffraction pattern of the original micrograph, and the diffraction intensities for the averaged image extend to unexpectedly high resolution. A carefully controlled, quantitative comparison reveals, however, that the optical diffraction pattern of the original micrograph and that of the ten-fold averaged image are essentially equivalent. Using computer-based image processing, we discovered that the peak intensities on the 6th layer line have values very similar in magnitude to the neighboring noise, in contrast to what was expected from the optical diffraction pattern of the original micrograph. This discrepancy was resolved by recording a series of optical diffraction patterns when the original micrograph was immersed in oil. These patterns revealed the presence of a substantial phase grating effect, which exaggerated the peak intensities on the 6th layer line, causing an erroneous impression that the high resolution features possessed a good signal-to-noise ratio. This study thus reveals some pitfalls and misleading results that can be encountered when using optical diffraction patterns to evaluate image quality.

  20. Robust and efficient method for matching features in omnidirectional images

    NASA Astrophysics Data System (ADS)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set in the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion invariant descriptor. TPBRIEF directly enables keypoint detecting and feature describing on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.

  21. Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition

    PubMed Central

    Wang, Xin; Deng, Zhongliang

    2017-01-01

    In order to recognize indoor scenarios, we extract image features for detecting objects, however, computers can make some unexpected mistakes. After visualizing the histogram of oriented gradient (HOG) features, we find that the world through the eyes of a computer is indeed different from human eyes, which assists researchers to see the reasons that cause a computer to make errors. Additionally, according to the visualization, we notice that the HOG features can obtain rich texture information. However, a large amount of background interference is also introduced. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing the background interference. On the basis of the original HOG feature, we introduce a principal component analysis (PCA) to extract the principal components of the image colour information. Then, a new hybrid feature descriptor, which is named HOG–PCA (HOGP), is made by deeply fusing these two features. Finally, the HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualizing images of the HOGP feature are close to the observation results obtained by human eyes, which is better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison to the classic HOG feature. PMID:28677635
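
    A hedged sketch of a HOG-plus-colour-PCA hybrid descriptor in the spirit of HOGP is given below; the exact fusion used in the paper is not reproduced, the colour part is summarised here as a histogram of the first principal component, and the input image is a random placeholder.

    ```python
    # Hedged sketch: concatenate a HOG descriptor with a colour-PCA summary.
    import numpy as np
    from skimage.feature import hog
    from sklearn.decomposition import PCA

    def hogp_descriptor(rgb, bins=16):
        grey = rgb.mean(axis=2)
        hog_part = hog(grey, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        # principal component of the per-pixel colour information, summarised as a histogram
        pc1 = PCA(n_components=1).fit_transform(rgb.reshape(-1, 3).astype(float)).ravel()
        colour_part, _ = np.histogram(pc1, bins=bins, density=True)
        return np.concatenate([hog_part, colour_part])

    image = np.random.rand(128, 128, 3)            # placeholder indoor-scene image
    descriptor = hogp_descriptor(image)
    ```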

  22. Radiomic biomarkers from PET/CT multi-modality fusion images for the prediction of immunotherapy response in advanced non-small cell lung cancer patients

    NASA Astrophysics Data System (ADS)

    Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James

    2018-02-01

    Purpose: Investigate the ability of using complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti PD-1 checkpoint blockade. Using PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary image (PET/CT) and the fused images, which included 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy and area under the ROC curve (AUROC) of 87.5% and 0.82, respectively, on the validation dataset, compared to 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than the individual image features.

  23. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    PubMed Central

    Liu, Fuxian

    2018-01-01

    One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references. PMID:29581722

  24. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.

    PubMed

    Yu, Yunlong; Liu, Fuxian

    2018-01-01

    One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references.

  25. The Influence of Changes in Size and Proportion of Selected Facial Features (Eyes, Nose, Mouth) on Assessment of Similarity between Female Faces.

    PubMed

    Lewandowski, Zdzisław

    2015-09-01

    The project aimed at finding the answers to the following two questions: to what extent does a change in size, height or width of the selected facial features influence the assessment of likeness between an original female composite portrait and a modified one? And how does the sex of the person who judges the images have an impact on the perception of likeness of facial features? The first stage of the project consisted of creating the image of the averaged female faces. Then the basic facial features like eyes, nose and mouth were cut out of the averaged face and each of these features was transformed in three ways: its size was changed by reduction or enlargement, its height was modified through reduction or enlargement of the above-mentioned features and its width was altered through widening or narrowing. In each out of six feature alternation methods, intensity of modification reached up to 20% of the original size with changes every 2%. The features altered in such a way were again stuck onto the original faces and retouched. The third stage consisted of the assessment, performed by the judges of both sexes, of the extent of likeness between the averaged composite portrait (without any changes) and the modified portraits. The results indicate that there are significant differences in the assessment of likeness of the portraits with some features modified to the original ones. The images with changes in the size and height of the nose received the lowest scores on the likeness scale, which indicates that these changes were perceived by the subjects as the most important. The photos with changes in the height of lip vermillion thickness (the lip height), lip width and the height and width of eye slit, in turn, received high scores of likeness, in spite of big changes, which signifies that these modifications were perceived as less important when compared to the other features investigated.

  26. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, only a single kernel is usually insufficient to characterize nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA can have well discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets have manifested the effectiveness of our proposed method.

  27. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.

  28. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.

  29. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    PubMed

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture feature, and morphological property, to improve the performances, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and thus such concatenation does not efficiently explore the complementary properties among different features, which should help boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial feature into a common feature space, where the complementary information has been effectively exploited, and, simultaneously, only the most significant original features have been transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
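
    For reference, the baseline that the abstract describes (and improves upon) is easy to sketch: concatenate the spectral and spatial feature vectors, reduce the stacked vector with PCA, and classify. The proposed simultaneous selection/extraction algorithm itself is not shown, and the data below are random placeholders.

    ```python
    # Hedged sketch of the naive concatenation baseline: stack spectral and spatial
    # features, reduce with PCA, classify with an SVM.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    n_pixels = 1000
    spectral = np.random.rand(n_pixels, 200)     # placeholder spectral signatures (200 bands)
    spatial = np.random.rand(n_pixels, 60)       # placeholder texture/morphological features
    labels = np.random.randint(0, 5, n_pixels)   # placeholder class labels

    X = np.hstack([spectral, spatial])           # naive concatenation of the two domains
    clf = make_pipeline(PCA(n_components=30), SVC(kernel="rbf")).fit(X, labels)
    print(clf.score(X, labels))
    ```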

  30. Upper ocean fine-scale features in synthetic aperture radar imagery. Part I: Simultaneous satellite and in-situ measurements

    NASA Astrophysics Data System (ADS)

    Soloviev, A.; Maingot, C.; Matt, S.; Fenton, J.; Lehner, S.; Brusch, S.; Perrie, W. A.; Zhang, B.

    2011-12-01

    The new generation of synthetic aperture radar (SAR) satellites provides high resolution images that open new opportunities for identifying and studying fine features in the upper ocean. The problem is, however, that SAR images of the sea surface can be affected by atmospheric phenomena (rain cells, fronts, internal waves, etc.). Implementation of in-situ techniques in conjunction with SAR is instrumental for discerning the origin of features on the image. This work is aimed at the interpretation of natural and artificial features in SAR images. These features can include fresh water lenses, sharp frontal interfaces, internal wave signatures, as well as slicks of artificial and natural origin. We have conducted field experiments in the summer of 2008 and 2010 and in the spring of 2011 to collect in-situ measurements coordinated with overpasses of the TerraSAR-X, RADARSAT-2, ALOS PALSAR, and COSMO SkyMed satellites. The in-situ sensors deployed in the Straits of Florida included a vessel-mounted sonar and CTD system to record near-surface data on stratification and frontal boundaries, a bottom-mounted Nortek AWAC system to gather information on currents and directional wave spectra, an ADCP mooring at a 240 m isobath, and a meteorological station. A nearby NOAA NEXRAD Doppler radar station provided a record of rainfall in the area. Controlled releases of menhaden fish oil were performed from our vessel before several satellite overpasses in order to evaluate the effect of surface active materials on visibility of sea surface features in SAR imagery under different wind-wave conditions. We found evidence in the satellite images of rain cells, squall lines, internal waves of atmospheric and possibly oceanic origin, oceanic frontal interfaces and submesoscale eddies, as well as anthropogenic signatures of ships and their wakes, and near-shore surface slicks. The combination of satellite imagery and coordinated in-situ measurements was helpful in interpreting fine-scale features on the sea surface observed in the SAR images and, in some cases, linking them to thermohaline features in the upper ocean. Finally, we have been able to reproduce SAR signatures of freshwater plumes and sharp frontal interfaces interacting with wind stress, as well as internal waves by combining hydrodynamic simulations with a radar imaging algorithm. The modeling results are presented in a companion paper (Matt et al., 2011).

  31. Self-recovery reversible image watermarking algorithm

    PubMed Central

    Sun, He; Gao, Shangbing; Jin, Shenghua

    2018-01-01

    The integrity of image content is essential; however, most watermarking algorithms can achieve image authentication but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover the tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image textures. Finally, the recovery watermark generated by homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated by non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. The correlation attack is detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
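
    A toy sketch of the block-feature idea behind self-recovery watermarking appears below: a compact per-block feature acts as the recovery watermark and flags blocks whose recomputed feature no longer matches. The embedding-by-mapping, error-correcting codes, invariant moments and reversibility of the actual algorithm are omitted, and the block feature (the mean) and threshold are assumptions.

    ```python
    # Hedged sketch: per-block recovery features and tamper localisation.
    import numpy as np

    def block_features(img, b=4):
        h, w = img.shape
        blocks = img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b)
        return blocks.mean(axis=(1, 3))                 # one feature per 4x4 block

    original = (np.random.rand(64, 64) * 255).astype(np.uint8)
    watermark = block_features(original)                # stored/embedded at protection time

    tampered = original.copy()
    tampered[16:32, 16:32] = 0                          # simulate a tampered region
    diff = np.abs(block_features(tampered) - watermark)
    tampered_blocks = np.argwhere(diff > 5)             # blocks to recover from the watermark
    ```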

  32. Anomalous Aortic Origin of Coronary Arteries in the Young: Echocardiographic Evaluation With Surgical Correlation.

    PubMed

    Lorber, Richard; Srivastava, Shubhika; Wilder, Travis J; McIntyre, Susan; DeCampli, William M; Williams, William G; Frommelt, Peter C; Parness, Ira A; Blackstone, Eugene H; Jacobs, Marshall L; Mertens, Luc; Brothers, Julie A; Herlong, J René

    2015-11-01

    This study sought to compare findings from institutional echocardiographic reports with imaging core laboratory (ICL) review of corresponding echocardiographic images and operative reports in 159 patients with anomalous aortic origin of a coronary artery (AAOCA). The study also sought to develop a "best practice" protocol for imaging and interpreting images in establishing the diagnosis of AAOCA. AAOCA is associated with sudden death in the young. Underlying anatomic risk factors that can cause ischemia-related events include coronary arterial ostial stenosis, intramural course of the proximal coronary within the aortic wall, interarterial course, and potential compression between the great arteries. Consistent protocols for diagnosing and evaluating these features are lacking, potentially precluding the ability to risk stratify patients based on evidence and plan surgical strategy. For a prescribed set of anatomic AAOCA features, percentages of missing data in institutional echocardiographic reports were calculated. For each feature, agreement among institutional echocardiographic reports, ICL review of images, and surgical reports was evaluated using the weighted kappa statistic. An echocardiographic imaging protocol was developed heuristically to reduce differences between institutional reports and ICL review. A total of 13%, 33%, and 62% of echocardiograms were missing images enabling diagnosis of intra-arterial course, proximal intramural course, and high ostial takeoff, respectively. There was poor agreement between institutional reports and ICL review for diagnosis of origin of coronary artery, interarterial course, intramural course, and acute angle takeoff (kappa = 0.74, 0.11, -0.03, 0.13, respectively). Surgical findings were also significantly different from those of reports, and to a lesser extent ICL reviews. The resulting protocol contains technical recommendations for imaging each of these features. Poor agreement between institutional reports and ICL review for AAOCA suggests need for an imaging protocol to permit evidence-based risk stratification and surgical planning. Even then, delineation of echocardiographic details in AAOCA will remain imperfect. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  33. Grouping of optic flow stimuli during binocular rivalry is driven by monocular information.

    PubMed

    Holten, Vivian; Stuit, Sjoerd M; Verstraten, Frans A J; van der Smagt, Maarten J

    2016-10-01

    During binocular rivalry, perception alternates between two dissimilar images, presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images, primarily affected by its eye-of-origin. Furthermore, global motion can affect grouping durations, but only under specific conditions. Namely, only when the two full optic flow patterns were presented locally. These results suggest that grouping during rivalry is primarily driven by monocular information even for motion stimuli thought to rely on higher-level motion areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  34. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied on the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, and the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 map of the tumor-bearing mice were in good agreement with the actual tumor location. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from its nearby fluorescence noise of liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
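
    The PCA step itself is straightforward to sketch (below): the dynamic sequence is reshaped to a time-by-pixels matrix, PCA is fitted over time, and the component loadings are reshaped back into image space so that, as in the paper, the second component map can be inspected; the data are random placeholders.

    ```python
    # Hedged sketch: principal component maps of a dynamic image sequence.
    import numpy as np
    from sklearn.decomposition import PCA

    frames = np.random.rand(60, 128, 128)          # 60 time points of a 128x128 image
    T, H, W = frames.shape

    X = frames.reshape(T, H * W)                   # time x pixels
    pca = PCA(n_components=3).fit(X)
    pc_maps = pca.components_.reshape(3, H, W)     # spatial maps of the principal components
    pc2_map = pc_maps[1]                           # candidate tumour-related map (PC2)
    ```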

  35. Efficient image enhancement using sparse source separation in the Retinex theory

    NASA Astrophysics Data System (ADS)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
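
    For orientation, the classic single-scale Retinex is sketched below: illumination estimated by a Gaussian surround is removed in the log domain. The sparse-source-separation estimate of illumination proposed in the abstract is a different, optimisation-based step and is not shown; the input is a random placeholder.

    ```python
    # Hedged sketch: classic single-scale Retinex enhancement.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(img, sigma=40):
        img = img.astype(float) + 1.0                        # avoid log(0)
        illumination = gaussian_filter(img, sigma)           # Gaussian-surround illumination estimate
        reflectance = np.log(img) - np.log(illumination)
        r = reflectance - reflectance.min()
        return (255 * r / r.max()).astype(np.uint8)

    dark = (np.random.rand(256, 256) * 60).astype(np.uint8)  # placeholder low-light image
    enhanced = single_scale_retinex(dark)
    ```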

  36. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks.

    PubMed

    Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L

    2016-07-01

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text
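
    Only the evaluation stage lends itself to a compact sketch: an SVM on pre-extracted feature vectors scored by 5-fold cross-validated area under the ROC curve (below). The CNN feature extraction and the per-lesion fold assignment used in the study are not reproduced, and the feature matrix and labels are random placeholders.

    ```python
    # Hedged sketch: SVM on precomputed feature vectors, 5-fold cross-validated AUC.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    features = np.random.rand(219, 4096)          # e.g. one CNN feature vector per lesion
    labels = np.random.randint(0, 2, 219)         # benign (0) vs malignant (1)

    svm = SVC(kernel="linear")
    auc = cross_val_score(svm, features, labels, cv=5, scoring="roc_auc")
    print(auc.mean())
    ```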

  37. Writer identification on historical Glagolitic documents

    NASA Astrophysics Data System (ADS)

    Fiel, Stefan; Hollaus, Fabian; Gau, Melanie; Sablatnig, Robert

    2013-12-01

    This work aims at automatically identifying scribes of historical Slavonic manuscripts. The quality of the ancient documents is partially degraded by faded-out ink or varying background. The writer identification method used is based on image features, which are described with Scale Invariant Feature Transform (SIFT) features. A visual vocabulary is used for the description of handwriting characteristics, whereby the features are clustered using a Gaussian Mixture Model and employing the Fisher kernel. The writer identification approach is originally designed for grayscale images of modern handwritings. But contrary to modern documents, the historical manuscripts are partially corrupted by background clutter and water stains. As a result, SIFT features are also found on the background. Since the method shows also good results on binarized images of modern handwritings, the approach was additionally applied on binarized images of the ancient writings. Experiments show that this preprocessing step leads to a significant performance increase: The identification rate on binarized images is 98.9%, compared to an identification rate of 87.6% gained on grayscale images.
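
    A hedged sketch of the visual-vocabulary idea is given below, with plain k-means bag-of-words substituted for the paper's GMM vocabulary and Fisher-kernel encoding; the binarized "pages" are random placeholders.

    ```python
    # Hedged sketch: SIFT descriptors clustered into a visual vocabulary, one
    # occurrence histogram per page (k-means bag-of-words, not Fisher vectors).
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    pages = [((np.random.rand(300, 300) > 0.7) * 255).astype(np.uint8) for _ in range(5)]
    sift = cv2.SIFT_create()
    descs = [sift.detectAndCompute(p, None)[1] for p in pages]
    descs = [d for d in descs if d is not None]

    vocab = KMeans(n_clusters=64, n_init=10).fit(np.vstack(descs))

    def page_signature(desc, k=64):
        hist = np.bincount(vocab.predict(desc), minlength=k).astype(float)
        return hist / hist.sum()                      # normalised visual-word histogram

    signatures = [page_signature(d) for d in descs]
    ```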

  18. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was collected, containing 30 individuals divided equally among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images so that the features of the clear and blurred images were aligned, and to extract features from a clear facial image, or from a template built from several clear facial images, using the wavelet transform and then impose them on the blurred image using the inverse wavelet transform. The results of this first approach were unsatisfactory because the features did not all align: in most cases the eyes were aligned but the nose or mouth were not. A second approach treated the features separately, but in some cases a blocky effect appeared on the features because no closely matching features were available. In general, the small database limited the achievable results because of the small number of individuals. Color information and feature similarity could be investigated further; a larger database would improve the enhancement process by providing closer matches within each ethnicity.

  19. Morphology of Some Small Mars North-Polar Volcanic Edifices from Viking Images and MOLA Topography

    NASA Technical Reports Server (NTRS)

    Wright, H. M.; Sakimoto, S. E. H.; Garvin, J. B.

    2000-01-01

    The studied features in the northern near-polar regions of Mars have morphologies suggesting a volcanic origin. The results of this study suggest that these features may represent Martian effusive shield volcanism.

  20. Arsia Mons Ripples

    NASA Image and Video Library

    2012-02-17

    This image, captured by NASA's 2001 Mars Odyssey spacecraft, shows a series of low, concentric ridges located to the west of Arsia Mons. The origin of these features is unknown, and there are no similar features at the other Tharsis volcanoes.

  1. Multi-slice ultrasound image calibration of an intelligent skin-marker for soft tissue artefact compensation.

    PubMed

    Masum, M A; Pickering, M R; Lambert, A J; Scarvell, J M; Smith, P N

    2017-09-06

    In this paper, a novel multi-slice ultrasound (US) image calibration of an intelligent skin-marker used for soft tissue artefact compensation is proposed to align and orient image slices in an exact H-shaped pattern. Multi-slice calibration is complex; however, in the proposed method, a phantom-based visual alignment followed by transform-parameter estimation greatly reduces the complexity and provides sufficient accuracy. In this approach, the Hough Transform (HT) is used to further enhance the image features which originate from the feature-enhancing elements integrated into the physical phantom model, thus reducing feature detection uncertainty. In this framework, slice-by-slice image alignment and calibration are carried out, which provides ease of manual operation and convenience.
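
    A minimal sketch of the Hough-transform step mentioned above, detecting straight, wire-like echoes in a synthetic ultrasound-like slice; the phantom geometry, Canny thresholds and Hough parameters are illustrative assumptions.

```python
# Minimal sketch of the Hough-transform step: detect straight, wire-like
# echoes on a synthetic slice; all geometry and thresholds are illustrative.
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((200, 200), np.uint8)
cv2.line(img, (20, 50), (180, 60), 255, 2)      # two bright phantom "wires"
cv2.line(img, (30, 140), (170, 150), 255, 2)
img = cv2.add(img, rng.integers(0, 30, img.shape, dtype=np.uint8))  # additive noise

edges = cv2.Canny(img, 80, 160)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=80, maxLineGap=10)
print("detected line segments:", 0 if lines is None else len(lines))
```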

  2. Cryovolcanic features on Titan's surface as revealed by the Cassini Titan Radar Mapper

    USGS Publications Warehouse

    Lopes, R.M.C.; Mitchell, K.L.; Stofan, E.R.; Lunine, J.I.; Lorenz, R.; Paganelli, F.; Kirk, R.L.; Wood, C.A.; Wall, S.D.; Robshaw, L.E.; Fortes, A.D.; Neish, Catherine D.; Radebaugh, J.; Reffet, E.; Ostro, S.J.; Elachi, C.; Allison, M.D.; Anderson, Y.; Boehmer, R.; Boubin, G.; Callahan, P.; Encrenaz, P.; Flamini, E.; Francescetti, G.; Gim, Y.; Hamilton, G.; Hensley, S.; Janssen, M.A.; Johnson, W.T.K.; Kelleher, K.; Muhleman, D.O.; Ori, G.; Orosei, R.; Picardi, G.; Posa, F.; Roth, L.E.; Seu, R.; Shaffer, S.; Soderblom, L.A.; Stiles, B.; Vetrella, S.; West, R.D.; Wye, L.; Zebker, H.A.

    2007-01-01

    The Cassini Titan Radar Mapper obtained Synthetic Aperture Radar images of Titan's surface during four fly-bys during the mission's first year. These images show that Titan's surface is very complex geologically, showing evidence of major planetary geologic processes, including cryovolcanism. This paper discusses the variety of cryovolcanic features identified from SAR images, their possible origin, and their geologic context. The features which we identify as cryovolcanic in origin include a large (180 km diameter) volcanic construct (dome or shield), several extensive flows, and three calderas which appear to be the source of flows. The composition of the cryomagma on Titan is still unknown, but constraints on rheological properties can be estimated using flow thickness. Rheological properties of one flow were estimated and appear inconsistent with ammonia-water slurries, and possibly more consistent with ammonia-water-methanol slurries. The extent of cryovolcanism on Titan is still not known, as only a small fraction of the surface has been imaged at sufficient resolution. Energetic considerations suggest that cryovolcanism may have been a dominant process in the resurfacing of Titan.

  3. Hemorrhage detection in MRI brain images using images features

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected by texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using the Daubechies wavelets; ii) the original images are transformed into feature images using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the feature images following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measured features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the significance of the image features is analyzed using the t-test, with p-values computed for each pair of features in order to measure their efficacy.
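
    A hedged sketch of steps iv) and v) above: co-occurrence (GLCM) energy and homogeneity computed on two sets of ROIs, followed by a t-test on one feature. The ROI content, distances and angles are assumptions, not the study's settings.

```python
# Hedged sketch of steps (iv)-(v): GLCM energy/homogeneity per ROI, then a
# t-test between the "normal" and "affected" groups. ROIs here are synthetic.
import numpy as np
from scipy.stats import ttest_ind
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """Energy and homogeneity of an 8-bit ROI from its co-occurrence matrix."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return (graycoprops(glcm, "energy")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0])

rng = np.random.default_rng(0)
normal = [glcm_features(rng.integers(80, 120, (32, 32), dtype=np.uint8))
          for _ in range(20)]                       # smoother stand-in texture
affected = [glcm_features(rng.integers(0, 255, (32, 32), dtype=np.uint8))
            for _ in range(20)]                     # rougher stand-in texture

energy_n = np.array(normal)[:, 0]
energy_a = np.array(affected)[:, 0]
print("energy t-test p-value:", ttest_ind(energy_n, energy_a).pvalue)
```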

  4. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWiFS (1000m).
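
    For illustration, one of the similarity measures named above, mutual information, can be computed from the joint histogram of two roughly aligned images; the bin count and the synthetic misalignment below are assumptions.

```python
# Illustrative mutual-information measure from a joint histogram of two
# (roughly aligned) images; bin count and synthetic misalignment are assumptions.
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64)).astype(float)
shifted = np.roll(ref, 2, axis=1) + rng.normal(0, 5, ref.shape)  # simulated misregistration
print("MI aligned:", round(mutual_information(ref, ref), 3),
      "MI shifted:", round(mutual_information(ref, shifted), 3))
```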

  5. Characteristics of circular features on comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Deller, J. F.; Güttler, C.; Tubiana, C.; Hofmann, M.; Sierks, H.

    2017-09-01

    Comet 67P/Churyumov-Gerasimenko shows a large variety of circular structures such as pits, elevated roundish features in Imhotep, and even a single occurrence of a plausible fresh impact crater. Imaging the pits in the Ma'at region, aiming to understand their structure and origin, drove the design of the final descent trajectory of the Rosetta spacecraft. The high-resolution images obtained during the last mission phase allow us to study these pits as exemplary circular features. A complete catalogue of circular features gives us the possibility to compare and classify these structures systematically.

  6. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.

  7. SecSIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.

  8. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, the regional growth rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, the majority voting mechanism is implemented to generate the final output. Extensive experiments show that the proposed method achieves competitive performance compared with several traditional approaches.

  9. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.

  10. Image features dependant correlation-weighting function for efficient PRNU based source camera identification.

    PubMed

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from an image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We experimentally identified the effect of image features (intensity and high-frequency content) on the estimated PRNU, and then developed a weighting function which gives higher weights to image regions that yield reliable PRNU and comparatively lower weights to image regions that do not. Experimental results show that the proposed weighting function substantially improves the accuracy of SCI.
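
    A minimal sketch of the PRNU residual idea described above: denoise an image and keep the difference, then average residuals into a camera fingerprint and correlate a test residual against it. The wavelet denoiser, the synthetic multiplicative pattern and the simple averaging are stand-ins for the full estimation pipeline.

```python
# Minimal PRNU-style sketch: residual = image - denoised(image); average the
# residuals of several shots into a fingerprint and correlate a test residual.
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(img):
    """Noise residual of a float image in [0, 1]."""
    return img - denoise_wavelet(img, channel_axis=None, rescale_sigma=True)

rng = np.random.default_rng(0)
prnu_true = 0.02 * rng.standard_normal((64, 64))   # fixed multiplicative pattern
shots = [np.clip(0.5 * (1 + prnu_true) + 0.05 * rng.standard_normal((64, 64)), 0, 1)
         for _ in range(16)]                        # stand-in flat-field images

fingerprint = np.mean([noise_residual(s) for s in shots], axis=0)
test_residual = noise_residual(shots[0])
corr = np.corrcoef(fingerprint.ravel(), test_residual.ravel())[0, 1]
print("correlation with camera fingerprint:", round(float(corr), 3))
```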

  11. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are effective in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank models based on visual features and user clicks outperform state-of-the-art algorithms.

  12. Rapid multi-modality preregistration based on SIFT descriptor.

    PubMed

    Chen, Jian; Tian, Jie

    2006-01-01

    This paper describes a scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. Applying the SIFT preregistration method before refined registration reduces the overall computational complexity, owing to its low (on the order of O(n)) cost. The SIFT features are highly distinctive and invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast, so the method is robust and repeatable for coarsely matching two images. We also modified the descriptor so that our method can handle multimodality preregistration.

  13. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and the image labels are then used to train the BP neural network, which finally performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of the 400 high-dimensional feature samples, 300 are used to train the VGG16 and BP neural network pipeline and the remaining 100 are used for testing. The experimental results show that the method can take full advantage of the learning and representation capability of deep learning. In contrast to most existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent image quality classification accuracy on the test set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.

  14. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effect of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The number of clusters in the output layer can be increased continually by an auto-growing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  15. Visual mismatch negativity indicates automatic, task-independent detection of artistic image composition in abstract artworks.

    PubMed

    Menzel, Claudia; Kovács, Gyula; Amado, Catarina; Hayn-Leichsenring, Gregor U; Redies, Christoph

    2018-05-06

    In complex abstract art, image composition (i.e., the artist's deliberate arrangement of pictorial elements) is an important aesthetic feature. We investigated whether the human brain detects image composition in abstract artworks automatically (i.e., independently of the experimental task). To this aim, we studied whether a group of 20 original artworks elicited a visual mismatch negativity when contrasted with a group of 20 images that were composed of the same pictorial elements as the originals, but in shuffled arrangements, which destroy artistic composition. We used a passive oddball paradigm with parallel electroencephalogram recordings to investigate the detection of image type-specific properties. We observed significant deviant-standard differences for the shuffled and original images, respectively. Furthermore, for both types of images, differences in amplitudes correlated with the behavioral ratings of the images. In conclusion, we show that the human brain can detect composition-related image properties in visual artworks in an automatic fashion.

  16. What do you think of my picture? Investigating factors of influence in profile images context perception

    NASA Astrophysics Data System (ADS)

    Mazza, F.; Da Silva, M. P.; Le Callet, P.; Heynderickx, I. E. J.

    2015-03-01

    Multimedia quality assessment has been an important research topic during the last decades. The original focus on artifact visibility has been extended over the years to aspects such as image aesthetics, interestingness and memorability. More recently, Fedorovskaya proposed the concept of 'image psychology': this concept focuses on additional quality dimensions related to human content processing. While these additional dimensions are very valuable in understanding preferences, it is very hard to define, isolate and measure their effect on quality. In this paper we continue our research on face pictures, investigating which image factors influence context perception. We collected the perceived fit of a set of images to various content categories. These categories were selected based on current typologies in social networks. Logistic regression was adopted to model category fit based on image features. In this model we used both low-level and high-level features, the latter focusing on complex features related to image content. In order to extract these high-level features, we relied on crowdsourcing, since computer vision algorithms are not yet sufficiently accurate for the features we needed. Our results underline the importance of some high-level content features, e.g. the dress of the portrayed person and the scene setting, in categorizing images.

  17. Wild 2 Features

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    These images taken by NASA's Stardust spacecraft highlight the diverse features that make up the surface of comet Wild 2. Side A (see Figure 1) shows a variety of small pinnacles and mesas seen on the limb of the comet. Side B (see Figure 1) shows the location of a 2-kilometer (1.2-mile) series of aligned scarps, or cliffs, that are best seen in the stereo images.

  18. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  19. Analysis of Multispectral Galileo SSI Images of the Conamara Chaos Region, Europa

    NASA Technical Reports Server (NTRS)

    Spaun, N. A.; Phillips, C. B.

    2003-01-01

    Multispectral imaging of Europa's surface by Galileo's Solid State Imaging (SSI) camera has revealed two major surface color units, which appear as white and red-brown regions in enhanced color images of the surface (see figure). The Galileo Near-Infrared Mapping Spectrometer (NIMS) experiment suggests that the whitish material is icy, almost pure water ice, while the spectral signatures of the reddish regions are dominated by a non-ice material. Two endmember models have been proposed for the composition of the non-ice material: magnesium sulfate hydrates [1] and sulfuric acid and its byproducts [2]. There is also debate concerning whether the origin of this non-ice material is exogenic or endogenic [3]. Goals: The key questions this work addresses are: 1) Is the non-ice material exogenic or endogenic in origin? 2) Once emplaced, is this non-ice material primarily modified by exogenic or endogenic processes? 3) Is the non-ice material within ridges, bands, chaos, and lenticulae the same non-ice material across all such geological features? 4) Does the distribution of the non-ice material provide any evidence for or against any of the various models for feature formation? 5) To what extent do the effects of scattered light in SSI images change the spectral signatures of geological features?

  20. Changes in selected features of a male face and assessment of their influence on facial recognition.

    PubMed

    Lewandowski, Zdzisław

    2011-01-01

    The project aimed to answer the following two research questions: (1) To what extent does a change in the size, height or width of a selected facial feature influence the assessment of likeness between an original composite portrait and a modified one? (2) Does the sex of the person judging the images have an impact on the perception of likeness of the facial features? The results indicate that there are significant differences in the assessed likeness between the original portraits and those with modified features. The images with changes in the size and height of the nose received the lowest scores on the likeness scale, which indicates that these changes were perceived by the subjects as the most important. The photos with changes in the height and width of the lips and the height and width of the eye slit, in turn, received high likeness scores in spite of large changes, which signifies that these modifications were perceived to be of the least importance compared to the other features investigated.

  1. Feature-Based Retinal Image Registration Using D-Saddle Feature

    PubMed Central

    Hasikin, Khairunnisa; A. Karim, Noor Khairiah; Ahmedy, Fatimah

    2017-01-01

    Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on the low-quality region that consists of vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points on vessels that are poorly distributed and densely positioned on strong contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with Saddle detector (D-Saddle) to detect feature points on the low-quality region that consists of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) Dataset that consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while a lower success rate is observed in the other four state-of-the-art retinal image registration methods GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, the paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle. PMID:29204257

  2. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for detecting subtle image findings of acute cerebral infarction in brain computed tomography (CT) by using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image is first corrected automatically by rotating and shifting. The contralateral subtraction image is then derived by subtracting the left-right reversed image from the original image. Initial candidates for acute cerebral infarctions are identified using multiple-thresholding and image filtering techniques. As the first step in removing false-positive candidates, fourteen image features are extracted from each of the initial candidates, and halfway candidates are detected by applying a rule-based test with these image features. In the second step, five image features are extracted using the overlap between halfway candidates in the slice of interest and in the upper/lower slices. Finally, acute cerebral infarction candidates are detected by applying a rule-based test with these five image features. The sensitivity of detection for 74 training cases was 97.4% with 3.7 false positives per image, and the performance of the CAD scheme on 44 test cases was similar to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarctions in CT images.
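
    A hedged sketch of the contralateral-subtraction idea above: mirror an already inclination-corrected slice about its midline and subtract, so one-sided findings stand out; the synthetic slice and the threshold value are assumptions.

```python
# Hedged sketch of contralateral subtraction: subtract the left-right mirror
# of an (already inclination-corrected) slice so one-sided findings stand out.
import numpy as np

def contralateral_subtraction(slice_2d):
    """Difference between a slice and its mirror about the vertical midline."""
    return slice_2d.astype(np.int16) - np.fliplr(slice_2d).astype(np.int16)

rng = np.random.default_rng(0)
ct = rng.integers(20, 40, (128, 128), dtype=np.uint8)   # roughly symmetric "brain"
ct[60:80, 90:110] += 25                                  # subtle one-sided lesion

diff = contralateral_subtraction(ct)
candidates = np.abs(diff) > 15          # stand-in for the multiple-thresholding step
print("candidate pixels:", int(candidates.sum()))
```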

  3. Wild 2 Close Look

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    This image shows comet Wild 2, which NASA's Stardust spacecraft flew by on Jan. 2, 2004. This image is the closest short-exposure image of the comet, taken at an 11.4-degree phase angle, the angle between the camera, the comet and the Sun. The names listed on the diagram (see Figure 1) are those used by the Stardust team to identify features. 'Basin' does not imply an impact origin.

  4. An Ensemble Method with Integration of Feature Selection and Classifier Selection to Detect the Landslides

    NASA Astrophysics Data System (ADS)

    Zhongqin, G.; Chen, Y.

    2017-12-01

    Quickly identifying the spatial distribution of landslides automatically is essential for the prevention, mitigation and assessment of landslide hazards. It is still a challenging task owing to the complicated characteristics and vague boundaries of landslide areas in the image. High-resolution remote sensing images have multiple scales, complex spatial distributions and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after a hazard has occurred. In this research we present a new semi-supervised workflow, taking advantage of recent object-oriented image analysis and machine learning algorithms, to quickly locate landslides of different origins in areas of southwestern China. In addition to the sequence of image segmentation, feature selection, object classification and error testing, this workflow combines an ensemble of feature selection and classifier selection. The features utilized in this study were normalized difference vegetation index (NDVI) change, textural features derived from gray-level co-occurrence matrices (GLCM), spectral features, and others. The improvements in this study show that the algorithm significantly removes redundant features and makes full use of the classifiers. All these improvements lead to higher accuracy in determining the shape of landslides in high-resolution remote sensing images, and in particular provide flexibility for different kinds of landslides.

  5. Automatic crack detection and classification method for subway tunnel safety monitoring.

    PubMed

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-10-16

    Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In a next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance histogram based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the last output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
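
    An illustrative sketch of the dark-region segmentation step described above: a morphological black-hat transform followed by thresholding picks out thin, dark, crack-like regions; the kernel size and threshold are assumptions, not the paper's settings.

```python
# Illustrative sketch of the dark-region segmentation step: a morphological
# black-hat transform plus thresholding; kernel size and threshold are assumptions.
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = np.full((200, 200), 170, np.uint8)                 # grey tunnel surface
cv2.line(img, (20, 30), (180, 170), 90, 1)               # thin dark "crack"
img = cv2.add(img, rng.integers(0, 15, img.shape, dtype=np.uint8))

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)   # emphasises dark details
_, mask = cv2.threshold(blackhat, 30, 255, cv2.THRESH_BINARY)

n_labels, _ = cv2.connectedComponents(mask)
print("candidate dark regions:", n_labels - 1)
```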

  6. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    PubMed Central

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-01-01

    Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In a next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance histogram based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the last output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification. PMID:25325337

  7. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to develop methods and numerical algorithms, stable and of low computational cost, that provide a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and a good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the image as required for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks with a low rate of false alarms.
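
    A hedged sketch of wavelet-domain watermark embedding in the spirit of the method above: a scaled pseudo-random signal is added to one detail subband and detected non-blindly by correlation. The wavelet, the subband choice and the strength alpha are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of wavelet-domain watermarking: embed a scaled pseudo-random
# signal in one detail subband and detect it (non-blind) by correlation.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, (128, 128))          # stand-in host image
watermark = rng.standard_normal((64, 64))      # pseudo-random watermark signal
alpha = 0.05                                   # embedding strength (assumption)

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
marked = pywt.idwt2((cA, (cH, cV, cD + alpha * watermark)), "haar")

# Non-blind detection: compare detail coefficients of marked vs. original image.
_, (_, _, cD_test) = pywt.dwt2(marked, "haar")
corr = np.corrcoef((cD_test - cD).ravel(), watermark.ravel())[0, 1]
print("watermark detection correlation:", round(float(corr), 3))
```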

  8. Developing a radiomics framework for classifying non-small cell lung carcinoma subtypes

    NASA Astrophysics Data System (ADS)

    Yu, Dongdong; Zang, Yali; Dong, Di; Zhou, Mu; Gevaert, Olivier; Fang, Mengjie; Shi, Jingyun; Tian, Jie

    2017-03-01

    Patient-targeted treatment of non-small cell lung carcinoma (NSCLC) according to histologic subtype has been well documented over the past decade. In parallel, quantitative image biomarkers have recently been highlighted as important diagnostic tools to facilitate histological subtype classification. In this study, we present a radiomics analysis that classifies adenocarcinoma (ADC) and squamous cell carcinoma (SqCC). We extract 52-dimensional, CT-based features (7 statistical features and 45 image texture features) to represent each nodule. We evaluate our approach on a clinical dataset including 324 ADC and 110 SqCC patients with CT image scans. Classification of these features is performed with four different machine-learning classifiers: Support Vector Machines with a Radial Basis Function kernel (RBF-SVM), Random Forest (RF), K-nearest neighbor (KNN), and RUSBoost. To improve the classifiers' performance, an optimal feature subset is selected from the original feature set by using an iterative forward-inclusion and backward-eliminating algorithm. Extensive experimental results demonstrate that the radiomics features achieve encouraging classification results on both the complete feature set (AUC=0.89) and the optimal feature subset (AUC=0.91).
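
    A simplified sketch of the feature-selection and classification step above: a forward sequential selector (a stand-in for the iterative forward-inclusion/backward-elimination search) feeds an RBF-SVM scored by cross-validated AUC; the synthetic 52-dimensional features are placeholders for the radiomics features.

```python
# Simplified sketch: sequential (forward) feature selection feeding an RBF-SVM,
# evaluated by cross-validated AUC; synthetic features stand in for radiomics.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 52-dimensional radiomics features of ADC vs. SqCC nodules.
X, y = make_classification(n_samples=200, n_features=52, n_informative=8,
                           random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
selector = SequentialFeatureSelector(svm, n_features_to_select=8,
                                     direction="forward", cv=3)
X_sel = selector.fit_transform(X, y)

auc_full = cross_val_score(svm, X, y, cv=5, scoring="roc_auc").mean()
auc_sel = cross_val_score(svm, X_sel, y, cv=5, scoring="roc_auc").mean()
print(f"AUC, all features: {auc_full:.2f}  AUC, selected subset: {auc_sel:.2f}")
```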

  9. Mars Pathfinder Landing Site and Surroundings

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Mars Pathfinder landed on Mars on July 4, 1997, and continued operating until Sept. 27 of that year. The landing site is on an ancient flood plain of the Ares and Tiu outflow channels. The High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter took an image on Dec. 21, 2006, that provides unprecedented detail of the geology of the region and hardware on the surface.

    [figure removed for brevity, see original site] HiRISE Image: This is the entire image. The crater at center bottom was unofficially named 'Big Crater' by the Pathfinder team. Its wall was visible from Pathfinder, located 3 kilometers (2 miles) to the north. The two bright features to the upper left of Big Crater are the 'Twin Peaks,' also observed by Pathfinder. The bright mound to the upper right of the Twin Peaks is 'North Knob,' seen in Pathfinder images as peaking over the horizon.

    At this scale there is no obvious geologic evidence of an ancient flood. Rather, impact craters dominate the scene, attesting to an old surface. The age is probably on the order of 1.8 billion to 3.5 billion years, when the Ares and Tiu floods are estimated to have occurred. Wind-formed linear ripples and dunes are seen throughout and are concentrated within craters. Sets of polygonal ridges of enigmatic origin are seen east of the Pathfinder lander. Rocks are visible over the entire image, with heavy concentrations near fresh-looking craters. Most of them are probably blocks tossed outward by crater-forming impacts.

    The complete image is centered at 19.1 degrees north latitude, 326.8 degrees east longitude. The range to the target site was 284.7 kilometers (177.9 miles). At this distance the image scale is 28.5 centimeters (11 inches) per pixel, so objects about 85 centimeters (33 inches) across are resolved. The image shown here has been map-projected to 25 centimeters (10 inches) per pixel. North is up. The image was taken at a local Mars time of 3:35 p.m., and the scene is illuminated from the west with a solar incidence angle of 52 degrees, thus the sun was about 38 degrees above the horizon. At a solar longitude of 154.0 degrees, the season on Mars is northern summer.

    [figure removed for brevity, see original site] Landing Site Region: This is a close-up of the area in the vicinity of the Pathfinder landing site. Major features are named. The white box outlines the area of the image, discussed next, where hardware is seen.

    [figure removed for brevity, see original site] Hardware on the Surface: This image shows the Pathfinder lander on the surface. Zooming in, one can discern the ramps, science deck, and portions of the airbags on the Pathfinder lander. (See next image for closer view.) The back shell and parachute are to the south, and four features that may be portions of the heat shield are identified. Two of these were visible from Pathfinder. At the time of that mission, the nearest object was provisionally identified as the back shell. However, analysis of the HiRISE image and reinterpretation of Pathfinder images, plus an improved understanding of how hardware looks on the Martian surface based on ground-level and orbital images of the Mars Exploration Rover landing sites, indicate that the glint is bright enough that it may be insulating material from inside the heat shield. The back shell and parachute were out of sight behind a ridge from Pathfinder's ground view. One of the three bright features, identified as heat shield debris, was also identified during the Pathfinder mission.

    [figures removed for brevity, see original site: Annotated Version and Unannotated Version] Topographic Map of Landing Site Region: Portions of the HiRISE image are overlaid onto color-coded topographic maps constructed by the U.S. Geological Survey from stereo images acquired by the Imager for Mars Pathfinder on the lander. The white feature at the center is the Pathfinder lander. The scales on the x and y axes are in meters, with the lander as the zero point. The color code for elevation relative to the lander is different in the left and right images, and shown in meters underneath each image. The correspondence between the overhead view revealed by HiRISE and the positions of topographic features inferred almost a decade ago from Pathfinder's horizontal view of the landscape is striking. The close-up on the right complements panoramas taken by the lander's camera, including the accompanying composite version showing the Sojourner rover at various locations it reached during the mission.

    [figure removed for brevity, see original site] Mars Pathfinder Gallery Panorama: This version of the Gallery Panorama taken with the lander's Imager for Mars Pathfinder camera shows many of the locations where the mission's Sojourner rover ended a Martian day during the 12-week mission. (There was only one Sojourner. The image is a composite.) One annotation indicates the last known position of Sojourner, near the rock 'Chimp,' at the time of the final data transmission from the lander. The location labeled 'Sojourner?' has been tentatively identified as the current position of the rover based on comparison of the ground-level view with the Dec. 21, 2006, image from NASA's Mars Reconnaissance Orbiter. At the proposed current location of the rover, a feature can be discerned in the 2006 orbital image that is about the right size for Sojourner and wasn't present when the Gallery Panorama was taken. Some rocks and other features that can be identified in the orbiter's high-resolution view are labeled in this ground-level view.

    [figure removed for brevity, see original site] Topographic Perspective of Landing Site Region: This is a perspective view based on the topographic map and artificial color derived from Pathfinder and other data. The vertical scale is exaggerated by a factor of three, compared with horizontal dimensions. The white feature at center is the Pathfinder lander. It appears flat because the topographic map derived from the Imager for Mars Pathfinder data did not include the spacecraft itself.

  10. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.

  11. Evaluation of solar angle variation over digital processing of LANDSAT imagery. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.

    1984-01-01

    The effects of the seasonal variation of illumination over digital processing of LANDSAT images are evaluated. Original images are transformed by means of digital filtering to enhance their spatial features. The resulting images are used to obtain an unsupervised classification of relief units. After defining relief classes, which are supposed to be spectrally different, topographic variables (declivity, altitude, relief range and slope length) are used to identify the true relief units existing on the ground. The samples are also clustered by means of an unsupervised classification option. The results obtained for each LANDSAT overpass are compared. Digital processing is highly affected by illumination geometry. There is no correspondence between relief units as defined by spectral features and those resulting from topographic features.

  12. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  13. Classifying brain metastases by their primary site of origin using a radiomics approach based on texture analysis: a feasibility study.

    PubMed

    Ortiz-Ramón, Rafael; Larroza, Andrés; Ruiz-España, Silvia; Arana, Estanislao; Moratal, David

    2018-05-14

    To examine the capability of MRI texture analysis to differentiate the primary site of origin of brain metastases following a radiomics approach. Sixty-seven untreated brain metastases (BM) were found in 3D T1-weighted MRI of 38 patients with cancer: 27 from lung cancer, 23 from melanoma and 17 from breast cancer. These lesions were segmented in 2D and 3D to compare the discriminative power of 2D and 3D texture features. The images were quantized using different number of gray-levels to test the influence of quantization. Forty-three rotation-invariant texture features were examined. Feature selection and random forest classification were implemented within a nested cross-validation structure. Classification was evaluated with the area under receiver operating characteristic curve (AUC) considering two strategies: multiclass and one-versus-one. In the multiclass approach, 3D texture features were more discriminative than 2D features. The best results were achieved for images quantized with 32 gray-levels (AUC = 0.873 ± 0.064) using the top four features provided by the feature selection method based on the p-value. In the one-versus-one approach, high accuracy was obtained when differentiating lung cancer BM from breast cancer BM (four features, AUC = 0.963 ± 0.054) and melanoma BM (eight features, AUC = 0.936 ± 0.070) using the optimal dataset (3D features, 32 gray-levels). Classification of breast cancer and melanoma BM was unsatisfactory (AUC = 0.607 ± 0.180). Volumetric MRI texture features can be useful to differentiate brain metastases from different primary cancers after quantizing the images with the proper number of gray-levels. • Texture analysis is a promising source of biomarkers for classifying brain neoplasms. • MRI texture features of brain metastases could help identifying the primary cancer. • Volumetric texture features are more discriminative than traditional 2D texture features.
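
    A tiny sketch of the gray-level quantization step noted above (e.g., 32 levels) that precedes texture-matrix computation; the uniform re-binning and the synthetic ROI are only illustrations of the idea.

```python
# Tiny sketch of gray-level quantization (e.g., 32 levels) applied to an ROI
# before computing texture matrices; the uniform re-binning is an illustration.
import numpy as np

def quantize(img, levels=32):
    """Uniformly re-bin an image into `levels` gray levels (0 .. levels-1)."""
    lo, hi = float(img.min()), float(img.max())
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(np.int32)
    return np.clip(q, 0, levels - 1)

rng = np.random.default_rng(0)
roi = rng.normal(300.0, 60.0, (24, 24, 24))      # stand-in 3D MRI lesion ROI
print("unique levels after quantization:", len(np.unique(quantize(roi, 32))))
```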

  14. A parametric texture model based on deep convolutional features closely matches texture appearance for humans.

    PubMed

    Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias

    2017-10-01

    Our visual environment is full of texture-"stuff" like cloth, bark, or gravel as distinct from "things" like dresses, trees, or paths-and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.

  15. Microscopic medical image classification framework via deep learning and shearlet transform.

    PubMed

    Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H

    2016-10-01

    Cancer is the second leading cause of death in US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians to efficiently diagnose cancers in early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and recently, histogram of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and handle the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. Particularly, we apply shearlet transform on images and extract the magnitude and phase of shearlet coefficients. Then we feed shearlet features along with the original images to our CNN consisting of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information to the network can improve the accuracy of detection and generalize better compared to the state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, which is a difficult domain considering the limited medical data available for such analysis.

  16. Lymphoma diagnosis in histopathology using a multi-stage visual learning approach

    NASA Astrophysics Data System (ADS)

    Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.

    2016-03-01

    This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H and E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained from ImageNet data. In total, over 200 resultant visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of the over 200 descriptors, which are then input to a forward stepwise ensemble selection that optimizes a late fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
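
    The forward stepwise ensemble selection described above can be sketched compactly. The snippet below is a minimal illustration rather than the authors' implementation: it assumes each base SVM's logistically normalized validation outputs are already stored as columns of a score matrix, greedily adds whichever model most improves the accuracy of the averaged (late-fusion) score, and stops when no model helps; it is simplified to a binary decision for brevity.

      import numpy as np
      from sklearn.metrics import accuracy_score

      def greedy_ensemble_selection(scores, labels, max_models=20):
          """Forward stepwise (hill-climbing) selection over per-model score columns.

          scores : (n_samples, n_models) logistically normalized outputs in [0, 1]
          labels : (n_samples,) binary labels; returns the selected model indices.
          """
          selected, fused, best_acc = [], np.zeros(scores.shape[0]), 0.0
          for _ in range(max_models):
              accs = []
              for m in range(scores.shape[1]):
                  trial = (fused * len(selected) + scores[:, m]) / (len(selected) + 1)
                  accs.append(accuracy_score(labels, (trial > 0.5).astype(int)))
              m_best = int(np.argmax(accs))
              if selected and accs[m_best] <= best_acc:   # stop when fusion no longer improves
                  break
              selected.append(m_best)
              fused = (fused * (len(selected) - 1) + scores[:, m_best]) / len(selected)
              best_acc = accs[m_best]
          return selected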

  17. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer aided planning for orthodontic treatment requires knowing occlusion of separately scanned dental casts. A visual guided registration is conducted starting by extracting corresponding features in both photographs and 3D scans. To achieve this, dental neck and occlusion surface are firstly extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. Best combination of feature detection and optimization leads to a positioning average error of 1.10 mm and 2.03°.

  18. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain, and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
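
    As a rough sketch of the matching step, the fragment below (using OpenCV; the parameter choices and the distance weighting are illustrative assumptions, not the paper's exact formulation) compares a thumbnail against candidate images using the DC coefficient of the DCT and the Hu moment invariants.

      import cv2
      import numpy as np

      def dc_and_moments(gray):
          """DC component of the 2-D DCT plus the 7 Hu moment invariants."""
          g = np.float32(gray) / 255.0
          g = g[: g.shape[0] // 2 * 2, : g.shape[1] // 2 * 2]   # cv2.dct needs even sizes
          dc = cv2.dct(g)[0, 0]
          hu = cv2.HuMoments(cv2.moments(g)).ravel()
          return dc, hu

      def match_thumbnail(thumb, candidates, w=0.5):
          """Rank candidate (protected) images by a combined DC / Hu-moment distance."""
          dc_q, hu_q = dc_and_moments(thumb)
          dists = [w * abs(dc_q - dc_i) + (1 - w) * np.linalg.norm(hu_q - hu_i)
                   for dc_i, hu_i in (dc_and_moments(c) for c in candidates)]
          return int(np.argmin(dists))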

  19. Enhancement of PET Images

    NASA Astrophysics Data System (ADS)

    Davis, Paul B.; Abidi, Mongi A.

    1989-05-01

    PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, the present PET technology does not provide the necessary image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal here is to enhance these dominant features to allow for automatic computer interpretation and classification of PET images by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown both analytically and through computer simulation that they exhibit consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.

  20. Advanced Techniques for Scene Analysis

    DTIC Science & Technology

    2010-06-01

    robustness prefers a bigger integration window to handle larger motions. The advantage of pyramidal implementation is that, while each motion vector dL...labeled SAR images. Now the previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for...accuracy is traded for generality. 7.3.2 I-RELIEF Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each

  1. Origin of Sinuous Channels on the SW Apron of Ascraeus Mons and the Surrounding Plains, Mars

    NASA Technical Reports Server (NTRS)

    Schierl, Z.; Signorella, J.; Collins, A.; Schwans, B.; de Wet, A. P.; Bleacher, J. E.

    2012-01-01

    Ascraeus Mons is one of three large shield volcanoes located along a NE-SW trending lineament atop the Tharsis Bulge on Mars. Spacecraft images, beginning with Viking in the 1970s, revealed that the SW rift apron of Ascraeus Mons is cut by numerous sinuous channels, many of which originate from large, elongated, bowl-shaped amphitheaters known as the Ascraeus Chasmata. A number of these channels can be traced onto the flatter plains to the east of the rift apron. These features have been interpreted as either fluvial [1] or volcanic [2] in origin. Most recently, it has been shown that one of the longest channels on the Ascraeus rift apron appears to transition into a roofed-over lava channel or lava tube at its distal end, and thus the entire feature is likely of a volcanic origin [2]. In addition, field observations of recent lava flows on Hawaii have shown that lava is capable of producing features such as the complex braided and anastomosing channels and streamlined islands that are observed in the Ascraeus features [2].

  2. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Besides, it benefits computational efficiency, since similarity can be efficiently measured by the Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
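
    The storage and speed argument above comes down to packed binary codes compared by XOR plus popcount. The snippet below is a generic Hamming-distance search over packed codes; it does not reproduce the FSB binarization or the cross-indexing strategy themselves.

      import numpy as np

      def pack_codes(bits):
          """bits: (n, d) array of 0/1 codes -> compact uint8 rows (8 bits per byte)."""
          return np.packbits(bits.astype(np.uint8), axis=1)

      def hamming_search(query_bits, db_packed, topk=5):
          """Indices of the top-k database codes closest to the query in Hamming distance."""
          q = np.packbits(query_bits.astype(np.uint8))
          xor = np.bitwise_xor(db_packed, q)                 # differing bits, byte-packed
          dists = np.unpackbits(xor, axis=1).sum(axis=1)     # popcount per code
          return np.argsort(dists)[:topk]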

  3. An adaptive tensor voting algorithm combined with texture spectrum

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain the adaptive scale parameter of the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm can create more significant and correct structures in the original image in accordance with human visual perception. At the same time, the proposed method can improve the edge extraction quality, efficiently decreasing flocculent regions and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant curve feature threshold procedure, and the resulting image clearly displays the faint crack signals submerged in the complicated background.

  4. Spatial and temporal coherence in perceptual binding

    PubMed Central

    Blake, Randolph; Yang, Yuede

    1997-01-01

    Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701

  5. Remote sensing fusion based on guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhao, Wenfei; Dai, Qinling; Wang, Leiguang

    2015-12-01

    In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show excellent performance of the proposed method.
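
    For readers unfamiliar with the filter itself, a minimal NumPy guided filter (after He et al.) is sketched below, together with one plausible way of injecting panchromatic detail into a multispectral band; the fusion rule shown in the comment is an illustrative assumption, not necessarily the scheme used in this paper.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(I, p, r=8, eps=1e-3):
          """Edge-preserving guided filter: smooth p using the guidance image I."""
          box = lambda x: uniform_filter(x, size=2 * r + 1)
          mean_I, mean_p = box(I), box(p)
          var_I = box(I * I) - mean_I ** 2
          cov_Ip = box(I * p) - mean_I * mean_p
          a = cov_Ip / (var_I + eps)
          b = mean_p - a * mean_I
          return box(a) * I + box(b)

      # One illustrative fusion rule: transfer panchromatic detail into an MS band.
      # pan_base = guided_filter(pan, pan)        # edge-aware low-pass of the pan image
      # fused_band = ms_band + (pan - pan_base)   # MS band plus injected detail layer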

  6. Chinese character recognition based on Gabor feature extraction and CNN

    NASA Astrophysics Data System (ADS)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition remains difficult. In order to solve this problem, this paper proposes a method of printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters at different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of the convolution are the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by Dropout. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize Chinese characters.
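
    A minimal version of the Gabor stage is easy to reproduce with OpenCV; the kernel parameters below are illustrative assumptions rather than the values used in the paper.

      import cv2
      import numpy as np

      def gabor_feature_maps(gray, n_orient=8, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
          """Convolve a character image with Gabor kernels at n_orient orientations."""
          maps = []
          for k in range(n_orient):
              theta = k * np.pi / n_orient
              kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
              maps.append(cv2.filter2D(np.float32(gray), cv2.CV_32F, kern))
          return np.stack(maps, axis=0)   # (8, H, W), fed to the CNN alongside the image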

  7. Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A study area near Ribeirao Preto in Sao Paulo state was selected, with a predominance of sugar cane. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites in order to acquire the necessary parameters. Two groups of four channels were selected from 12 channels using JM-distance and entropy criterions. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that in spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signature. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
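
    For Gaussian class models, the Jeffries-Matusita (JM) distance used here follows directly from the Bhattacharyya distance; the short sketch below shows that relationship in generic form (it is not the study's original code).

      import numpy as np

      def bhattacharyya(mu1, cov1, mu2, cov2):
          """Bhattacharyya distance between two Gaussian class models."""
          cov = 0.5 * (cov1 + cov2)
          dmu = (mu1 - mu2).reshape(-1, 1)
          term1 = 0.125 * float(dmu.T @ np.linalg.inv(cov) @ dmu)
          term2 = 0.5 * np.log(np.linalg.det(cov) /
                               np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
          return term1 + term2

      def jm_distance(mu1, cov1, mu2, cov2):
          """Jeffries-Matusita distance; saturates at 2 for well-separated classes."""
          return 2.0 * (1.0 - np.exp(-bhattacharyya(mu1, cov1, mu2, cov2)))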

  8. Color heterogeneity of the surface of Phobos - Relationships to geologic features and comparison to meteorite analogs

    NASA Technical Reports Server (NTRS)

    Murchie, Scott L.; Britt, Daniel T.; Head, James W.; Pratt, Stephen F.; Fisher, Paul C.

    1991-01-01

    Color ratio images created from multispectral observations of Phobos are analyzed in order to characterize the spectral properties of Phobos' surface, to assess their spatial distributions and relationships with geologic features, and to compare Phobos' surface materials with possible meteorite analogs. Data calibration and processing is briefly discussed, and the observed spectral properties of Phobos and their lateral variations are examined. Attention is then given to the color properties of different types of impact craters, the origin of lateral variations in surface color, the relation between the spatial distribution of color properties and independently identifiable geologic features, and the relevance of color variation spatial distribution to the origin of the grooves.

  9. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    NASA Astrophysics Data System (ADS)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  10. Rapid Changes in Mercury's Sodium Exosphere

    NASA Technical Reports Server (NTRS)

    Potter, Drew

    2000-01-01

    Sodium in the atmosphere of Mercury can be detected by sunlight scattered in the D1 and D2 resonance lines. Images of the sodium emission show that the sodium density changes from day to day and is often concentrated in regions at high or mid latitudes. Drew Potter (NASA/JSC) and Tom Morgan (SWRI) suggested that sputtering by magnetospheric particles was the origin of the sodium. A problem with this is that the magnetic field of Mercury is strong enough that it is believed to shield the surface from solar particles much of the time, although particle precipitation at the magnetospheric cusps could deposit particles to the surface at high latitudes. Ann Sprague (UA/LPL) noted that the "spots" of sodium emission tended to coincide with major geologic features, such as the Caloris Basin. She proposed that the sodium is released from sodium-rich surface rocks that are associated with these features; however, some spots have appeared where there are no obvious geologic features. Some of the difficulty in ascribing a source for the sodium arises from the effect of terrestrial atmospheric blurring of the image. It is hard to tell exactly where the sodium emission originates after the atmosphere has blurred the image. Potter, Killen (SWRI), and Morgan recently developed a technique for correcting sodium images for atmospheric blurring, using images made with a large-area image slicer. They applied this technique to a series of Mercury sodium observations made in November 1997 at the McMath-Pierce Solar Telescope. Their technique for producing images from the spectroscopic data provides images of both the sodium emission and of the sunlight reflected from the surface.

  11. A detail enhancement and dynamic range adjustment algorithm for high dynamic range images

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua

    2014-08-01

    Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast. Moreover, these images are difficult to reproduce on low dynamic range display media. If more information is to be conveyed when these images are displayed on PCs, some specific transforms are needed, such as compressing the dynamic range, enhancing the portions with little difference in original contrast, and highlighting texture details while preserving the parts of large contrast. To this end, a multi-scale guided filter enhancement algorithm, derived from the single-scale guided filter and based on the analysis of a non-physical model, is proposed in this paper. Firstly, this algorithm decomposes the original HDR images into a base image and detail images of different scales, and then it adaptively selects a transform function which acts on the enhanced detail images and original images. By comparing the treatment effects on HDR images and low dynamic range (LDR) images of different scene features, the results show that this algorithm, while maintaining the hierarchy and texture details of images, not only improves the contrast and enhances the details of images but also adjusts the dynamic range well. Thus, it is well suited to human observation or analytical processing by machines.
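
    The base/detail decomposition at the heart of such algorithms can be sketched as follows; a Gaussian smoother stands in here for the paper's guided filter, and the recomposition weights in the comment are illustrative only.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_decompose(img, sigmas=(2.0, 8.0, 32.0)):
          """Split an HDR image into a base layer plus detail layers at several scales."""
          details, current = [], np.asarray(img, dtype=np.float64)
          for s in sigmas:
              smooth = gaussian_filter(current, sigma=s)
              details.append(current - smooth)      # detail lost at this scale
              current = smooth
          return current, details                   # base image, list of detail images

      # Illustrative recomposition: compress the base, boost the details.
      # base, details = multiscale_decompose(hdr)
      # out = 0.6 * base + sum(1.5 * d for d in details)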

  12. Multi-layer cube sampling for liver boundary detection in PET-CT images.

    PubMed

    Liu, Xinxin; Yang, Jian; Song, Shuang; Song, Hong; Ai, Danni; Zhu, Jianjun; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Liver metabolic information is considered a crucial marker for the diagnosis of fever of unknown origin, and liver recognition is the basis for automatic extraction of this metabolic information. However, the poor quality of PET and CT images is a challenge for information extraction and target recognition in PET-CT images. Existing detection methods cannot meet the requirements of liver recognition in PET-CT images, which is the key problem in big data analysis of PET-CT images. A novel texture feature descriptor called multi-layer cube sampling (MLCS) is developed for liver boundary detection in low-dose CT and PET images. The cube sampling feature, which uses a bi-centric voxel strategy, is proposed for extracting more texture information. Neighbour voxels are divided into three regions by the centre voxel and the reference voxel in the histogram, and the voxel distribution information is statistically encoded as a texture feature. Multi-layer texture features are also used to improve the ability and adaptability of target recognition in volume data. The proposed feature is tested on PET and CT images for liver boundary detection. For the liver in the volume data, the mean detection rate (DR) and mean error rate (ER) reached 95.15 and 7.81% in low-quality PET images, and 83.10 and 21.08% in low-contrast CT images. The experimental results demonstrated that the proposed method is effective and robust for liver boundary detection.

  13. Propeller Belts of Saturn

    NASA Image and Video Library

    2017-05-10

    This view from NASA's Cassini spacecraft is the sharpest ever taken of belts of the features called propellers in the middle part of Saturn's A ring. The propellers are the small, bright features that look like double dashes, visible on both sides of the wave pattern that crosses the image diagonally from top to bottom. The original discovery of propellers in this region in Saturn's rings was made using several images taken from very close to the rings during Cassini's 2004 arrival at Saturn. Those discovery images were of low resolution and were difficult to interpret, and there were few clues as to how the small propellers seen in those images were related to the larger propellers Cassini observed later in the mission. This image, for the first time, shows swarms of propellers of a wide range of sizes, putting the ones Cassini observed in its Saturn arrival images in context. Scientists will use this information to derive a "particle size distribution" for propeller moons, which is an important clue to their origins. The image was taken using the Cassini spacecraft's narrow-angle camera on April 19. The view has an image scale of 0.24 mile (385 meters) per pixel and was taken at a sun-ring-spacecraft angle, or phase angle, of 108 degrees. The view looks toward a point approximately 80,000 miles (129,000 kilometers) from Saturn's center. https://photojournal.jpl.nasa.gov/catalog/PIA21448

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.

    Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.

  15. Salient object detection based on multi-scale contrast.

    PubMed

    Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long

    2018-05-01

    Due to the development of deep learning networks, salient object detection based on deep learning networks, which are used to extract the features, has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks to extract the features. In deep learning networks, however, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while mitigating the errors caused by the increased depth. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine the features at the pixel level with a multi-scale feature correction method to avoid feature errors introduced when the image is simplified at the region level. The final fully connected layer not only integrates multi-scale and multi-level features but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  17. The Origin of Low Altitude ENA Emissions from Storms in 2000-2005 as Observed by IMAGE/MENA

    NASA Astrophysics Data System (ADS)

    Perez, J. D.; Sheehan, M. M.; Jahn, J.; Mackler, D.; Pollock, C. J.

    2013-12-01

    Low Altitude Emissions (LAEs) are prevalent features of Energetic Neutral Atom (ENA) images of the inner magnetosphere. It is believed that they are created by precipitating ions that reach altitudes near 500 km and then charge exchange with oxygen atoms, subsequently escaping to be observed by satellite borne ENA imagers. In this study, LAEs from the MENA instrument onboard the IMAGE satellite are studied in order to learn about the origin of the precipitating ions. Using the Tsyganenko 05 magnetic field model, the bright pixels capturing the LAEs are mapped to the equator. The LAEs are believed to originate from ions near their mirroring point, i.e., with pitch angles near 90°. Therefore the angle between the line-of-sight and the magnetic field at the point of origin is used to further constrain possible magnetospheric regions that are the origin of the ENAs. By observing the time dependence of the strength and location of the LAEs during geomagnetic storms in the years 2000-2005, the dynamics of the emptying and filling of the loss cone by injected particles is observed. Thus, information regarding the coupling between the inner magnetosphere and the ionosphere is obtained.

  18. Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM

    NASA Astrophysics Data System (ADS)

    Shima, Yoshihiro

    2018-04-01

    Neural networks are a powerful means of classifying object images. The proposed image category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor. Instead of being trained from scratch, Alex-Net pre-trained on the large-scale object-image dataset ImageNet is used. An SVM is used as the trainable classifier. The feature vectors are passed from Alex-Net to the SVM. The STL-10 dataset is used for the object images. The number of classes is ten. Training and test samples are clearly split. The SVM is trained on STL-10 object images with data augmentation. We use a pattern transformation method based on the cosine function, and also apply other augmentation methods such as rotation, skewing and elastic distortion. By using the cosine function, the original patterns were left-justified, right-justified, top-justified, or bottom-justified; patterns were also center-justified and enlarged. The test error rate is decreased by 0.435 percentage points from 16.055% by augmentation with the cosine transformation. Error rates increased with the other augmentation methods (rotation, skewing and elastic distortion) compared with no augmentation. The number of augmented samples is 30 times that of the original STL-10 5K training samples. The experimental test error rate for the 8K STL-10 test object images was 15.620%, which shows that image augmentation is effective for image category classification.
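
    A minimal sketch of the feature-extractor-plus-SVM pipeline is given below using torchvision's pre-trained AlexNet and scikit-learn; the layer indexing and weights API reflect recent torchvision versions (older releases use pretrained=True), and the augmentation step is omitted.

      import torch
      import torchvision.models as models
      import torchvision.transforms as T
      from sklearn.svm import LinearSVC

      # Pre-trained AlexNet as a fixed feature extractor (first fully connected layer, 4096-D).
      alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
      preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                              T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

      def fc6_features(pil_images):
          xs = torch.stack([preprocess(im) for im in pil_images])
          with torch.no_grad():
              f = torch.flatten(alexnet.avgpool(alexnet.features(xs)), 1)
              # classifier[0:3] = Dropout -> Linear(9216, 4096) -> ReLU, i.e. the fc6 output
              return alexnet.classifier[2](alexnet.classifier[1](alexnet.classifier[0](f))).numpy()

      # svm = LinearSVC().fit(fc6_features(train_images), train_labels)
      # accuracy = svm.score(fc6_features(test_images), test_labels)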

  19. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi has promising potential but has so far received little study. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visual features. Therefore, this method is effective for the automated recognition of prostatic calculi.
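
    In generic scikit-learn terms, the PCA-SVM classification stage amounts to a short pipeline over the texture-feature vectors; the sketch below is an assumed, simplified stand-in rather than the authors' implementation.

      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # One row of texture features per candidate region -> PCA -> SVM classifier.
      clf = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
      # clf.fit(X_train, y_train); test_accuracy = clf.score(X_test, y_test)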

  20. Clustering approaches to feature change detection

    NASA Astrophysics Data System (ADS)

    G-Michael, Tesfaye; Gunzburger, Max; Peterson, Janet

    2018-05-01

    The automated detection of changes occurring between multi-temporal images is of significant importance in a wide range of medical, environmental, safety, as well as many other settings. The usage of k-means clustering is explored as a means for detecting objects added to a scene. The silhouette score for the clustering is used to define the optimal number of clusters that should be used. For simple images having a limited number of colors, new objects can be detected by examining the change between the optimal number of clusters for the original and modified images. For more complex images, new objects may need to be identified by examining the relative areas covered by corresponding clusters in the original and modified images. Which method is preferable depends on the composition and range of colors present in the images. In addition to describing the clustering and change detection methodology of our proposed approach, we provide some simple illustrations of its application.
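
    A minimal version of the clustering step is shown below with scikit-learn: the silhouette score selects the number of color clusters, and the closing comment notes the two change-detection comparisons described above. Parameter values are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      def optimal_kmeans(pixels, k_range=range(2, 9), seed=0):
          """Pick the number of color clusters that maximizes the silhouette score.

          pixels : (n_pixels, n_channels) array of image colors.
          """
          best_k, best_s, best_model = None, -1.0, None
          for k in k_range:
              km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
              s = silhouette_score(pixels, km.labels_,
                                   sample_size=min(5000, len(pixels)), random_state=seed)
              if s > best_s:
                  best_k, best_s, best_model = k, s, km
          return best_k, best_model

      # Simple scenes: flag a new object when the optimal k of the modified image
      # exceeds that of the original. Complex scenes: compare the relative pixel
      # counts (areas) of corresponding clusters between the two images instead.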

  1. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    PubMed

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) has been previously developed for noise removal in SEM images. This noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands, CHS is not widely utilized, though it has several advantages for SEM. For example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wider application of the CHS method in microscopy, the features of CHS, which have not been well clarified until now, are evaluated. Applying the results of this feature evaluation, the cursor width (CW), which is the sole processing parameter of CHS, is determined more properly using the standard deviation of the noise Nσ. In addition, the disadvantage that CHS cannot remove noise with excessively large amplitude is mitigated by a postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.

  2. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, for traditional facial recognition algorithms, the facial images are reshaped to a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  3. Functional imaging of the semantic system: retrieval of sensory-experienced and verbally learned knowledge.

    PubMed

    Noppeney, Uta; Price, Cathy J

    2003-01-01

    This paper considers how functional neuro-imaging can be used to investigate the organization of the semantic system and the limitations associated with this technique. The majority of the functional imaging studies of the semantic system have looked for divisions by varying stimulus category. These studies have led to divergent results and no clear anatomical hypotheses have emerged to account for the dissociations seen in behavioral studies. Only a few functional imaging studies have used task as a variable to differentiate the neural correlates of semantic features more directly. We extend these findings by presenting a new study that contrasts tasks that differentially weight sensory (color and taste) and verbally learned (origin) semantic features. Irrespective of the type of semantic feature retrieved, a common semantic system was activated as demonstrated in many previous studies. In addition, the retrieval of verbally learned, but not sensory-experienced, features enhanced activation in medial and lateral posterior parietal areas. We attribute these "verbally learned" effects to differences in retrieval strategy and conclude that evidence for segregation of semantic features at an anatomical level remains weak. We believe that functional imaging has the potential to increase our understanding of the neuronal infrastructure that sustains semantic processing but progress may require multiple experiments until a consistent explanatory framework emerges.

  4. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet transform based de-noising filter is used to extract the sensor pattern noise; then the variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding window operations are applied to the images to divide them into different sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation may cause some forgeries by adding or removing some information on the digital images. In order to detect these types of forgeries such as region duplication, we present an effective algorithm based on fixed-size block computation and discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then wavelet transform is applied for dimension reduction. Each block is processed by Fourier Transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm presents computational efficiency due to fixed-size circle block architecture.
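
    A stripped-down version of the block-matching pipeline is sketched below; it keeps only the fixed-size blocks, a DWT approximation sub-band as the feature, and the lexicographic sort, omitting the Fourier/circle-region features, so the block size and threshold are illustrative assumptions.

      import numpy as np
      import pywt

      def detect_duplicates(gray, block=16, tol=1e-3):
          """Flag pairs of fixed-size blocks whose DWT approximation features match."""
          h, w = gray.shape
          feats, coords = [], []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  ll, _ = pywt.dwt2(gray[y:y + block, x:x + block].astype(float), "haar")
                  feats.append(ll.ravel())
                  coords.append((y, x))
          feats = np.array(feats)
          order = np.lexsort(feats.T[::-1])          # lexicographic sort of feature rows
          pairs = []
          for i, j in zip(order[:-1], order[1:]):    # near-identical rows become adjacent
              if np.linalg.norm(feats[i] - feats[j]) < tol:
                  pairs.append((coords[i], coords[j]))
          return pairs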

  6. Titan's topography as a clue to geologic processes and landscape evolution

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.

    2012-12-01

    Cassini has revealed a diversity of surface features on Titan rivaled by few bodies in the Solar System. Some of these features are readily identified: dunes, channels, lakes, seas, fresh impact craters, and mountains. Others are enigmatic and in some cases have sparked debate about their mode of origin. Given the limited resolution of the Cassini images, at best 300 m for synthetic aperture RADAR (SAR) images, it can be difficult to identify details that might confirm a particular mode of origin. Supplementing the images with topographic information provides an important and sometimes crucial clue to the origin and evolution of landforms. Topographic profiles from altimetry and SARTopo analysis of the images can shed light on simpler features (e.g., dunes) and led to the surprising conclusion that Titan's largest feature, Xanadu, is not elevated as had been supposed. For more complex structures, digital topographic models (DTMs) provide a full three-dimensional view. About 10% of Titan's surface has been imaged in stereo by RADAR, and we have produced DTMs of about 2% by analyzing these stereopairs. Analysis of the results within the Cassini RADAR team has shed light on a number of geologic problems:
    * Some putative volcanic features (e.g., the supposed dome Ganesa Macula and various diffuse surface flows) have been shown to lack the expected relief, greatly weakening the case for their volcanic origin.
    * Conversely, flows in Hotei Regio have been shown to tower over nearby fluvial channels, and those near Sotra Facula are associated with multiple edifices and caldera-like pits, strengthening the case for a volcanic origin.
    * Depths of the handful of definite impact craters measured so far range from Ganymede-like to nearly zero, and are statistically consistent with a process such as eolian deposition that would steadily reduce the crater depth rather than a process such as surface erosion that would tend to leave craters only partially filled.
    * Clustering of the small north-polar lakes at a few discrete levels, all of which are hundreds of meters above the major seas, suggests that these bodies of liquid are connected locally but not (over relevant timescales) regionally by subsurface flow.
    * Evidence for topographic "benches" at multiple levels around the seas suggests that the liquid level has fluctuated over time, perhaps as a result of inter-hemispheric transport of volatiles over multi-seasonal timescales.
    These examples come primarily from Titan's northern hemisphere and equatorial zone. Cassini's extended mission to date has yielded extensive coverage of the southern hemisphere that we have recently integrated into a global control network, allowing us to begin producing DTMs of multiple southern hemisphere sites with consistent absolute elevations. Of particular interest are apparent basins, for the most part empty of surface liquid, near the South Pole. Are the basin floors or possible shoreline features at consistent elevations? How do the depths and absolute elevations compare to Ontario Lacus and the other small lakes (including transient ones) in the south, and to the lakes and seas of the northern hemisphere? Topomapping now under way will help address these and other questions about the evolution of Titan's southern hemisphere and its volatile distribution over time.

  7. Illumination invariant feature point matching for high-resolution planetary remote sensing images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zeng, Hai; Hu, Han

    2018-03-01

    Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.

  8. Sorted Index Numbers for Privacy Preserving Face Recognition

    NASA Astrophysics Data System (ADS)

    Wang, Yongjin; Hatzinakos, Dimitrios

    2009-12-01

    This paper presents a novel approach for changeable and privacy preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since it is impossible to recover any of the exact values of the original features, the transformation from original features to the SIN vectors is noninvertible. To address the irrevocable nature of biometric signals whilst obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic data set, which contains images from several well-known face databases. Extensive experimentation shows that the proposed solution may improve the recognition accuracy.
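
    The core idea, ordering rather than values, is easy to sketch: a user-specific random projection followed by argsort yields a changeable, non-invertible template. The matcher below (fraction of agreeing positions) is an illustrative stand-in, not necessarily the comparison used in the paper.

      import numpy as np

      def sin_template(feature_vec, key=None, proj_dim=None):
          """Changeable template: optional keyed random projection, then sorted index numbers."""
          x = np.asarray(feature_vec, dtype=float)
          if proj_dim is not None:
              rng = np.random.default_rng(key)           # user-specific key seeds the projection
              x = rng.standard_normal((proj_dim, x.size)) @ x
          return np.argsort(x)                           # only the ordering is retained

      def sin_similarity(t1, t2):
          """Fraction of positions on which two SIN templates agree."""
          return float(np.mean(np.asarray(t1) == np.asarray(t2)))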

  9. Imaging features of non-traumatic vascular liver emergencies.

    PubMed

    Onur, Mehmet Ruhi; Karaosmanoglu, Ali Devrim; Akca, Onur; Ocal, Osman; Akpinar, Erhan; Karcaaltincaba, Musturay

    2017-05-01

    Acute non-traumatic liver disorders can originate from abnormalities of the hepatic artery, portal vein and hepatic veins. Ultrasonography and computed tomography can be used in non-traumatic acute vascular liver disorders according to patient status, indication and appropriateness of imaging modality. Awareness of the imaging findings, in the appropriate clinical context, is crucial for prompt and correct diagnosis, as delay may cause severe consequences with significant morbidity and mortality. This review article will discuss imaging algorithms, and multimodality imaging findings for suspected acute vascular disorders of the liver.

  10. Optical design and testing: introduction.

    PubMed

    Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin

    2014-10-10

    Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.

  11. Camouflaged target detection based on polarized spectral features

    NASA Astrophysics Data System (ADS)

    Tan, Jian; Zhang, Junping; Zou, Bin

    2016-05-01

    The polarized hyperspectral images (PHSI) include polarization, spectral, spatial and radiant features, which provide more information about objects and scenes than traditional intensity or spectral images. Polarization can suppress the background and highlight the object, offering high potential to improve camouflaged target detection, so the polarized hyperspectral imaging technique has aroused extensive interest in the last few years. At present, the detection methods are still not very mature, and most are rooted in hyperspectral image detection. Before these algorithms are used, the Stokes vector is first applied to process the original four-dimensional polarized hyperspectral data. However, when the data are large and complex, the amount of calculation and the error will increase. In this paper, a tensor is applied to reconstruct the original four-dimensional data into new three-dimensional data, and then constrained energy minimization (CEM) is used to process the new data; this adds the polarization information to construct the polarized spectral filter operator and takes full advantage of the spectral and polarization information. This approach handles the original data without extracting the Stokes vector, greatly reducing computation and error. The experimental results also show that the proposed method is more suitable for target detection in PHSI.
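
    The CEM filter itself is compact; the sketch below applies it to a matrix whose columns are per-pixel spectro-polarimetric vectors (for example, the tensor-reconstructed data flattened over the spatial dimensions). It is a generic formulation, not the paper's code.

      import numpy as np

      def cem_detector(X, target):
          """Constrained energy minimization detector.

          X      : (L, N) matrix, one length-L spectral (or spectro-polarimetric) vector per pixel
          target : (L,) desired target signature d
          Returns (N,) detection scores w^T x with w = R^-1 d / (d^T R^-1 d).
          """
          X = np.asarray(X, dtype=float)
          d = np.asarray(target, dtype=float).reshape(-1, 1)
          R = (X @ X.T) / X.shape[1]                 # sample correlation matrix
          Rinv_d = np.linalg.solve(R, d)
          w = Rinv_d / float(d.T @ Rinv_d)
          return (w.T @ X).ravel()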

  12. MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J., E-mail: bje@mayo.edu

    Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78–0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding, at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
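
    A small example of the feature-plus-classifier pairing is given below with scikit-image and scikit-learn; the property list and GLCM parameters are illustrative (scikit-image does not expose every feature named above, e.g. run length features), so this is a sketch rather than the study's pipeline.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops   # 'greyco...' in skimage < 0.19
      from sklearn.svm import SVC

      def glcm_features(roi_u8, distances=(1,), angles=(0, np.pi / 2)):
          """Co-occurrence texture features (correlation, energy, ...) from a grayscale ROI."""
          glcm = graycomatrix(roi_u8, distances=distances, angles=angles,
                              levels=256, symmetric=True, normed=True)
          props = ["correlation", "energy", "homogeneity", "contrast"]
          return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

      # X = np.array([glcm_features(roi) for roi in tumor_rois])
      # clf = SVC(kernel="rbf", probability=True).fit(X, methylation_labels)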

  13. Improvement to the scanning electron microscope image adaptive Canny optimization colorization by pseudo-mapping.

    PubMed

    Lo, T Y; Sim, K S; Tso, C P; Nia, M E

    2014-01-01

    An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, is that the grayscale markings are temporarily mapped to a set of pre-defined pseudo-color map as a means to instill color information for grayscale colors in chrominance channels. This allows the presence of grayscale markings to be identified; hence optimization colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of a selected region from the original image within certain extent. © 2014 Wiley Periodicals, Inc.

  14. Nature, distribution, and origin of Titan’s Undifferentiated Plains

    USGS Publications Warehouse

    Lopes, Rosaly; Malaska, M. J.; Solomonidou, A.; Le Gall, A.; Janssen, M.A.; Neish, Catherine D.; Turtle, E.P.; Birch, S. P. D.; Hayes, A.G.; Radebaugh, J.; Coustenis, A.; Schoenfeld, A.; Stiles, B.W.; Kirk, Randolph L.; Mitchell, K.L.; Stofan, E.R.; Lawrence, K. J.

    2016-01-01

    The Undifferentiated Plains on Titan, first mapped by Lopes et al. (Lopes, R.M.C. et al., 2010. Icarus, 205, 540–588), are vast expanses of terrains that appear radar-dark and fairly uniform in Cassini Synthetic Aperture Radar (SAR) images. As a result, these terrains are often referred to as “blandlands”. While the interpretation of several other geologic units on Titan – such as dunes, lakes, and well-preserved impact craters – has been relatively straightforward, the origin of the Undifferentiated Plains has remained elusive. SAR images show that these “blandlands” are mostly found at mid-latitudes and appear relatively featureless at radar wavelengths, with no major topographic features. Their gradational boundaries and paucity of recognizable features in SAR data make geologic interpretation particularly challenging. We have mapped the distribution of these terrains using SAR swaths up to flyby T92 (July 2013), which cover >50% of Titan’s surface. We compared SAR images with other data sets where available, including topography derived from the SARTopo method and stereo DEMs, the response from RADAR radiometry, hyperspectral imaging data from Cassini’s Visual and Infrared Mapping Spectrometer (VIMS), and near infrared imaging from the Imaging Science Subsystem (ISS). We examined and evaluated different formation mechanisms, including (i) cryovolcanic origin, consisting of overlapping flows of low relief or (ii) sedimentary origins, resulting from fluvial/lacustrine or aeolian deposition, or accumulation of photolysis products created in the atmosphere. Our analysis indicates that the Undifferentiated Plains unit is consistent with a composition predominantly containing organic rather than icy materials and formed by depositional and/or sedimentary processes. We conclude that aeolian processes played a major part in the formation of the Undifferentiated Plains; however, other processes (fluvial, deposition of photolysis products) are likely to have contributed, possibly in differing proportions depending on location.

  15. A novel biomedical image indexing and retrieval system via deep preference learning.

    PubMed

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer aided diagnosis (CAD) systems in healthcare. Our proposed system shows an outstanding indexing ability and high efficiency for biomedical image retrieval applications and it can be used to collect and annotate the high-resolution images in a biomedical database for further biomedical image research and applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Scanning technology selection impacts acceptability and usefulness of image-rich content.

    PubMed

    Alpi, Kristine M; Brown, James C; Neel, Jennifer A; Grindem, Carol B; Linder, Keith E; Harper, James B

    2016-01-01

    Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions as provided by interlibrary loan or document delivery were viewed as acceptable or useful by radiologists or pathologists. Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With original PDF controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability (n=405), 41% were said to be unacceptable, 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. Hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities.

  17. Image correlation microscopy for uniform illumination.

    PubMed

    Gaborski, T R; Sealander, M N; Ehrenberg, M; Waugh, R E; McGrath, J L

    2010-01-01

    Image cross-correlation microscopy is a technique that quantifies the motion of fluorescent features in an image by measuring the temporal autocorrelation function decay in a time-lapse image sequence. Image cross-correlation microscopy has traditionally employed laser-scanning microscopes because the technique emerged as an extension of laser-based fluorescence correlation spectroscopy. In this work, we show that image correlation can also be used to measure fluorescence dynamics in uniform illumination or wide-field imaging systems and we call our new approach uniform illumination image correlation microscopy. Wide-field microscopy is not only a simpler, less expensive imaging modality, but it offers the capability of greater temporal resolution over laser-scanning systems. In traditional laser-scanning image cross-correlation microscopy, lateral mobility is calculated from the temporal de-correlation of an image, where the characteristic length is the illuminating laser beam width. In wide-field microscopy, the diffusion length is defined by the feature size using the spatial autocorrelation function. Correlation function decay in time occurs as an object diffuses from its original position. We show that theoretical and simulated comparisons between Gaussian and uniform features indicate the temporal autocorrelation function depends strongly on particle size and not particle shape. In this report, we establish the relationships between the spatial autocorrelation function feature size, temporal autocorrelation function characteristic time and the diffusion coefficient for uniform illumination image correlation microscopy using analytical, Monte Carlo and experimental validation with particle tracking algorithms. Additionally, we demonstrate uniform illumination image correlation microscopy analysis of adhesion molecule domain aggregation and diffusion on the surface of human neutrophils.
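
    The analysis flow described above can be illustrated with a minimal Python sketch that computes the temporal autocorrelation of a wide-field time-lapse stack; the normalization and the diffusion relation quoted in the comments are common image-correlation conventions and only assumptions about the authors' exact formulation.

      # Hedged sketch: temporal autocorrelation of a wide-field time-lapse stack.
      # `stack` is a (T, H, W) array; the normalization below (fluctuations about
      # the time mean) is one common choice, not necessarily the paper's exact form.
      import numpy as np

      def temporal_acf(stack, max_lag):
          stack = stack.astype(float)
          mean_img = stack.mean(axis=0)
          delta = stack - mean_img                      # intensity fluctuations
          norm = (mean_img ** 2).mean()
          acf = []
          for tau in range(1, max_lag + 1):
              g = (delta[:-tau] * delta[tau:]).mean() / norm
              acf.append(g)
          return np.array(acf)

      stack = np.random.default_rng(0).random((200, 64, 64))
      print(temporal_acf(stack, max_lag=20)[:5])

      # The characteristic decay time tau_c relates to the diffusion coefficient D
      # through the feature size w (taken from the spatial ACF): D ≈ w**2 / (4 * tau_c)
      # for 2-D diffusion, used here only as an illustration of the analysis flow.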

  18. Evaluation of the effects of the seasonal variation of solar elevation angle and azimuth on the processes of digital filtering and thematic classification of relief units

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.

    1983-01-01

    The effects of the seasonal variation of illumination on the digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data referring to orbit 150 and row 28 were selected, with illumination parameters varying from 43 deg to 64 deg in azimuth and from 30 deg to 36 deg in solar elevation, respectively. The IMAGE-100 system permitted the digital processing of the LANDSAT data. The original images were transformed by means of digital filtering so as to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is highly affected by illumination geometry, and that there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.

  19. The Mechanism of Word Crowding

    PubMed Central

    Yu, Deyue; Akau, Melanie M. U.; Chung, Susana T. L.

    2011-01-01

    Word reading speed in peripheral vision is slower when words are in close proximity of other words (Chung, 2004). This word crowding effect could arise as a consequence of interaction of low-level letter features between words, or the interaction between high-level holistic representations of words. We evaluated these two hypotheses by examining how word crowding changes for five configurations of flanking words: the control condition — flanking words were oriented upright; scrambled — letters in each flanking word were scrambled in order; horizontal-flip — each flanking word was the left-right mirror-image of the original; letter-flip — each letter of the flanking word was the left-right mirror-image of the original; and vertical-flip — each flanking word was the up-down mirror-image of the original. The low-level letter feature interaction hypothesis predicts similar word crowding effect for all the different flanker configurations, while the high-level holistic representation hypothesis predicts less word crowding effect for all the alternative flanker conditions, compared with the control condition. We found that oral reading speed for words flanked above and below by other words, measured at 10° eccentricity in the nasal field, showed the same dependence on the vertical separation between the target and its flanking words, for the various flanker configurations. The result was also similar when we rotated the flanking words by 90° to disrupt the periodic vertical pattern, which presumably is the main structure in words. The remarkably similar word crowding effect irrespective of the flanker configurations suggests that word crowding arises as a consequence of interactions of low-level letter features. PMID:22079315

  20. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and the Otsu threshold, and recognition was performed using PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average running time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visualized features. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364
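
    A minimal sketch of a PCA-SVM texture classification pipeline of the kind described above is given below; it assumes texture feature vectors have already been extracted for each region, and all parameter values are illustrative rather than those used in the study.

      # Hedged sketch of a PCA-SVM texture classifier on precomputed feature vectors.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 64))       # 64 texture features per region (toy data)
      y = rng.integers(0, 2, size=200)     # 1 = calculus, 0 = non-calculus

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=10),
                          SVC(kernel="rbf", C=1.0))
      print(cross_val_score(clf, X, y, cv=5).mean())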

  1. Featured Image: Diamonds in a Meteorite

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2018-04-01

    This unique image, which measures only 60 x 80 micrometers, reveals details in the Kapoeta meteorite, an 11-kg stone that fell in South Sudan in 1942. The sparkle in the image? A cluster of nanodiamonds discovered embedded in the stone in a recent study led by Yassir Abdu (University of Sharjah, United Arab Emirates). Abdu and collaborators showed that these nanodiamonds have spectral features similar to those of the interiors of dense interstellar clouds, and that they don't show any signs of shock features. This may suggest that the nanodiamonds were formed by condensation of nebular gases early in the history of the solar system. The diamonds were trapped in the surface material of the Kapoeta meteorite's parent body, thought to be the asteroid Vesta. To read more about the authors' study, check out the original article below. Citation: Yassir A. Abdu et al. 2018, ApJL, 856, L9. doi:10.3847/2041-8213/aab433

  2. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are otherwise discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  3. Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.

    PubMed

    Orbán, Levente L; Chartier, Sylvain

    2015-01-01

    Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
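
    The decompose-and-reconstruct comparison described above can be sketched with scikit-learn's FastICA; the component count and toy data below are assumptions for illustration only.

      # Hedged sketch: decompose a set of floral pattern images with FastICA and
      # reconstruct them from a limited number of components, then compare the
      # reconstructions to the originals; the general flow, not the exact model.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(2)
      images = rng.random((40, 32 * 32))            # 40 flattened 32x32 test patterns (toy)

      ica = FastICA(n_components=10, random_state=0)
      sources = ica.fit_transform(images)           # per-image component activations
      reconstructed = ica.inverse_transform(sources)

      # Reconstruction quality as mean squared error against the originals
      mse = np.mean((images - reconstructed) ** 2, axis=1)
      print(mse.mean())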

  4. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy of maize kernel breakage detection and improve detection efficiency, this paper uses computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images; then the clarity of the original images is evaluated using the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The clarity evaluation of the original images and the different shooting angles are used to verify that the clarity and shooting angles of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
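
    The colour-based clustering step can be sketched as follows: the RGB image is converted to Lab and its pixels are grouped with K-means; the number of clusters and the mapping of clusters to broken versus intact kernels are illustrative assumptions.

      # Hedged sketch of Lab-space K-means pixel clustering for kernel images.
      import numpy as np
      from skimage import color
      from sklearn.cluster import KMeans

      def segment_lab_kmeans(rgb_image, n_clusters=3):
          lab = color.rgb2lab(rgb_image)                    # RGB -> Lab
          pixels = lab.reshape(-1, 3)
          labels = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=0).fit_predict(pixels)
          return labels.reshape(rgb_image.shape[:2])

      # Example with a synthetic image; in practice rgb_image is a float array in [0, 1]
      rgb_image = np.random.default_rng(3).random((64, 64, 3))
      label_map = segment_lab_kmeans(rgb_image)
      print(np.bincount(label_map.ravel()))   # pixel count per cluster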

  5. Ridges on Europa

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This is the highest resolution picture ever taken of Jupiter's moon Europa. The area shown is about 5.9 by 9.9 miles (9.6 by 16 kilometers), and the smallest visible feature is about the size of a football field. In this view, the ice-rich surface has been broken into a complex pattern by cross-cutting ridges and grooves resulting from tectonic processes. Sinuous rille-like features and knobby terrain could result from surface modifications of unknown origin. Small craters of possible impact origin, ranging in size from less than 330 feet (100 meters) to about 1,300 feet (400 meters) across, are visible.

    This image was taken by the solid state imaging television camera aboard the Galileo spacecraft during its fourth orbit around Jupiter, at a distance of 2060 miles (3340 kilometers). The picture is centered at 325 degrees West, 5.83 degrees North. North is toward the top of this image, with the sun shining from the right.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the Galileo mission home page on the World Wide Web at http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  6. Automated dynamic feature tracking of RSLs on the Martian surface through HiRISE super-resolution restoration and 3D reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.

    2017-09-01

    In this paper, we demonstrate novel super-resolution restoration (SRR) and 3D reconstruction tools developed within the EU FP7 projects and their application to advanced dynamic feature tracking through HiRISE repeat stereo. We show an example for one of the RSL sites in Palikir Crater, using 8 repeat-pass 25 cm HiRISE images from which a 5 cm RSL-free SRR image is generated using GPT-SRR. Together with repeat 3D modelling of the same area, this allows us to overlay tracked dynamic features onto the reconstructed "original" surface, providing a much more comprehensive interpretation of the surface formation processes in 3D.

  7. Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images

    PubMed Central

    Funahashi, Shintaro

    2016-01-01

    Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli, in order to examine the neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and the monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at the stimulus for an additional 6 s, and we calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424

  8. Semi-Automatic Normalization of Multitemporal Remote Images Based on Vegetative Pseudo-Invariant Features

    PubMed Central

    Garcia-Torres, Luis; Caballero-Novella, Juan J.; Gómez-Candón, David; De-Castro, Ana Isabel

    2014-01-01

    A procedure called ARIN was developed to achieve semi-automatic relative normalization of multitemporal remote images of an agricultural scene, using the following steps: 1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; 2) extracting data concerning the VPIF spectral bands from each image; 3) calculating the correction factors (CFs) for each image band to fit each image band to the average value of the image series; and 4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to semi-automatically perform the ARIN procedure. We validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at intervals of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s.d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method’s efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were calculated to be at least 0.85 and were significant at P = 0.95, indicating that the normalization procedure performed comparably regardless of the VPIF chosen. The ARIN method was designed only for agricultural and forestry landscapes where VPIFs can be identified. PMID:24604031
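
    A minimal sketch of the correction-factor normalization (steps 3 and 4 above) is given below; the array shapes and toy VPIF masks are assumptions for illustration.

      # Hedged sketch of ARIN-style linear normalization: for each date and band,
      # a correction factor scales the band so that the VPIF mean matches the
      # average VPIF value across the whole image series.
      import numpy as np

      def arin_normalize(images, vpif_masks):
          """images: (n_dates, H, W, n_bands); vpif_masks: (n_dates, H, W) booleans."""
          n_dates, _, _, n_bands = images.shape
          # Mean VPIF value per date and band
          vpif_means = np.array([[images[t, ..., b][vpif_masks[t]].mean()
                                  for b in range(n_bands)] for t in range(n_dates)])
          series_mean = vpif_means.mean(axis=0)         # average over the series
          cf = series_mean / vpif_means                 # correction factor per date/band
          return images * cf[:, None, None, :]          # linear transformation

      rng = np.random.default_rng(4)
      imgs = rng.random((7, 50, 50, 4)) * 1000          # 7 dates, 4 bands (toy data)
      masks = np.zeros((7, 50, 50), bool)
      masks[:, 10:20, 10:20] = True                     # toy VPIF parcel
      print(arin_normalize(imgs, masks).shape)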

  9. A medical ontology for intelligent web-based skin lesions image retrieval.

    PubMed

    Maragoudakis, Manolis; Maglogiannis, Ilias

    2011-06-01

    Researchers have applied increasing efforts towards providing formal computational frameworks to consolidate the plethora of concepts and relations used in the medical domain. In the domain of skin related diseases, the variability of semantic features contained within digital skin images is a major barrier to the medical understanding of the symptoms and development of early skin cancers. The desideratum of making these standards machine-readable has led to their formalization in ontologies. In this work, in an attempt to enhance an existing Core Ontology for skin lesion images, hand-coded from image features, high quality images were analyzed by an autonomous ontology creation engine. We show that by exploiting agglomerative clustering methods with distance criteria upon the existing ontological structure, the original domain model could be enhanced with new instances, attributes and even relations, thus allowing for better classification and retrieval of skin lesion categories from the web.

  10. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321

  11. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements of on-board real-time feature extraction.

  12. Somatomedin C deficiency in Asian sisters.

    PubMed Central

    McGraw, M E; Price, D A; Hill, D J

    1986-01-01

    Two sisters of Asian origin showed typical clinical and biochemical features of primary somatomedin C (SM-C) deficiency (Laron dwarfism). Abnormalities of SM-C binding proteins were observed, one sister lacking the high molecular weight (150 Kd) protein. PMID:2434036

  13. Hop, Skip and Jump: Animation Software.

    ERIC Educational Resources Information Center

    Eiser, Leslie

    1986-01-01

    Discusses the features of animation software packages, reviewing eight commercially available programs. Information provided for each program includes name, publisher, current computer(s) required, cost, documentation, input device, import/export capabilities, printing possibilities, what users can originate, types of image manipulation possible,…

  14. Non-rigid ultrasound image registration using generalized relaxation labeling process

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Ha; Seong, Yeong Kyeong; Park, MoonHo; Woo, Kyoung-Gu; Ku, Jeonghun; Park, Hee-Jun

    2013-03-01

    This research proposes a novel non-rigid registration method for ultrasound images. The most predominant anatomical features in medical images are tissue boundaries, which appear as edges. In ultrasound images, however, other features can be identified as well due to the specular reflections that appear as bright lines superimposed on the ideal edge location. In this work, an image's local phase information (via the frequency domain) is used to find the ideal edge location. The generalized relaxation labeling process is then formulated to align the feature points extracted from the ideal edge location. In this work, the original relaxation labeling method was generalized by taking n compatibility coefficient values to improve non-rigid registration performance. This contextual information combined with a relaxation labeling process is used to search for a correspondence. Then the transformation is calculated by the thin plate spline (TPS) model. These two processes are iterated until the optimal correspondence and transformation are found. We have tested our proposed method and the state-of-the-art algorithms with synthetic data and bladder ultrasound images of in vivo human subjects. Experiments show that the proposed method improves registration performance significantly, as compared to other state-of-the-art non-rigid registration algorithms.

  15. PRE-ERUPTION OSCILLATIONS IN THIN AND LONG FEATURES IN A QUIESCENT FILAMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Anand D.; Hanaoka, Yoichiro; Suematsu, Yoshinori

    We investigate the eruption of a quiescent filament located close to an active region. Large-scale activation was observed in only half of the filament in the form of pre-eruption oscillations. Consequently only this half erupted nearly 30 hr after the oscillations commenced. Time-slice diagrams of 171 Å images from the Atmospheric Imaging Assembly were used to study the oscillations. These were observed in several thin and long features connecting the filament spine to the chromosphere below. This study traces the origin of such features and proposes their possible interpretation. Small-scale magnetic flux cancellation accompanied by a brightening was observed at the footpoint of the features shortly before their appearance, in images recorded by the Helioseismic and Magnetic Imager. A slow rise of the filament was detected in addition to the oscillations, indicating a gradual loss of equilibrium. Our analysis indicates that a change in magnetic field connectivity between two neighbouring active regions and the quiescent filament resulted in a weakening of the overlying arcade of the filament, leading to its eruption. It is also suggested that the oscillating features are filament barbs, and the oscillations are a manifestation during the pre-eruption phase of the filaments.

  16. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications can vary from an automatic smile shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between building space and residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions were developed in musical psychology. This work is focused on the identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give a robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical background or mood-dependent radio.

  17. Earth Observations taken by the Expedition 10 crew

    NASA Image and Video Library

    2004-12-25

    ISS010-E-12103 (25 December 2004) --- Seoul, South Korea is featured in this digital image photographed by an Expedition 10 crewmember on the International Space Station. This photograph illustrates the Seoul (originally known as Hanyang) urban area at night. Major roadways and river courses (such as the Han River) are clearly outlined by street lights, while the brightest lights indicate the downtown urban core (center of image) and large industrial complexes. Very dark regions in the image are mountains or large bodies of water.

  18. Fusion of shallow and deep features for classification of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang

    2018-02-01

    Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods, which employ deep neural networks and have brought great improvements in classification accuracy. However, the former traditional features are insufficient to depict the complex distribution of high-resolution images, while the deep features demand plenty of samples to train the network, otherwise overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high-resolution remote sensing images. The employment of GLCM is able to represent the original images and eliminate redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the number of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. The fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data set validate that our proposed method leads to a performance improvement compared to the individual involved approaches.
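
    The GLCM front end that feeds the network can be sketched with scikit-image as below; the patch size, offsets and chosen texture properties are illustrative assumptions (note that older scikit-image releases spell the functions greycomatrix/greycoprops).

      # Hedged sketch of per-patch GLCM feature extraction with scikit-image.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(patch, levels=32):
          # Quantize the patch to `levels` grey levels before co-occurrence counting
          patch = (patch / patch.max() * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])

      patch = np.random.default_rng(5).integers(0, 255, (32, 32)).astype(float)
      print(glcm_features(patch))   # feature vector that would be passed to the network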

  19. Comparison of 2D and 3D wavelet features for TLE lateralization

    NASA Astrophysics Data System (ADS)

    Jafari-Khouzani, Kourosh; Soltanian-Zadeh, Hamid; Elisevich, Kost; Patel, Suresh

    2004-04-01

    Intensity and volume features of the hippocampus from MR images of the brain are known to be useful in detecting abnormality and consequently the candidacy of the hippocampus for temporal lobe epilepsy surgery. However, currently, intracranial EEG exams are required to determine the abnormal hippocampus. These exams are lengthy, painful and costly. The aim of this study is to evaluate texture characteristics of the hippocampi from MR images to help physicians determine the candidate hippocampus for surgery. We studied the MR images of 20 epileptic patients. Intracranial EEG results as well as surgery outcome were used as the gold standard. The hippocampi were manually segmented by an expert from T1-weighted MR images. Then the segmented regions were mapped onto the corresponding FLAIR images for texture analysis. We calculate the average energy features from the 2D wavelet transform of each slice of the hippocampus as well as the energy features produced by the 3D wavelet transform of the whole hippocampus volume. The 2D wavelet transform is calculated both from the original slices and from the slices perpendicular to the principal axis of the hippocampus. In order to calculate the 3D wavelet transform, we first rotate each hippocampus to fit it in a rectangular prism and then fill the empty area by extrapolating the intensity values. We combine the resulting features with the volume feature and compare their ability to distinguish between normal and abnormal hippocampi using a linear classifier and the fuzzy c-means clustering algorithm. Experimental results show that the texture features can correctly classify the hippocampi.
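
    A minimal sketch of the subband-energy computation for a single slice, using PyWavelets, is shown below; the wavelet family, decomposition level and toy data are illustrative choices rather than those of the study.

      # Hedged sketch: average energy of 2-D wavelet subbands for one slice.
      import numpy as np
      import pywt

      def wavelet_energy_features(slice_2d, wavelet="db2", level=2):
          coeffs = pywt.wavedec2(slice_2d, wavelet=wavelet, level=level)
          feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
          for detail_level in coeffs[1:]:
              feats.extend(np.mean(c ** 2) for c in detail_level)   # LH, HL, HH energies
          return np.array(feats)

      slice_2d = np.random.default_rng(6).random((64, 64))
      print(wavelet_energy_features(slice_2d))
      # For 3-D analysis, pywt.wavedecn on the whole volume yields analogous
      # subband energies after the volume is rotated and padded as described above.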

  20. Forensic Analysis of the Sony Playstation Portable

    NASA Astrophysics Data System (ADS)

    Conrad, Scott; Rodriguez, Carlos; Marberry, Chris; Craiger, Philip

    The Sony PlayStation Portable (PSP) is a popular portable gaming device with features such as wireless Internet access and image, music and movie playback. As with most systems built around a processor and storage, the PSP can be used for purposes other than it was originally intended - legal as well as illegal. This paper discusses the features of the PSP browser and suggests best practices for extracting digital evidence.

  1. Scanning technology selection impacts acceptability and usefulness of image-rich content*†

    PubMed Central

    Alpi, Kristine M.; Brown, James C.; Neel, Jennifer A.; Grindem, Carol B.; Linder, Keith E.; Harper, James B.

    2016-01-01

    Objective Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions as provided by interlibrary loan or document delivery were viewed as acceptable or useful by radiologists or pathologists. Methods Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With original PDF controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Results Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability (n=405), 41% were said to be unacceptable, 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. Hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). Conclusions PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities. PMID:26807048

  2. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.

  3. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
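
    The fixed-RSA comparison described above can be sketched in a few lines: build representational dissimilarity matrices (RDMs) for the model and for the brain responses over the same images, then rank-correlate them; the dissimilarity measure and toy data below are common choices used only for illustration.

      # Hedged sketch of fixed RSA: RDMs from model features and voxel responses,
      # compared with a Spearman rank correlation.
      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      rng = np.random.default_rng(7)
      model_features = rng.normal(size=(96, 500))    # 96 images x model units (toy)
      voxel_responses = rng.normal(size=(96, 300))   # 96 images x voxels (toy)

      rdm_model = pdist(model_features, metric="correlation")   # 1 - Pearson r per image pair
      rdm_brain = pdist(voxel_responses, metric="correlation")

      rho, _ = spearmanr(rdm_model, rdm_brain)
      print(f"fixed-RSA correlation between model and brain RDMs: {rho:.3f}")
      # Mixed RSA would additionally fit a linear reweighting of the model features
      # on a training set before recomputing the model RDM.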

  4. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, more feature points can be generated and the matching accuracy of the ASIFT algorithm is higher, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations and does not perform well for some indoor scenes under poor light or lacking rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.

  5. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such, it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically-connected or gap junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
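
    The core quantity, an averaged radial pixel-intensity profile, can be sketched as follows; the binning, the normalization by a maximum radius and the toy data are illustrative assumptions, not the plugins' exact implementation.

      # Hedged sketch of a clock-scan-style radial profile: average pixel intensity
      # as a function of normalized distance from the ROI centre, so profiles from
      # differently sized objects can be pooled.
      import numpy as np

      def radial_profile(image, center, max_radius, n_bins=50):
          yy, xx = np.indices(image.shape)
          r = np.hypot(yy - center[0], xx - center[1]) / max_radius   # 0 at centre, 1 at border
          bins = np.linspace(0, 1.2, n_bins + 1)      # extend past the border for background
          idx = np.digitize(r.ravel(), bins) - 1
          profile = np.array([image.ravel()[idx == k].mean() if np.any(idx == k) else np.nan
                              for k in range(n_bins)])
          return bins[:-1], profile

      img = np.random.default_rng(8).random((128, 128))
      radii, prof = radial_profile(img, center=(64, 64), max_radius=40)
      print(prof[:5])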

  6. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and are placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
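
    The encoder side described above can be sketched in a few lines of Python: the image is filtered with a small local random binary kernel and then polyphase down-sampled; the kernel size and sampling factor are illustrative assumptions.

      # Hedged sketch of the encoder: local random binary filtering followed by
      # polyphase down-sampling, instead of the conventional low-pass pre-filter.
      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(9)
      image = rng.random((256, 256))

      kernel = rng.integers(0, 2, size=(4, 4)).astype(float)   # local random binary kernel
      kernel /= max(kernel.sum(), 1.0)                          # keep measurements in range

      filtered = convolve2d(image, kernel, mode="same", boundary="symm")
      measurements = filtered[::2, ::2]     # polyphase down-sampling by 2 in each direction

      # `measurements` is still an ordinary (smaller) image, so it can be handed to
      # any standard codec; a different random kernel would give another description.
      print(measurements.shape)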

  7. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. In this study, three morphological attributes were calculated and extracted for each independent component, including area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than the EAPs and principal component analysis (PCA) method, and 6% higher than APs with the original high-resolution multispectral data. Moreover, it is also suggested that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be larger than that of the LIB-SVM library. This study would be helpful for the classification of high-resolution multispectral satellite remote sensing images.

  8. Red Arcs on Tethys

    NASA Image and Video Library

    2015-07-29

    Unusual arc-shaped, reddish streaks cut across the surface of Saturn's ice-rich moon Tethys in this enhanced-color mosaic. The red streaks are narrow, curved lines on the moon's surface, only a few miles (or kilometers) wide but several hundred miles (or kilometers) long. The red streaks are among the most unusual color features on Saturn's moons to be revealed by Cassini's cameras. A few of the red arcs can be faintly seen in Cassini imaging observations made earlier in the mission, but the color images for this observation, which were obtained in April 2015, were the first to show large northern areas of Tethys under the illumination and viewing conditions necessary to see the features clearly. As the Saturn system moved into its northern hemisphere summer over the past few years, northern latitudes have become increasingly well illuminated. As a result, the red arc features have become clearly visible for the first time. The origin of the features and their reddish color is currently a mystery to Cassini scientists. Possibilities being studied include ideas that the reddish material is exposed ice with chemical impurities, or the result of outgassing from inside Tethys. The streaks could also be associated with features like fractures that are below the resolution of the available images. Except for a few small craters on Dione, reddish tinted features are rare on other moons of Saturn. However, many reddish features are observed on the geologically young surface of Jupiter's moon Europa. Images taken using clear, green, infrared and ultraviolet spectral filters were combined to create the view, which highlights subtle color differences across Tethys' surface at wavelengths not visible to human eyes. The moon's surface is fairly uniform in natural color. The yellowish tones on the left side of the view are a result of alteration of the moon's surface by high-energy particles from Saturn's magnetosphere. This particle radiation slams into the moon's trailing hemisphere, modifying it chemically and changing its appearance in enhanced-color views like this one. The area of Tethys shown here is centered on 30 degrees north latitude, 187 degrees west longitude, and measures 305 by 258 miles (490 by 415 kilometers) across. The original color images were obtained at a resolution of about 2,300 feet (700 meters) per pixel on April 11, 2015. This is a cropped close-up of an area visible in PIA19636. This is a mosaic of images that have been photometrically calibrated and map-projected. http://photojournal.jpl.nasa.gov/catalog/PIA19637

  9. ROC analysis of lesion descriptors in breast ultrasound images

    NASA Astrophysics Data System (ADS)

    Andre, Michael P.; Galperin, Michael; Phan, Peter; Chiu, Peter

    2003-05-01

    Breast biopsy serves as the key diagnostic tool in the evaluation of breast masses for malignancy, yet the procedure affects patients physically and emotionally and may obscure results of future mammograms. Studies show that high quality ultrasound can distinguish benign from malignant lesions with accuracy; however, it has proven difficult to teach and clinical results are highly variable. The purpose of this study is to develop a means to optimize an automated Computer Aided Imaging System (CAIS) to assess Level of Suspicion (LOS) of a breast mass. We examine the contribution of 15 object features to lesion classification by calculating the Wilcoxon area under the ROC curve, AW, for all combinations in a set of 146 masses with known findings. For each AW interval, the frequency of appearance of each feature and its combinations with others was computed as a means to find an "optimum" feature vector. The original set of 15 was reduced to 6 (area, perimeter, Feret diameter Y, relief, homogeneity, average energy) with an improvement from AW = 0.82 ± 0.04 for the original 15 to AW = 0.93 ± 0.02 for the subset of 6, p = 0.03. For comparison, two sub-specialty mammography radiologists also scored the images for LOS, resulting in Az values of 0.90 and 0.87. The CAIS performed significantly better, p = 0.02.
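
    The feature-subset search can be sketched as below: for every combination of candidate descriptors, a simple linear score is cross-validated and its Wilcoxon area under the ROC curve is computed; the classifier and cross-validation scheme are stand-ins, not the CAIS implementation.

      # Hedged sketch: AUC for every feature combination via a cross-validated
      # linear score (illustrative stand-in for the study's scoring procedure).
      import itertools
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(10)
      X = rng.normal(size=(146, 6))                 # 6 candidate descriptors (toy data)
      y = rng.integers(0, 2, size=146)              # 1 = malignant, 0 = benign

      results = []
      for k in range(1, X.shape[1] + 1):
          for subset in itertools.combinations(range(X.shape[1]), k):
              scores = cross_val_predict(LogisticRegression(max_iter=1000),
                                         X[:, subset], y, cv=5,
                                         method="decision_function")
              results.append((roc_auc_score(y, scores), subset))

      print(sorted(results, reverse=True)[:3])      # best-performing feature subsets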

  10. Rind-Like Features at a Meridiani Outcrop

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [Figure: Annotated image of PIA04189, Rind-Like Features at a Meridiani Outcrop]

    After months spent crossing a sea of rippled sands, Opportunity reached an outcrop in August 2005 and began investigating exposures of sedimentary rocks, intriguing rind-like features that appear to cap the rocks, and cobbles that dot the martian surface locally. Opportunity spent several martian days, or sols, analyzing a feature called 'Lemon Rind,' a thin surface layer covering portions of outcrop rocks poking through the sand north of 'Erebus Crater.' In images from the panoramic camera, Lemon Rind appears slightly different in color than surrounding rocks. It also appears to be slightly more resistant to wind erosion than the outcrop's interior. This is an approximately true-color composite produced from frames taken during Opportunity's 552nd martian day, or sol (Aug. 13, 2005).

  11. Observational Tests of the Mars Ocean Hypothesis: Selected MOC and MOLA Results

    NASA Technical Reports Server (NTRS)

    Parker, T. J.; Banerdt, W. B.

    1999-01-01

    We have begun a detailed analysis of the evidence for and topography of features identified as potential shorelines that have been imaged by the Mars Orbiter Camera (MOC) during the Aerobraking Hiatus and Science Phasing Orbit periods of the Mars Global Surveyor (MGS) mission. MOC images, comparable in resolution to high-altitude terrestrial aerial photographs, are particularly well suited to address the morphological expressions of these features at scales comparable to known shore morphologies on Earth. Particularly useful are examples of detailed relationships in which potential shore features, such as erosional (and depositional) terraces, have been cut into "familiar" pre-existing structures and topography in a fashion that points to a shoreline interpretation as the most likely mechanism for their formation. Additional information is contained in the original extended abstract.

  12. An integrated one-step system to extract, analyze and annotate all relevant information from image-based cell screening of chemical libraries.

    PubMed

    Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen

    2010-04-01

    Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy", providing an automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33,992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, as well as 40 new hits (14.9% of the total) that were originally false negatives. Ninety-six percent of true negatives were properly recognized as well. A web-based interface to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features, which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.

  13. Deep feature classification of angiomyolipoma without visible fat and renal cell carcinoma in abdominal contrast-enhanced CT images with texture image patches and hand-crafted feature concatenation.

    PubMed

    Lee, Hansang; Hong, Helen; Kim, Junmo; Jung, Dae Chul

    2018-04-01

    To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CE CT) images. A dataset including 80 abdominal CT images of 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating the small renal masses (SRM) into AMLwvf and ccRCC using a combination of hand-crafted and deep features and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from an ImageNet-pretrained deep learning model using the SRM image patches. In DF extraction, we proposed texture image patches (TIP) to emphasize the texture information inside the mass in the DFs and to reduce the mass size variability. Finally, the two feature sets were concatenated and a random forest (RF) classifier was trained on these concatenated features to classify the types of SRMs. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). In the experiments, the combinations of four deep learning models, AlexNet, VGGNet, GoogleNet, and ResNet, and four input image patches, including original, masked, mass-size, and texture image patches, were compared and analyzed. In the qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method. In the quantitative evaluation, we evaluated and compared the classification results, and observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet generally showed the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patches, but also steady performance regardless of the CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for the proposed HCF + DF with AlexNet and TIPs, which improved the accuracy by 6.6%p and 8.3%p compared to HCF-only and DF-only, respectively. The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of features for differentiating AMLwvf from ccRCC in abdominal CE CT images. © 2018 American Association of Physicists in Medicine.
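
    The fusion and evaluation steps can be sketched as follows: hand-crafted and deep features (assumed to be extracted elsewhere) are concatenated and a random forest is scored with leave-one-out cross-validation; the feature dimensions and labels below are placeholders.

      # Hedged sketch of the HCF + DF fusion step with a random forest and LOOCV.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      rng = np.random.default_rng(11)
      hcf = rng.normal(size=(80, 71))       # 71-D hand-crafted texture/shape features
      df = rng.normal(size=(80, 1000))      # deep features from a pretrained CNN (toy)
      y = rng.integers(0, 2, size=80)       # 1 = ccRCC, 0 = AMLwvf (toy labels)

      X = np.hstack([hcf, df])              # feature concatenation
      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      acc = cross_val_score(rf, X, y, cv=LeaveOneOut()).mean()
      print(f"LOOCV accuracy: {acc:.3f}")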

  14. Evidence for Basinwide Mud Volcanism in Acidalia Planitia, Mars

    NASA Technical Reports Server (NTRS)

    Oehler, Dorothy Z.; Allen, Carlton C.

    2010-01-01

    High-albedo mounds in Acidalia Planitia occur in enormous numbers. They have been variously interpreted as pseudocraters, cinder cones, tuff cones, pingos, ice disintegration features, or mud volcanoes. Our work uses regional mapping, basin analysis, and new data from the Context Camera (CTX), High Resolution Imaging Science Experiment (HiRISE), and Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) to re-assess the origin and significance of these structures.

  15. The mechanism of word crowding.

    PubMed

    Yu, Deyue; Akau, Melanie M U; Chung, Susana T L

    2012-01-01

    Word reading speed in peripheral vision is slower when words are in close proximity to other words (Chung, 2004). This word crowding effect could arise as a consequence of interactions of low-level letter features between words, or of interactions between high-level holistic representations of words. We evaluated these two hypotheses by examining how word crowding changes for five configurations of flanking words: the control condition - flanking words were oriented upright; scrambled - letters in each flanking word were scrambled in order; horizontal-flip - each flanking word was the left-right mirror-image of the original; letter-flip - each letter of the flanking word was the left-right mirror-image of the original; and vertical-flip - each flanking word was the up-down mirror-image of the original. The low-level letter feature interaction hypothesis predicts a similar word crowding effect for all the different flanker configurations, while the high-level holistic representation hypothesis predicts a weaker word crowding effect for all the alternative flanker conditions, compared with the control condition. We found that oral reading speed for words flanked above and below by other words, measured at 10° eccentricity in the nasal field, showed the same dependence on the vertical separation between the target and its flanking words, for the various flanker configurations. The result was also similar when we rotated the flanking words by 90° to disrupt the periodic vertical pattern, which presumably is the main structure in words. The remarkably similar word crowding effect irrespective of the flanker configurations suggests that word crowding arises as a consequence of interactions of low-level letter features. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.

  17. Ripples in Rocks Point to Water

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image taken by the Mars Exploration Rover Opportunity's panoramic camera shows the rock nicknamed 'Last Chance,' which lies within the outcrop near the rover's landing site at Meridiani Planum, Mars. The image provides evidence for a geologic feature known as ripple cross-stratification. At the base of the rock, layers can be seen dipping downward to the right. The bedding that contains these dipping layers is only one to two centimeters (0.4 to 0.8 inches) thick. In the upper right corner of the rock, layers also dip to the right, but exhibit a weak 'concave-up' geometry. These two features -- the thin, cross-stratified bedding combined with the possible concave geometry -- suggest small ripples with sinuous crest lines. Although wind can produce ripples, they rarely have sinuous crest lines and never form steep, dipping layers at this small scale. The most probable explanation for these ripples is that they were formed in the presence of moving water.

    Crossbedding Evidence for Underwater Origin Interpretations of cross-lamination patterns presented as clues to this martian rock's origin under flowing water are marked on images taken by the panoramic camera and microscopic imager on NASA's Opportunity.

    [figures removed for brevity, see original site] Figure 1 and Figure 2

    The red arrows (Figure 1) point to features suggesting cross-lamination within the rock called 'Last Chance' taken at a distance of 4.5 meters (15 feet) during Opportunity's 17th sol (February 10, 2004). The inferred sets of fine layers at angles to each other (cross-laminae) are up to 1.4 centimeters (half an inch) thick. For scale, the distance between two vertical cracks in the rock is about 7 centimeters (2.8 inches). The feature indicated by the middle red arrow suggests a pattern called trough cross-lamination, likely produced when flowing water shaped sinuous ripples in underwater sediment and pushed the ripples to migrate in one direction. The direction of the ancient flow would have been either toward or away from the line of sight from this perspective. The lower and upper red arrows point to cross-lamina sets that are consistent with underwater ripples in the sediment having moved in water that was flowing left to right from this perspective.

    The yellow arrows (Figure 2) indicate places in the panoramic camera view that correlate with places in the microscope's view of the same rock.

    [figure removed for brevity, see original site] Figure 3

    The microscopic view (Figure 3) is a mosaic of some of the 152 microscopic imager frames of 'Last Chance' that Opportunity took on sols 39 and 40 (March 3 and 4, 2004).

    [figure removed for brevity, see original site] Figure 4

    Figure 4 shows cross-lamination expressed by lines that trend downward from left to right, traced with black lines in the interpretive overlay. These cross-lamination lines are consistent with dipping planes that would have formed surfaces on the down-current side of migrating ripples. Interpretive blue lines indicate boundaries between possible sets of cross-laminae.

  18. Incoherent optical generalized Hough transform: pattern recognition and feature extraction applications

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel; Ferrari, José A.

    2017-05-01

    Pattern recognition and feature extraction are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements for this mapping make the optical implementation an attractive alternative to digital-only methods. We explore an optical setup where the transformation is obtained, and the size and orientation parameters can be controlled, allowing for dynamic scale and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a pupil mask implemented on a high-contrast spatial light modulator for orientation/shape variation of the template. Real-time operation can also be achieved. In addition, by thresholding the GHT and optically inverse transforming, the previously detected features of interest can be extracted.
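
    For readers unfamiliar with the GHT itself, the following purely digital sketch shows the R-table construction and accumulator voting that the optical setup parallelises; the bin width and variable names are illustrative assumptions, and the optical scale/orientation control is not modelled.

    ```python
    import numpy as np

    def build_r_table(template_edges, ref_point, bin_deg=10):
        """Map gradient-orientation bins to offsets from edge pixels to the reference point."""
        gy, gx = np.gradient(template_edges.astype(float))
        r_table = {}
        for y, x in zip(*np.nonzero(template_edges)):
            phi = int(np.degrees(np.arctan2(gy[y, x], gx[y, x])) // bin_deg)
            r_table.setdefault(phi, []).append((ref_point[0] - y, ref_point[1] - x))
        return r_table

    def ght_accumulate(image_edges, r_table, bin_deg=10):
        """Vote into an accumulator; peaks mark detected instances of the template shape."""
        acc = np.zeros(image_edges.shape, dtype=int)
        gy, gx = np.gradient(image_edges.astype(float))
        for y, x in zip(*np.nonzero(image_edges)):
            phi = int(np.degrees(np.arctan2(gy[y, x], gx[y, x])) // bin_deg)
            for dy, dx in r_table.get(phi, []):
                yy, xx = y + dy, x + dx
                if 0 <= yy < acc.shape[0] and 0 <= xx < acc.shape[1]:
                    acc[yy, xx] += 1
        return acc
    ```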

  19. Pattern, age, and origin of structural features within the Ozark plateau and the relationship to ore deposits

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.

    1981-01-01

    Topography and gravity anomaly images for the continental United States were constructed. Evidence was found based on gravity, remote sensing data, the presence, trend, and character of fractures, and on rock type data, for a Precambrian rift through Missouri. The feature is probably the failed arm of a triple junction that existed prior to formation of the granite-rhyolite terrain of southern Missouri.

  20. Efficient iris recognition by characterizing key local variations.

    PubMed

    Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin

    2004-06-01

    Unlike other biometrics such as fingerprints and face, the distinct aspect of iris comes from randomly distributed features. This leads to its high reliability for personal identification, and at the same time, the difficulty in effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithm found in the current literature.
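
    A hedged sketch of the two-step feature extraction and XOR matching described above: 1-D intensity signals are taken from rows of a normalised iris image, local sharp variations are located from wavelet detail coefficients, and the binary position sequences are compared by exclusive OR (normalised Hamming distance). The wavelet, rows and threshold are illustrative choices, not the authors'.

    ```python
    import numpy as np
    import pywt

    def iris_code(normalised_iris, rows=(10, 20, 30), wavelet="db2", thresh=0.3):
        code = []
        for r in rows:
            signal = normalised_iris[r, :].astype(float)
            _, detail = pywt.dwt(signal, wavelet)      # detail coefficients mark sharp variations
            bits = (np.abs(detail) > thresh * np.abs(detail).max()).astype(np.uint8)
            code.append(bits)
        return np.concatenate(code)

    def match(code_a, code_b):
        """Normalised Hamming distance computed with exclusive OR."""
        return np.count_nonzero(np.bitwise_xor(code_a, code_b)) / code_a.size

    rng = np.random.default_rng(0)
    iris_a = rng.random((48, 256))                     # stand-in normalised iris images
    iris_b = iris_a + 0.05 * rng.random((48, 256))
    print(match(iris_code(iris_a), iris_code(iris_b)))
    ```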

  1. Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeo, U. J.; Supple, J. R.; Franich, R. D.

    2013-10-15

    Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, "A novel methodology for 3D deformable dosimetry," Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7.5 mm across algorithms for scenarios I to III, respectively. The greatest accuracy was exhibited by the original Horn and Schunck optical flow algorithm. In this case, for scenario III (erased FMs not contributing to driving the DIR calculation), the mean error was half that of the modified demons algorithm (which exhibited the greatest error), across all deformations. Some algorithms failed to reproduce the geometry at all, while others accurately deformed high contrast features but not low-contrast regions, indicating poor interpolation between landmarks. Conclusions: The accuracy of DIR algorithms was quantitatively evaluated using a tissue equivalent, mass, and density conserving DEFGEL phantom. For the model studied, optical flow algorithms performed better than demons algorithms, with the original Horn and Schunck performing best. The degree of error is influenced more by the magnitude of displacement than the geometric complexity of the deformation. As might be expected, deformation is estimated less accurately for low-contrast regions than for high-contrast features, and the method presented here allows quantitative analysis of the differences. The evaluation of registration accuracy through observation of the same high contrast features that drive the DIR calculation is shown to be circular and hence misleading.

  2. Can radiomics features be reproducibly measured from CBCT images for patients with non-small cell lung cancer?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fave, Xenia, E-mail: xjfave@mdanderson.org; Fried, David; Mackin, Dennis

    Purpose: Increasing evidence suggests radiomics features extracted from computed tomography (CT) images may be useful in prognostic models for patients with non-small cell lung cancer (NSCLC). This study was designed to determine whether such features can be reproducibly obtained from cone-beam CT (CBCT) images taken using medical Linac onboard-imaging systems in order to track them through treatment. Methods: Test-retest CBCT images of ten patients previously enrolled in a clinical trial were retrospectively obtained and used to determine the concordance correlation coefficient (CCC) for 68 different texture features. The volume dependence of each feature was also measured using the Spearman rank correlation coefficient. Features with a high reproducibility (CCC > 0.9) that were not due to volume dependence in the patient test-retest set were further examined for their sensitivity to differences in imaging protocol, level of scatter, and amount of motion by using two phantoms. The first phantom was a texture phantom composed of rectangular cartridges to represent different textures. Features were measured from two cartridges, shredded rubber and dense cork, in this study. The texture phantom was scanned with 19 different CBCT imagers to establish the features' interscanner variability. The effect of scatter on these features was studied by surrounding the same texture phantom with scattering material (rice and solid water). The effect of respiratory motion on these features was studied using a dynamic-motion thoracic phantom and a specially designed tumor texture insert of the shredded rubber material. The differences between scans acquired with different Linacs and protocols, varying amounts of scatter, and with different levels of motion were compared to the mean intrapatient difference from the test-retest image set. Results: Of the original 68 features, 37 had a CCC >0.9 that was not due to volume dependence. When the Linac manufacturer and imaging protocol were kept consistent, 4–13 of these 37 features passed our criteria for reproducibility more than 50% of the time, depending on the manufacturer-protocol combination. Almost all of the features changed substantially when scatter material was added around the phantom. For the dense cork, 23 features passed in the thoracic scans and 11 features passed in the head scans when the differences between one and two layers of scatter were compared. Using the same test for the shredded rubber, five features passed the thoracic scans and eight features passed the head scans. Motion substantially impacted the reproducibility of the features. With 4 mm of motion, 12 features from the entire volume and 14 features from the center slice measurements were reproducible. With 6–8 mm of motion, three features (Laplacian of Gaussian filtered kurtosis, gray-level nonuniformity, and entropy) from the entire volume and seven features (coarseness, high gray-level run emphasis, gray-level nonuniformity, sum-average, information measure correlation, scaled mean, and entropy) from the center-slice measurements were considered reproducible. Conclusions: Some radiomics features are robust to the noise and poor image quality of CBCT images when the imaging protocol is consistent, relative changes in the features are used, and patients are limited to those with less than 1 cm of motion.
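
    The reproducibility test applied to each feature is Lin's concordance correlation coefficient; a standard formulation is sketched below on placeholder test-retest values.

    ```python
    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient between test and retest values."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    # One feature measured on the test and retest CBCT of each patient (placeholder values).
    test = np.array([1.0, 2.1, 2.9, 4.2, 5.1])
    retest = np.array([1.1, 2.0, 3.1, 4.0, 5.3])
    print(concordance_ccc(test, retest))   # values > 0.9 would pass the criterion above
    ```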

  3. A Dynamic Graph Cuts Method with Integrated Multiple Feature Maps for Segmenting Kidneys in 2D Ultrasound Images.

    PubMed

    Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong

    2018-02-12

    Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
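
    As a rough illustration of the multi-channel input used by such a method, the sketch below stacks the original intensity image with texture feature maps from a small Gabor filter bank (scikit-image); the frequencies and orientations are arbitrary, and the graph construction and cut are omitted.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.filters import gabor

    image = img_as_float(data.camera())        # stand-in for a kidney ultrasound image
    feature_maps = [image]
    for frequency in (0.1, 0.2):
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, _ = gabor(image, frequency=frequency, theta=theta)
            feature_maps.append(real)
    stack = np.stack(feature_maps, axis=-1)    # per-pixel feature vectors
    print(stack.shape)                         # (H, W, 1 intensity + 8 texture channels)
    ```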

  4. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.
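
    A hedged illustration of the joint-dictionary idea follows: RGB patches are flattened so that each atom spans all three colour channels, which lets the sparse codes carry colour information and inter-channel correlation. Patch size, dictionary size and sparsity level are arbitrary choices for the sketch, and scikit-learn's dictionary learner stands in for the authors' formulation.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    color_image = rng.random((64, 64, 3))            # stand-in colour histology image
    patches = extract_patches_2d(color_image, (8, 8), max_patches=500, random_state=0)
    X = patches.reshape(len(patches), -1)            # 8*8*3 = 192-D joint colour patches

    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5, random_state=0)
    codes = dico.fit(X).transform(X)                 # joint sparse codes carrying colour info
    print(codes.shape)                               # (500, 64)
    ```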

  5. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, visually examining the images and detecting and classifying the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds 90%. In the context of this contribution we will present our work aiming to support the processing of aerial images by modern methods from the field of image processing. We will especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle those image sizes and to nevertheless take advantage of globally operating segmentation algorithms, we will describe the combined usage of a simple, efficient feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.

  6. Helioviewer: A Web 2.0 Tool for Visualizing Heterogeneous Heliophysics Data

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Lynch, M. J.; Schmeidel, P.; Dimitoglou, G.; Müeller, D.; Fleck, B.

    2008-12-01

    Solar physics datasets are becoming larger, richer, more numerous and more distributed. Feature/event catalogs (describing objects of interest in the original data) are becoming important tools in navigating these data. In the wake of this increasing influx of data and catalogs there has been a growing need for highly sophisticated tools for accessing and visualizing this wealth of information. Helioviewer is a novel tool for integrating and visualizing disparate sources of solar and Heliophysics data. Taking advantage of the newly available power of modern web application frameworks, Helioviewer merges image and feature catalog data, and provides for Heliophysics data a familiar interface not unlike Google Maps or MapQuest. In addition to streamlining the process of combining heterogeneous Heliophysics datatypes such as full-disk images and coronagraphs, the inclusion of visual representations of automated and human-annotated features provides the user with an integrated and intuitive view of how different factors may be interacting on the Sun. Currently, Helioviewer offers images from The Extreme ultraviolet Imaging Telescope (EIT), The Large Angle and Spectrometric COronagraph experiment (LASCO) and the Michelson Doppler Imager (MDI) instruments onboard The Solar and Heliospheric Observatory (SOHO), as well as The Transition Region and Coronal Explorer (TRACE). Helioviewer also incorporates feature/event information from the LASCO CME List, NOAA Active Regions, CACTus CME and Type II Radio Bursts feature/event catalogs. The project is undergoing continuous development with many more data sources and additional functionality planned for the near future.

  7. Diagnosis of metastatic neoplasms: a clinicopathologic and morphologic approach.

    PubMed

    Marchevsky, Alberto M; Gupta, Ruta; Balzer, Bonnie

    2010-02-01

    The diagnosis of the site of origin of metastatic neoplasms often poses a challenge to practicing pathologists. A variety of immunohistochemical and molecular tests have been proposed for the identification of tumor site of origin, but these methods are no substitute for careful attention to the pathologic features of tumors and their correlation with imaging findings and other clinical data. The current trend in anatomic pathology is to overly rely on immunohistochemical and molecular tests to identify the site of origin of metastatic neoplasms, but this "shotgun approach" is often costly and can result in contradictory and even erroneous conclusions about the site of origin of a metastatic neoplasm. To describe the use of a systematic approach to the evaluation of metastatic neoplasms. Literature review and personal experience. A systematic approach can frequently help to narrow down differential diagnoses for a patient to a few likely tumor sites of origin that can be confirmed or excluded with the use of selected immunohistochemistry and/or molecular tests. This approach involves the qualitative evaluation of the "pretest and posttest probabilities" of various diagnoses before the immunohistochemical and molecular tests are ordered. Pretest probabilities are qualitatively estimated for each individual by taking into consideration the patient's age, sex, clinical history, imaging findings, and location of the metastases. This estimate is further narrowed by qualitatively evaluating, through careful observation of a variety of gross pathology and histopathologic features, the posttest probabilities of the most likely tumor sites of origin. Multiple examples of the use of this systematic approach for the evaluation of metastatic lesions are discussed.

  8. Medusae Fossae

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [figure removed for brevity, see original site] (Released 31 July 2002) This image crosses the equator at about 155 W longitude and shows a sample of the middle member of the Medusae Fossae formation. The layers exposed in the southeast-facing scarp suggest that there is a fairly competent unit underlying the mesa in the center of the image. Dust-avalanches are apparent in the crater depression near the middle of the image. The mesa of Medusae Fossae material has the geomorphic signatures that are typical of the formation elsewhere on Mars, but the surface is probably heavily mantled with fine dust, masking the small-scale character of the unit. The close proximity of the Medusae Fossae unit to the Tharsis region may suggest that it is an ignimbrite or volcanic airfall deposit, but its eroded character has not preserved the primary depositional features that would give away the secrets of its formation. One of the most interesting features in the image is the high-standing knob at the base of the scarp in the lower portion of the image. This knob or butte is high standing because it is composed of material that is not as easily eroded as the rest of the unit. There are a number of possible explanations for this feature, including a volcano, an inverted crater, or some localized process that caused once friable material to become cemented. Another interesting set of features is the long troughs on the slope in the lower portion of the image. The fact that these features keep the same width along their entire length suggests that they are not simple landslides.

  9. Meroe Patera

    NASA Image and Video Library

    2002-11-26

    This image is located in Meroe Patera (longitude: 292W/68E, latitude: 7.01), which is a small region within Syrtis Major Planitia. Syrtis Major is a low-relief shield volcano whose lava flows make up a plateau more than 1000 km across. These flows are of Hesperian age (Martian activity of intermediate age) and are believed to have originated from a series of volcanic depressions, called calderas. The caldera complex lies on extensions of the ring faults associated with the Isidis impact basin toward the northeast - thus Syrtis Major volcanism may be associated with post-impact adjustments of the Martian crust. The most striking features in this image are the light streaks across the image that lead to dunes in the lower left region. Wind streaks are albedo markings interpreted to be formed by aeolian action on surface materials. Most are elongate and allow an interpretation of effective wind directions. Many streaks are time variable and thus provide information on seasonal or long-term changes in surface wind directions and strengths. The wind streaks in this image are lighter than their surroundings and are the most common type of wind streak found on Mars. These streaks are formed downwind from crater rims (as in this example), mesas, knobs, and other positive topographic features. The dune field in this image is a mixture of barchan dunes and transverse dunes. Dunes are among the most distinctive aeolian features on Mars, and are similar in form to barchan and transverse dunes on Earth. This similarity is the best evidence to indicate that martian dunes are composed of sand-sized material, although the source and composition of the sand remain controversial. Both the observations of dunes and wind streaks indicate that this location has a windy environment - and these winds are persistent enough to produce dunes, as sand-sized material accumulates in this region. These features also indicate that the winds in this region originate from the right side of the image and move towards the left. http://photojournal.jpl.nasa.gov/catalog/PIA04012

  10. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    PubMed

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods that combine hand-crafted image feature descriptors with various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of biomedical image classification. The same also holds true for artificial neural network models directly trained with limited biomedical images as training data, or directly used as a black box to extract deep features based on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in adopting traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model that shows highly reliable and accurate performance, as confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
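
    A minimal transfer-learning sketch in the spirit of the approach above, assuming PyTorch/torchvision (not the authors' code): an ImageNet-pretrained CNN is reused and its classifier head replaced before supervised training on raw image pixels. The batch, class count and learning rate are placeholders, and older torchvision versions use `pretrained=True` instead of the `weights` argument.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 3                                  # hypothetical number of diagnostic classes
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new classification head

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    images = torch.randn(4, 3, 224, 224)             # placeholder mini-batch of images
    labels = torch.tensor([0, 1, 2, 1])
    loss = criterion(model(images), labels)          # one supervised training step
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```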

  11. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. For quantitative assessment of the algorithms, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, which is consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising. Visually, it is even hard to distinguish the original noiseless image from the image recovered by the GMCA algorithm.
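
    The two quantitative indices used above are standard and easy to reproduce; the sketch below computes PSNR and SSIM with scikit-image, using a wavelet denoiser merely as a stand-in for GMCA.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity
    from skimage.restoration import denoise_wavelet   # stand-in denoiser, not GMCA

    clean = img_as_float(data.camera())
    noise = np.random.default_rng(0).normal(0, 0.1, clean.shape)
    noisy = np.clip(clean + noise, 0, 1)
    denoised = denoise_wavelet(noisy)

    print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))  # gray-level fidelity
    print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))    # structure-level fidelity
    ```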

  12. Automatic segmentation of multimodal brain tumor images based on classification of super-voxels.

    PubMed

    Kadkhodaei, M; Samavi, S; Karimi, N; Mohaghegh, H; Soroushmehr, S M R; Ward, K; All, A; Najarian, K

    2016-08-01

    Despite the rapid growth in brain tumor segmentation approaches, there are still many challenges in this field. Automatic segmentation of brain images has a critical role in decreasing the burden of manual labeling and increasing the robustness of brain tumor diagnosis. We consider segmentation of glioma tumors, which have a wide variation in size, shape and appearance properties. In this paper, images are enhanced and normalized to the same scale in a preprocessing step. The enhanced images are then segmented based on their intensities using 3D super-voxels. In such images, a tumor region can usually be regarded as a salient object. Inspired by this observation, we propose a new feature that uses a saliency detection algorithm. An edge-aware filtering technique is employed to align edges of the original image to the saliency map, which enhances the boundaries of the tumor. Then, for classification of tumors in brain images, a set of robust texture features is extracted from the super-voxels. Experimental results indicate that our proposed method outperforms a comparable state-of-the-art algorithm in terms of Dice score.
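
    To make the super-voxel step concrete, the following hedged sketch groups voxels of a stand-in volume with SLIC and computes simple per-region statistics in place of the robust texture and saliency features described above; the segment count and compactness are arbitrary (the `channel_axis` argument assumes scikit-image 0.19 or later).

    ```python
    import numpy as np
    from skimage.segmentation import slic
    from skimage.measure import regionprops

    volume = np.random.default_rng(0).random((32, 64, 64))   # stand-in enhanced MR volume
    labels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)

    features = []
    for region in regionprops(labels, intensity_image=volume):
        vox = region.intensity_image[region.image]           # voxels of this super-voxel
        features.append([vox.mean(), vox.std(), vox.min(), vox.max()])
    features = np.asarray(features)                          # one feature vector per super-voxel
    print(features.shape)
    ```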

  13. Assigning Main Orientation to an EOH Descriptor on Multispectral Images.

    PubMed

    Li, Yong; Shi, Xiang; Wei, Lijun; Zou, Junwei; Chen, Fang

    2015-07-01

    This paper proposes an approach to compute an EOH (edge-oriented histogram) descriptor with main orientation. EOH has a better matching ability than SIFT (scale-invariant feature transform) on multispectral images, but does not assign a main orientation to keypoints. Alternatively, it tends to assign the same main orientation to every keypoint, e.g., zero degrees. This limits EOH to matching keypoints between images of translation misalignment only. Observing this limitation, we propose assigning to keypoints the main orientation that is computed with PIIFD (partial intensity invariant feature descriptor). In the proposed method, SIFT keypoints are detected from images as the extrema of difference of Gaussians, and every keypoint is assigned to the main orientation computed with PIIFD. Then, EOH is computed for every keypoint with respect to its main orientation. In addition, an implementation variant is proposed for fast computation of the EOH descriptor. Experimental results show that the proposed approach performs more robustly than the original EOH on image pairs that have a rotation misalignment.
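
    The sketch below illustrates, in simplified form, why assigning a main orientation matters: gradient directions inside the keypoint patch are expressed relative to the assigned orientation before the edge-orientation histogram is built, making the descriptor tolerant to rotation. Bin count, patch radius and normalisation are illustrative, not the paper's exact EOH/PIIFD formulation.

    ```python
    import numpy as np

    def eoh_descriptor(image, y, x, main_orientation, radius=8, bins=8):
        """Edge-orientation histogram of a keypoint patch, relative to its main orientation."""
        patch = image[y - radius:y + radius, x - radius:x + radius].astype(float)
        gy, gx = np.gradient(patch)
        magnitude = np.hypot(gx, gy)
        orientation = np.mod(np.arctan2(gy, gx) - main_orientation, 2 * np.pi)
        hist, _ = np.histogram(orientation, bins=bins, range=(0, 2 * np.pi),
                               weights=magnitude)
        return hist / (np.linalg.norm(hist) + 1e-12)          # L2-normalised descriptor
    ```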

  14. An Overview of data science uses in bioimage informatics.

    PubMed

    Chessel, Anatole

    2017-02-15

    This review aims to provide a practical overview of the use of statistical features and associated data science methods in bioimage informatics. To achieve a quantitative link between images and biological concepts, one typically replaces an object coming from an image (a segmented cell or intracellular object, a pattern of expression or localisation, even a whole image) by a vector of numbers. These features range from carefully crafted, biologically relevant measurements to features learnt through deep neural networks. This replacement allows the use of practical algorithms for visualisation, comparison and inference, such as those from machine learning or multivariate statistics. Although in biology these methods originated mainly in high-content screening, they are integral to the use of data science for the quantitative analysis of microscopy images to gain biological insight, and they are sure to gather more interest as the need to make sense of the increasing amount of acquired imaging data grows more pressing. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
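
    A purely digital analogue of the optical principle may help: taking a logarithm turns multiplicative noise into additive noise, which a linear Fourier-plane filter can then suppress before exponentiating back. The Gaussian low-pass and the gamma-distributed speckle below are illustrative assumptions, not the film's actual response.

    ```python
    import numpy as np

    def log_fourier_filter(image, cutoff=0.1):
        log_img = np.log(np.clip(image, 1e-6, None))      # multiplicative -> additive noise
        spectrum = np.fft.fftshift(np.fft.fft2(log_img))
        h, w = image.shape
        yy, xx = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
        lowpass = np.exp(-(xx**2 + yy**2) / (2 * (cutoff * min(h, w)) ** 2))
        filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass)).real
        return np.exp(filtered)                           # back to the intensity domain

    rng = np.random.default_rng(0)
    clean = np.ones((128, 128))
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative noise
    recovered = log_fourier_filter(speckled)
    ```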

  16. High-resolution CISS MR imaging with and without contrast for evaluation of the upper cranial nerves: segmental anatomy and selected pathologic conditions of the cisternal through extraforaminal segments.

    PubMed

    Blitz, Ari M; Macedo, Leonardo L; Chonka, Zachary D; Ilica, Ahmet T; Choudhri, Asim F; Gallia, Gary L; Aygun, Nafi

    2014-02-01

    The authors review the course and appearance of the major segments of the upper cranial nerves from their apparent origin at the brainstem through the proximal extraforaminal region, focusing on the imaging and anatomic features of particular relevance to high-resolution magnetic resonance imaging evaluation. Selected pathologic entities are included in the discussion of the corresponding cranial nerve segments for illustrative purposes. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Imaging experiment: The Viking Lander

    USGS Publications Warehouse

    Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Morris, E.C.; Sagan, C.; Young, A.T.

    1972-01-01

    The Viking Lander Imaging System will consist of two identical facsimile cameras. Each camera has a high-resolution mode with an instantaneous field of view of 0.04°, and survey and color modes with instantaneous fields of view of 0.12°. Cameras are positioned one meter apart to provide stereoscopic coverage of the near-field. The Imaging Experiment will provide important information about the morphology, composition, and origin of the Martian surface and atmospheric features. In addition, lander pictures will provide supporting information for other experiments in biology, organic chemistry, meteorology, and physical properties. © 1972.

  18. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

    In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information in the image, with the advantage of less computation than traditional methods such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. Next, we formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in face recognition, gender estimation and facial expression applications.

  19. Computer-aided diagnostic method for classification of Alzheimer's disease with atrophic image features on MR images

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Yoshiura, Takashi; Kumazawa, Seiji; Tanaka, Kazuhiro; Koga, Hiroshi; Mihara, Futoshi; Honda, Hiroshi; Sakai, Shuji; Toyofuku, Fukai; Higashida, Yoshiharu

    2008-03-01

    Our goal in this study was to develop a computer-aided diagnostic (CAD) method for classification of Alzheimer's disease (AD) using atrophic image features derived from specific anatomical regions in three-dimensional (3-D) T1-weighted magnetic resonance (MR) images. In this study, the specific regions related to the cerebral atrophy of AD were the white matter, gray matter, and CSF regions. Cerebral cortical gray matter regions were determined by extracting the brain and white matter regions with a level set based method, whose speed function depended on gradient vectors in the original image and pixel values in the grown regions. The CSF regions in the cerebral sulci and lateral ventricles were extracted by wrapping the brain tightly with a zero level set determined from a level set function. Volumes of the specific regions and the cortical thickness were used as atrophic image features. Average cortical thickness was calculated in 32 subregions, which were obtained by dividing each brain region. Finally, AD patients were classified using a support vector machine, which was trained on the image features of AD and non-AD cases. We applied our CAD method to MR images of whole brains obtained from 29 clinically diagnosed AD cases and 25 non-AD cases. As a result, the area under the receiver operating characteristic (ROC) curve obtained by our computerized method was 0.901, based on a leave-one-out test for identification of AD cases among 54 cases including 8 AD patients at early stages. The accuracy for discrimination between the 29 AD patients and 25 non-AD subjects was 0.840, determined at the point where the sensitivity equals the specificity on the ROC curve. These results show that our CAD method based on atrophic image features may be promising for detecting AD patients using 3-D MR images.
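
    The classification and evaluation stage can be sketched as follows (placeholder features, not the study's data): a support vector machine is scored with a leave-one-out test and the ROC AUC, mirroring the 0.901 figure reported above.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(54, 35))    # region volumes + 32 regional cortical thicknesses (placeholders)
    y = rng.integers(0, 2, size=54)  # 1 = AD, 0 = non-AD (placeholder labels)

    svm = SVC(kernel="linear", probability=True)
    scores = cross_val_predict(svm, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("Leave-one-out ROC AUC:", roc_auc_score(y, scores))
    ```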

  20. Tharsis Limb Cloud

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Annotated image of Tharsis Limb Cloud

    7 September 2005 This composite of red and blue Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) daily global images acquired on 6 July 2005 shows an isolated water ice cloud extending more than 30 kilometers (more than 18 miles) above the martian surface. Clouds such as this are common in late spring over the terrain located southwest of the Arsia Mons volcano. Arsia Mons is the dark, oval feature near the limb, just to the left of the 'T' in the 'Tharsis Montes' label. The dark, nearly circular feature above the 'S' in 'Tharsis' is the volcano, Pavonis Mons, and the other dark circular feature, above and to the right of 's' in 'Montes,' is Ascraeus Mons. Illumination is from the left/lower left.

    Season: Northern Autumn/Southern Spring

  1. Global Interior Robot Localisation by a Colour Content Image Retrieval System

    NASA Astrophysics Data System (ADS)

    Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben

    2007-12-01

    We propose a new global localisation approach to determine the coarse position of a mobile robot in a structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour palette combining both spatial and neighbourhood information and the colourimetric aspect of the original image. We devise several retrieval approaches leading to a specific similarity measure (equation available in the full text) that integrates the spatial organisation of colours in the palette. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image. The distance measure, in turn, provides partial invariance to translation, small viewpoint changes, and scale. In addition, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves system performance. Results are compared with those obtained from colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images; a finalised system would obviously need to integrate other types of signature, such as shape and texture.

  2. Fusion of Geophysical Images in the Study of Archaeological Sites

    NASA Astrophysics Data System (ADS)

    Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.

    2011-12-01

    This paper presents results from different fusion techniques applied to geophysical images from different modalities in order to combine them into one image with higher information content than either of the two original images independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece), near the ancient theater of the city of Maronia. Archaeological excavations revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probable existence of buried urban structures. In order to accurately locate and map the latter, geophysical measurements were performed using the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We applied a semi-stochastic, pixel-based registration method to the geophysical images in order to fine-register them by correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three different fusion techniques: fusion with mean values; fusion with wavelets, enhancing selected frequency bands; and fusion with curvelets, giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than each of the original geophysical images separately. Comparison of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, gives the best fused image. Clear linear and ellipsoidal features corresponding to potential archaeological relics appear in the resultant image.
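
    A minimal wavelet-fusion sketch in the spirit of the second approach above is shown below (PyWavelets): approximation coefficients of the two registered maps are averaged and detail coefficients are taken from whichever image has the larger magnitude, emphasising the stronger local structure. The wavelet, decomposition level and random stand-in maps are arbitrary assumptions, and the curvelet variant is not reproduced here.

    ```python
    import numpy as np
    import pywt

    def fuse_wavelet(img_a, img_b, wavelet="db2", level=2):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                        # mean of approximation bands
        for da, db in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))       # max-abs rule for detail bands
        return pywt.waverec2(fused, wavelet)

    magnetic = np.random.default_rng(0).random((128, 128))     # stand-in magnetic gradient map
    resistivity = np.random.default_rng(1).random((128, 128))  # stand-in apparent resistivity map
    fused_map = fuse_wavelet(magnetic, resistivity)
    ```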

  3. Intraventricular meningiomas: a clinicopathological study and review.

    PubMed

    Bhatoe, Harjinder S; Singh, Prakash; Dutta, Vibha

    2006-03-15

    Intraventricular meningiomas are rare tumors. The origin of these tumors can be traced to embryological invagination of arachnoid cells into the choroid plexus. The authors analyzed data that they had collected to study the clinicopathological aspects and review the origin, presentation, imaging, and management of these tumors. In this retrospective analysis, the authors describe the cases of 12 patients who had received a diagnosis of intraventricular meningioma and underwent surgery for the tumors. Nine of these patients were men and three were women. Features of neurofibromatosis Type 2 were present in two of the women. Nine of the tumors were located in the lateral ventricles, one was in the third ventricle, and two were in the fourth ventricle. Raised intracranial pressure (ICP) was the universal presentation in all the patients, and the preoperative diagnoses were confirmed on neuroimaging studies. Excision was performed using the parietooccipital (trigonal) approach for lateral ventricle tumors, the transcortical-transventricular route for the third ventricle tumor, and suboccipital craniectomy for fourth ventricle tumors. Postoperatively, one patient died and the others experienced resolution of their symptoms. Histopathological features of these tumors were similar to those seen in meningiomas in other locations. Intraventricular meningiomas are slow-growing tumors that become large prior to detection. Although they are commonly seen in the lateral ventricles, they occur in the third and fourth ventricles as well. Presentation is in the form of raised ICP with no localizing features; therefore the diagnosis is based on imaging studies. Hydrocephalus occurs due to obstruction of cerebrospinal fluid pathways. Excision requires planning to avoid eloquent cortex incision. The histopathological features are varied, although most of the tumors in the study were angiomatous meningiomas. These tumors are no different histologically from tumors that are dural in origin. No recurrence has been reported.

  4. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

    This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features are input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known, over all hold-out accuracies on these datasets when only using imaging data-exceeding previously-published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.

  5. Biometrics encryption combining palmprint with two-layer error correction codes

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprints with two-layer error correction codes is proposed. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors; the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation. The resulting information is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
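
    The XOR binding and recovery can be illustrated with a toy sketch in which a simple repetition code stands in for the paper's convolutional + cyclic two-layer code, purely to show how a few palmprint bit errors are absorbed during key recovery; all sizes and bit strings are hypothetical.

    ```python
    import numpy as np

    def encode(key_bits, rep=3):                 # stand-in error-correction encoding
        return np.repeat(key_bits, rep)

    def decode(code_bits, rep=3):                # majority-vote decoding
        return (code_bits.reshape(-1, rep).sum(axis=1) > rep // 2).astype(np.uint8)

    rng = np.random.default_rng(0)
    key = rng.integers(0, 2, 32, dtype=np.uint8)                 # randomly generated key
    enroll_bits = rng.integers(0, 2, 96, dtype=np.uint8)         # palmprint bits at enrolment

    stored = np.bitwise_xor(encode(key), enroll_bits)            # helper data kept on the smart card

    query_bits = enroll_bits.copy()
    query_bits[[0, 10, 20, 40, 80]] ^= 1                         # a few bit errors at query time
    recovered = decode(np.bitwise_xor(stored, query_bits))       # XOR then decode
    print("key recovered:", np.array_equal(recovered, key))
    ```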

  6. Influence of skin ageing features on Chinese women's perception of facial age and attractiveness.

    PubMed

    Porcheron, A; Latreille, J; Jdid, R; Tschachler, E; Morizot, F

    2014-08-01

    Ageing leads to characteristic changes in the appearance of facial skin. Among these changes, we can distinguish the skin topographic cues (skin sagging and wrinkles), the dark spots and the dark circles around the eyes. Although skin changes are similar in Caucasian and Chinese faces, the age of occurrence and the severity of age-related features differ between the two populations. Little is known about how the ageing of skin influences the perception of female faces in Chinese women. The aim of this study is to evaluate the contribution of the different age-related skin features to the perception of age and attractiveness in Chinese women. Facial images of Caucasian women and Chinese women in their 60s were manipulated separately to reduce the following skin features: (i) skin sagging and wrinkles, (ii) dark spots and (iii) dark circles. Finally, all signs were reduced simultaneously (iv). Female Chinese participants were asked to estimate the age difference between the modified and original images and evaluate the attractiveness of modified and original faces. Chinese women perceived the Chinese faces as younger after the manipulation of dark spots than after the reduction in wrinkles/sagging, whereas they perceived the Caucasian faces as the youngest after the manipulation of wrinkles/sagging. Interestingly, Chinese women evaluated faces with reduced dark spots as being the most attractive whatever the origin of the face. The manipulation of dark circles contributed to making Caucasian and Chinese faces being perceived younger and more attractive than the original faces, although the effect was less pronounced than for the two other types of manipulation. This is the first study to have examined the influence of various age-related skin features on the facial age and attractiveness perception of Chinese women. The results highlight different contributions of dark spots, sagging/wrinkles and dark circles to their perception of Chinese and Caucasian faces. © 2014 The Authors. International Journal of Cosmetic Science published by John Wiley & Sons Ltd on behalf of Society of Cosmetic Scientists and Societe Francaise de Cosmetologie.

  7. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This key point detection approach requires little computation and has high positioning accuracy and strong anti-noise ability; (2) PCA-SIFT is utilized to describe the key points. A 128-dimensional vector is formed based on the SIFT method for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional feature vector. The key points were then re-described by these dimension-reduced vectors. After PCA, the descriptor was reduced from the original 128 dimensions to 20. This lowers the dimensionality of the approximate nearest-neighbour search and thereby increases overall speed; (3) The distance ratio between the nearest and second-nearest neighbours is used as the criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs; and (4) An affine transformation model is introduced to correct the coordinate difference between the real-time image and the reference image, which completes the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and ability to overcome rotation. Results show the effectiveness of the approach.
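
    The PCA reduction and distance-ratio test in steps (2) and (3) can be sketched as follows; this is an illustrative example with random stand-in descriptors, not the paper's implementation, and the 0.8 ratio threshold is an assumed value.

        import numpy as np

        rng = np.random.default_rng(1)
        desc_ref = rng.normal(size=(500, 128)).astype(np.float32)                         # stand-in SIFT descriptors (reference image)
        desc_tgt = desc_ref[:200] + 0.05 * rng.normal(size=(200, 128)).astype(np.float32)  # noisy copies (real-time image)

        # PCA: project onto the top-20 eigenvectors of the reference descriptor set.
        mean = desc_ref.mean(axis=0)
        _, _, vt = np.linalg.svd(desc_ref - mean, full_matrices=False)
        basis = vt[:20].T                                          # 128 x 20 projection matrix
        low_ref = (desc_ref - mean) @ basis
        low_tgt = (desc_tgt - mean) @ basis

        # Distance-ratio test: accept a match only if the nearest neighbour is clearly closer than the second nearest.
        matches = []
        for i, d in enumerate(low_tgt):
            dists = np.linalg.norm(low_ref - d, axis=1)
            j1, j2 = np.argsort(dists)[:2]
            if dists[j1] < 0.8 * dists[j2]:
                matches.append((i, j1))
        print(f"{len(matches)} putative matches kept")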

  8. Anticounterfeiting features of artistic screening

    NASA Astrophysics Data System (ADS)

    Ostromoukhov, Victor; Rudaz, Nicolas; Amidror, Isaac; Emmel, Patrick; Hersch, Roger D.

    1996-12-01

    In a recent publication (Ostromoukhov95), a new image reproduction technique, artistic screening, was presented. It incorporates freely created artistic screen elements for generating halftones. Fixed predefined dot contours associated with given intensity levels determine the screen dot shape's growing behavior. Screen dot contours associated with each intensity level are obtained by interpolation between the fixed predefined dot contours. A user-defined mapping transforms screen elements from screen element definition space to screen element rendition space. This mapping can be tuned to produce various effects such as dilatations, contractions and non-linear deformations of the screen element grid. Although artistic screening was designed mainly for creating graphic designs of high artistic quality, it also incorporates several important anti-counterfeiting features. For example, bank notes or other valuable printed matter produced with artistic screening may incorporate both full-size and microscopic letters of varying shape into the image halftoning process. Furthermore, artistic screening can be used for generating screen dots at varying frequencies and orientations, which are well known for inducing strong moire effects when scanned by a digital color copier or a desktop scanner. However, it is less known that frequency-modulated screen dots have at each screen element size a different reproduction behavior (dot gain). When trying to reproduce an original by analog means, such as a photocopier, the variations in dot gain induce strong intensity variations at the same original intensity levels. In this paper, we present a method for compensating such variations for the target printer, on which the original security document is to be printed. Potential counterfeiters who would like to reproduce the original with a photocopying device may only be able to adjust the dot gain for the whole image and will therefore be unable to eliminate the undesired intensity variations produced by variable frequency screen elements.

  9. The effect of defect cluster size and interpolation on radiographic image quality

    NASA Astrophysics Data System (ADS)

    Töpfer, Karin; Yip, Kwok L.

    2011-03-01

    For digital X-ray detectors, the need to control factory yield and cost invariably leads to the presence of some defective pixels. Recently, a standard procedure was developed to identify such pixels for industrial applications. However, no quality standards exist in medical or industrial imaging regarding the maximum allowable number and size of detector defects. While the answer may be application specific, the minimum requirement for any defect specification is that the diagnostic quality of the images be maintained. A more stringent criterion is to keep any changes in the images due to defects below the visual threshold. Two highly sensitive image simulation and evaluation methods were employed to specify the fraction of allowable defects as a function of defect cluster size in general radiography. First, the most critical situation of the defect being located in the center of the disease feature was explored using image simulation tools and a previously verified human observer model, incorporating a channelized Hotelling observer. Detectability index d' was obtained as a function of defect cluster size for three different disease features on clinical lung and extremity backgrounds. Second, four concentrations of defects of four different sizes were added to clinical images with subtle disease features and then interpolated. Twenty observers evaluated the images against the original on a single display using a 2-AFC method, which was highly sensitive to small changes in image detail. Based on a 50% just-noticeable difference, the fraction of allowed defects was specified vs. cluster size.

  10. Visible and Near-Infrared Spectroscopy of Hephaestus Fossae Cratered Cones, Mars

    NASA Astrophysics Data System (ADS)

    Dapremont, A.; Wray, J. J.

    2017-12-01

    Hephaestus Fossae are a system of sub-parallel fractures on Mars (> 500 km long) interpreted as near-surface tensional cracks [1]. Images of the Martian surface from the High Resolution Imaging Science Experiment have revealed cratered cones within the Hephaestus Fossae region. A volcanic origin (cinder/tuff cones) has been proposed for these features based on morphometric measurements and fine-scale surface characteristics [2]. In an effort to further constrain the origin of these cones as the products of igneous or sedimentary volcanism, we use data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). We take advantage of CRISM's S (0.4 - 1.0 microns) and L (1.0 - 3.9 microns) detector wavelength ranges to investigate the presence or absence of spectral signatures consistent with previous identifications of igneous and mud volcanism products on Mars [3,4]. Hephaestus Fossae cratered cone rims exhibit a consistent nanophase ferric oxide signature. We also identify ferrous phases and 3-micron absorptions (attributed to fundamental vibrational stretch frequencies in H2O) on the crater rims of several cones. Mafic signatures on cratered cone rims support an igneous provenance for these features. The 3-micron absorptions are consistent with the presence of structurally bound or adsorbed water. Our CRISM observations are similar to those of small edifice features in Chryse Planitia, which were interpreted as mud volcanism products based on their enrichment of nanophase ferric minerals and 3-micron absorptions on summit crater rims [3]. Hydrothermal activity was invoked for a Coprates Chasma pitted cone (scoria/tuff cone) based on CRISM identification of partially dehydrated opaline silica, which we do not observe in Hephaestus Fossae [4]. Our spectral observations are more consistent with mud volcanism, but we do not definitively rule out an igneous volcanic origin for the cones in our study region. We demonstrate that VNIR spectroscopy is a valuable tool in developing criteria to determine the origin (igneous/sedimentary/periglacial) of cone features on Mars. [1] Skinner and Tanaka (2007) Icarus 186: 41-59. [2] Dundas et al (2007) LPSC XXXVIII Abs #2116. [3] Komatsu et al (2016) Icarus 268: 56-75. [4] Brož et al (2017) Earth and Planetary Sci Letters 473: 122-130.

  11. Engineering Bioluminescent Proteins: Expanding their Analytical Potential

    PubMed Central

    Rowe, Laura; Dikici, Emre; Daunert, Sylvia

    2009-01-01

    Synopsis Bioluminescence has been observed in nature since the dawn of time, but now, scientists are harnessing it for analytical applications. Laura Rowe, Emre Dikici, and Sylvia Daunert of the University of Kentucky describe the origins of bioluminescent proteins and explore their uses in the modern chemistry laboratory. The cover features spectra of bioluminescent light superimposed on an image of jellyfish, which are a common source of bioluminescent proteins. Images courtesy of Emre Dikici and Shutterstock. PMID:19725502

  12. Online Citizen Science with Clickworkers & MRO HiRISE E/PO

    NASA Astrophysics Data System (ADS)

    Gulick, V. C.; Deardorff, G.; Kanefsky, B.; HiRISE Science Team

    2010-12-01

    The High-Resolution Imaging Science Experiment’s E/PO has fielded several online citizen science projects. Our efforts are guided by HiRISE E/PO’s philosophy of providing innovative opportunities for students and the public to participate in the scientific discovery process. HiRISE Clickworkers, a follow-on to the original Clickworkers crater identification and size diameter marking website, provides an opportunity for the public to identify & mark over a dozen landform feature types in HiRISE images, including dunes, gullies, patterned ground, wind streaks, boulders, craters, layering, volcanoes, etc. In HiRISE Clickworkers, the contributor views several sample images showing variations of different landforms, and simply marks all the landform types they could spot while looking at a small portion of a HiRISE image. Contributors then submit their work & once validated by comparison to the output of other participants, results are then added to geologic feature databases. Scientists & others will eventually be able to query these databases for locations of particular geologic features in the HiRISE images. Participants can also mark other features that they find intriguing for the HiRISE camera to target. The original Clickworkers website pilot study ran from November 2000 until September 2001 (Kanefsky et al., 2001, LPSC XXXII). It was among the first online Citizen Science efforts for planetary science. In its pilot study, we endeavored to answer two questions: 1) Was the public willing & able to help science, & 2) Can the public produce scientifically useful results? Since its inception over 3,500,000 craters have been identified, & over 350,000 of these craters have been classified. Over 2 million of these craters were marked on Viking Orbiter image mosaics, nearly 800,000 craters were marked on Mars Orbiter Camera (MOC) images. Note that these are not counts of distinct craters. For example, each crater in the Viking orbiter images was counted by about 50 contributors. In HiRISE Clickworkers, over a dozen different geologic features have been identified on over 57,000 image tiles. A key objective of Clickworkers has been to break up work into manageable chunks so that people can contribute a few minutes at a time and their work all adds up. Our HiRISE Student Image Challenges (http://quest.nasa.gov), another online citizen science project, provide educators and students a virtual science team experience. Registered participants are given access to HiWeb, the HiRISE team’s image suggestion facility to submit their image suggestions. Once the images are returned, students browse, pan and zoom through their acquired images online and at full resolution before they are released to the public (http://marsoweb.nas.nasa.gov/HiRISE/quest/). Students fill out a report form summarizing the key results of their image analysis and with the help of a HiRISE team member write a caption for their image. The image is posted on the HiRISE image release website along with the caption, with credit to the suggesting class and school. HiWish, HiRISE’s public image suggestion website (see McEwen et al., this mtg.), provides a simpler interface for the public at large to submit HiRISE images. HiWish also provides a list of recently submitted image requests.

  13. Using cellular automata to generate image representation for biological sequences.

    PubMed

    Xiao, X; Shao, S; Ding, Y; Huang, Z; Chen, X; Chou, K-C

    2005-02-01

    A novel approach to visualize biological sequences is developed based on cellular automata (Wolfram, S. Nature 1984, 311, 419-424), a set of discrete dynamical systems in which space and time are discrete. By transforming the symbolic sequence codes into digital codes, and using some optimal space-time evolvement rules of cellular automata, a biological sequence can be represented by a unique image, the so-called cellular automata image. Many important features, which are originally hidden in a long and complicated biological sequence, can be clearly revealed through its cellular automata image. With the number of biological sequences entering databanks rapidly increasing in the post-genomic era, it is anticipated that the cellular automata image will become a very useful vehicle for investigation into their key features, identification of their function, as well as revelation of their "fingerprint". It is anticipated that by using the concept of the pseudo amino acid composition (Chou, K.C. Proteins: Structure, Function, and Genetics, 2001, 43, 246-255), the cellular automata image approach can also be used to improve the quality of predicting protein attributes, such as structural class and subcellular location.
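
    A toy sketch of the general idea follows; the 2-bit nucleotide coding and the elementary Rule 90 evolvement rule are illustrative assumptions, not the optimal rules chosen in the paper.

        import numpy as np

        seq = "ATGCGTACGTTAGCATGCCGTA"
        code = {"A": (0, 0), "T": (0, 1), "G": (1, 0), "C": (1, 1)}   # assumed 2-bit digital coding
        row = np.array([b for ch in seq for b in code[ch]], dtype=np.uint8)

        steps = 32
        image = np.zeros((steps, row.size), dtype=np.uint8)
        image[0] = row
        for t in range(1, steps):
            left = np.roll(image[t - 1], 1)
            right = np.roll(image[t - 1], -1)
            image[t] = left ^ right          # elementary Rule 90 as a stand-in evolvement rule

        print(image.shape)                   # each row is one time step of the cellular automata image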

  14. Surveying the Newly Digitized Apollo Metric Images for Highland Fault Scarps on the Moon

    NASA Astrophysics Data System (ADS)

    Williams, N. R.; Pritchard, M. E.; Bell, J. F.; Watters, T. R.; Robinson, M. S.; Lawrence, S.

    2009-12-01

    The presence and distribution of thrust faults on the Moon have major implications for lunar formation and thermal evolution. For example, thermal history models for the Moon imply that most of the lunar interior was initially hot. As the Moon cooled over time, some models predict global-scale thrust faults should form as stress builds from global thermal contraction. Large-scale thrust fault scarps with lengths of hundreds of kilometers and maximum relief of up to a kilometer or more, like those on Mercury, are not found on the Moon; however, relatively small-scale linear and curvilinear lobate scarps with maximum lengths typically around 10 km have been observed in the highlands [Binder and Gunga, Icarus, v63, 1985]. These small-scale scarps are interpreted to be thrust faults formed by contractional stresses with relatively small maximum (tens of meters) displacements on the faults. These narrow, low relief landforms could only be identified in the highest resolution Lunar Orbiter and Apollo Panoramic Camera images and under the most favorable lighting conditions. To date, the global distribution and other properties of lunar lobate faults are not well understood. The recent micron-resolution scanning and digitization of the Apollo Mapping Camera (Metric) photographic negatives [Lawrence et al., NLSI Conf. #1415, 2008; http://wms.lroc.asu.edu/apollo] provides a new dataset to search for potential scarps. We examined more than 100 digitized Metric Camera image scans, and from these identified 81 images with favorable lighting (incidence angles between about 55 and 80 deg.) to manually search for features that could be potential tectonic scarps. Previous surveys based on Panoramic Camera and Lunar Orbiter images found fewer than 100 lobate scarps in the highlands; in our Apollo Metric Camera image survey, we have found additional regions with one or more previously unidentified linear and curvilinear features on the lunar surface that may represent lobate thrust fault scarps. In this presentation we review the geologic characteristics and context of these newly-identified, potentially tectonic landforms. The lengths and relief of some of these linear and curvilinear features are consistent with previously identified lobate scarps. Most of these features are in the highlands, though a few occur along the edges of mare and/or crater ejecta deposits. In many cases the resolution of the Metric Camera frames (~10 m/pix) is not adequate to unequivocally determine the origin of these features. Thus, to assess if the newly identified features have tectonic or other origins, we are examining them in higher-resolution Panoramic Camera (currently being scanned) and Lunar Reconnaissance Orbiter Camera Narrow Angle Camera images [Watters et al., this meeting, 2009].

  15. The effects of alcohol intoxication on attention and memory for visual scenes.

    PubMed

    Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C

    2013-01-01

    This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.

  16. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.
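
    One step of such a pipeline, detecting high-curvature characteristic points (CPs) on a piece contour, might look like the following sketch; the k-cosine measure, the neighbourhood size and the angle threshold are illustrative assumptions, not the rotationally invariant detector used in the paper.

        import numpy as np

        def high_curvature_points(contour, k=5, angle_thresh_deg=100):
            """contour: (N, 2) array of ordered boundary points; returns indices of sharp corners."""
            n = len(contour)
            idx = []
            for i in range(n):
                a = contour[(i - k) % n] - contour[i]
                b = contour[(i + k) % n] - contour[i]
                cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
                angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
                if angle < angle_thresh_deg:       # sharp turn -> candidate characteristic point
                    idx.append(i)
            return idx

        # Example: a square contour should flag points at its four corners.
        square = np.array([(x, 0) for x in range(20)] +
                          [(19, y) for y in range(20)] +
                          [(19 - x, 19) for x in range(20)] +
                          [(0, 19 - y) for y in range(20)], dtype=float)
        print(high_curvature_points(square))       # indices clustered at the four corners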

  17. An iterative shrinkage approach to total-variation image restoration.

    PubMed

    Michailovich, Oleg V

    2011-05-01

    The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information--commonly referred to as simply priors--is essential for image restoration, rendering it stable and robust to noise. Moreover, using the priors makes the recovered images exhibit some plausible features of their original counterpart. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based upon the method of iterative shrinkage (aka iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method is that it works directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae. Finally, a number of standard examples of image deblurring are demonstrated, in which the proposed method can provide restoration results of superior quality as compared to the case of sparse-wavelet deconvolution.
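
    The generic iterative-shrinkage template the paper builds on (a linear gradient/filtering step followed by soft thresholding) can be illustrated as below; for brevity the sketch applies the template to a simple l1-regularised least-squares problem with synthetic data rather than to the TV functional itself.

        import numpy as np

        def soft_threshold(v, t):
            # Shrinkage operator: sign(v) * max(|v| - t, 0).
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        rng = np.random.default_rng(3)
        A = rng.normal(size=(60, 100))                 # stand-in degradation operator
        x_true = np.zeros(100)
        x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
        b = A @ x_true + 0.01 * rng.normal(size=60)

        lam = 0.1
        step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant of the gradient
        x = np.zeros(100)
        obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
        print("initial objective:", round(obj(x), 3))
        for _ in range(300):
            x = soft_threshold(x - step * A.T @ (A @ x - b), lam * step)   # filter (gradient step), then shrink
        print("final objective  :", round(obj(x), 3))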

  18. All About That Basin

    NASA Image and Video Library

    2015-02-25

    This mosaic of Caloris basin is an enhanced-color composite overlain on a monochrome mosaic featured in a previous post. The color mosaic is made up of WAC images obtained when both the spacecraft and the Sun were overhead, conditions best for discerning variations in albedo, or brightness. The monochrome mosaic is made up of WAC and NAC images obtained at off-vertical Sun angles (i.e., high incidence angles) and with visible shadows so as to reveal clearly the topographic form of geologic features. The combination of the two datasets allows the correlation of geologic features with their color properties. In portions of the scene, color differences from image to image are apparent. Ongoing calibration efforts by the MESSENGER team strive to minimize these differences. Caloris basin has been flooded by lavas that appear orange in this mosaic. Post-flooding craters have excavated material from beneath the surface. The larger of these craters have exposed low-reflectance material (blue in this mosaic) from beneath the surface lavas, likely giving a glimpse of the original basin floor material. Analysis of these craters yields an estimate of the thickness of the volcanic layer: 2.5-3.5 km (1.6-2.2 mi.). http://photojournal.jpl.nasa.gov/catalog/PIA19216

  19. TU-CD-BRB-12: Radiogenomics of MRI-Guided Prostate Cancer Biopsy Habitats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoyanova, R; Lynne, C; Abraham, S

    2015-06-15

    Purpose: Diagnostic prostate biopsies are subject to sampling bias. We hypothesize that quantitative imaging with multiparametric (MP)-MRI can more accurately direct targeted biopsies to index lesions associated with the highest-risk clinical and genomic features. Methods: Regionally distinct prostate habitats were delineated on MP-MRI (T2-weighted, perfusion and diffusion imaging). Directed biopsies were performed on 17 habitats from 6 patients using MRI-ultrasound fusion. Biopsy location was characterized with 52 radiographic features. Transcriptome-wide analysis of 1.4 million RNA probes was performed on RNA from each habitat. Genomic features with insignificant expression values (<0.25) and interquartile range <0.5 were filtered, leaving a total of 212 genes. Correlation between imaging features, genes and a 22-feature genomic classifier (GC), developed as a prognostic assay for metastasis after radical prostatectomy, was investigated. Results: High-quality genomic data was derived from 17 (100%) biopsies. Using the 212 'unbiased' genes, the samples clustered by patient origin in unsupervised analysis. When only prostate cancer-related genomic features were used, hierarchical clustering revealed samples clustered by needle-biopsy Gleason score (GS). Similarly, principal component analysis of the imaging features found that the primary source of variance segregated the samples into high (≥7) and low (6) GS. Pearson's correlation analysis of genes with significant expression showed two main patterns of gene expression clustering prostate peripheral and transitional zone MRI features. Two-way hierarchical clustering of GC with radiomics features resulted in the expected groupings of high and low expressed genes in this metastasis signature. Conclusions: MP-MRI-targeted diagnostic biopsies can potentially improve risk stratification by directing pathological and genomic analysis to clinically significant index lesions. As determinant lesions are more reliably identified, targeting with radiotherapy should improve outcome. This is the first demonstration of a link between quantitative imaging features (radiomics) and genomic features in MRI-directed prostate biopsies. The research was supported by NIH-NCI R01 CA 189295 and R01 CA 189295; E Davicioni is partial owner of GenomeDx Biosciences, Inc. M Takhar, N Erho, L Lam, C Buerki and E Davicioni are current employees at GenomeDx Biosciences, Inc.

  20. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and then extracts discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.
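
    A minimal sketch of the sub-pattern idea, partitioning a face image into non-overlapping sub-images so that local features can be extracted per block; the image size and the 4 x 4 grid are assumptions chosen for illustration, not the partitioning used in the paper.

        import numpy as np

        face = np.arange(112 * 92).reshape(112, 92).astype(float)   # stand-in face image
        rows, cols = 4, 4                                            # assumed sub-pattern grid
        h, w = face.shape[0] // rows, face.shape[1] // cols

        sub_images = [face[i * h:(i + 1) * h, j * w:(j + 1) * w]
                      for i in range(rows) for j in range(cols)]
        print(len(sub_images), "sub-images of shape", sub_images[0].shape)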

  1. Applications of iQID cameras

    NASA Astrophysics Data System (ADS)

    Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2017-09-01

    iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging for preclinical, clinical and homeland security settings. This paper presents an overview of the recent advances of iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.

  2. Stick-Shape, Rice-Size Features on Martian Rock "Haroldswick"

    NASA Image and Video Library

    2018-02-08

    The dark, stick-shaped features clustered on this Martian rock are about the size of grains of rice. This is a focus-merged view from the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover. It covers an area about 2 inches (5 centimeters) across. The focus-merged product was generated autonomously by MAHLI combining the in-focus portions of a few separate images taken at different focus settings on Jan. 1, 2018, during the 1,922nd Martian day, or sol, of Curiosity's work on Mars. This rock target, called "Haroldswick," is near the southern, uphill edge of "Vera Rubin Ridge" on lower Mount Sharp. The origin of the stick-shaped features is uncertain. One possibility is that they are erosion-resistant bits of dark material from mineral veins cutting through rocks in this area. https://photojournal.jpl.nasa.gov/catalog/PIA22213

  3. RecceMan: an interactive recognition assistance for image-based reconnaissance: synergistic effects of human perception and computational methods for object recognition, identification, and infrastructure analysis

    NASA Astrophysics Data System (ADS)

    El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno

    2015-10-01

    This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of one single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in imaging reconnaissance. Currently, there are no high-potential ATR (automatic target recognition) applications available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures. The infrastructure analysis mode pursues the goal of analyzing the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers the analyst the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains like ships or land vehicles. Each domain has its own feature tree developed by aerial image analyst experts. By selecting the corresponding features, the possible solution set of objects is automatically reduced and matches only the objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features like the width and length of the object. This step makes it possible to automatically reduce the possible object types offered to the image analyst by the interactive recognition assistance system.

  4. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is offset by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, making it difficult to accurately estimate the mobile platform's position. Consequently, acquired data products' positioning quality is considerably diminished. This issue has been widely addressed in the literature and research projects. However, consistently achieving sub-decimetre accuracy, as well as correcting errors in height, remains unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition to that, the diminished exterior orientation parameters of the MM platform will be utilised as they enable the application of accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with a selective keypoint detection and template matching. However, because the images originate from different sensor systems, difficulties arise with respect to changes in illumination, radiometry and the different original perspectives. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability of putative matches and the utilisation of templates instead of feature descriptors. In our experiments discussed in this paper, typical urban scenes have been used for evaluating the proposed method. Even though no additional outlier removal techniques have been used, our method yields almost 90% correct correspondences. However, repetitive image patterns may still induce ambiguities which cannot be fully averted by this technique. Hence, possible advancements will be briefly presented.
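
    The windowed template matching described here can be sketched with OpenCV as follows; the synthetic images, the search-window size and the patch size are assumptions standing in for the ortho-projected MM patch and the aerial nadir image.

        import numpy as np
        import cv2

        rng = np.random.default_rng(4)
        aerial = rng.random((400, 400)).astype(np.float32)        # stand-in aerial ortho image
        true_xy = (210, 160)                                      # (col, row) of the MM patch in the aerial image
        template = aerial[true_xy[1]:true_xy[1] + 41, true_xy[0]:true_xy[0] + 41].copy()
        template += 0.05 * rng.random(template.shape).astype(np.float32)   # mimic radiometric differences

        # Approximate position from the (diminished) MM orientation defines a +/- 50 px search window.
        approx_xy = (200, 150)
        r0, r1 = approx_xy[1] - 50, approx_xy[1] + 50 + 41
        c0, c1 = approx_xy[0] - 50, approx_xy[0] + 50 + 41
        window = np.ascontiguousarray(aerial[r0:r1, c0:c1])

        score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(score)
        print("match score:", round(best, 3), "matched (col, row):", (c0 + loc[0], r0 + loc[1]))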

  5. Probability mapping of scarred myocardium using texture and intensity features in CMR images

    PubMed Central

    2013-01-01

    Background: The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, the Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods: In this paper, we propose a probability mapping technique using Texture and Intensity features to describe the heterogeneous nature of the scarred myocardium in CMR images after MI. Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results: In the present study, we demonstrate the use of two different types of features: one based on the mean pixel intensity and the other on the underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean pixel intensity and the underlying texture information are presented. We hypothesize that the probability mapping of the myocardium offers an alternate visualization, possibly showing details of physiological significance that are difficult to detect visually in the original CMR image. Conclusion: The probability mapping obtained from the two features provides a way to define different cardiac segments, which offers a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
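
    A minimal sketch of such a Bayes-rule probability map using a single intensity feature; the Gaussian class-conditional parameters and the prior are made-up values that would in practice be estimated from scarred and non-scarred training regions.

        import numpy as np
        from scipy.stats import norm

        # Assumed training statistics (LG-enhanced scar is brighter than healthy myocardium).
        mu_scar, sd_scar = 180.0, 25.0
        mu_healthy, sd_healthy = 90.0, 20.0
        prior_scar = 0.3                                    # assumed prevalence of scar pixels

        def scar_probability(intensity):
            """P(scar | intensity) via Bayes rule with two Gaussian likelihoods."""
            l_scar = norm.pdf(intensity, mu_scar, sd_scar) * prior_scar
            l_healthy = norm.pdf(intensity, mu_healthy, sd_healthy) * (1.0 - prior_scar)
            return l_scar / (l_scar + l_healthy)

        myocardium = np.array([[85, 95, 160], [100, 175, 200]], dtype=float)   # toy intensity patch
        print(np.round(scar_probability(myocardium), 2))    # values near 1 mark likely scar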

  6. Unusual scarring patterns on cardiac magnetic resonance imaging: A potentially treatable etiology not to be missed.

    PubMed

    Jordan, Andrew; Lyne, Jonathan; Wong, Tom

    2010-04-01

    A case of cardiomyopathy and ventricular tachycardia previously assumed to be idiopathic in origin is described. Investigation with cardiac magnetic resonance imaging prompted the diagnosis and successful treatment of an underlying disorder based on typical scarring patterns seen with late gadolinium enhancement. The present report suggests that clinicians should have a low threshold for actively excluding this condition in patients presenting with cardiomyopathy, even in the absence of other disease features, particularly if typical scarring patterns are found on cardiac magnetic resonance imaging because disease-specific therapy appears to significantly improve both symptoms and prognosis.

  7. Bas-relief generation using adaptive histogram equalization.

    PubMed

    Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C

    2009-01-01

    An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to use gradient weights also to enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
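
    The core operation can be sketched with scikit-image as below; this applies plain CLAHE to a synthetic height field, without the paper's gradient weighting or multi-scale averaging, and the kernel size, clip limit and target height range are assumed values.

        import numpy as np
        from skimage import exposure

        # Synthetic height field standing in for a projected mesh model.
        y, x = np.mgrid[0:256, 0:256]
        height = np.sin(x / 20.0) + 0.002 * y ** 1.5           # large-scale trend plus smaller features

        h = (height - height.min()) / (np.ptp(height) + 1e-12)  # normalise to [0, 1] for CLAHE
        compressed = exposure.equalize_adapthist(h, kernel_size=64, clip_limit=0.02)

        target_range = 5.0                                       # user-chosen bas-relief height range
        bas_relief = compressed * target_range
        print("input range:", round(np.ptp(height), 2), "-> output range:", round(np.ptp(bas_relief), 2))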

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, K; O’Dwyer, R; Bradford, T

    Purpose: To reduce differences in features calculated from MRI brain scans acquired at different field strengths with or without Gadolinium contrast. Methods: Brain scans were processed for 111 epilepsy patients to extract hippocampus and thalamus features. Scans were acquired on 1.5 T scanners with Gadolinium contrast (group A), 1.5 T scanners without Gd (group B), and 3.0 T scanners without Gd (group C). A total of 72 features were extracted. Features were extracted from original scans and from scans where the image pixel values were rescaled to the mean of the hippocampi and thalami values. For each data set, cluster analysis was performed on the raw feature set and for feature sets with normalization (conversion to Z scores). Two methods of normalization were used: the first was over all values of a given feature, and the second by normalizing within the patient group membership. The clustering software was configured to produce 3 clusters. Group fractions in each cluster were calculated. Results: For features calculated from both the non-rescaled and rescaled data, cluster membership was identical for both the non-normalized and normalized data sets. Cluster 1 was composed entirely of Group A data, Cluster 2 contained data from all three groups, and Cluster 3 contained data from only groups 1 and 2. For the categorically normalized data sets there was a more uniform distribution of group data in the three clusters. A less pronounced effect was seen in the rescaled image data features. Conclusion: Image rescaling and feature renormalization can have a significant effect on the results of clustering analysis. These effects are also likely to influence the results of supervised machine learning algorithms. It may be possible to partly remove the influence of scanner field strength and the presence of Gadolinium-based contrast in feature extraction for radiomics applications.
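
    A hedged sketch of the z-scoring plus clustering workflow using scikit-learn; the feature matrix and the group-dependent offset are synthetic stand-ins for the hippocampus/thalamus features and the acquisition effect discussed above (per-group normalization is omitted for brevity).

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        # Stand-in feature matrix: 111 patients x 72 features, with an offset added to one
        # group to mimic the field-strength / contrast effect discussed above.
        features = rng.normal(size=(111, 72))
        group = np.repeat([0, 1, 2], [40, 40, 31])
        features[group == 0] += 1.5                      # e.g. 1.5 T + Gd scans shifted in feature space

        for name, X in [("raw", features),
                        ("z-scored overall", StandardScaler().fit_transform(features))]:
            labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
            # Fraction of group-0 patients per cluster shows how strongly acquisition drives clustering.
            frac = [np.mean(group[labels == k] == 0) for k in range(3)]
            print(name, "group-0 fraction per cluster:", np.round(frac, 2))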

  9. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pairs. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the temporary memory requirement is decreased. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.

  10. Imaging of cerebellopontine angle lesions: an update. Part 1: enhancing extra-axial lesions.

    PubMed

    Bonneville, Fabrice; Savatovsky, Julien; Chiras, Jacques

    2007-10-01

    Computed tomography (CT) and magnetic resonance (MR) imaging reliably demonstrate typical features of vestibular schwannomas or meningiomas in the vast majority of mass lesions in the cerebellopontine angle (CPA). However, a large variety of unusual lesions can also be encountered in the CPA. Covering the entire spectrum of lesions potentially found in the CPA, these articles explain the pertinent neuroimaging features that radiologists need to know to make clinically relevant diagnoses in these cases, including data from diffusion and perfusion-weighted imaging or MR spectroscopy, when available. A diagnostic algorithm based on the lesion's site of origin, shape and margins, density, signal intensity and contrast material uptake is also proposed. Part 1 describes the different enhancing extra-axial CPA masses primarily arising from the cerebellopontine cistern and its contents, including vestibular and non-vestibular schwannomas, meningioma, metastasis, aneurysm, tuberculosis and other miscellaneous meningeal lesions.

  11. Nonpuerperal mastitis and subareolar abscess of the breast.

    PubMed

    Kasales, Claudia J; Han, Bing; Smith, J Stanley; Chetlen, Alison L; Kaneda, Heather J; Shereef, Serene

    2014-02-01

    The purpose of this article is to show radiologists how to readily recognize nonpuerperal subareolar abscess and its complications in order to help reduce the time to definitive therapy and improve patient care. To achieve this purpose, the various theories of pathogenesis and the associated histopathologic features are reviewed; the typical clinical characteristics are detailed in contrast to those seen in lactational abscess and inflammatory breast cancer; the common imaging findings are described with emphasis on the sonographic features; correlative pathologic findings are presented to reinforce the imaging findings as they pertain to disease origins; and the various treatment options are reviewed. Nonpuerperal subareolar mastitis and abscess is a benign breast entity often associated with prolonged morbidity. Through better understanding of the underlying disease process the imaging, physical, and clinical findings of this rare process can be more readily recognized and treatment options expedited, improving patient care.

  12. Study on Remote Sensing Image Characteristics of Ecological Land: Case Study of Original Ecological Land in the Yellow River Delta

    NASA Astrophysics Data System (ADS)

    An, G. Q.

    2018-04-01

    Taking the Yellow River Delta as an example, this paper studies the remote sensing image characteristics of land use types with dominant ecological functions, compares the advantages and disadvantages of different images for interpreting ecological land, and uses the results to analyse the changing trend of ecological land in the study area over the past 30 years. The main methods include multi-period images from different sensors, spectral curves from different seasons, the vegetation index, GIS and data analysis. The results show that the main ecological land in the Yellow River Delta includes coastal beaches, saline-alkali land and water bodies. These lands have relatively distinct spectral and texture features. The spectral features of coastal beaches show absorption in the green band and reflection in the red band; this feature is little affected by acquisition year, season and sensor type. For saline-alkali land, owing to the influence of saline-alkali-tolerant plants such as alkali tent and Tamarix, the spectral characteristics show certain seasonal changes: the NDVI in winter and spring is lower than in summer and autumn. The reflectance of a water body generally decreases rapidly with increasing wavelength, and the reflectance in the red band increases with increasing sediment concentration. In conclusion, according to the spectral characteristics and image texture features of the ecological land in the Yellow River Delta, the accuracy of image interpretation of such ecological land can be improved.
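
    The vegetation-index step referred to above reduces to the standard NDVI formula; a small example with made-up red and near-infrared reflectance values in place of the actual imagery:

        import numpy as np

        red = np.array([[0.08, 0.10], [0.25, 0.30]])    # red-band reflectance (synthetic)
        nir = np.array([[0.40, 0.45], [0.28, 0.32]])    # near-infrared reflectance (synthetic)

        ndvi = (nir - red) / (nir + red + 1e-12)
        print(np.round(ndvi, 2))   # high values: vegetated saline-alkali land in summer; low values: water or bare beach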

  13. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network

    PubMed Central

    Zhang, Kai; Long, Erping; Cui, Jiangtao; Zhu, Mingmin; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni

    2017-01-01

    Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located in an automated manner in the original image using two successive applications of Canny edge detection and the Hough transform; the located ROIs are cropped, resized to a fixed size and used to form pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and implement automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted in the CNN, we investigate the features combined with a support vector machine (SVM) and a softmax classifier and compare these with the traditional representative methods. The qualitative and quantitative experimental results demonstrate that our proposed method offers exceptional mean accuracy, sensitivity and specificity: classification (97.07%, 97.28%, and 96.83%) and a three-degree grading area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%) and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed potential automatic diagnostic software for ophthalmologists and patients in clinical applications to implement the validated model. PMID:28306716
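
    The ROI-localisation step (Canny edge detection followed by a circular Hough transform) can be sketched with OpenCV as below; the synthetic image and the Hough parameters are illustrative assumptions rather than the values used in the paper.

        import numpy as np
        import cv2

        img = np.zeros((300, 300), dtype=np.uint8)
        cv2.circle(img, (160, 140), 60, 180, -1)                 # bright disc standing in for the lens
        img = cv2.GaussianBlur(img, (7, 7), 2)

        edges = cv2.Canny(img, 50, 150)                          # edge map (Hough re-runs Canny internally)
        print("edge pixels:", int((edges > 0).sum()))
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                   param1=150, param2=20, minRadius=30, maxRadius=100)

        if circles is not None:
            x, y, r = np.round(circles[0, 0]).astype(int)
            roi = img[max(y - r, 0):y + r, max(x - r, 0):x + r]  # crop the lens region of interest
            roi = cv2.resize(np.ascontiguousarray(roi), (224, 224))   # fixed size expected by the CNN
            print("lens ROI centre:", (x, y), "radius:", r, "resized:", roi.shape)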

  14. [Imaging origins and characteristics analysis of acute and chronic aspiration pneumonia].

    PubMed

    Wang, Kang; Li, Ming; Wang, Xiongbiao; Qin, Jianmin; Wang, Zhi; Zhao, Zehua; Qin, Le; Hua, Yanqing

    2014-11-11

    To discuss the pathologic and imaging origins and characteristics of CT scanning and X-ray radiography in acute and chronic aspiration pneumonia. Imaging data from 30 patients with aspiration pneumonia were retrospectively analyzed. CT scanning was performed in 27 patients, of whom 21 had PMVR reconstruction; 3 patients were examined by X-ray, 2 of them with esophagography. Opaque bodies were detected in the trachea by CT scanning in 12 patients. Seven patients in the acute phase rapidly developed acute respiratory distress syndrome (ARDS). CT signs of the 30 patients with acute and chronic aspiration pneumonia included: centrilobular nodules, detected in 2 cases in the acute phase, 4 cases in the subacute phase and 4 cases in the chronic phase; ground-glass opacity (GGO), detected in 9 cases in the acute phase, 2 cases in the subacute phase and 3 cases in the chronic phase; bronchiectasis, detected in 8 cases in the chronic phase, with mucus plugging detected in 3 of these 8 cases; atelectasis, detected in 6 cases in the chronic phase; sheet-like consolidation, detected in 5 cases in the chronic phase and 8 cases in the acute phase; and interstitial fibrosis, detected in 3 cases in the chronic phase. Lesions of the inferior lobe of the right lung were detected in 9 cases in the chronic phase, 4 cases in the subacute phase and 11 cases in the acute phase; lesions of the inferior lobe of the left lung were detected in 6 cases in the chronic phase, 3 cases in the subacute phase and 11 cases in the acute phase. The imaging features of acute and chronic aspiration pneumonia overlap, with GGO and centrilobular nodules appearing in every group, while atelectasis, bronchiectasis and mucus plugging are found in the chronic phase. Chest CT scanning may accurately evaluate the dynamic changes of aspiration pneumonia.

  15. Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.

    PubMed

    Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F

    2004-02-01

    Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.

  16. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
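
    The MST-based agglomeration can be sketched with SciPy as follows; the six polygons, their pairwise dissimilarities and the cut threshold are made-up values used only to show the grouping mechanism, not the criteria from the paper.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

        # Adjacency of 6 toy polygon partitions with pairwise dissimilarities.
        edges = {(0, 1): 0.1, (1, 2): 0.15, (2, 3): 0.9,   # polygons 0-2 similar, 3-5 similar,
                 (3, 4): 0.2, (4, 5): 0.1, (1, 4): 0.85}   # weak (dissimilar) links between the two groups
        n = 6
        W = np.zeros((n, n))
        for (i, j), w in edges.items():
            W[i, j] = w
        mst = minimum_spanning_tree(csr_matrix(W)).toarray()

        threshold = 0.5
        mst[mst > threshold] = 0.0                          # cut MST edges whose dissimilarity is too high
        _, labels = connected_components(csr_matrix(mst), directed=False)
        print("segment label per polygon:", labels)         # expected grouping: {0, 1, 2} and {3, 4, 5}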

  17. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the earth surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for performing a successful analysis it is desirable to take images from the same sensor which should be acquired at the same time of season, at the same time of a day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis could be problematic especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search of destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.

  18. Computer Graphics Meets Image Fusion: the Power of Texture Baking to Simultaneously Visualise 3d Surface Features and Colour

    NASA Astrophysics Data System (ADS)

    Verhoeven, G. J.

    2017-08-01

    For a few years now, structure-from-motion and multi-view stereo pipelines have been omnipresent in the cultural heritage domain. The fact that such Image-Based Modelling (IBM) approaches are capable of providing a photo-realistic texture along with the three-dimensional (3D) digital surface geometry is often considered a unique selling point, certainly for those cases that aim for a visually pleasing result. However, this texture can very often also obscure the underlying geometrical details of the surface, making it very hard to assess the morphological features of the digitised artefact or scene. Instead of constantly switching between the textured and untextured versions of the 3D surface model, this paper presents a new method to generate a morphology-enhanced colour texture for the 3D polymesh. The presented approach tries to overcome this switching between object visualisations by fusing the original colour texture data with a specific depiction of the surface normals. Whether applied to the original 3D surface model or a low-resolution derivative, this newly generated texture does not solely convey the colours in a proper way but also enhances the small- and large-scale spatial and morphological features that are hard or impossible to perceive in the original textured model. In addition, the technique is very useful for low-end 3D viewers, since no additional memory and computing capacity are needed to convey relief details properly. Apart from simple visualisation purposes, the textured 3D models are now also better suited for on-surface interpretative mapping and the generation of line drawings.
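
    An illustrative fusion in the spirit of the approach, assuming the colour texture and a surface-normal map have already been baked into aligned arrays in texture space; the Lambert-style shading and the blending weight are assumptions, not the paper's exact operator.

        import numpy as np

        def fuse_colour_and_normals(colour, normals, light=(0.0, 0.0, 1.0), weight=0.5):
            """colour: (H, W, 3) in [0, 1]; normals: (H, W, 3) unit normals in texture space."""
            light = np.asarray(light, float)
            light /= np.linalg.norm(light)
            shading = np.clip(np.tensordot(normals, light, axes=([2], [0])), 0.0, 1.0)
            fused = colour * ((1.0 - weight) + weight * shading[..., None])   # relief darkens the colour
            return np.clip(fused, 0.0, 1.0)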

  19. Finger crease pattern recognition using Legendre moments and principal component analysis

    NASA Astrophysics Data System (ADS)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and represents the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
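
    A rough sketch of the moment feature matrix construction, assuming a square greyscale ROI held as a NumPy array; the projection angles and maximum order are illustrative choices, and the subsequent PCA/K-L stage is not shown.

        import numpy as np
        from numpy.polynomial import legendre
        from skimage.transform import radon

        def legendre_moment_matrix(roi, max_order=10, angles=np.arange(0, 180, 10)):
            """One row of 1-D Legendre moments per Radon projection of the ROI."""
            sinogram = radon(roi.astype(float), theta=angles, circle=False)
            t = np.linspace(-1.0, 1.0, sinogram.shape[0])    # projection axis mapped to [-1, 1]
            dt = t[1] - t[0]
            feats = np.zeros((len(angles), max_order + 1))
            for j in range(sinogram.shape[1]):
                proj = sinogram[:, j]
                for p in range(max_order + 1):
                    P_p = legendre.legval(t, [0] * p + [1])  # Legendre polynomial of order p
                    feats[j, p] = (2 * p + 1) / 2.0 * np.sum(P_p * proj) * dt
            return feats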

  20. Face aging effect simulation model based on multilayer representation and shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Yuancheng; Li, Yan

    2017-09-01

    In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and the shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and the texture layer, and an aging model is established separately for each. First, the training samples are classified according to different age groups, and we use the active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector regression based on the radial basis function. We use the AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the simulated aging images of the human face by a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the experimental results show that the simulated face images differ little from the original images and achieve a good face aging simulation effect.

  1. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to the SVD, which refactors the image into three matrices. The largest singular values are then used to reconstruct the image, so that at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or not, invertible or not. Compression ratio and Mean Square Error are used as performance metrics.
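
    A minimal sketch of rank-k compression with SVD, assuming a greyscale image held as a 2-D NumPy array; the rank and the synthetic test image are illustrative.

        import numpy as np

        def svd_compress(image, k):
            """Return a rank-k approximation of `image`, its MSE, and the compression ratio."""
            U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
            approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]         # keep the k largest singular values
            mse = np.mean((image - approx) ** 2)                   # Mean Square Error metric
            m, n = image.shape
            ratio = (m * n) / (k * (m + n + 1))                    # stored values: k * (m + n + 1)
            return approx, mse, ratio

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in for a real image
        _, mse, ratio = svd_compress(img, k=32)
        print(f"MSE: {mse:.2f}, compression ratio: {ratio:.1f}x")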

  2. Multi-source remotely sensed data fusion for improving land cover classification

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Bo; Xu, Bing

    2017-02-01

    Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.

  3. Improved Feature Matching for Mobile Devices with IMU.

    PubMed

    Masiero, Andrea; Vettore, Antonio

    2016-08-05

    Thanks to the recent diffusion of low-cost high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step for successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase in correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
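
    A minimal two-view sketch of the kind of pipeline the paper builds on (not the authors' revised ASIFT or their two-step estimator): SIFT matching with a ratio test followed by essential-matrix estimation in OpenCV. The camera intrinsics K are assumed known; an INS attitude prior could be used to seed or sanity-check the recovered rotation.

        import cv2
        import numpy as np

        def relative_pose(img1, img2, K):
            """Estimate the relative rotation and translation direction between two views."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
            pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
            E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
            return R, t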

  4. Meroe Patera

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image is located in Meroe Patera (longitude: 292W/68E, latitude: 7.01), which is a small region within Syrtis Major Planitia. Syrtis Major is a low-relief shield volcano whose lava flows make up a plateau more than 1000 km across. These flows are of Hesperian age (Martian activity of intermediate age) and are believed to have originated from a series of volcanic depressions, called calderas. The caldera complex lies on extensions of the ring faults associated with the Isidis impact basin toward the northeast - thus Syrtis Major volcanism may be associated with post-impact adjustments of the Martian crust.

    The most striking feature in this image is the light streaks across the image that lead to dunes in the lower left region. Wind streaks are albedo markings interpreted to be formed by aeolian action on surface materials. Most are elongate and allow an interpretation of effective wind directions. Many streaks are time variable and thus provide information on seasonal or long-term changes in surface wind directions and strengths. The wind streaks in this image are lighter than their surroundings and are the most common type of wind streak found on Mars. These streaks are formed downwind from crater rims (as in this example), mesas, knobs, and other positive topographic features.

    The dune field in this image is a mixture of barchan dunes and transverse dunes. Dunes are among the most distinctive aeolian features on Mars, and are similar in form to barchan and transverse dunes on Earth. This similarity is the best evidence to indicate that martian dunes are composed of sand-sized material, although the source and composition of the sand remain controversial. Both the observations of dunes and wind streaks indicate that this location has a windy environment, and these winds are persistent enough to produce dunes as sand-sized material accumulates in this region. These features also indicate that the winds in this region originate from the right side of the image and move towards the left.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  5. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-11-01

    This image shows part of the southern flank of Pavonis Mons. Several faults run from the left to the right side of the image. Lava flows and the lava collapse features at the bottom of the image are aligned with the downhill direction (in this case from the top of the image to the bottom). Near the top of the image there are collapse features that run along the faults. The fault may have been a location for lava tube development. Pavonis Mons is one of the three aligned Tharsis Volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield-type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger one. Like most shield volcanoes the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 15457 Latitude: -1.03884 Longitude: 246.532 Instrument: VIS Captured: 2005-06-09 00:38 https://photojournal.jpl.nasa.gov/catalog/PIA22018

  6. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
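
    A hedged sketch of the classification stage only: curvature- and length-type features measured on segmented cell boundaries are fed to an SVM, as described above. The simple deterministic feature extraction below is a stand-in for the stochastic (polynomial-chaos) level set segmentation, and the names are illustrative.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def boundary_features(contour):
            """contour: (N, 2) array of ordered boundary points for one cell."""
            d = np.diff(np.vstack([contour, contour[:1]]), axis=0)   # closed polygon steps
            length = np.sum(np.hypot(d[:, 0], d[:, 1]))              # boundary length
            angles = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
            mean_curvature = np.mean(np.abs(np.diff(angles)))        # mean turning per step
            return [length, mean_curvature]

        def train_cell_classifier(contours, labels):                 # labels: 0 normal, 1 apoptotic
            X = np.array([boundary_features(c) for c in contours])
            return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)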

  7. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624

  8. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of these two purposes (smoothing while keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors regularized the original PM algorithm to overcome its drawbacks. We compared the performance of denoising using such 'fundamental' AD algorithms and one of the most powerful multiresolution tools available today, namely, wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. In addition to visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and the universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm (AD) perform comparably with wavelet shrinkage denoising, which saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: Optical Coherence Tomography (OCT).
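
    A compact sketch of the original Perona-Malik scheme with the exponential conduction function and an explicit 4-neighbour update; kappa, lambda and the iteration count are illustrative parameters.

        import numpy as np

        def perona_malik(image, n_iter=20, kappa=30.0, lam=0.2):
            """Anisotropic diffusion: smooth within regions while preserving strong edges."""
            u = image.astype(float).copy()
            for _ in range(n_iter):
                dn = np.roll(u, -1, axis=0) - u                    # differences to the 4 neighbours
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)   # conduction g(|grad u|)
                ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
                u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
            return u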

  9. SPOT satellite mapping of Ice Stream B

    NASA Technical Reports Server (NTRS)

    Merry, Carolyn J.

    1993-01-01

    Numerous features of glaciological significance appear on two adjoining SPOT High Resolution Visible (HRV) images that cover the onset region of ice stream B. Many small-scale features, such as crevasses and drift plumes, have been previously observed in aerial photography. Subtle features, such as long flow traces that have not been mapped previously, are also clear in the satellite imagery. Newly discovered features include ladder-like runners and rungs within certain shear margins, flow traces that are parallel to ice flow, unusual crevasse patterns, and flow traces originating within shear margins. An objective of our work is to contribute to an understanding of the genesis of the features observed in satellite imagery. The genetic possibilities for flow traces, other lineations, bands of transverse crevasses, shear margins, mottles, and lumps and warps are described.

  10. A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm.

    PubMed

    Yuan, Tao; Zheng, Xinqi; Hu, Xuan; Zhou, Wei; Wang, Wei

    2014-01-01

    Objective and effective image quality assessment (IQA) is directly related to the application of optical remote sensing images (ORSI). In this study, a new IQA method that standardizes the target object recognition rate (ORR) is presented to reflect image quality. First, several quality degradation treatments are applied to high-resolution ORSIs to model ORSIs obtained under different imaging conditions; then, a machine learning algorithm is adopted for recognition experiments on a chosen target object to obtain ORRs; finally, a comparison with commonly used IQA indicators is performed to reveal their applicability and limitations. The results showed that the ORR of the original ORSI was up to 81.95%, whereas the ORR ratios of the quality-degraded images to the original images were 65.52%, 64.58%, 71.21%, and 73.11%. These data can more accurately reflect the advantages and disadvantages of different images in object identification and information extraction when compared with conventional digital image assessment indexes. By recognizing differences in image quality from the perspective of application effect, using a machine learning algorithm to extract regional gray scale features of typical objects in the image for analysis, and quantitatively assessing ORSI quality according to those differences, this method provides a new approach for objective ORSI assessment.
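
    A schematic sketch of the ORR idea under stated assumptions: a classifier trained to recognise a target object is evaluated on samples extracted from the original image and from a degraded copy, and the ratio of the two recognition rates serves as the quality score; the classifier choice and all names are placeholders.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        def recognition_rate(clf, features, labels):
            """Object recognition rate (ORR) of a trained classifier on one image's samples."""
            return accuracy_score(labels, clf.predict(features))

        def orr_quality_index(train_X, train_y, orig_X, orig_y, degraded_X, degraded_y):
            clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(train_X, train_y)
            orr_original = recognition_rate(clf, orig_X, orig_y)
            orr_degraded = recognition_rate(clf, degraded_X, degraded_y)
            return orr_degraded / orr_original       # closer to 1.0 means less quality loss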

  11. Airbag Trail Dubbed 'Magic Carpet'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This section of the first color image from the Mars Exploration Rover Spirit has been further processed to produce a sharper look at a trail left by one of the rover's airbags. The drag mark was made after the rover landed and its airbags were deflated and retracted. Scientists have dubbed the region the 'Magic Carpet' after a crumpled portion of the soil that appears to have been peeled away (lower left side of the drag mark). Rocks were also dragged by the airbags, leaving impressions and 'bow waves' in the soil. The mission team plans to drive the rover over to this site to look for additional clues about the composition of the martian soil. This image was taken by Spirit's panoramic camera.

    This extreme close-up image (see insets above) highlights the martian feature that scientists have named 'Magic Carpet' because of its resemblance to a crumpled carpet fold. Scientists think the soil here may have detached from its underlying layer, possibly due to interaction with the Mars Exploration Rover Spirit's airbag after landing. This image was taken on Mars by the rover's panoramic camera.

  12. Inflammatory myofibroblastic tumor: an entity of CT and MR imaging to differentiate from malignant tumors of the sinonasal cavity.

    PubMed

    Yan, Zhongyu; Wang, Yongzhe; Zhang, Zhengyu

    2014-01-01

    Inflammatory myofibroblastic tumor (IMT) is a chronic inflammatory lesion of unknown origin. For tumors in the sinonasal cavity, preoperative diagnosis has difficulty distinguishing IMT from aggressive malignancy in most cases. The purpose of this study was to evaluate the imaging features of IMT that distinguish the two types of tumors. Computed tomography and magnetic resonance imaging studies were reviewed retrospectively for 14 cases of IMT and 38 cases of aggressive malignancy in the sinonasal cavity, all proven by pathology. Imaging findings were evaluated, including the configuration, extent, margin, calcification, bone involvement, T1WI and T2WI signal intensity, and degree of enhancement. There was a significant difference between IMT and aggressive malignancy regarding the configuration, extension, calcification, bone change, signal intensity and homogeneity on T2-weighted imaging, and degree of enhancement (P < 0.05). Inflammatory myofibroblastic tumor and aggressive malignancy have some different imaging features that could be helpful in the differentiation between the lesions. Bone erosion with sclerosis, calcification when present, a typically homogeneous and never hyperintense T2 appearance, and mild enhancement played an important role in differentiating sinonasal IMT from malignancies.

  13. Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images

    PubMed Central

    Raza, Shan-e-Ahmed; Prince, Gillian; Clarkson, John P.; Rajpoot, Nasir M.

    2015-01-01

    Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a plant affected by disease is known to be influenced by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission. PMID:25861025

  14. Winds at the Phoenix Landing Site

    NASA Astrophysics Data System (ADS)

    Holstein-Rathlou, C.; Gunnlaugsson, H. P.; Taylor, P.; Lange, C.; Moores, J.; Lemmon, M.

    2008-12-01

    Local wind speeds and directions have been measured at the Phoenix landing site using the Telltale wind indicator. The Telltale is mounted on top of the meteorological mast at roughly 2 meters height above the surface. The Telltale is a mechanical anemometer consisting of a lightweight cylinder suspended by Kevlar fibers that is deflected under the action of wind. Images of the Telltale deflection taken with the Surface Stereo Imager (SSI) allow the wind speed and direction to be quantified. Winds aloft have been estimated using image series (10 images ~50 s apart) taken of the zenith (Zenith Movies). In contrast-enhanced images, cloud-like features are seen to move through the image field and give an indication of directions and angular speed. Wind speeds depend on the height at which these features originate, while directions are unambiguously determined. The wind data show dominant wind directions and diurnal variations, likely caused by slope winds. Recent nighttime measurements show frost formation on the Telltale mirror. The results will be discussed in terms of global and slope wind modeling, and the current calibration of the data is discussed. It will also be illustrated how wind data can aid in interpreting temperature fluctuations seen on the lander.

  15. Using an Improved SIFT Algorithm and Fuzzy Closed-Loop Control Strategy for Object Recognition in Cluttered Scenes

    PubMed Central

    Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo

    2015-01-01

    Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation in object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed by classifying SIFT features into several clusters based on several attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, a feature matching step is performed following a prioritized order based on the scale factor, which is calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition and is essential for the autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the results of the proposed method show that the number of SIFT features extracted from an object increases significantly, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirmed that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094

  16. Ganges Features

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Context image for PIA03285 (Ganges Features).

    This image shows part of Ganges Chasma. Several landslides occur at the top of the image, while dunes and canyon floor deposits are visible at the bottom of the image.

    Image information: VIS instrument. Latitude -6.8N, Longitude 312.2E. 17 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  17. High-order statistics of weber local descriptors for image representation.

    PubMed

    Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang

    2015-06-01

    Highly discriminant visual features play a key role in different image classification applications. This study aims to realize a method for extracting highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception for distinguishing a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to employ a parametric probability process to model the Weber local descriptors, and extract the higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using a generative probability model and then learn the parameters to better fit the training space, which leads to a more discriminant representation of images. In order to validate the efficiency of the proposed strategy, we apply it to three different image classification applications, including texture, food image, and HEp-2 cell pattern recognition, which validates that our proposed strategy has advantages over the state-of-the-art approaches.
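
    A small sketch of the differential-excitation transform underlying the Weber local descriptor: each pixel is mapped to the arctangent of the summed neighbour-centre differences divided by the centre intensity. The epsilon guard and kernel form are standard choices, and the higher-order statistical modelling described above is not shown.

        import numpy as np
        from scipy.ndimage import convolve

        def differential_excitation(image, eps=1e-6):
            """Weber-law transform: relative local variation instead of absolute intensity."""
            img = image.astype(float)
            kernel = np.array([[1.0, 1.0, 1.0],
                               [1.0, -8.0, 1.0],
                               [1.0, 1.0, 1.0]])      # sum over the 8 neighbours of (x_i - x_c)
            diff_sum = convolve(img, kernel, mode="reflect")
            return np.arctan(diff_sum / (img + eps))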

  18. Guide to Magellan image interpretation

    NASA Technical Reports Server (NTRS)

    Ford, John P.; Plaut, Jeffrey J.; Weitz, Catherine M.; Farr, Tom G.; Senske, David A.; Stofan, Ellen R.; Michaels, Gregory; Parker, Timothy J.; Fulton, D. (Editor)

    1993-01-01

    An overview of Magellan Mission requirements, radar system characteristics, and methods of data collection is followed by a description of the image data, mosaic formats, areal coverage, resolution, and pixel DN-to-dB conversion. The availability and sources of image data are outlined. Applications of the altimeter data to estimate relief, Fresnel reflectivity, and surface slope, and the radiometer data to derive microwave emissivity are summarized and illustrated in conjunction with corresponding SAR image data. Same-side and opposite-side stereo images provide examples of parallax differences from which to measure relief with a lateral resolution many times greater than that of the altimeter. Basic radar interactions with geologic surfaces are discussed with respect to radar-imaging geometry, surface roughness, backscatter modeling, and dielectric constant. Techniques are described for interpreting the geomorphology and surface properties of surficial features, impact craters, tectonically deformed terrain, and volcanic landforms. The morphologic characteristics that distinguish impact craters from volcanic craters are defined. Criteria for discriminating extensional and compressional origins of tectonic features are discussed. Volcanic edifices, constructs, and lava channels are readily identified from their radar outlines in images. Geologic map units are identified on the basis of surface texture, image brightness, pattern, and morphology. Superposition, cross-cutting relations, and areal distribution of the units serve to elucidate the geologic history.

  19. Faces of Pluto

    NASA Image and Video Library

    2015-06-11

    These images, taken by NASA's New Horizons' Long Range Reconnaissance Imager (LORRI), show four different "faces" of Pluto as it rotates about its axis with a period of 6.4 days. All the images have been rotated to align Pluto's rotational axis with the vertical direction (up-down) on the figure, as depicted schematically in the upper left. From left to right, the images were taken when Pluto's central longitude was 17, 63, 130, and 243 degrees, respectively. The date of each image, the distance of the New Horizons spacecraft from Pluto, and the number of days until Pluto closest approach are all indicated in the figure. These images show dramatic variations in Pluto's surface features as it rotates. When a very large, dark region near Pluto's equator appears near the limb, it gives Pluto a distinct, but false, non-spherical appearance. Pluto is known to be almost perfectly spherical from previous data. These images are displayed at four times the native LORRI image size, and have been processed using a method called deconvolution, which sharpens the original images to enhance features on Pluto. Deconvolution can occasionally introduce "false" details, so the finest details in these pictures will need to be confirmed by images taken from closer range in the next few weeks. All of the images are displayed using the same brightness scale. http://photojournal.jpl.nasa.gov/catalog/PIA19686

  20. Large quasi-circular features beneath frost on Triton

    NASA Technical Reports Server (NTRS)

    Helfenstein, Paul; Veverka, Joseph; Mccarthy, Derek; Lee, Pascal; Hillier, John

    1992-01-01

    Specially processed Voyager 2 images of Neptune's largest moon, Triton, reveal three large quasi-circular features ranging in diameter from 280 to 935 km within Triton's equatorial region. The largest of these features contains a central irregularly shaped area of comparatively low albedo about 380 km in diameter, surrounded by crudely concentric annuli of higher albedo materials. None of the features exhibit significant topographic expression, and all appear to be primarily albedo markings. The features are located within a broad equatorial band of anomalously transparent frost that renders them nearly invisible at the large phase angles (alpha greater than 90 deg) at which Voyager obtained its highest resolution coverage of Triton. The features can be discerned at smaller phase angles (alpha = 66 deg) at which the frost only partially masks underlying albedo contrasts. The origin of the features is uncertain but may have involved regional cryovolcanic activity.

  1. Combining various types of classifiers and features extracted from magnetic resonance imaging data in schizophrenia recognition.

    PubMed

    Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas

    2015-06-30

    We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and the average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 first-episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; mMLDA with the classification boundary calculated as the weighted mean of the groups' discriminative scores had improved sensitivity but similar accuracy compared to the original MLDA; and reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as signal important for classification was removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
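
    A loose analogue of the pipeline described above (not the authors' exact classifiers): features are reduced with PCA and three simple classifiers are combined by majority vote, with LDA and a nearest-centroid classifier standing in for mMLDA and the centroid method; the component count is illustrative.

        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import VotingClassifier
        from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
        from sklearn.pipeline import make_pipeline

        def build_combined_classifier(n_components=20):
            """PCA reduction followed by a hard-voting combination of three classifiers."""
            voters = VotingClassifier(
                estimators=[("lda", LinearDiscriminantAnalysis()),
                            ("centroid", NearestCentroid()),
                            ("knn", KNeighborsClassifier(n_neighbors=3))],
                voting="hard")
            return make_pipeline(PCA(n_components=n_components), voters)

        # usage: model = build_combined_classifier(); model.fit(X_train, y_train); model.predict(X_test)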

  2. Automatic differentiation of melanoma and clark nevus skin lesions

    NASA Astrophysics Data System (ADS)

    LeAnder, R. W.; Kasture, A.; Pandey, A.; Umbaugh, S. E.

    2007-03-01

    Skin cancer is the most common form of cancer in the United States. Although melanoma accounts for just 11% of all types of skin cancer, it is responsible for most of the deaths, claiming more than 7910 lives annually. Melanoma is visually difficult for clinicians to differentiate from Clark nevus lesions, which are benign. The application of pattern recognition techniques to these lesions may be useful as an educational tool for teaching physicians to differentiate lesions, as well as for contributing information about the essential optical characteristics that identify them. Purpose: This study sought to find the most effective features to extract from melanoma, melanoma in situ, and Clark nevus lesions, and to find the most effective pattern-classification criteria and algorithms for differentiating those lesions, using the Computer Vision and Image Processing Tools (CVIPtools) software package. Methods: Due to changes in ambient lighting during the photographic process, color differences between images can occur. These differences were minimized by capturing dermoscopic images instead of photographic images. Differences in skin color between patients were minimized via image color normalization, by converting original color images to relative-color images. Relative-color images also helped minimize changes in color that occur due to changes in the photographic and digitization processes. Tumors in the relative-color images were segmented and morphologically filtered. Filtered relative-color tumor features were then extracted, and various pattern-classification schemes were applied. Results: Experimentation resulted in four useful pattern classification methods, the best of which achieved an overall classification rate of 100% for melanoma and melanoma in situ (grouped) and 60% for Clark nevus. Conclusion: Melanoma and melanoma in situ have feature parameters and feature values that are similar enough to be considered one class of tumor that significantly differs from Clark nevus. Consequently, grouping melanoma and melanoma in situ together achieves the best results in classifying and automatically differentiating melanoma from Clark nevus lesions.
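
    A hedged sketch of relative-colour normalisation as described in the Methods, assuming a boolean mask of surrounding healthy skin is available as the colour reference; the reference estimation and rescaling below are simplified stand-ins for the paper's exact procedure.

        import numpy as np

        def to_relative_color(image_rgb, skin_mask):
            """image_rgb: float array (H, W, 3); skin_mask: boolean mask of normal skin pixels."""
            reference = image_rgb[skin_mask].mean(axis=0)     # mean R, G, B of healthy skin
            rel = image_rgb / np.maximum(reference, 1e-6)     # per-channel ratio to the reference
            return np.clip(rel / rel.max(), 0.0, 1.0)         # rescale into [0, 1] for later steps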

  3. Geological mysteries on Ganymede

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image shows some unusual features on the surface of Jupiter's moon, Ganymede. NASA's Galileo spacecraft imaged this region as it passed Ganymede during its second orbit through the Jovian system. The region is located at 31 degrees latitude, 186 degrees longitude in the north of Marius Regio, a region of ancient dark terrain, and is near the border of a large swathe of younger, heavily tectonised bright terrain known as Nippur Sulcus. Situated in the transitional region between these two terrain types, the area shown here contains many complex tectonic structures, and small fractures can be seen crisscrossing the image. North is to the top-left of the picture, and the sun illuminates the surface from the southeast. This image is centered on an unusual semicircular structure about 33 kilometers (20 miles) across. A 38 kilometer (24 miles) long, remarkably linear feature cuts across its northern extent, and a wide east-west fault system marks its southern boundary. The origin of these features is the subject of much debate among scientists analyzing the data. Was the arcuate structure part of a larger feature? Is the straight lineament the result of internal or external processes? Scientists continue to study this data in order to understand the surface processes occurring on this complex satellite.

    The image covers an area approximately 80 kilometers (50 miles) by 52 kilometers (32 miles) across. The resolution is 189 meters (630 feet) per picture element. The images were taken on September 6, 1996 at a range of 9,971 kilometers (6,232 miles) by the solid state imaging (CCD) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  4. Tradeoff between picture element dimensions and noncoherent averaging in side-looking airborne radar

    NASA Technical Reports Server (NTRS)

    Moore, R. K.

    1979-01-01

    An experiment was performed in which three synthetic-aperture images and one real-aperture image were successively degraded in spatial resolution, both retaining the same number of independent samples per pixel and using the spatial degradation to allow averaging of different numbers of independent samples within each pixel. The original and degraded images were provided to three interpreters familiar with both aerial photographs and radar images. The interpreters were asked to grade each image in terms of their ability to interpret various specified features on the image. The numerical interpretability grades were then used as a quantitative measure of the utility of the different kinds of image processing and different resolutions. The experiment demonstrated empirically that the interpretability is related exponentially to the SGL volume which is the product of azimuth, range, and gray-level resolution.

  5. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the key preprocessing methods in the face detection stage and on how different preprocessing choices affect the recognition results obtained with the KPCA method. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with morphological opening and closing (erosion and dilation) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis; the experiments were carried out using a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that the kernel extension of the PCA algorithm, as a nonlinear feature extraction method, makes the extracted features represent the original image information better under certain conditions and can achieve a higher recognition rate. In the image preprocessing stage, we found that different operations produce different results and hence lead to different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the value of the power of the polynomial function can affect the recognition result.
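
    A minimal sketch of the recognition stage described above: kernel PCA with a polynomial kernel for nonlinear feature extraction followed by a nearest-neighbour classifier. The polynomial degree, which the abstract notes as influential, and the component count are illustrative.

        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        def kpca_face_recognizer(degree=2, n_components=50):
            """Nonlinear feature extraction (KPCA) plus 1-nearest-neighbour matching."""
            return make_pipeline(
                KernelPCA(n_components=n_components, kernel="poly", degree=degree),
                KNeighborsClassifier(n_neighbors=1))

        # usage: model = kpca_face_recognizer(); model.fit(X_train, y_train); model.predict(X_test)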

  6. Spatial scale and distribution of neurovascular signals underlying decoding of orientation and eye of origin from fMRI data

    PubMed Central

    Harrison, Charlotte; Jackson, Jade; Oh, Seung-Mock; Zeringyte, Vaida

    2016-01-01

    Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex. PMID:27903637

  7. Radiology research in mainland China in the past 10 years: a survey of original articles published in Radiology and European Radiology.

    PubMed

    Zhang, Long Jiang; Wang, Yun Fei; Yang, Zhen Lu; Schoepf, U Joseph; Xu, Jiaqian; Lu, Guang Ming; Li, Enzhong

    2017-10-01

    To evaluate the features and trends of Radiology research in Mainland China through bibliometric analysis of the original articles published in Radiology and European Radiology (ER) between 2006 and 2015. We reviewed the original articles published in Radiology and ER between 2006 and 2015. The following information was abstracted: imaging subspecialty, imaging technique(s) used, research type, sample size, study design, statistical analysis, study results, funding declarations, international collaborations, number of authors, department and province of the first author. All variables were examined longitudinally over time. Radiology research in Mainland China saw a substantial increase in original research articles published, especially in the last 5 years (P < 0.001). Within Mainland China's Radiology research, neuroradiology, vascular/interventional Radiology, and abdominal Radiology were the most productive fields; MR imaging was the most used modality, and a distinct geographic provenience was observed for articles published in Radiology and ER. Radiology research in Mainland China has seen substantial growth in the past 5 years with neuroradiology, vascular/interventional Radiology, and abdominal Radiology as the most productive fields. MR imaging is the most used modality. Article provenience shows a distinct geographical pattern. • Radiology research in Mainland China saw a substantial increase. • Neuroradiology, vascular/interventional Radiology, and abdominal Radiology are the most productive fields. • MRI is the most used modality in Mainland China's Radiology research. • Guangdong, Shanghai, and Beijing are the most productive provinces.

  8. Amazonis Planitia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    (Released 5 July 2002) This is an image of a crater within part of Amazonis Planitia, located at 22.9N, 152.5W. This image features a number of common features exhibited by Martian craters. The crater is sufficiently large to exhibit a central peak, which is seen in the upper right hand corner of the image. Also apparent are the slump blocks on the inside of the crater walls. When the crater was first formed, the crater walls were unstable and subsequently produced a series of landslides over time that formed the hummocky terrain just inside the present crater wall. While these cratering features are common to craters formed on other planetary bodies, such as the Moon, the ejecta blanket surrounding the crater displays a morphology that is more distinctive of Mars. The lobate morphology implies that the ejecta blanket was emplaced in an almost fluid fashion rather than by traditional ballistic ejecta emplacement. This crater morphology occurs on Mars where water ice is suspected to be present just beneath the surface. The impact that created the crater would have had enough energy to melt large amounts of water ice that could form the mud or debris flows that characterize the ejecta morphology seen in this image.

  9. Methodology for classification of geographical features with remote sensing images: Application to tidal flats

    NASA Astrophysics Data System (ADS)

    Revollo Sarmiento, G. N.; Cipolletti, M. P.; Perillo, M. M.; Delrieux, C. A.; Perillo, Gerardo M. E.

    2016-03-01

    Tidal flats generally exhibit ponds of diverse size, shape, orientation and origin. Studying the genesis, evolution, stability and erosive mechanisms of these geographic features is critical to understanding the dynamics of coastal wetlands. However, monitoring these locations through direct access is hard and expensive, not always feasible, and environmentally damaging. Processing remote sensing images is a natural alternative for the extraction of qualitative and quantitative data due to their non-invasive nature. In this work, a robust methodology for automatic classification of ponds and tidal creeks in tidal flats using Google Earth images is proposed. The applicability of our method is tested in nine zones with different morphological settings. Each zone is processed by a segmentation stage, where ponds and tidal creeks are identified. Next, each geographical feature is measured and a set of shape descriptors is calculated. This dataset, together with an a priori classification of each geographical feature, is used to define a regression model, which allows an extensive automatic classification of large volumes of data, discriminating ponds and tidal creeks from various other geographical features. In all cases, we identified and automatically classified different geographic features with an average accuracy over 90% (89.7% in the worst case, and 99.4% in the best case). These results show the feasibility of using freely available Google Earth imagery for the automatic identification and classification of complex geographical features. Also, the presented methodology may be easily applied in other wetlands of the world and perhaps employing other remote sensing imagery.
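
    A hedged sketch of the classification stage: shape descriptors are measured for each segmented water feature and a regression-type model separates ponds from tidal creeks. The descriptor set and the logistic-regression choice are illustrative, not the paper's exact regression model.

        import numpy as np
        from skimage.measure import label, regionprops
        from sklearn.linear_model import LogisticRegression

        def shape_descriptors(binary_mask):
            """One descriptor row (area, elongation, compactness, eccentricity) per feature."""
            rows = []
            for region in regionprops(label(binary_mask)):
                elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
                compactness = 4 * np.pi * region.area / max(region.perimeter ** 2, 1e-6)
                rows.append([region.area, elongation, compactness, region.eccentricity])
            return np.array(rows)

        def fit_pond_creek_model(descriptors, labels):    # labels: 0 = pond, 1 = tidal creek
            return LogisticRegression(max_iter=1000).fit(descriptors, labels)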

  10. Enhanced depth imaging optical coherence tomography of choroidal metastasis in 14 eyes.

    PubMed

    Al-Dahmash, Saad A; Shields, Carol L; Kaliki, Swathi; Johnson, Timothy; Shields, Jerry A

    2014-08-01

    To describe the imaging features of choroidal metastasis using enhanced depth imaging optical coherence tomography (EDI-OCT). This retrospective observational case series included 31 eyes with choroidal metastasis. Spectral domain EDI-OCT was performed using Heidelberg Spectralis HRA + OCT. The main outcome measures were imaging features by EDI-OCT. Of 31 eyes with choroidal metastasis imaged with EDI-OCT, 14 (45%) eyes displayed image detail suitable for study. The metastasis originated from carcinoma of the breast (n = 7, 50%), lung (n = 5, 36%), pancreas (n = 1, 7%), and thyroid gland (n = 1, 7%). The mean tumor basal diameter was 6.4 mm, and mean thickness was 2.3 mm by B-scan ultrasonography. The tumor location was submacular in 6 (43%) eyes and extramacular in 8 (57%) eyes. By EDI-OCT, the mean tumor thickness was 987 μm. The most salient EDI-OCT features of the metastasis included anterior compression/obliteration of the overlying choriocapillaris (n = 13, 93%), an irregular (lumpy bumpy) anterior contour (n = 9, 64%), and posterior shadowing (n = 12, 86%). Overlying retinal pigment epithelial abnormalities were noted (n = 11, 78%). Outer retinal features included structural loss of the interdigitation of the cone outer segment tips (n = 9, 64%), the ellipsoid portion of photoreceptors (n = 8, 57%), external limiting membrane (n = 4, 29%), outer nuclear layer (n = 1, 7%), and outer plexiform layer (n = 1, 7%). The inner retinal layers (inner nuclear layer to nerve fiber layer) were normal. Subretinal fluid (n = 11, 79%), subretinal lipofuscin pigment (n = 1, 7%), and intraretinal edema (n = 2, 14%) were identified. The EDI-OCT of choroidal metastasis shows a characteristic lumpy bumpy anterior tumor surface and outer retinal layer disruption with preservation of inner retinal layers.

  11. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from only the selected optimal subset greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the feature point information before the dimension reduction of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation results; thus the computational complexity is greatly reduced. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49 while the number of searches decreases depending on the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
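
    A rough sketch of the subset-selection idea under stated assumptions: landmark vectors are embedded to 2-D with t-SNE, the training set is clustered with K-means in that space, and a query face is assigned to the cluster whose mean original-space landmarks are closest (t-SNE offers no out-of-sample transform); the depth estimation step itself is not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.manifold import TSNE

        def build_clusters(train_landmarks, n_clusters=5, seed=0):
            """train_landmarks: (n_faces, n_features) flattened 2-D landmark coordinates."""
            embedded = TSNE(n_components=2, random_state=seed).fit_transform(train_landmarks)
            return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(embedded)

        def select_subset(query_landmarks, train_landmarks, km):
            """Pick the cluster whose mean original-space landmarks are nearest to the query."""
            centres = np.array([train_landmarks[km.labels_ == k].mean(axis=0)
                                for k in range(km.n_clusters)])
            return int(np.argmin(np.linalg.norm(centres - query_landmarks, axis=1)))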

  12. Investigating Mars: Arsia Mons

    NASA Image and Video Library

    2017-12-26

The three large aligned Tharsis volcanoes are Arsia Mons, Pavonis Mons and Ascraeus Mons (from south to north). There are collapse features on all three volcanoes, on the southwestern and northeastern flanks. This alignment may indicate that a large fracture/vent system was responsible for the eruptions that formed all three volcanoes. The flows originating from Arsia Mons are thought to be the youngest of the region. This VIS image shows part of the northeastern flank of Arsia Mons. The scalloped depressions are most likely created by collapse of the roof of lava tubes. Lava tubes originate during an eruption event, when the margins of a flow harden around a still-flowing lava stream. When an eruption ends these can become hollow tubes within the flow. With time, the roof of the tube may collapse into the empty space below. The tubes are linear, so the collapse of the roof creates a linear depression. Arsia Mons is the southernmost of the Tharsis volcanoes. It is 270 miles (450 km) in diameter, almost 12 miles (20 km) high, and the summit caldera is 72 miles (120 km) wide. For comparison, the largest volcano on Earth is Mauna Loa. From its base on the sea floor, Mauna Loa measures only 6.3 miles high and 75 miles in diameter. A large volcanic crater known as a caldera is located at the summit of all of the Tharsis volcanoes. These calderas are produced by massive volcanic explosions and collapse. The Arsia Mons summit caldera is larger than many volcanoes on Earth. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for the longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 9417 Latitude: -7.78798 Longitude: 240.585 Instrument: VIS Captured: 2004-01-28 17:39 https://photojournal.jpl.nasa.gov/catalog/PIA22151

  13. Sediment features at the grounding zone and beneath Ekström Ice Shelf, East Antarctica, imaged using on-ice vibroseis.

    NASA Astrophysics Data System (ADS)

    Smith, Emma C.; Eisen, Olaf; Hofstede, Coen; Lambrecht, Astrid; Mayer, Christoph

    2017-04-01

    The grounding zone, where an ice sheet becomes a floating ice shelf, is known to be a key threshold region for ice flow and stability. A better understanding of ice dynamics and sediment transport across such zones will improve knowledge about contemporary and palaeo ice flow, as well as past ice extent. Here we present a set of seismic reflection profiles crossing the grounding zone and continuing to the shelf edge of Ekström Ice Shelf, East Antarctica. Using an on-ice vibroseis source combined with a snowstreamer we have imaged a range of sub-glacial and sub-shelf sedimentary and geomorphological features; from layered sediment deposits to elongated flow features. The acoustic properties of the features as well as their morphology allow us to draw conclusions as to their material properties and origin. These results will eventually be integrated with numerical models of ice dynamics to quantify past and present interactions between ice and the solid Earth in East Antarctica; leading to a better understanding of future contributions of this region to sea-level rise.

  14. Computer-aided diagnosis of psoriasis skin images with HOS, texture and color features: A first comparative study of its kind.

    PubMed

    Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S

    2016-04-01

Psoriasis is an autoimmune skin disease with red and scaly plaques on the skin, affecting about 125 million people worldwide. Currently, dermatologists use visual and haptic methods to diagnose disease severity. This does not help them in stratification and risk assessment of the lesion stage and grade. Further, current methods add complexity during the monitoring and follow-up phase. The current diagnostic tools lead to subjectivity in decision making and are unreliable and laborious. This paper presents a first comparative performance study of its kind using a principal component analysis (PCA) based CADx system for psoriasis risk stratification and image classification utilizing: (i) 11 higher order spectra (HOS) features, (ii) 60 texture features, and (iii) 86 color features, and their seven combinations. An aggregate of 540 image samples (270 healthy and 270 diseased) from 30 psoriasis patients of Indian ethnic origin is used in our database. Machine learning using PCA is used for dominant feature selection, which is then fed to a support vector machine (SVM) classifier to obtain optimized performance. Three different protocols are implemented using the three kinds of feature sets. The reliability index of the CADx is computed. Among all feature combinations, the CADx system shows optimal performance of 100% accuracy and 100% sensitivity and specificity when all three sets of features are combined. Further, our experimental results with increasing data size show that all feature combinations yield a high reliability index throughout the PCA cutoffs, except the color feature set and the combination of color and texture feature sets. HOS features are powerful in psoriasis disease classification and stratification. Although each of the three feature sets (HOS, texture, and color) performs competitively on its own, the machine learning system performs best when they are combined. The system is fully automated, reliable and accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
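    The PCA-plus-SVM protocol lends itself to a compact sketch. The following is a minimal illustration assuming scikit-learn; the HOS/texture/color feature extraction is replaced by a placeholder matrix, and the cross-validation scheme is only indicative of the paper's protocols.

```python
# Minimal sketch of a PCA -> SVM CADx pipeline (illustrative only; feature
# extraction for HOS/texture/color is replaced by a random placeholder matrix).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(540, 157))        # 11 HOS + 60 texture + 86 color features (placeholder)
y = np.repeat([0, 1], 270)             # 270 healthy, 270 diseased labels

# PCA keeps the dominant components; the SVM performs the final classification.
cadx = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
scores = cross_val_score(cadx, X, y, cv=10)
print("mean cross-validated accuracy:", scores.mean())
```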

  15. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to the models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows a determination of when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
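    The comparison logic can be made concrete with a simple, hypothetical integration model. The sketch below uses an independence-based prediction as the normative baseline; this is an illustrative assumption, not necessarily one of the specific models used in the study.

```python
# Illustrative normative baseline (an assumption, not the paper's models): if
# detection with each single-sensor display is independent, the predicted detection
# probability with the fused display is P_pred = 1 - (1 - P_ir) * (1 - P_tv).
# Observed fused performance is then classified relative to this prediction.
def independent_integration_prediction(p_ir: float, p_tv: float) -> float:
    return 1.0 - (1.0 - p_ir) * (1.0 - p_tv)

def classify_fusion_benefit(p_fused: float, p_ir: float, p_tv: float) -> str:
    p_pred = independent_integration_prediction(p_ir, p_tv)
    if p_fused < max(p_ir, p_tv):
        return "worse than the best single sensor (distortion/interference)"
    if p_fused < p_pred - 0.01:
        return "better than either sensor, but sub-optimal vs. model prediction"
    if p_fused <= p_pred + 0.01:
        return "optimal (matches model prediction)"
    return "super-optimal (possible use of emergent features in the fused display)"

print(classify_fusion_benefit(p_fused=0.92, p_ir=0.70, p_tv=0.75))
```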

  16. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images.

    PubMed

    Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro

    2010-07-08

    A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. The spots of real microscope images can be quantified to confirm that the method is applicable in a given practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
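    One plausible reading of the rotated line-opening step is sketched below using SciPy grey-scale morphology; the rotation angles, line length, and the exact way the opened images are unified are assumptions, not the authors' code.

```python
# Sketch of a rotated line-opening top-hat for small-spot extraction (an
# interpretation of the method described above, not the authors' implementation).
import numpy as np
from scipy import ndimage

def rotating_line_opening_tophat(image: np.ndarray, line_length: int = 15,
                                 n_angles: int = 12) -> np.ndarray:
    """Open the image with a straight line at several orientations by rotating the
    image itself, take the pixel-wise maximum of the openings, and subtract that
    union from the original image so that small bright spots remain."""
    image = image.astype(float)
    union = np.zeros_like(image)
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        rotated = ndimage.rotate(image, angle, reshape=False, order=1, mode="nearest")
        opened = ndimage.grey_opening(rotated, size=(1, line_length))  # horizontal line SE
        back = ndimage.rotate(opened, -angle, reshape=False, order=1, mode="nearest")
        union = np.maximum(union, back)
    return np.clip(image - union, 0.0, None)   # top-hat-like residue: candidate spots

spots = rotating_line_opening_tophat(np.random.rand(128, 128))
```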

  17. Recent advances in oral oncology 2008; squamous cell carcinoma imaging, treatment, prognostication and treatment outcomes.

    PubMed

    Scully, Crispian; Bagan, Jose V

    2009-06-01

    This paper provides a synopsis of the main papers on diagnosis, imaging, treatment, prognostication and treatment outcomes in patients with oral and oropharyngeal squamous cell carcinoma (OSCC) and head and neck SCC (HNSCC) published in 2008 in Oral Oncology - an international interdisciplinary journal which publishes high quality original research, clinical trials and review articles, and all other scientific articles relating to the aetiopathogenesis, epidemiology, prevention, clinical features, diagnosis, treatment and management of patients with neoplasms in the head and neck, and orofacial disease in patients with malignant disease.

  18. The recognition of graphical patterns invariant to geometrical transformation of the models

    NASA Astrophysics Data System (ADS)

    Ileană, Ioan; Rotar, Corina; Muntean, Maria; Ceuca, Emilian

    2010-11-01

When a pattern recognition system is used for image recognition (in robot vision, handwriting recognition, etc.), the system must have the capacity to identify an object regardless of its size or position in the image. The problem of invariance in recognition can be approached in several fundamental ways. One may apply the similarity criterion used in associative recall. The original pattern may be replaced by a mathematical transform that assures some invariance (e.g., the magnitude of the two-dimensional Fourier transform is translation invariant, and the magnitude of the Mellin transform is scale invariant). In a different approach, the original pattern is represented through a set of features, each of them coded independently of the position, orientation or scale of the pattern. Generally speaking, it is easy to obtain invariance with respect to one transformation group, but it is difficult to obtain simultaneous invariance to rotation, translation and scale. In this paper we analyze some methods to achieve invariant recognition of images, particularly digit images. A large number of experiments were carried out and the conclusions are presented in the paper.
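    The translation-invariance property mentioned for the Fourier transform can be checked numerically in a few lines; the toy pattern below is, of course, only an illustration.

```python
# Numeric check of the translation-invariance property mentioned above:
# the magnitude of the 2-D Fourier transform is unchanged when the pattern is shifted.
import numpy as np

rng = np.random.default_rng(0)
pattern = np.zeros((64, 64))
pattern[20:30, 25:35] = rng.random((10, 10))        # a small "digit-like" pattern

shifted = np.roll(pattern, shift=(7, -11), axis=(0, 1))  # circular translation

mag_original = np.abs(np.fft.fft2(pattern))
mag_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag_original, mag_shifted))       # True: |FFT| is translation invariant
```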

  19. Spirit's Tracks around 'Home Plate'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Annotated Version

    This portion of an image acquired by the Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment camera shows the Spirit rover's winter campaign site. The rover is visible. So is the 'Low Ridge' feature where Spirit was parked with an 11-degree northerly tilt to maximize sunlight on the solar panels during the southern winter season. Tracks made by Spirit on the way to 'Home Plate' and to and from 'Tyrone,' an area of light-toned soils exposed by rover wheel motions, are also evident. The original image is catalogued as PSP_001513_1655_red and was taken Sept. 29, 2006.

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo.

  20. Evaluation of SIR-A (Shuttle Imaging Radar) images from the Tres Marias region (Minas Gerais State, Brazil) using derived spatial features and registration with MSS-LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Kux, H. J. H.; Dutra, L. V.

    1984-01-01

Two image processing experiments are described using a MSS-LANDSAT scene from the Tres Marias region and a Shuttle Imaging Radar SIR-A image digitized by a vidicon scanner. In the first experiment the study area is analyzed using the original and preprocessed SIR-A image data. The following thematic classes are obtained: (1) water, (2) dense savanna vegetation, (3) sparse savanna vegetation, (4) reforestation areas and (5) bare soil areas. In the second experiment, the SIR-A image was registered together with MSS-LANDSAT bands five, six, and seven. The same five classes mentioned above are obtained. These results are compared with those obtained using MSS-LANDSAT data alone. The spatial information as well as the coregistered SIR-A and MSS-LANDSAT data can increase the separability between classes, as compared to the use of raw SIR-A data alone.

  1. Privacy protection schemes for fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Marasco, Emanuela; Cukic, Bojan

    2015-05-01

The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
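    A generic flavor of a cancelable template is sketched below: a keyed random projection of the feature vector is stored instead of the original features, matching is done in the transformed domain, and a new key re-issues the template. This is a hedged illustration of the idea, not a specific scheme from the reviewed literature.

```python
# Hedged illustration of a cancelable-template transform (generic random projection,
# not a particular published scheme): store only the keyed transform, match in the
# transformed domain, and re-issue a compromised template with a new key.
import numpy as np

def make_transform(key: int, in_dim: int, out_dim: int) -> np.ndarray:
    rng = np.random.default_rng(key)                     # application-specific key
    return rng.normal(size=(out_dim, in_dim)) / np.sqrt(out_dim)

def enroll(features: np.ndarray, key: int) -> np.ndarray:
    return make_transform(key, features.size, 64) @ features   # only this is stored

def match(stored: np.ndarray, probe: np.ndarray, key: int, threshold: float = 0.9) -> bool:
    probe_t = make_transform(key, probe.size, 64) @ probe
    score = np.dot(stored, probe_t) / (np.linalg.norm(stored) * np.linalg.norm(probe_t))
    return score >= threshold

rng = np.random.default_rng(1)
fingerprint_features = rng.normal(size=256)              # placeholder feature vector
template = enroll(fingerprint_features, key=2024)        # different apps use different keys
print(match(template, fingerprint_features + 0.05 * rng.normal(size=256), key=2024))
```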

  2. Visual Recognition of Age Class and Preference for Infantile Features: Implications for Species-Specific vs Universal Cognitive Traits in Primates

    PubMed Central

    Lemasson, Alban; Nagumo, Sumiharu; Masataka, Nobuo

    2012-01-01

    Despite not knowing the exact age of individuals, humans can estimate their rough age using age-related physical features. Nonhuman primates show some age-related physical features; however, the cognitive traits underlying their recognition of age class have not been revealed. Here, we tested the ability of two species of Old World monkey, Japanese macaques (JM) and Campbell's monkeys (CM), to spontaneously discriminate age classes using visual paired comparison (VPC) tasks based on the two distinct categories of infant and adult images. First, VPCs were conducted in JM subjects using conspecific JM stimuli. When analyzing the side of the first look, JM subjects significantly looked more often at novel images. Based on analyses of total looking durations, JM subjects looked at a novel infant image longer than they looked at a familiar adult image, suggesting the ability to spontaneously discriminate between the two age classes and a preference for infant over adult images. Next, VPCs were tested in CM subjects using heterospecific JM stimuli. CM subjects showed no difference in the side of their first look, but looked at infant JM images longer than they looked at adult images; the fact that CMs were totally naïve to JMs suggested that the attractiveness of infant images transcends species differences. This is the first report of visual age class recognition and a preference for infant over adult images in nonhuman primates. Our results suggest not only species-specific processing for age class recognition but also the evolutionary origins of the instinctive human perception of baby cuteness schema, proposed by the ethologist Konrad Lorenz. PMID:22685529

  3. Visual recognition of age class and preference for infantile features: implications for species-specific vs universal cognitive traits in primates.

    PubMed

    Sato, Anna; Koda, Hiroki; Lemasson, Alban; Nagumo, Sumiharu; Masataka, Nobuo

    2012-01-01

    Despite not knowing the exact age of individuals, humans can estimate their rough age using age-related physical features. Nonhuman primates show some age-related physical features; however, the cognitive traits underlying their recognition of age class have not been revealed. Here, we tested the ability of two species of Old World monkey, Japanese macaques (JM) and Campbell's monkeys (CM), to spontaneously discriminate age classes using visual paired comparison (VPC) tasks based on the two distinct categories of infant and adult images. First, VPCs were conducted in JM subjects using conspecific JM stimuli. When analyzing the side of the first look, JM subjects significantly looked more often at novel images. Based on analyses of total looking durations, JM subjects looked at a novel infant image longer than they looked at a familiar adult image, suggesting the ability to spontaneously discriminate between the two age classes and a preference for infant over adult images. Next, VPCs were tested in CM subjects using heterospecific JM stimuli. CM subjects showed no difference in the side of their first look, but looked at infant JM images longer than they looked at adult images; the fact that CMs were totally naïve to JMs suggested that the attractiveness of infant images transcends species differences. This is the first report of visual age class recognition and a preference for infant over adult images in nonhuman primates. Our results suggest not only species-specific processing for age class recognition but also the evolutionary origins of the instinctive human perception of baby cuteness schema, proposed by the ethologist Konrad Lorenz.

  4. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

In digital management, multimedia content and data can easily be used in an illegal way, being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency localization and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
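    The embed-and-detect structure can be illustrated with a minimal DWT sketch, assuming the PyWavelets package; WM2.0's actual sub-image selection, embedding rule, re-synchronization, and Neyman-Pearson detector are not reproduced here.

```python
# Minimal sketch of watermark embedding in high-frequency DWT coefficients and a
# simplified correlation detector (illustrative only, not the WM2.0 algorithm).
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((256, 256))                          # placeholder host image
watermark = rng.choice([-1.0, 1.0], size=(128, 128))    # pseudo-random mark
alpha = 0.02                                            # embedding strength

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")             # one-level 2-D DWT
cD_marked = cD + alpha * np.abs(cD) * watermark         # perturb diagonal detail band
watermarked = pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

# Simplified, non-blind detection: correlate the detail-band difference with the
# known watermark signal; a large positive response indicates the mark is present.
_, (_, _, cD_test) = pywt.dwt2(watermarked, "haar")
correlation = np.mean((cD_test - cD) * watermark)
print("detector response:", correlation)
```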

  5. Pattern classification approach to characterizing solitary pulmonary nodules imaged on high-resolution computed tomography

    NASA Astrophysics Data System (ADS)

    McNitt-Gray, Michael F.; Hart, Eric M.; Goldin, Jonathan G.; Yao, Chih-Wei; Aberle, Denise R.

    1996-04-01

    The purpose of our study was to characterize solitary pulmonary nodules (SPN) as benign or malignant based on pattern classification techniques using size, shape, density and texture features extracted from HRCT images. HRCT images of patients with a SPN are acquired, routed through a PACS and displayed on a thoracic radiology workstation. Using the original data, the SPN is semiautomatically contoured using a nodule/background threshold. The contour is used to calculate size and several shape parameters, including compactness and bending energy. Pixels within the interior of the contour are used to calculate several features including: (1) nodule density-related features, such as representative Hounsfield number and moment of inertia, and (2) texture measures based on the spatial gray level dependence matrix and fractal dimension. The true diagnosis of the SPN is established by histology from biopsy or, in the case of some benign nodules, extended follow-up. Multi-dimensional analyses of the features are then performed to determine which features can discriminate between benign and malignant nodules. When a sufficient number of cases are obtained two pattern classifiers, a linear discriminator and a neural network, are trained and tested using a select subset of features. Preliminary data from nine (9) nodule cases have been obtained and several features extracted. While the representative CT number is a reasonably good indicator, it is an inconclusive predictor of SPN diagnosis when considered by itself. Separation between benign and malignant nodules improves when other features, such as the distribution of density as measured by moment of inertia, are included in the analysis. Software has been developed and preliminary results have been obtained which show that individual features may not be sufficient to discriminate between benign and malignant nodules. However, combinations of these features may be able to discriminate between these two classes. With additional cases and more features, we will be able to perform a feature selection procedure and ultimately to train and test pattern classifiers in this discrimination task.

  6. Signs of Soft-Sediment Deformation at 'Slickrock'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Geological examination of bedding textures indicates three stratigraphic units in an area called 'Slickrock' located in the martian rock outcrop that NASA's Opportunity examined for several weeks. This is an image Opportunity took from a distance of 2.1 meters (6.9 feet) during the rover's 45th sol on Mars (March 10, 2004) and shows a scour surface or ripple trough lamination. These features are consistent with sedimentation on a moist surface where wind-driven processes may also have occurred.

    [figure removed for brevity, see original site] Figure 1

    In Figure 1, interpretive blue lines indicate boundaries between the units. The upper blue line may coincide with a scour surface. The lower and upper units have features suggestive of ripples or early soft-sediment deformation. The central unit is dominated by fine, parallel stratification, which could have been produced by wind-blown ripples.

    [figure removed for brevity, see original site] Figure 2

    In Figure 2, features labeled with red letters are shown in an enlargement of portions of the image. 'A' is a scour surface characterized by truncation of the underlying fine layers, or laminae. 'B' is a possible soft-sediment buckling characterized by a 'teepee' shaped structure. 'C' shows a possible ripple beneath the arrow and a possible ripple cross-lamination to the left of the arrow, along the surface the arrow tip touches. 'D' is a scour surface or ripple trough lamination. These features are consistent with sedimentation on a moist surface where wind-driven processes may also have occurred.

  7. Seasonally Active Slipface Avalanches in the North Polar Sand Sea of Mars: Evidence for a Wind-Related Origin

    NASA Technical Reports Server (NTRS)

    Horgan, Briony H. N.; Bell, James F., III

    2012-01-01

    Meter-scale MRO/HiRISE camera images of dune slipfaces in the north polar sand sea of Mars reveal the presence of deep alcoves above depositional fans. These features are apparently active under current climatic conditions, because they form between observations taken in subsequent Mars years. Recently, other workers have hypothesized that the alcoves form due to destabilization and mass-wasting during sublimation of CO2 frost in the spring. While there is evidence for springtime modification of these features, our analysis of early springtime images reveals that over 80% of the new alcoves are visible underneath the CO2 frost. Thus, we present an alternative hypothesis that formation of new alcoves and fans occurs prior to CO2 deposition. We propose that fans and alcoves form primarily by aeolian processes in the mid- to late summer, through a sequence of aeolian deposition on the slipface, over-steepening, failure, and dry granular flow. An aeolian origin is supported by the orientations of the alcoves, which are consistent with recent wind directions. Furthermore, morphologically similar but much smaller alcoves form on terrestrial dune slipfaces, and the size differences between the terrestrial and Martian features may reflect cohesion in the near-subsurface of the Martian features. The size and preservation of the largest alcoves on the Martian slipfaces also support the presence of an indurated surface layer; thus, new alcoves might be sites of early spring CO2 sublimation and secondary mass-wasting because they act as a window to looser, less indurated materials that warm up more quickly in the spring.

  8. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for material without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. This technique performed well even with noisy data and the fact that some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, as compared to the geology of the Cuprite scene.

  9. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While image correlation algorithms have traditionally been used for image re-alignment based on translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module, named SIFT_PyOCL, has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
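    For readers who want the gist of SIFT-based stack alignment, the sketch below uses OpenCV's SIFT rather than SIFT_PyOCL itself; the keypoint-match-transform logic is the same, while the transform model and thresholds are illustrative assumptions.

```python
# Sketch of SIFT-based registration of one frame to a reference frame, using
# OpenCV for illustration rather than the SIFT_PyOCL module described above.
import cv2
import numpy as np

def align_to_reference(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """reference, moving: 8-bit grayscale frames (uint8). Returns the warped frame."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference, None)
    kp_mov, des_mov = sift.detectAndCompute(moving, None)

    # Ratio-test matching of descriptors.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_mov, des_ref, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust estimate of the transform (a homography here; a rigid or affine model
    # restricted to a region of interest may be preferable for spectroscopy stacks).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```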

  10. [A graph cuts-based interactive method for segmentation of magnetic resonance images of meningioma].

    PubMed

    Li, Shuan-qiang; Feng, Qian-jin; Chen, Wu-fan; Lin, Ya-zhong

    2011-06-01

For accurate segmentation of magnetic resonance (MR) images of meningioma, we propose a novel interactive segmentation method based on graph cuts. High dimensional image features were extracted and, for each pixel, the probabilities of its origin, either the tumor or the background region, were estimated by exploiting a weighted K-nearest neighbor classifier. Based on these probabilities, a new energy function was proposed. Finally, a graph cut optimization framework was used to minimize the energy function. The proposed method was evaluated by application to the segmentation of MR images of meningioma, and the results showed that the method significantly improved the segmentation accuracy compared with the gray level information-based graph cut method.
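    A minimal sketch of the graph-cut step is given below, assuming the PyMaxflow package and assuming the per-pixel tumor probabilities (here random placeholders) have already been produced by the weighted KNN classifier; the paper's exact energy terms are not reproduced.

```python
# Hedged sketch of graph-cut segmentation driven by per-pixel probabilities,
# using the PyMaxflow package (not the paper's implementation).
import numpy as np
import maxflow

def graph_cut_segment(p_tumor: np.ndarray, smoothness: float = 2.0) -> np.ndarray:
    """p_tumor: HxW map of P(pixel belongs to tumor). Returns a boolean label map."""
    eps = 1e-6
    data_tumor = -np.log(p_tumor + eps)          # cost of labeling a pixel "tumor"
    data_bg = -np.log(1.0 - p_tumor + eps)       # cost of labeling it "background"

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(p_tumor.shape)
    g.add_grid_edges(nodes, smoothness)          # pairwise smoothness term (grid neighbors)
    g.add_grid_tedges(nodes, data_bg, data_tumor)  # terminal (data) terms
    g.maxflow()
    # Label map from the minimum cut (one side = tumor, the other = background).
    return g.get_grid_segments(nodes)

mask = graph_cut_segment(np.random.rand(64, 64))
```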

  11. Appearance of osteolysis with melorheostosis: redefining the disease or a new disorder? A novel case report with multimodality imaging.

    PubMed

    Osher, Lawrence S; Blazer, Marie Mantini; Bumpus, Kelly

    2013-01-01

    We present a case report of melorheostosis with the novel radiographic finding of underlying cortical resorption. A number of radiographic patterns of melorheostosis have been described; however, the combination of new bone formation and resorption of the original cortex appears unique. Although the presence of underlying lysis has been postulated in published studies, direct radiographic evidence of bony resorption in melorheostosis has not been reported. These findings can be subtle and might go unnoticed using standard imaging. An in-depth review of the radiographic features is presented, including multimodality imaging with magnetic resonance imaging and computed tomography. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  12. Structural sensitivity of x-ray Bragg projection ptychography to domain patterns in epitaxial thin films

    NASA Astrophysics Data System (ADS)

    Hruszkewycz, S. O.; Zhang, Q.; Holt, M. V.; Highland, M. J.; Evans, P. G.; Fuoss, P. H.

    2016-10-01

    Bragg projection ptychography (BPP) is a coherent diffraction imaging technique capable of mapping the spatial distribution of the Bragg structure factor in nanostructured thin films. Here, we show that, because these images are projections, the structural sensitivity of the resulting images depends on the film thickness and the aspect ratio and orientation of the features of interest and that image interpretation depends on these factors. We model changes in contrast in the BPP reconstructions of simulated PbTiO3 ferroelectric thin films with meandering 180∘ stripe domains as a function of film thickness, discuss their origin, and comment on the implication of these factors on the design of BPP experiments of general nanostructured films.

  13. The Research of Spectral Reconstruction for Large Aperture Static Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Lv, H.; Lee, Y.; Liu, R.; Fan, C.; Huang, Y.

    2018-04-01

An imaging spectrometer obtains, directly or indirectly, the spectral information of the ground surface while acquiring the target image, which gives imaging spectroscopy a prominent advantage in the fine characterization of terrain features and makes it of great significance for geoscience and related disciplines. The interference data obtained by an interferometric imaging spectrometer are intermediate data that must be reconstructed into high quality spectral data before they can be used. The main difficulty restricting the application of interferometric imaging spectroscopy is reconstructing the spectrum accurately. Using the original image acquired by the Large Aperture Static Imaging Spectrometer as input, this experiment selected a pixel identified as crop by manual interpretation, then extracted and preprocessed its interferogram to recover the corresponding spectrum of this pixel. The result shows that the reconstructed spectrum forms a small crest near the wavelength of 0.55 μm with obvious troughs on both sides. The relative reflection intensity of the reconstructed spectrum rises abruptly at wavelengths around 0.7 μm, forming a steep slope. All these characteristics are similar to the spectral reflection curve of healthy green plants. It can be concluded that the experimental result is consistent with the visual interpretation results, thus validating the effectiveness of the scheme for interferometric imaging spectrum reconstruction proposed in this paper.
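    The core interferogram-to-spectrum step can be illustrated with a simple FFT-based sketch; the apodization window, bias removal, and synthetic interferogram below are assumptions standing in for the LASIS-specific preprocessing.

```python
# Illustrative interferogram-to-spectrum sketch (remove bias, apodize, FFT);
# instrument-specific preprocessing of LASIS data is not shown.
import numpy as np

def reconstruct_spectrum(interferogram: np.ndarray) -> np.ndarray:
    """Recover a relative spectrum from a 1-D interferogram of one pixel."""
    x = interferogram - interferogram.mean()     # remove the DC bias
    x = x * np.hanning(x.size)                   # apodization to reduce ringing
    spectrum = np.abs(np.fft.rfft(x))            # magnitude spectrum vs. wavenumber bin
    return spectrum / spectrum.max()             # normalize to relative intensity

# Synthetic example: an interferogram whose spectrum peaks in one band.
opd = np.linspace(-1.0, 1.0, 512)                # optical path difference (arbitrary units)
interf = 1.0 + 0.8 * np.cos(2 * np.pi * 60 * opd) + 0.2 * np.cos(2 * np.pi * 90 * opd)
print(reconstruct_spectrum(interf).argmax())     # index of the dominant spectral bin
```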

  14. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-11-09

This image shows the southern flank of Pavonis Mons. The large sinuous channel at the bottom of the image is located at the uppermost part of the volcano, where collapse features follow the regional linear trend. A lava tube of this size indicates a high volume of lava. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger one. Like most shield volcanoes, the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for the longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 45493 Latitude: -0.197065 Longitude: 246.516 Instrument: VIS Captured: 2012-03-17 03:39 https://photojournal.jpl.nasa.gov/catalog/PIA22025

  15. CT, MRI, and 18F-FDG PET/CT findings of malignant peripheral nerve sheath tumor of the head and neck.

    PubMed

    Kim, Ha Youn; Hwang, Ji Young; Kim, Hyung-Jin; Kim, Yi Kyung; Cha, Jihoon; Park, Gyeong Min; Kim, Sung Tae

    2017-10-01

Background Malignant peripheral nerve sheath tumor (MPNST) is a highly malignant tumor and rarely occurs in the head and neck. Purpose To describe the imaging features of MPNST of the head and neck. Material and Methods We retrospectively analyzed computed tomography (CT; n = 14), magnetic resonance imaging (MRI; n = 16), and 18F-FDG PET/CT (n = 5) imaging features of 18 MPNSTs of the head and neck in 17 patients. Special attention was paid to determining the nerve of origin from which the tumor might have arisen. Results All lesions were well-defined (n = 3) or ill-defined (n = 15) masses (mean, 6.1 cm). Lesions were at various locations, most commonly the neck (n = 8), followed by the intracranial cavity (n = 3), paranasal sinus (n = 2), and orbit (n = 2). The nerve of origin was inferred for 11 lesions: seven in the neck, two in the orbit, one in the cerebellopontine angle, and one on the parietal scalp. Attenuation, signal intensity, and enhancement pattern of the lesions on CT and MRI were non-specific. Necrosis/hemorrhage/cystic change within the lesion was considered to be present on images in 13 and bone change in nine. On 18F-FDG PET/CT images, all five lesions demonstrated various hypermetabolic foci with maximum standardized uptake values (SUVmax) from 3.2 to 14.6 (mean, 7.16 ± 4.57). Conclusion MPNSTs can arise from various locations in the head and neck. Though non-specific, a mass with an ill-defined margin along the presumed course of the cranial nerves may aid the diagnosis of MPNST in the head and neck.

  16. Analysis of Texture Using the Fractal Model

    NASA Technical Reports Server (NTRS)

    Navas, William; Espinosa, Ramon Vasquez

    1997-01-01

Properties such as the fractal dimension (FD) can be used for feature extraction and classification of regions within an image. The FD measures the degree of roughness of a surface, so this number is used to characterize a particular region in order to differentiate it from another. There are two basic approaches discussed in the literature to measure FD: the blanket method and the box counting method. Both attempt to measure FD by estimating the change in surface area with respect to the change in resolution. We tested both methods, but box counting proved computationally faster and gave better results. Differential Box Counting (DBC) was used to segment a collage containing three textures. The FD is independent of directionality and brightness, so five features derived from the original image were used to account for directionality and gray-level biases. FD cannot be measured at a single point, so we use a window that slides across the image, assigning FD values to the pixel at the center of the window. Windowing blurs the boundaries of adjacent classes, so an edge-preserving, feature-smoothing algorithm is used to improve classification within segments and to make the boundaries sharper. Segmentation using DBC was 90.89% accurate.
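    For orientation, a plain box-counting estimate for a binary image is sketched below; the differential box counting used in the paper extends this idea to gray-level intensity surfaces.

```python
# Minimal box-counting sketch for a binary image (the differential box counting in
# the paper generalizes this to gray-level surfaces; shown here only for intuition).
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = binary.shape
        # Count boxes of side s that contain at least one foreground pixel.
        boxes = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # FD is the slope of log(count) versus log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((256, 256)) > 0.5))
```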

  17. Development of Sorting System for Fishes by Feed-forward Neural Networks Using Rotation Invariant Features

    NASA Astrophysics Data System (ADS)

    Shiraishi, Yuhki; Takeda, Fumiaki

In this research, we have developed a sorting system for fishes, which comprises a conveyance part, an image capturing part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fishes. After the image of the separated fish is captured in the capturing part, a rotation invariant feature is extracted using the two-dimensional fast Fourier transform; the feature is the mean value of the power spectrum over points at the same distance from the origin of the spectral domain. After that, the fishes are classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fishes captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fishes. Further experimental results show a classification ratio of 90.7% for 300 fishes using the 10-fold cross validation method.
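    The rotation-invariant feature described above (the mean power spectrum over rings of equal distance from the spectral origin) can be sketched as follows; the bin count and normalization are illustrative choices.

```python
# Sketch of the rotation-invariant feature: mean 2-D FFT power over rings of
# constant distance from the spectral origin (ring count is an illustrative choice).
import numpy as np

def radial_power_feature(image: np.ndarray, n_bins: int = 32) -> np.ndarray:
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = power.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)              # distance from the origin
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    # Mean power within each ring -> approximately rotation-invariant feature vector.
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

feature = radial_power_feature(np.random.rand(128, 128))  # input to the neural network
```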

  18. Featured Image: New Detail in the Toothbrush Cluster

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2018-01-01

This spectacular composite (click here for the full image) reveals the galaxy cluster 1RXS J0603.3+4214, known as the Toothbrush cluster due to the shape of its most prominent radio relic. Featured in a recent publication led by Kamlesh Rajpurohit (Thuringian State Observatory, Germany), this image contains new Very Large Array (VLA) 1.5-GHz observations (red) showing the radio emission within the cluster. This is composited with a Chandra view of the X-ray emitting gas of the cluster (blue) and an optical image of the background from Subaru data. The new deep VLA data, totaling 26 hours of observations, provide a detailed look at the complex structure within the Toothbrush relic, revealing enigmatic filaments and twists (see below). These new data will help us to explore the possible merger history of this cluster, which is theorized to have caused the unusual shapes we see today. For more information, check out the original article linked below. High resolution VLA 1-2 GHz image of the Toothbrush showing the complex, often filamentary structures. [Rajpurohit et al. 2018] Citation: K. Rajpurohit et al 2018 ApJ 852 65. doi:10.3847/1538-4357/aa9f13

  19. The elimination of colour blocks in remote sensing images in VR

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Li, Guohui; Su, Zhenyu

    2018-02-01

Aiming at the characteristics, in HSI colour space, of remote sensing images acquired at different times in VR, a unified colour algorithm is proposed. First, the method converts the original image from RGB colour space to HSI colour space. Then, based on the invariance of the hue before and after the colour adjustment in HSI colour space and the translational behaviour of the image brightness after the colour adjustment, a linear model satisfying these characteristics of the image is established, and the range of the parameters in the model is determined. Finally, experimental verification is carried out according to the established colour adjustment model. The experimental results show that the proposed model can effectively recover a clear image and that the algorithm is fast, effectively enhancing image clarity and solving the colour block problem well.
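    The first step, RGB to HSI conversion, is sketched below using a common textbook formulation; the paper's subsequent linear colour-adjustment model is not shown.

```python
# Standard RGB -> HSI conversion (a common textbook formulation used here only to
# illustrate the first step of the algorithm described above).
import numpy as np

def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 array with values in [0, 1]. Returns HxWx3 HSI, hue in radians."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2 * np.pi - theta)
    return np.dstack([hue, saturation, intensity])

hsi = rgb_to_hsi(np.random.rand(64, 64, 3))
```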

  20. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  1. Simulating Colour Vision Deficiency from a Spectral Image.

    PubMed

    Shrestha, Raju

    2016-01-01

People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As a part of universal design, researchers have been working on how to modify and enhance the colours of images in order to let such viewers see the scene with good contrast. For this, it is important to know how the original colour image is seen by different individuals with CVD. This paper proposes a methodology to simulate accurate colour-deficient images from a spectral image using the cone sensitivities of different cases of deficiency. As the method enables the generation of accurate colour-deficient images, the methodology is believed to help better understand the limitations of colour vision deficiency, which in turn leads to the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
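    The first stage of such a simulation, integrating each pixel's spectrum against cone sensitivity functions to obtain an LMS image, is sketched below. The sensitivity curves and spectral image are random placeholders, and the deficiency-specific transform that follows is only indicated in a comment.

```python
# Hedged sketch of the first stage only: spectral image -> LMS cone responses.
# Cone sensitivity curves and the spectral cube are placeholders, not calibrated data.
import numpy as np

n_bands = 31                                     # e.g., 400-700 nm sampled every 10 nm
rng = np.random.default_rng(0)
spectral_image = rng.random((64, 64, n_bands))   # placeholder spectral image
cone_sensitivity = rng.random((n_bands, 3))      # placeholder L, M, S sensitivity curves

# Per-pixel integration of spectrum x sensitivity over wavelength.
lms = spectral_image.reshape(-1, n_bands) @ cone_sensitivity
lms = lms.reshape(64, 64, 3)

# A deficiency simulation would now remap the affected cone channel (e.g., predict
# the L response from M and S for protanopia) before converting back to a
# displayable colour space; that step is omitted here.
print(lms.shape)
```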

  2. Evaluation of multichannel Wiener filters applied to fine resolution passive microwave images of first-year sea ice

    NASA Technical Reports Server (NTRS)

    Full, William E.; Eppler, Duane T.

    1993-01-01

The effectiveness of multichannel Wiener filters in improving images obtained with passive microwave systems was investigated by applying Wiener filters to passive microwave images of first-year sea ice. Four major parameters which define the filter were varied: the lag or pixel offset between the original and the desired scenes, the filter length, the number of lines in the filter, and the weight applied to the empirical correlation functions. The effect of each variable on image quality was assessed by visually comparing the results. It was found that the application of multichannel Wiener theory to passive microwave images of first-year sea ice resulted in visually sharper images with enhanced textural features and less high-frequency noise. However, Wiener filters induced a slight blocky grain to the image and could produce a type of ringing along scan lines traversing sharp intensity contrasts.
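    As a much simpler stand-in for the multichannel, lag-based filters studied here, the snippet below applies SciPy's single-channel adaptive Wiener filter; it only illustrates the general idea of Wiener-type noise suppression, not the filters evaluated in the paper.

```python
# Simple single-channel stand-in (NOT the multichannel, lag-based filters in the
# study): SciPy's adaptive Wiener filter applied to a placeholder brightness image.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
tb_image = rng.normal(loc=240.0, scale=5.0, size=(128, 128))  # placeholder image (K)
filtered = wiener(tb_image, mysize=5)    # local-statistics Wiener filter, 5x5 window
print(filtered.shape)
```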

  3. Dark Spots on Titan

    NASA Image and Video Library

    2005-05-02

This recent image of Titan reveals more complex patterns of bright and dark regions on the surface, including a small, dark, circular feature, completely surrounded by brighter material. During the two most recent flybys of Titan, on March 31 and April 16, 2005, Cassini captured a number of images of the hemisphere of Titan that faces Saturn. The image at the left is taken from a mosaic of images obtained in March 2005 (see PIA06222) and shows the location of the more recently acquired image at the right. The new image shows intriguing details in the bright and dark patterns near an 80-kilometer-wide (50-mile) crater seen first by Cassini's synthetic aperture radar experiment during a Titan flyby in February 2005 (see PIA07368) and subsequently seen by the imaging science subsystem cameras as a dark spot (center of the image at the left). Interestingly, a smaller, roughly 20-kilometer-wide (12-mile), dark and circular feature can be seen within an irregularly-shaped, brighter ring, and is similar to the larger dark spot associated with the radar crater. However, the imaging cameras see only brightness variations, and without topographic information, the identity of this feature as an impact crater cannot be conclusively determined from this image. The visual and infrared mapping spectrometer, which is sensitive to longer wavelengths where Titan's atmospheric haze is less obscuring, observed this area simultaneously with the imaging cameras, so those data, and perhaps future observations by Cassini's radar, may help to answer the question of this feature's origin. The new image at the right consists of five images that have been added together and enhanced to bring out surface detail and to reduce noise, although some camera artifacts remain. These images were taken with the Cassini spacecraft narrow-angle camera using a filter sensitive to wavelengths of infrared light centered at 938 nanometers, considered to be the imaging science subsystem's best spectral filter for observing the surface of Titan. This view was acquired from a distance of 33,000 kilometers (20,500 miles). The pixel scale of this image is 390 meters (0.2 miles) per pixel, although the actual resolution is likely to be several times larger. http://photojournal.jpl.nasa.gov/catalog/PIA06234

  4. Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation

    NASA Astrophysics Data System (ADS)

    Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas

    2013-03-01

High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
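    The selective blending itself reduces to a per-pixel weighted mix of the HDR and original images, as sketched below; how the weighting surface is built from touchscreen strokes is not shown, and the circular "face" region is a stand-in.

```python
# Minimal sketch of the selective blend: a user-controlled weight surface mixes the
# HDR (tone-mapped) image and the original image per pixel.
import numpy as np

def touch_hdr_blend(original: np.ndarray, hdr: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """original, hdr: HxWx3 float images; weight: HxW in [0, 1], 1 = fully HDR."""
    w = np.clip(weight, 0.0, 1.0)[..., None]
    return w * hdr + (1.0 - w) * original

h, w = 128, 128
original = np.random.rand(h, w, 3)
hdr = np.clip(original * 1.5, 0, 1)                       # placeholder "HDR/LTM" version
weight = np.zeros((h, w))
yy, xx = np.indices((h, w))
weight[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 1.0   # stand-in for a touched region
hybrid = touch_hdr_blend(original, hdr, weight)
```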

  5. Location and Geologic Setting for the Three U.S. Mars Landers

    NASA Technical Reports Server (NTRS)

    Parker, T. J.; Kirk, R. L.

    1999-01-01

Super resolution of the horizon at both Viking landing sites has revealed "new" features we use for triangulation, similar to the approach used during the Mars Pathfinder Mission. We propose alternative landing site locations for both landers, for which we believe the confidence is very high. Super resolution of VL-1 images also reveals some of the drift material at the site to consist of gravel-size deposits. Since our proposed location for VL-2 is NOT on the Mie ejecta blanket, the blocky surface around the lander may represent the meter-scale texture of "smooth plains" in the region. The Viking Lander panchromatic images typically offer more repeat coverage than does the IMP on Mars Pathfinder, due to the longer duration of these landed missions. Sub-pixel offsets, necessary for super resolution to work, appear to be attributable to thermal effects on the lander and settling of the lander over time. Due to the greater repeat coverage (particularly in the near and mid-fields) and all-panchromatic images, the gain in resolution by super resolution processing is better for Viking than it is with most IMP image sequences. This enhances the study of textural details near the lander and enables the identification of rock and surface textures at greater distances from the lander. Discernment of stereo in super resolution images is possible to great distances from the lander, but is limited by the non-rotating baseline between the two cameras and the shorter height of the cameras above the ground compared to IMP. With super resolution, details of horizon features, such as blockiness and crater rim shapes, may be better correlated with Orbiter images. A number of horizon features, craters and ridges, were identified at VL-1 during the mission, and a few hills and subtle ridges were identified at VL-2. We have added a few "new" horizon features for triangulation at the VL-2 landing site in Utopia Planitia. These features were used for independent triangulation with features visible in Viking Orbiter and MGS MOC images, though the actual location of VL-1 lies in a data dropout in the MOC image of the area. Additional information is contained in the original extended abstract.

  6. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.

    2013-09-01

The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  7. Predictive value of dual-energy spectral computed tomographic imaging on the histological origin of carcinomas in the ampullary region.

    PubMed

    Wei, Wei; Yu, Yongqiang; Lv, Weifu; Deng, Kexue; Yuan, Lei; Zhao, Yingming

    2014-08-01

To investigate the value of dual-energy spectral computed tomographic imaging (DESCT) to predict the origin of carcinomas in the ampullary region. Fifty-seven patients with suspected ampullary region carcinomas underwent DESCT prior to biopsy or surgery. Among those patients, 30 were pancreatic adenocarcinomas, 11 were biliary adenocarcinomas, and 16 were adenocarcinomas of the ampulla, diagnosed by biopsy and/or pathological examination before or after surgical operation. We compared the CT spectral imaging features among the adenocarcinomas of the above-mentioned three different origins. Iodine concentration thresholds of 16.36, 21.86, and 21.86 mg/mL yielded a sensitivity and specificity of 100% for distinguishing between common bile duct adenocarcinomas and pancreatic adenocarcinomas in the arterial phase (AP), portal venous phase (PP), and delayed phase (DP), respectively. Thresholds of 16.70, 24.33, and 26.43 mg/mL yielded a sensitivity and specificity of 100% for distinguishing between common bile duct adenocarcinomas and ampullary adenocarcinomas in the AP, PP, and DP, respectively. Iodine concentration thresholds of 16.66 and 17.78 mg/mL yielded a sensitivity and specificity of 100% for distinguishing between ampullary adenocarcinomas and pancreatic adenocarcinomas in the PP and DP, respectively. DESCT with multiple parameters can provide useful diagnostic information and may be used to predict the histological origin of carcinomas in the ampullary region.

  8. Radiometry simulation within the end-to-end simulation tool SENSOR

    NASA Astrophysics Data System (ADS)

    Wiest, Lorenz; Boerner, Anko

    2001-02-01

An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
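    Look-up-table radiometry with multilinear interpolation can be sketched with SciPy's RegularGridInterpolator standing in for SENSOR's cache-based interpolator; the table axes and the MODTRAN-derived values below are placeholders.

```python
# Hedged sketch of look-up-table radiometry with multilinear interpolation; SciPy's
# RegularGridInterpolator stands in for SENSOR's cached interpolator, and the LUT
# axes/values are placeholders rather than real MODTRAN outputs.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Example LUT axes: wavelength, water vapour column, view zenith angle (placeholders).
wavelength = np.linspace(0.4, 2.5, 50)     # micrometres
water_vapour = np.linspace(0.5, 4.0, 8)    # g/cm^2
view_zenith = np.linspace(0.0, 30.0, 7)    # degrees

rng = np.random.default_rng(0)
radiance_lut = rng.random((50, 8, 7))      # placeholder radiative-transfer outputs

interp = RegularGridInterpolator(
    (wavelength, water_vapour, view_zenith), radiance_lut, method="linear")

# At-sensor radiance for an arbitrary condition inside the grid:
print(interp([[1.05, 2.3, 12.5]]))
```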

  9. Investigating Mars: Arsia Mons

    NASA Image and Video Library

    2018-01-01

    The three large aligned Tharsis volcanoes are Arsia Mons, Pavonis Mons and Ascraeus Mons (from south to north). There are collapse features on all three volcanoes, on the southwestern and northeastern flanks. This alignment may indicate a large fracture/vent system was responsible for the eruptions that formed all three volcanoes. The flows originating from Arsia Mons are thought to be the youngest of the region. This VIS image shows part of the northeastern flank of Arsia Mons at the summit caldera. In this region the summit caldera does not have a steep margin most likely due to renewed volcanic flows within this region of the caldera. The scalloped depressions at the top of the image are most likely created by collapse of the roof of lava tubes. Lava tubes originate during an eruption, when the margins of a flow harden around a still flowing lava stream. When an eruption ends, these can become hollow tubes within the flow. With time, the roof of the tube may collapse into the empty space below. The tubes are linear, so the collapse of the roof creates a linear depression. Arsia Mons is the southernmost of the Tharsis volcanoes. It is 270 miles (450km) in diameter, almost 12 miles (20km) high, and the summit caldera is 72 miles (120km) wide. For comparison, the largest volcano on Earth is Mauna Loa. From its base on the sea floor, Mauna Loa measures only 6.3 miles high and 75 miles in diameter. A large volcanic crater known as a caldera is located at the summit of all of the Tharsis volcanoes. These calderas are produced by massive volcanic explosions and collapse. The Arsia Mons summit caldera is larger than many volcanoes on Earth. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 17716 Latitude: -8.11179 Longitude: 240.245 Instrument: VIS Captured: 2005-12-12 00:29 https://photojournal.jpl.nasa.gov/catalog/PIA22155

  10. ARC-1990-A90-3000

    NASA Image and Video Library

    1990-08-21

    After traveling more than 1.5 billion km (948 million mi.), the Magellan spacecraft was inserted into orbit around Venus on Aug. 10, 1990. This mosaic consists of adjacent pieces of two Magellan image strips obtained in the first radar test. The radar test was part of a planned In-Orbit Checkout sequence designed to prepare the Magellan spacecraft and radar to begin mapping after Aug. 31. The strip on the left was returned to the Goldstone Deep Space Network station in California; the strip to the right was received at the DSN in Canberra, Australia. A third station that will be receiving Magellan data is located near Madrid, Spain. Each image strip is 20 km (12 mi.) wide and 16,000 km (10,000 mi.) long. This mosaic is a small portion 80 km (50 mi.) long. This image is centered at 21 degrees north latitude and 286.8 degrees east longitude, southeast of a volcanic highland region called Beta Regio. The resolution of the image is about 120 meters (400 feet), 10 times better than previous images of the same area of Venus, revealing many new geologic features. The bright line trending northwest-southeast across the center of the image is a fracture or fault zone cutting the volcanic plains. In the upper left corner of the image, a multiple-ring circular feature of probable volcanic origin can be seen, approx. 4.27 km (2.65 mi.) across. The bright and dark variations seen in the plains surrounding these features correspond to volcanic lava flows of varying ages. The volcanic lava flows in the southern half of the image have been cut by north-south trending faults. This area is similar geologically to volcanic deposits seen on Earth at Hawaii and the Snake River Plains in Idaho.

  11. Investigating Mars: Melas Chasma

    NASA Image and Video Library

    2017-12-06

    Melas Chasma is part of the largest canyon system on Mars, Valles Marineris. At only 563 km long (349 miles) it is not the longest canyon, but it is the widest. Located in the center of Valles Marineris, it has depths up to 9 km below the surrounding plains, and is the location of many large landslide deposits, as well as layered materials and sand dunes. There is evidence of both water and wind action as modes of formation for many of the interior deposits. This VIS image is located right at the edge of the canyon with the surrounding plains - the flat area at the bottom of the image. Some small landslide deposits are visible originating at the cliff side. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 26762 Latitude: -13.4233 Longitude: 287.973 Instrument: VIS Captured: 2007-12-26 19:46 https://photojournal.jpl.nasa.gov/catalog/PIA22136

  12. Investigating Mars: Coprates Chasma

    NASA Image and Video Library

    2017-10-06

    Coprates Chasma is one of the numerous canyons that make up Valles Marineris. The chasma stretches for 960 km (600 miles) from Melas Chasma to the west and Capri Chasma to the east. Landslide deposits, layered materials and sand dunes cover a large portion of the chasma floor. This image is located in central Coprates Chasma. The brighter materials at the bottom of the image are layered deposits. It is unknown how deep these canyon deposits were when they formed. The layering is only visible due to erosion, making it difficult to estimate the original thickness. While layered deposits can be found on the floor of Coprates Chasma, they are most commonly found along the lower elevations and at the bottom of the cliff faces in the canyon. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 51810 Latitude: -12.6848 Longitude: 295.197 Instrument: VIS Captured: 2013-08-18 22:56 https://photojournal.jpl.nasa.gov/catalog/PIA22000

  13. Diffraction and imaging study of imperfections of crystallized lysozyme with coherent X-rays

    NASA Technical Reports Server (NTRS)

    Hu, Z. W.; Chu, Y. S.; Lai, B.; Thomas, B. R.; Chernov, A. A.

    2004-01-01

    Phase-contrast X-ray diffraction imaging and high-angular-resolution diffraction combined with phase-contrast radiographic imaging were employed to characterize defects and perfection of a uniformly grown tetragonal lysozyme crystal in the symmetric Laue case. The full-width at half-maximum (FWHM) of a 4 4 0 rocking curve measured from the original crystal was approximately 16.7 arcsec and imperfections including line defects, inclusions and other microdefects were observed in the diffraction images of the crystal. The observed line defects carry distinct dislocation features running approximately along the <1 1 0> growth front and have been found to originate mostly in a central growth area and occasionally in outer growth regions. Inclusions of impurities or formations of foreign particles in the central growth region are resolved in the images with high sensitivity to defects. Slow dehydration led to the broadening of a fairly symmetric 4 4 0 rocking curve by a factor of approximately 2.6, which was primarily attributed to the dehydration-induced microscopic effects that are clearly shown in X-ray diffraction images. The details of the observed defects and the significant change in the revealed microstructures with drying provide insight into the nature of imperfections, nucleation and growth, and the properties of protein crystals.

  14. Pattern recognition and feature extraction with an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel

    2016-09-01

    Pattern recognition and localization along with feature extraction are image processing applications of great interest in defect inspection and robot vision among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements for this mapping make the optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained, and the size and orientation parameters can be controlled, allowing for dynamic scale and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. Besides, by thresholding of the GHT (with the aid of another SLM) and inverse transforming (which is optically achieved in the incoherent system under appropriate focusing setting), the previously detected features of interest can be extracted.
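
    Below is a minimal digital sketch of the GHT voting step that the optical system performs in parallel: an R-table built from a template and an accumulator whose peaks mark detected instances. The edge-extraction details are simplified assumptions (a filled square stands in for a template edge map).

```python
# Hedged sketch of generalized Hough transform (GHT) accumulation with NumPy.
import numpy as np

def build_r_table(template_edges, reference_point):
    """R-table: displacement of every template edge pixel from a reference point
    (orientation binning omitted for brevity)."""
    ys, xs = np.nonzero(template_edges)
    ry, rx = reference_point
    return np.stack([ys - ry, xs - rx], axis=1)

def ght_accumulate(image_edges, r_table):
    """Every image edge pixel casts one vote per R-table entry at the implied
    reference-point location."""
    acc = np.zeros(image_edges.shape, dtype=np.int32)
    ys, xs = np.nonzero(image_edges)
    for dy, dx in r_table:
        cy, cx = ys - dy, xs - dx
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Toy example: locate a 5x5 square in a synthetic edge image.
template = np.zeros((16, 16), bool); template[5:10, 5:10] = True
image = np.zeros((64, 64), bool); image[20:25, 30:35] = True
acc = ght_accumulate(image, build_r_table(template, (7, 7)))
print(np.unravel_index(acc.argmax(), acc.shape))   # peak near (22, 32), the square's centre

# Scale and rotation would be handled by extra loops or, as in the paper,
# by a tunable lens and a rotating pupil mask.
```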

  15. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data

    NASA Astrophysics Data System (ADS)

    Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.

    2017-06-01

    Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Indeed, both the radiometric and spatial resolution of Landsat sensor images are well suited to cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input into the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input of STARFM did not significantly improve the STARFM predictions compared to using only one, and predictions using Landsat images between July and August as input were most accurate. Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from a full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which gives an alternative method of feature selection for future research.
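
    A hedged sketch of the classification comparison follows: stacking one original Landsat acquisition with STARFM-predicted bands for key dates and classifying per pixel with an SVM. Array names, shapes, and labels are placeholder assumptions; STARFM itself is not implemented here.

```python
# Hedged sketch: compare "Landsat only" vs "Landsat + key-date predictions".
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

n_pixels = 5000
landsat = np.random.rand(n_pixels, 6)           # 6 reflective bands, one acquisition (toy data)
starfm_keydates = np.random.rand(n_pixels, 12)  # 2 predicted key dates x 6 bands (toy data)
labels = np.random.randint(0, 4, n_pixels)      # crop-type labels (toy data)

scenario_1 = landsat                                  # original Landsat image only
scenario_3 = np.hstack([landsat, starfm_keydates])    # plus STARFM predictions for key dates

for name, X in [("Landsat only", scenario_1), ("Landsat + key dates", scenario_3)]:
    acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    print(name, "accuracy:", round(acc, 3))
```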

  16. Balance the nodule shape and surroundings: a new multichannel image based convolutional neural network scheme on lung nodule diagnosis

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei

    2017-03-01

    Deep learning is a promising and rapidly developing method in the medical image analysis area, but how to efficiently prepare the input image for deep learning algorithms remains a challenge. In this paper, we introduced a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNN). From the LIDC database, we collected 54880 benign nodule samples and 59848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule which highlighted the nodule shape, and the other one contains the gradient of the original ROI which highlighted the textures. By combining the three channel images into a pseudo color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For the comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). With the 5-fold cross validation evaluation method, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823 ± 0.0177, compared to an AUC of 0.8484 ± 0.0204 generated by the original ROI. By calculating the average of ROI scores from one nodule, the lesion-based AUC using the multichannel ROI was 0.8793 ± 0.0210. By comparing the convolved feature maps from the CNN using different types of ROIs, it can be noted that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
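
    The sketch below illustrates assembling a pseudo-color "multichannel ROI II"-style input: the original ROI, a nodule-shape channel, and a gradient (texture) channel stacked together. Segmentation is mocked by a simple threshold; the real pipeline uses the LIDC annotations.

```python
# Hedged sketch of a three-channel ROI for a CNN input.
import numpy as np
from scipy import ndimage

def make_multichannel_roi(roi):
    roi = roi.astype(np.float32)
    nodule = np.where(roi > roi.mean(), roi, 0.0)         # crude stand-in for nodule segmentation
    gy, gx = np.gradient(ndimage.gaussian_filter(roi, 1.0))
    grad = np.sqrt(gx ** 2 + gy ** 2)                     # texture-emphasising channel
    channels = [(c - c.min()) / (np.ptp(c) + 1e-8) for c in (roi, nodule, grad)]
    return np.stack(channels, axis=-1)                    # H x W x 3 pseudo-colour ROI

pseudo_rgb = make_multichannel_roi(np.random.rand(64, 64))
print(pseudo_rgb.shape)  # (64, 64, 3)
```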

  17. Feature Transformation Detection Method with Best Spectral Band Selection Process for Hyper-spectral Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike; Brickhouse, Mark

    2015-11-01

    We present a newly developed feature transformation (FT) detection method for hyper-spectral imagery (HSI) sensors. In essence, the FT method, by transforming the original features (spectral bands) to a different feature domain, may considerably increase the statistical separation between the target and background probability density functions, and thus may significantly improve the target detection and identification performance, as evidenced by the test results in this paper. We show that by differentiating the original spectra, one can completely separate targets from the background using a single spectral band, leading to perfect detection results. In addition, we have proposed an automated best spectral band selection process with a double-threshold scheme that can rank the available spectral bands from the best to the worst for target detection. Finally, we have also proposed an automated cross-spectrum fusion process to further improve the detection performance in the lower spectral range (<1000 nm) by selecting the best spectral band pair with multivariate analysis. Promising detection performance has been achieved using a small background material signature library as a proof of concept, and has then been further evaluated and verified using a real background HSI scene collected by a HYDICE sensor.
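
    A minimal sketch of the core idea follows: differentiate each pixel's spectrum and detect targets by thresholding a single transformed band. The cube, band index, and threshold are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: spectral differentiation as a feature transformation.
import numpy as np

cube = np.random.rand(128, 128, 200)        # hypothetical HSI cube: rows x cols x bands
d_cube = np.diff(cube, axis=2)              # first spectral derivative (the transformed features)

band = 57                                   # "best band" as chosen by a band-ranking step
threshold = 0.9                             # separates target/background derivative values
detection_map = d_cube[:, :, band] > threshold
print(detection_map.sum(), "pixels flagged as target")
```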

  18. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for achieving semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting visual words created from both feature descriptors into a two-dimensional histogram. Moreover, our method generates labels as category candidates for time-series images while maintaining both stability and plasticity. Automatic labeling of category maps can be realized using labels created using adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method for fine classification using our original mobile robot. We obtained a mean classification accuracy of 83.2% for six zones.

  19. Human ear detection in the thermal infrared spectrum

    NASA Astrophysics Data System (ADS)

    Abaza, Ayman; Bourlai, Thirimachos

    2012-06-01

    In this paper the problem of human ear detection in the thermal infrared (IR) spectrum is studied in order to illustrate the advantages and limitations of the most important steps of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible and thermal profile face images. The thermal data was collected using a high definition middle-wave infrared (3-5 microns) camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based ear detection method is developed for real-time segmentation of human ears in either day or night time environments. The proposed method is based on Haar features forming a cascaded AdaBoost classifier (our modified version of the original Viola-Jones approach [1] that was designed to be applied mainly in visible band images). The main advantage of the proposed method, applied to our profile face image data set collected in the thermal-band, is that it is designed to reduce the learning time required by the original Viola-Jones method from several weeks to several hours. Unlike other approaches reported in the literature, which have been tested but not designed to operate in the thermal band, our method yields a high detection accuracy that reaches ~ 91.5%. Further analysis on our data set yielded that: (a) photometric normalization techniques do not directly improve ear detection performance. However, when using a certain photometric normalization technique (CLAHE) on falsely detected images, the detection rate improved by ~ 4%; (b) the high detection accuracy of our method did not degrade when we lowered the original spatial resolution of thermal ear images. For example, even after using one third of the original spatial resolution (i.e. ~ 20% of the original computational time) of the thermal profile face images, the high ear detection accuracy of our method remained unaffected. This also reduced the detection time from 265 to 17 milliseconds per image. To the best of our knowledge this is the first time that the problem of human ear detection in the thermal band is being investigated in the open literature.
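
    For orientation, here is a hedged sketch of Haar-cascade detection with OpenCV, analogous to the modified Viola-Jones detector described above. The cascade XML path and image file are placeholders: OpenCV does not ship a thermal ear cascade, so the model would have to be trained separately.

```python
# Hedged sketch: cascaded Haar-feature detection with OpenCV.
import cv2

cascade = cv2.CascadeClassifier("thermal_ear_cascade.xml")    # hypothetical trained cascade
image = cv2.imread("profile_face_thermal.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Sliding-window detection over an image pyramid; parameters are illustrative.
ears = cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24))
for (x, y, w, h) in ears:
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 2)
cv2.imwrite("detections.png", image)
```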

  20. System for objective assessment of image differences in digital cinema

    NASA Astrophysics Data System (ADS)

    Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2014-09-01

    There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to answer the question of what quality requirements the restoration process must meet so that the digitally restored film is perceived as acceptably close to the original analog film copy. This knowledge is very important to preserve the original artistic intention of the movie producers. A subjective experiment with artificially distorted images was conducted to determine the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced and test images were presented to viewers using a digital projector. Based on the outcome of this subjective evaluation, a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes a calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized in order to obtain predicted differences between the reference and distorted images while achieving high correlation with the results of subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.

  1. Global Observation Information Networking: Using the Distributed Image Spreadsheet (DISS)

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz

    1999-01-01

    The DISS and many other tools will be used to present visualizations which span the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI Onyx Graphics-Supercomputers are NASA's visualizations of Hurricanes Mitch, Georges, Fran and Linda. These storms have been recently featured on the covers of National Geographic, Time, Newsweek and Popular Science and used repeatedly this season on National and International network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS.

  2. Crew Earth Observations (CEO) taken during Expedition 8

    NASA Image and Video Library

    2004-01-06

    ISS008-E-12109 (6 January 2004) --- Five-year-old icebergs near South Georgia Island are featured in this image photographed by an Expedition 8 crewmember onboard the International Space Station (ISS). This oblique image shows two pieces of a massive iceberg that broke off from Antarctica's Ronne Ice Shelf in October 1998. The pieces of iceberg A-38 have floated relatively close to South Georgia Island. After five years and three months, they are approximately 1500 nautical miles from their origin. The cloud pattern is indicative of the impact of the mountainous islands on the local wind field. At the time this image was taken, the icebergs were sheltered on the lee side of the island.

  3. Clinical and imaging features in lung torsion and description of a novel imaging sign.

    PubMed

    Hammer, Mark M; Madan, Rachna

    2018-04-01

    We set out to identify the clinical and imaging features seen in lung torsion, a rare but emergent diagnosis leading to vascular compromise of a lobe or entire lung. We retrospectively identified 10 patients with torsion who underwent chest CT. We evaluated each case for the presence of bronchial obstruction and abnormal fissure orientation. In seven patients who underwent contrast-enhanced CTs, we assessed for the presence of the antler sign, a novel sign seen on axial images demonstrating abnormal curvature of the artery and branches originating on one side. Five patients had right middle lobe (RML) torsion after right upper lobectomy, and the remaining cases occurred following thoracentesis or aortic surgery, or arose spontaneously. Chest CTs demonstrated bronchial obstruction in eight cases and abnormal fissure orientation in four patients. The antler sign was present in three patients with whole-lung torsion and one patient with lobar torsion; vascular swirling was seen on 3-D images in all seven patients with contrast-enhanced CTs. Lung parenchymal imaging findings in lung torsion may be non-specific. Identification of the antler sign on contrast-enhanced chest CT, in combination with other signs such as bronchial obstruction and abnormal fissure orientation, indicates rotation of the bronchovascular pedicle. The presence of this sign should prompt further evaluation with 3-dimensional reconstructions.

  4. Machine learning-based diagnosis of melanoma using macro images.

    PubMed

    Gautam, Diwakar; Ahmed, Mushtaq; Meena, Yogesh Kumar; Ul Haq, Ahtesham

    2018-05-01

    Cancer poses a serious threat to human society. Melanoma, a skin cancer, originates in the skin layers and penetrates deep into the subcutaneous layers. There is extensive research on melanoma diagnosis using dermatoscopic images captured through a dermatoscope. While designing a diagnostic model for general handheld imaging systems is an emerging trend, this article proposes a computer-aided decision support system for macro images captured by a general-purpose camera. General imaging conditions are adversely affected by nonuniform illumination, which further affects the extraction of relevant information. To mitigate this, we process an image to define a smooth illumination surface using the multistage illumination compensation approach, and the lesion region is extracted using the proposed multimode segmentation method. The lesion information is quantified as a feature set comprising geometry, photometry, border series, and texture measures. The redundancy in the feature set is reduced using information theory methods, and a classification boundary is modeled to distinguish benign and malignant samples using support vector machine, random forest, neural network, and fast discriminative mixed-membership-based naive Bayesian classifiers. Moreover, the experimental outcome is supported by hypothesis testing and boxplot representation for classification losses. The simulation results prove the significance of the proposed model, which shows improved performance compared with competing methods. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Development of a hybrid image processing algorithm for automatic evaluation of intramuscular fat content in beef M. longissimus dorsi.

    PubMed

    Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul

    2008-12-01

    An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from structural features of intermuscular fat surrounding the muscle, a three-step image processing algorithm was developed, i.e., bilateral filtering for noise removal, kernel fuzzy c-means clustering (KFCM) for segmentation, and vector confidence connected and flood fill for IMF extraction. The technique of bilateral filtering was first applied to reduce the noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image by using the techniques of vector confidence connected and flood filling. The performance of the algorithm developed was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P<0.05). Five IMF features are very significantly correlated with the fat content (P<0.001), including count densities of middle (CDMiddle) and large (CDLarge) fat particles, area densities of middle and large fat particles, and total fat area per unit LD area. The highest coefficient is 0.852 for CDLarge.
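
    A hedged sketch of a comparable three-step pipeline is shown below: bilateral filtering, clustering-based segmentation, and fat-particle extraction. KMeans is used as a simple stand-in for kernel fuzzy c-means, and connected-component analysis as a stand-in for the vector-confidence-connected/flood-fill step; the input image is synthetic.

```python
# Hedged sketch: filter -> cluster -> count fat particles.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from scipy import ndimage

bgr = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)   # stand-in for a beef LD image
filtered = cv2.bilateralFilter(bgr, d=9, sigmaColor=75, sigmaSpace=75)

pixels = filtered.reshape(-1, 3).astype(np.float32)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
label_img = labels.reshape(filtered.shape[:2])

# Assume the cluster with the brightest mean intensity corresponds to fat.
fat_cluster = np.argmax([filtered[label_img == k].mean() for k in range(3)])
fat_mask = label_img == fat_cluster

# Count and size the candidate intramuscular fat particles.
particles, n = ndimage.label(fat_mask)
sizes = ndimage.sum(fat_mask, particles, range(1, n + 1))
print(n, "fat particles; mean area", sizes.mean(), "px")
```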

  6. Distinction between amorphous and healed planar deformation features in shocked quartz using composite color scanning electron microscope cathodoluminescence (SEM-CL) imaging

    NASA Astrophysics Data System (ADS)

    Hamers, Maartje F.; Pennock, Gill M.; Herwegh, Marco; Drury, Martyn R.

    2016-10-01

    Planar deformation features (PDFs) in quartz are one of the most reliable and most widely used forms of evidence for hypervelocity impact. PDFs can be identified in scanning electron microscope cathodoluminescence (SEM-CL) images, but not all PDFs show the same CL behavior: there are nonluminescent and red luminescent PDFs. This study aims to explain the origin of the different CL emissions in PDFs. Focused ion beam (FIB) thin foils were prepared of specific sample locations selected in composite color SEM-CL images and were analyzed in a transmission electron microscope (TEM). The FIB preparation technique allowed a direct, often one-to-one correlation between the CL images and the defect structure observed in TEM. This correlation shows that composite color SEM-CL imaging allows distinction between amorphous PDFs on one hand and healed PDFs and basal Brazil twins on the other: nonluminescent PDFs are amorphous, while healed PDFs and basal Brazil twins are red luminescent, with a dominant emission peak at 650 nm. We suggest that the red luminescence is the result of preferential beam damage along dislocations, fluid inclusions, and twin boundaries. Furthermore, a high-pressure phase (possibly stishovite) in PDFs can be detected in color SEM-CL images by its blue luminescence.

  7. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products, and with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
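
    The sketch below illustrates the "common map properties" heuristic mentioned above: separating elongated road-like objects from small text-like connected components in a binarized map layer. The thresholds and the random placeholder input are illustrative assumptions.

```python
# Hedged sketch: connected-component shape heuristics for road/text separation.
import numpy as np
from skimage import measure

binary_map = np.random.rand(512, 512) > 0.7          # placeholder for a binarized map layer
labels = measure.label(binary_map, connectivity=2)

road_mask = np.zeros_like(binary_map)
text_mask = np.zeros_like(binary_map)
for region in measure.regionprops(labels):
    elongation = region.major_axis_length / max(region.minor_axis_length, 1.0)
    if region.area > 300 and elongation > 4:          # long, thin objects -> candidate road lines
        road_mask[labels == region.label] = True
    elif region.area < 100:                           # small blobs -> candidate characters
        text_mask[labels == region.label] = True
```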

  8. Tree species classification using within crown localization of waveform LiDAR attributes

    NASA Astrophysics Data System (ADS)

    Blomley, Rosmarie; Hovi, Aarne; Weinmann, Martin; Hinz, Stefan; Korpela, Ilkka; Jutzi, Boris

    2017-11-01

    Since forest planning is increasingly taking an ecological, diversity-oriented perspective into account, remote sensing technologies are becoming ever more important in assessing existing resources with reduced manual effort. While the light detection and ranging (LiDAR) technology provides a good basis for predictions of tree height and biomass, tree species identification based on this type of data is particularly challenging in structurally heterogeneous forests. In this paper, we analyse existing approaches with respect to the geometrical scale of feature extraction (whole tree, within crown partitions or within laser footprint) and conclude that currently features are always extracted separately from the different scales. Since multi-scale approaches have, however, proven successful in other applications, we aim to utilize the within-tree-crown distribution of within-footprint signal characteristics as additional features. To do so, a spin image algorithm, originally devised for the extraction of 3D surface features in object recognition, is adapted. This algorithm relies on spinning an image plane around a defined axis, e.g. the tree stem, collecting the number of LiDAR returns or mean values of return attributes per pixel as respective values. Based on this representation, spin image features are extracted that comprise only those components of highest variability among a given set of library trees. The relative performance and the combined improvement of these spin image features with respect to non-spatial statistical metrics of the waveform (WF) attributes are evaluated for the tree species classification of Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst.) and Silver/Downy birch (Betula pendula Roth/Betula pubescens Ehrh.) in a boreal forest environment. This evaluation is performed for two WF LiDAR datasets that differ in footprint size, pulse density at ground, laser wavelength and pulse width. Furthermore, we evaluate the robustness of the proposed method with respect to internal parameters and tree size. The results reveal that the consideration of the crown-internal distribution of within-footprint signal characteristics captured in spin image features improves the classification results in nearly all test cases.
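
    Below is a hedged sketch of a spin-image-style accumulation for a single tree: returns are binned by radial distance from the stem axis and by height, producing a 2-D "image" whose cells can carry counts or mean waveform attributes. Bin counts and the vertical-axis convention are illustrative assumptions.

```python
# Hedged sketch: spin-image-like histogram around a tree stem axis.
import numpy as np

def tree_spin_image(points, stem_xy, n_r=16, n_h=32):
    """points: (N, 3) LiDAR returns for one crown; stem_xy: (x, y) of the stem axis."""
    r = np.hypot(points[:, 0] - stem_xy[0], points[:, 1] - stem_xy[1])
    h = points[:, 2] - points[:, 2].min()
    img, _, _ = np.histogram2d(h, r, bins=(n_h, n_r))
    return img / max(img.sum(), 1.0)          # normalize so trees of different return density compare

spin = tree_spin_image(np.random.rand(10000, 3) * [4, 4, 20], stem_xy=(2.0, 2.0))
print(spin.shape)   # (32, 16); flattened, it can feed a feature-selection/classification step
```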

  9. A Subdivision-Based Representation for Vector Image Editing.

    PubMed

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  10. Acidalia Planitia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [figure removed for brevity, see original site] (Released 25 July 2002) The lineations seen in this THEMIS visible image occur in Acidalia Planitia, and create what is referred to as 'patterned ground' or 'polygonal terrain.' The lineations are fissures, or cracks in the ground and are possibly evidence that there was once subsurface ice or water in the region. On Earth, similar features occur when ice or water is removed from the subsurface. The removal of material causes the ground to slump, and the surface expression of this slumping is the presence of these fissures, which tend to align themselves along common orientations, and in some cases, into polygonal shapes. There are other hypotheses, not all of which involve liquid or frozen water, regarding the formation of patterned ground. Desiccation of wet soils on Earth forms mud cracks, which are similar in appearance to the martian features, but occur on a much smaller scale. Alternatively, oriented cracks form when lava flows cool. The cracks formed by this process would be on about the same scale as those seen in this image. The best example of polygonal terrain occurs about halfway down the image. The largest fractures, as in other places in the image, run from the lower left to the upper right of the image. In some cases, though, smaller fractures occur in other orientations, creating the polygonal terrain. Scientists have been aware of these features on the surface of Mars since the Viking era, but the THEMIS visible camera will allow scientists to map these features at higher resolution with more coverage over the high latitude regions where they are most common, perhaps giving further insight into the mechanism(s) of their formation.

  11. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  12. Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Coetzer, J.; Herbst, B. M.; du Preez, J. A.

    2004-12-01

    We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
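
    The following is a hedged sketch of extracting discrete Radon transform (DRT) observations from a static signature image with scikit-image; each projection angle yields one observation vector, which an HMM would then model as a sequence. The synthetic "signature" and the normalization are illustrative assumptions, and the HMM step is omitted.

```python
# Hedged sketch: DRT feature extraction for an offline signature.
import numpy as np
from skimage.transform import radon

# Stand-in for a binarized offline signature image (a crude horizontal stroke).
signature = np.zeros((200, 200))
signature[95:105, 30:170] = 1.0

angles = np.linspace(0.0, 180.0, 128, endpoint=False)
sinogram = radon(signature, theta=angles)            # one projection (column) per angle

# One normalized feature vector per angle, forming the HMM observation sequence.
observations = (sinogram / (sinogram.sum(axis=0, keepdims=True) + 1e-8)).T
print(observations.shape)                            # (128, n_projection_bins)
```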

  13. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
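
    A hedged sketch of the sequential-classifier idea follows: each layer is trained on appearance features plus context features computed from the previous layer's probability map, and its output probabilities feed the next layer. Feature extraction is mocked and all names and shapes are toy assumptions.

```python
# Hedged sketch: prior-guided sequential random forests on toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def context_features(prob_map):
    """Very crude context: the probability itself plus two shifted copies."""
    return np.stack([prob_map,
                     np.roll(prob_map, 3, axis=0),
                     np.roll(prob_map, 3, axis=1)], axis=-1).reshape(prob_map.size, -1)

appearance = np.random.rand(64 * 64, 10)          # per-voxel appearance features (toy)
labels = np.random.randint(0, 2, 64 * 64)         # 0 = background, 1 = structure (toy)
prob_map = np.full((64, 64), 0.5)                 # prior probability map, e.g. from majority voting

classifiers = []
for layer in range(3):                            # sequence of classifiers
    X = np.hstack([appearance, context_features(prob_map)])
    clf = RandomForestClassifier(n_estimators=100, random_state=layer).fit(X, labels)
    prob_map = clf.predict_proba(X)[:, 1].reshape(64, 64)   # updated map feeds the next layer
    classifiers.append(clf)
```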

  14. Lessons Learned from OSIRIS-Rex Autonomous Navigation Using Natural Feature Tracking

    NASA Technical Reports Server (NTRS)

    Lorenz, David A.; Olds, Ryan; May, Alexander; Mario, Courtney; Perry, Mark E.; Palmer, Eric E.; Daly, Michael

    2017-01-01

    The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft is scheduled to launch in September 2016 to embark on an asteroid sample return mission. It is expected to rendezvous with the asteroid Bennu, navigate to the surface, collect a sample (July 20), and return the sample to Earth (September 23). The original mission design called for using one of two Flash Lidar units to provide autonomous navigation to the surface. Following preliminary design and initial development of the Lidars, reliability issues with the hardware and test program prompted the project to begin development of an alternative navigation technique to be used as a backup to the Lidar. At the critical design review, Natural Feature Tracking (NFT) was added to the mission. NFT is an onboard optical navigation system that compares observed images to a set of asteroid terrain models which are rendered in real-time from a catalog stored in memory on the flight computer. Onboard knowledge of the spacecraft state is then updated by a Kalman filter using the measured residuals between the rendered reference images and the actual observed images. The asteroid terrain models used by NFT are built from a shape model generated from observations collected during earlier phases of the mission and include both terrain shape and albedo information about the asteroid surface. As a result, the success of NFT is highly dependent on selecting a set of topographic features that can be both identified during descent as well as reliably rendered using the shape model data available. During development, the OSIRIS-REx team faced significant challenges in developing a process conducive to robust operation. This was especially true for terrain models to be used as the spacecraft gets close to the asteroid and higher fidelity models are required for reliable image correlation. This paper will present some of the challenges and lessons learned from the development of the NFT system, which includes not just the flight hardware and software but the development of the terrain models used to generate the onboard rendered images.

  15. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    PubMed

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
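
    The following is a hedged sketch of the dimensionality-reduction and classification stage described above: PCA on precomputed curvelet texture features combined with clinical predictors, then an SVM evaluated with 10-fold cross-validation. The curvelet transform itself and the real data are not included; all arrays are placeholders.

```python
# Hedged sketch: PCA on curvelet features + clinical predictors, SVM, 10-fold CV.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

n = 200
curvelet_features = np.random.rand(n, 300)        # placeholder curvelet texture features
clinical = np.random.rand(n, 11)                  # placeholder for the 11 clinical predictors
y = np.random.randint(0, 2, n)                    # benign (0) vs malignant (1), toy labels

pcs = PCA(n_components=12).fit_transform(StandardScaler().fit_transform(curvelet_features))
X = np.hstack([pcs, clinical])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())
```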

  16. Beyond Correlation: Do Color Features Influence Attention in Rainforest?

    PubMed Central

    Frey, Hans-Peter; Wirz, Kerstin; Willenbockel, Verena; Betz, Torsten; Schreiber, Cornell; Troscianko, Tomasz; König, Peter

    2011-01-01

    Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red–green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red–green color-contrast. The effects of blue–yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red–green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red–green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion. PMID:21519395

  17. Alzheimer's Disease Early Diagnosis Using Manifold-Based Semi-Supervised Learning.

    PubMed

    Khajehnejad, Moein; Saatlou, Forough Habibollahi; Mohammadzade, Hoda

    2017-08-20

    Alzheimer's disease (AD) is currently ranked as the sixth leading cause of death in the United States and recent estimates indicate that the disorder may rank third, just behind heart disease and cancer, as a cause of death for older people. Clearly, predicting this disease in the early stages and preventing it from progressing is of great importance. The diagnosis of Alzheimer's disease (AD) requires a variety of medical tests, which leads to huge amounts of multivariate heterogeneous data. It can be difficult and exhausting to manually compare, visualize, and analyze this data due to the heterogeneous nature of medical tests; therefore, an efficient approach for accurate prediction of the condition of the brain through the classification of magnetic resonance imaging (MRI) images is greatly beneficial and yet very challenging. In this paper, a novel approach is proposed for the diagnosis of very early stages of AD through an efficient classification of brain MRI images, which uses label propagation in a manifold-based semi-supervised learning framework. We first apply voxel morphometry analysis to extract some of the most critical AD-related features of brain images from the original MRI volumes and also gray matter (GM) segmentation volumes. The features must capture the most discriminative properties that vary between a healthy and Alzheimer-affected brain. Next, we perform a principal component analysis (PCA)-based dimension reduction on the extracted features for faster yet sufficiently accurate analysis. To make the best use of the captured features, we present a hybrid manifold learning framework which embeds the feature vectors in a subspace. Next, using a small set of labeled training data, we apply a label propagation method in the created manifold space to predict the labels of the remaining images and classify them in the two groups of mild Alzheimer's and normal condition (MCI/NC). The accuracy of the classification using the proposed method is 93.86% for the Open Access Series of Imaging Studies (OASIS) database of MRI brain images, providing, compared to the best existing methods, a 3% lower error rate.
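
    Below is a hedged sketch of the semi-supervised stage described above: after feature extraction and PCA, labels from a small training set are propagated through a graph built over all subjects. scikit-learn's LabelSpreading is used here as a generic stand-in for the paper's manifold-based label propagation, and the data are placeholders.

```python
# Hedged sketch: PCA followed by graph-based label propagation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.semi_supervised import LabelSpreading

features = np.random.rand(400, 5000)              # placeholder morphometry features per subject
reduced = PCA(n_components=30).fit_transform(features)

labels = -1 * np.ones(400, dtype=int)             # -1 marks unlabeled subjects
labels[:40] = np.random.randint(0, 2, 40)         # small labeled set: MCI (1) vs NC (0), toy

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(reduced, labels)
predicted = model.transduction_                   # propagated labels for all subjects
print(predicted[:10])
```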

  18. An end to end secure CBIR over encrypted medical database.

    PubMed

    Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole

    2016-08-01

    In this paper, we propose a new secure content-based image retrieval (SCBIR) system adapted to the cloud framework. This solution allows a physician to retrieve images of similar content within an outsourced and encrypted image database, without decrypting them. In contrast to existing CBIR approaches in the encrypted domain, the originality of the proposed scheme lies in the fact that the features extracted from the encrypted images are themselves encrypted. This is achieved by means of homomorphic encryption and two non-colluding servers, both of which we consider honest but curious. In that way, an end-to-end secure CBIR process is ensured. Experimental results carried out on a diabetic retinopathy database encrypted with the Paillier cryptosystem indicate that our SCBIR achieves retrieval performance as good as if images were processed in their non-encrypted form.
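
    For illustration, here is a hedged sketch of the homomorphic ingredient used above: with the Paillier cryptosystem (assumed here via the python-paillier package, `phe`), a server can sum encrypted feature values without decrypting them. The two-server SCBIR protocol itself is not reproduced.

```python
# Hedged sketch: additive homomorphism with Paillier (python-paillier).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

features = [0.12, 0.43, 0.88]                     # toy image feature values
encrypted = [public_key.encrypt(v) for v in features]

# A server holding only the public key can add encrypted values.
encrypted_sum = encrypted[0] + encrypted[1] + encrypted[2]
print(private_key.decrypt(encrypted_sum))         # ~1.43, recovered only by the key holder
```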

  19. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper, in order to enhance the visual appearance of bad handwriting. The overall improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear image-transformation operators is defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of these linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual appearance of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order, and drawing direction. The proposed visual improvement algorithm has great potential for application on tablet computers and the mobile Internet to improve the handwriting user experience.

  20. Diffraction and Imaging Study of Imperfections of Protein Crystals with Coherent X-rays

    NASA Technical Reports Server (NTRS)

    Hu, Z. W.; Thomas, B. R.; Chernov, A. A.; Chu, Y. S.; Lai, B.

    2004-01-01

    High angular-resolution x-ray diffraction and phase contrast x-ray imaging were combined to study defects and perfection of protein crystals. Imperfections including line defects, inclusions and other microdefects were observed in the diffraction images of a uniformly grown lysozyme crystal. The observed line defects carry distinct dislocation features running approximately along the <110> growth front and have been found to originate mostly in a central growth area and occasionally in outer growth regions. Slow dehydration led to the broadening of a fairly symmetric 4 4 0 rocking curve by a factor of approximately 2.6, which was primarily attributed to the dehydration-induced microscopic effects that are clearly shown in diffraction images. X-ray imaging and diffraction characterization of the quality of apoferritin crystals will also be discussed in the presentation.

  1. Mapping Vesta Mid-Latitude Quadrangle V-12EW: Mapping the Edge of the South Polar Structure

    NASA Astrophysics Data System (ADS)

    Hoogenboom, T.; Schenk, P.; Williams, D. A.; Hiesinger, H.; Garry, W. B.; Yingst, R.; Buczkowski, D.; McCord, T. B.; Jaumann, R.; Pieters, C. M.; Gaskell, R. W.; Neukum, G.; Schmedemann, N.; Marchi, S.; Nathues, A.; Le Corre, L.; Roatsch, T.; Preusker, F.; White, O. L.; DeSanctis, C.; Filacchione, G.; Raymond, C. A.; Russell, C. T.

    2011-12-01

    NASA's Dawn spacecraft arrived at the asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps are being produced based on Framing Camera images (FC: spatial resolution: ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR: spatial resolution: ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-12EW. This quadrangle is dominated by the arcuate edge of the large 460+ km diameter south polar topographic feature first observed by HST (Thomas et al., 1997). Sparsely cratered, the portion of this feature covered in V-12EW is characterized by arcuate ridges and troughs forming a generalized arcuate pattern. Mapping of this terrain and the transition to areas to the north will be used to test whether this feature has an impact or other (e.g., internal) origin. We are also using FC stereo and VIR images to assess whether there are any compositional differences between this terrain and areas further to the north, and image data to evaluate the distribution and age of young impact craters within the map area. The authors acknowledge the support of the Dawn Science, Instrument and Operations Teams.

  2. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  3. Joint interpretation of geophysical data using Image Fusion techniques

    NASA Astrophysics Data System (ADS)

    Karamitrou, A.; Tsokas, G.; Petrou, M.

    2013-12-01

    Joint interpretation of geophysical data produced from different methods is a challenging area of research in a wide range of applications. In this work we apply several image fusion approaches to combine maps of electrical resistivity, electromagnetic conductivity, vertical gradient of the magnetic field, magnetic susceptibility, and ground penetrating radar reflections, in order to detect archaeological relics. We utilize data gathered by Arkansas University, with the support of the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP-CS1263). The area of investigation is Army City, situated in Riley County, Kansas, USA. The depth of the relics is estimated at about 30 cm below the surface, yet surface indications of their existence are limited. We initially register the images from the different methods to correct for random offsets due to the use of hand-held devices during the measurement procedure. Next, we apply four different image fusion approaches to create combined images, using fusion with mean values, wavelet decomposition, curvelet transform, and curvelet transform enhancing the images along specific angles. We create seven combinations of pairs between the available geophysical datasets. The combinations are such that for every pair at least one high-resolution method (resistivity or magnetic gradiometry) is included. Our results indicate that in almost every case the method of mean values produces satisfactory fused images that incorporate the majority of the features of the initial images. However, the contrast of the final image is reduced, and in some cases the averaging process nearly eliminates features that are faint in the original images. Wavelet-based fusion also produces good results, providing additional control in selecting the feature wavelength. Curvelet-based fusion proves to be the most effective method in most cases. The ability of the curvelet domain to unfold the image in terms of space, wavenumber, and orientation provides important advantages over the other methods by allowing the incorporation of a priori information about the orientation of the potential targets.
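    As a minimal illustration of the simplest of the four approaches above (fusion with mean values), the sketch below averages two co-registered, normalized geophysical maps; the array names are placeholders, not the project's data.

```python
# Pixel-wise mean fusion of two co-registered, normalized maps (illustrative sketch).
import numpy as np

def normalize(img):
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def mean_fusion(map_a, map_b):
    # Both maps must already be registered to the same grid.
    return 0.5 * (normalize(map_a) + normalize(map_b))

# Example with random stand-ins for a resistivity map and a magnetic-gradient map.
resistivity = np.random.rand(256, 256)
mag_gradient = np.random.rand(256, 256)
fused = mean_fusion(resistivity, mag_gradient)
```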

  4. Cross-sectional Imaging Review of Tuberous Sclerosis.

    PubMed

    Krishnan, Anant; Kaza, Ravi K; Vummidi, Dharshan R

    2016-05-01

    Tuberous sclerosis complex (TSC) is a multisystem, genetic disorder characterized by development of hamartomas in the brain, abdomen, and thorax. It results from a mutation in one of 2 tumor suppressor genes that activates the mammalian target of rapamycin pathway. This article discusses the origins of the disorder, the recently updated criteria for the diagnosis of TSC, and the cross-sectional imaging findings and recommendations for surveillance. Familiarity with the diverse radiological features facilitates diagnosis and helps in treatment planning and monitoring response to treatment of this multisystem disorder. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. The role of invasive and non-invasive procedures in diagnosing fever of unknown origin.

    PubMed

    Mete, Bilgul; Vanli, Ersin; Yemisen, Mucahit; Balkan, Ilker Inanc; Dagtekin, Hilal; Ozaras, Resat; Saltoglu, Nese; Mert, Ali; Ozturk, Recep; Tabak, Fehmi

    2012-01-01

    The etiology of fever of unknown origin has changed because of the recent advances in and widespread use of invasive and non-invasive diagnostic tools. However, undiagnosed patients still constitute a significant number. The aim of this study was to determine the etiological distribution and the role of non-invasive and invasive diagnostic tools in the diagnosis of fever of unknown origin. One hundred patients who were hospitalized between June 2001 and 2009 with a fever of unknown origin were included in this study. Clinical and laboratory data were collected retrospectively from the patients' medical records. Fifty-three percent of the patients were male, with a mean age of 45 years. The etiology of fever was determined to be infectious diseases in 26, collagen vascular diseases in 38, neoplastic diseases in 14, miscellaneous in 2 and undiagnosed in 20 patients. When the etiologic distribution was analyzed over time, it was noted that the rate of infectious diseases decreased, whereas the rate of rheumatological and undiagnosed diseases relatively increased because of the advances in imaging and microbiological studies. Seventy patients had a definitive diagnosis, whereas 10 patients had a possible diagnosis. The diagnoses were established based on clinical features and non-invasive tests for 61% of the patients, and diagnostic benefit was obtained for 49% of the patients undergoing invasive tests. Biopsy procedures contributed to the diagnosis in 42% of the patients who underwent biopsy. Clinical features (such as detailed medical history-taking and physical examination) may contribute to diagnoses, particularly in cases of collagen vascular diseases. Imaging studies exhibit certain pathologies that guide invasive studies. Biopsy procedures contribute greatly to diagnoses, particularly for malignancies and infectious diseases that are not diagnosed by non-invasive procedures.

  6. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    PubMed

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
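    The saliency-driven scheme itself is the authors' contribution; the sketch below only shows a standard Perona-Malik nonlinear diffusion step, the kind of edge-preserving scale-space generator that such a method builds on (parameter values are illustrative).

```python
# Standard Perona-Malik nonlinear diffusion (not the saliency-driven variant).
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductivities: small across strong gradients, so edges survive.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

smoothed = perona_malik(np.random.rand(128, 128))
```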

  7. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value for medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When the high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of the low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and evaluate the fusion results through a quality evaluation method. Experimental results show that this algorithm can effectively retain the detail information of the original images and enhance their edge and texture features. This new algorithm is better than the conventional fusion algorithm based on wavelet transform.
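    For orientation, the sketch below implements the conventional wavelet-fusion baseline that the paper improves on (average the low-frequency coefficients, keep the larger-magnitude high-frequency coefficient at each position); it uses the PyWavelets package and does not include the regional edge-intensity rule proposed above.

```python
# Conventional wavelet-domain fusion baseline (requires the PyWavelets package).
import numpy as np
import pywt

def wavelet_fusion(img_a, img_b, wavelet='db2', level=2):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                      # low-frequency: simple average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # High-frequency: keep the coefficient with larger magnitude.
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Usage with placeholder arrays standing in for two registered medical images.
img_a, img_b = np.random.rand(256, 256), np.random.rand(256, 256)
fused_img = wavelet_fusion(img_a, img_b)
```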

  8. Characteristics of arachnoids from Magellan data

    NASA Technical Reports Server (NTRS)

    Dawson, C. B.; Crumpler, L. S.

    1993-01-01

    Current high-resolution Magellan data enable more detailed geological study of arachnoids, first identified by Barsukov et al. as features characterized by a combination of radar-bright, concentric rings and radiating lineations, and named 'arachnoids' on the basis of their spider- and web-like appearance. Identification of arachnoids in Magellan data has been based on SAR images, in keeping with the original definition. However, there is some overlap by other workers in the identification of arachnoids, coronae (predominantly bright rings), and novae (predominantly radiating lineations), as all of these features share some common characteristics. Features used in this survey were chosen based on their classification as arachnoids in Head et al.'s catalog and on SAR characteristics matching Barsukov et al.'s original definition. A total of 259 arachnoids have currently been identified on Venus, all of which were considered in this study. Fifteen arachnoids from different regions, chosen for their 'type' characteristics and lack of deformation by other regional processes, were studied in depth, using SAR and altimetric data to map and profile these arachnoids in an attempt to better determine their geologic and altimetric characteristics and possible formation sequences.

  9. Data mining for average images in a digital hand atlas

    NASA Astrophysics Data System (ADS)

    Zhang, Aifeng; Cao, Fei; Pietka, Ewa; Liu, Brent J.; Huang, H. K.

    2004-04-01

    Bone age assessment is a procedure performed in pediatric patients to quickly evaluate parameters of maturation and growth from a left hand and wrist radiograph. Pietka and Cao have developed a computer-aided diagnosis (CAD) method of bone age assessment based on a digital hand atlas. The aim of this paper is to extend their work by automatically selecting the best representative image from a group of normal children based on specific bony features that reflect skeletal maturity. The group can be of any ethnic origin and gender in the digital atlas, from one to 18 years old. This best representative image is defined as the "average" image of the group and can be incorporated into Pietka and Cao's method to facilitate the bone age assessment process.

  10. A Glimpse of Atlas

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Saturn's little moon Atlas orbits Saturn between the outer edge of the A ring and the fascinating, twisted F ring. This image just barely resolves the disk of Atlas, and also shows some of the knotted structure for which the F ring is known. Atlas is 32 kilometers (20 miles) across.

    The bright outer edge of the A ring is overexposed here, but farther down the image several bright ring features can be seen.

    The image was taken in visible light with the Cassini spacecraft narrow-angle camera on April 25, 2005, at a distance of approximately 2.4 million kilometers (1.5 million miles) from Atlas and at a Sun-Atlas-spacecraft, or phase, angle of 60 degrees. Resolution in the original image was 14 kilometers (9 miles) per pixel.

  11. Structural sensitivity of x-ray Bragg projection ptychography to domain patterns in epitaxial thin films

    DOE PAGES

    Hruszkewycz, S. O.; Zhang, Q.; Holt, M. V.; ...

    2016-10-04

    Bragg projection ptychography (BPP) is a coherent diffraction imaging technique capable of mapping the spatial distribution of the Bragg structure factor in nanostructured thin films. Here, we show that, because these images are projections, the structural sensitivity of the resulting images depends on the film thickness and on the aspect ratio and orientation of the features of interest, and that image interpretation depends on these factors. Lastly, we model changes in contrast in the BPP reconstructions of simulated PbTiO3 ferroelectric thin films with meandering 180° stripe domains as a function of film thickness, discuss their origin, and comment on the implications of these factors for the design of BPP experiments on general nanostructured films.

  12. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
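    The following toy sketch shows only the final blending step implied above: given per-pixel weights for each candidate filter (here uniform placeholders; in the HSF they come from a learned model over locally computed feature vectors), the filter outputs are combined pixel-wise. All names and the choice of example filters are illustrative assumptions.

```python
# Per-pixel weighted blending of several filter outputs (illustrative sketch only).
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def blend_filters(image, filters, weights):
    # filters: list of callables; weights: array of shape (K, H, W) summing to 1 over axis 0.
    outputs = np.stack([f(image) for f in filters])       # shape (K, H, W)
    return np.sum(weights * outputs, axis=0)

img = np.random.rand(128, 128)
candidates = [lambda x: gaussian_filter(x, 1.0), lambda x: median_filter(x, 3)]
w = np.full((2, 128, 128), 0.5)                            # placeholder uniform weights
restored = blend_filters(img, candidates, w)
```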

  13. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431

  14. Constraints on the lithospheric structure of Venus from mechanical models and tectonic surface features

    NASA Technical Reports Server (NTRS)

    Zuber, Maria T.

    1987-01-01

    The evidence for the extensional or compressional origins of some prominent Venusian surface features disclosed by radar images is discussed. Using simple models, the hypothesis that the observed length scales (10-20 km and 100-300 km) of deformations are controlled by dominant wavelengths arising from unstable compression or extension of the Venus lithosphere is tested. The results show that the existence of tectonic features that exhibit both length scales can be explained if, at the time of deformation, the lithosphere consisted of a crust that was relatively strong near the surface and weak at its base, and an upper mantle that was stronger than or nearly comparable in strength to the upper crust.

  15. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: this component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.

  16. Lineaments of Texas - possible surface expressions of deep-seated phenomena. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodruff, C.M. Jr.; Caran, S.C.

    1984-04-01

    Lineaments were identified on 51 Landsat images covering Texas and parts of adjacent states in Mexico and the United States. A method of identifying lineaments was designed so that the findings would be consistent, uncomplicated, objective, and reproducible. Lineaments denoted on the Landsat images were traced onto 1:250,000-scale work maps and then rendered cartographically on maps representing each of the 51 Landsat images at a scale of 1:500,000. At this stage more than 31,000 lineaments were identified; the mapped area included significant areas outside of Texas. In preparing the final lineament map of Texas at 1:1,000,000 scale from the 1:500,000-scale maps, all features that lay outside Texas and repetition among features perceived by individual workers were eliminated. Cultural features were checked for before reducing and cartographically fitting the mosaic of 51 individual map sheets to a single map base. Lineaments that were partly colinear but with different end points were modified into a single lineament trace with the combined length of the two or more colinear lineaments. Each lineament was checked to determine its validity according to our definition. The features were edited again to eliminate processing artifacts within the image itself, as well as representations of cultural features (fencelines, roads, and the like) and geomorphic patterns unrelated to bedrock structure. Thus the more than 31,000 lineaments originally perceived were reduced to the approximately 15,000 presented on the 1:1,000,000 map. Interpretations of the lineaments are presented.

  17. Magnetic resonance imaging features of esthesioneuroblastoma in three dogs and one cat.

    PubMed

    Söffler, Charlotte; Hartmann, Antje; Gorgas, Daniela; Ludewig, Eberhard; von Pückler, Kerstin; Kramer, Martin; Schmidt, Martin J

    2016-10-12

    Esthesioneuroblastoma is a rare malignant intranasal tumor that originates from the olfactory neuroepithelium of the upper nasal cavity, and can destroy the cribriform plate and expand into the neurocranium. Descriptions of the magnetic resonance features of esthesioneuroblastomas in animals are scarce. The objectives of this study were to report the magnetic resonance imaging features of esthesioneuroblastomas in order to determine distinct imaging characteristics that may help distinguish them from other intracranial tumor types. Magnetic resonance images of four patients with confirmed esthesioneuroblastomas were reviewed and compared with previously reported cases. The esthesioneuroblastomas appeared as oval-shaped, solitary lesions in the caudal nasal cavity that caused osteolysis of the cribriform plate and extended into the brain in all cases. Signal intensity was variable. Contrast enhancement was mild and varied from homogeneous to heterogeneous. A peripheral cystic component was found in two patients and was reported in only one previous case. Mass effect and white matter edema were marked to severe. Osteolysis of facial bones and extension into the facial soft tissues or retrobulbar space were not present in any of the cases, although this has been reported in the literature. A definitive diagnosis of esthesioneuroblastoma based on signal intensity or contrast behavior was not possible. Nevertheless, the presence of a mass in the caudal nasal cavity with extension into the neurocranium seems to be a feature highly suspicious of esthesioneuroblastoma. In contrast to other extra-cranial lesions, the extra-cranial mass was relatively small and destruction of facial bones seems to be rare.

  18. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI-LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the dice similarity coefficient to measure the overlap degree. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, which significantly outperform the baseline method (random forests), with the average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs, respectively. Conclusions: The proposed methods have achieved the highest labeling accuracy, compared to several state-of-the-art methods in the literature.

  19. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-01-01

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI_LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the dice similarity coefficient to measure the overlap degree. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, which significantly outperform the baseline method (random forests), with the average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs, respectively. Conclusions: The proposed methods have achieved the highest labeling accuracy, compared to several state-of-the-art methods in the literature. PMID:26843260

  20. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    PubMed

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies on wavelet transform and fractal modeling applied to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, as applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is then identified using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is shown that enhanced features for the detection of cancerous zones can be obtained. In this paper, we attempt to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2. Using this algorithm, the original image is first segmented into appropriate fractal boxes and the fractal dimension of each windowed section is extracted. Then, by applying a maximum-level threshold to the matrix of fractal dimensions, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed with a standard orthogonal wavelet transform (the db2 wavelet) at three resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is used to detect breast cancer regions by applying an appropriate threshold. For the detection of cancerous zones, our simulations indicate an accuracy of 90.9% for masses and 88.99% for microcalcifications using the F1W2 method. For the classification of detected microcalcifications into benign and malignant cases, eight features are identified and used in a radial basis function neural network. Our simulation results indicate a classification accuracy of 92% using the F1W2 method.
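    As an illustration of the box-counting step described above, the sketch below estimates the fractal dimension of a binarized image window with a standard two-dimensional box-counting procedure; thresholds, window handling, and box sizes in the F1W2 method may differ.

```python
# Standard 2-D box-counting estimate of fractal dimension for a binary image window.
import numpy as np

def box_counting_dimension(binary):
    assert binary.ndim == 2
    sizes, counts = [], []
    s = min(binary.shape) // 2
    while s >= 2:
        # Count boxes of side s that contain at least one foreground pixel.
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    # Slope of log(count) against log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

dim = box_counting_dimension(np.random.rand(128, 128) > 0.5)
```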

  1. Effects of spatial resolution ratio in image fusion

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2008-01-01

    In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.

  2. Multi-view 3D echocardiography compounding based on feature consistency

    NASA Astrophysics Data System (ADS)

    Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.

    2011-09-01

    Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.

  3. Phoenix Trenches

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Annotated Version

    [figure removed for brevity, see original site] Left-eye view of a stereo pair
    [figure removed for brevity, see original site] Right-eye view of a stereo pair

    This image is a stereo, panoramic view of various trenches dug by NASA's Phoenix Mars Lander. The images that make up this panorama were taken by Phoenix's Surface Stereo Imager at about 4 p.m., local solar time at the landing site, on the 131st Martian day, or sol, of the mission (Oct. 7, 2008).

    In figure 1, the trenches are labeled in orange and other features are labeled in blue. Figures 2 and 3 are the left- and right-eye members of a stereo pair.

    For scale, the 'Pet Donkey' trench just to the right of center is approximately 38 centimeters (15 inches) long and 31 to 34 centimeters (12 to 13 inches) wide. In addition, the rock in front of it, 'Headless,' is about 11.5 by 8.5 centimeters (4.5 by 3.3 inches), and about 5 centimeters (2 inches) tall.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  4. Incompletely characterized incidental renal masses: emerging data support conservative management.

    PubMed

    Silverman, Stuart G; Israel, Gary M; Trinh, Quoc-Dien

    2015-04-01

    With imaging, most incidental renal masses can be diagnosed promptly and with confidence as being either benign or malignant. For those that cannot, management recommendations can be devised on the basis of a thorough evaluation of imaging features. However, most renal masses are either too small to characterize completely or are detected initially in imaging examinations that are not designed for full evaluation of them. These masses constitute a group of masses that are considered incompletely characterized. On the basis of current published guidelines, many masses warrant additional imaging. However, while the diagnosis of renal cancer at a curable stage remains the first priority, there is the additional need to reduce unnecessary healthcare costs and radiation exposure. As such, emerging data now support foregoing additional imaging for many incompletely characterized renal masses. These data include the low risk of progression to metastases or death for small renal masses that have undergone active surveillance (including biopsy-proven cancers) and a better understanding of how specific imaging features can be used to diagnose their origins. These developments support (a) avoidance of imaging entirely for those incompletely characterized renal masses that are highly likely to be benign cysts and (b) delay of further imaging of small solid masses in selected patients. Although more evidence-based data are needed and comprehensive management algorithms have yet to be defined, these recommendations are medically appropriate and practical, while limiting the imaging of many incompletely characterized incidental renal masses.

  5. SAR image classification based on CNN in real and simulation datasets

    NASA Astrophysics Data System (ADS)

    Peng, Lijiang; Liu, Ming; Liu, Xiaohua; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-04-01

    Convolutional neural networks (CNNs) have achieved great success in image classification tasks. Even in the field of synthetic aperture radar automatic target recognition (SAR-ATR), state-of-the-art results have been obtained by learning deep feature representations on the MSTAR benchmark. However, the raw MSTAR data have shortcomings for training a SAR-ATR model because of the high similarity of the backgrounds among the SAR images of each class. This indicates that the CNN would learn the hierarchies of features of the backgrounds as well as of the targets. To validate the influence of the background, additional SAR image datasets were made that contain simulated SAR images of 10 manufactured targets, such as tanks and fighter aircraft, with backgrounds sampled from the whole of the original MSTAR data. The simulated datasets include one in which the backgrounds of each image class correspond to one class of MSTAR target or clutter backgrounds, and one in which each image is given a random background drawn from all MSTAR targets or clutter. In addition, mixed datasets combining MSTAR and simulated data were made for use in the experiments. The CNN architecture proposed in this paper is trained on all datasets mentioned above. The experimental results show that the architecture achieves high performance on all datasets even when the image backgrounds are miscellaneous, which indicates that the architecture can learn a good representation of the targets despite drastic changes in background.
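    The paper's exact architecture is not reproduced here; the following minimal PyTorch sketch only illustrates the kind of small CNN commonly trained on single-channel SAR target chips, with illustrative layer sizes.

```python
# Minimal CNN for single-channel SAR target chips (illustrative layer sizes only).
import torch
import torch.nn as nn

class SmallSARNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                                  # x: (batch, 1, H, W) chips
        return self.classifier(self.features(x).flatten(1))

logits = SmallSARNet()(torch.randn(4, 1, 128, 128))        # four 128x128 placeholder chips
```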

  6. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    DOE PAGES

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...

    2014-06-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the energy resolution of the iQID is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.

  7. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    PubMed

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in the classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation technique locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between the minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
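    A hedged sketch of the general idea follows: a sigmoid-like S-curve applied to the gray levels of each local window, stretching mid-tones while preserving the local gray range. The exact curve, gain, and window handling in the paper may differ.

```python
# Local sigmoid-like S-curve contrast stretch applied in non-overlapping windows (sketch).
import numpy as np

def s_curve(block, gain=8.0):
    lo, hi = block.min(), block.max()
    if hi - lo < 1e-6:
        return block
    x = (block - lo) / (hi - lo)                         # normalize block to [0, 1]
    y = 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))          # sigmoid stretches mid-tones
    y = (y - y.min()) / (y.max() - y.min())
    return lo + (hi - lo) * y                            # map back to the local gray range

def local_s_curve(image, win=32):
    out = image.astype(float).copy()
    for i in range(0, image.shape[0], win):
        for j in range(0, image.shape[1], win):
            out[i:i + win, j:j + win] = s_curve(out[i:i + win, j:j + win])
    return out

enhanced = local_s_curve(np.random.rand(256, 256))
```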

  8. Multiple beam interference confocal microscopy: a tool for morphological investigation of living cells and tissues

    NASA Astrophysics Data System (ADS)

    Joshi, Narahari V.; Medina, Honorio

    2000-05-01

    A multiple beam interference system is used in conjunction with a conventional scanning confocal microscope to examine the morphology and to construct 3D images of Histolytic Ameba and the parasite Candida Albicans. The present combination unites the advantages of both systems, namely high vertical contrast and optical sectioning. The interference pattern, obtained from multiple internal reflections in a sample sandwiched between the glass plate and the cover plate, was focused on the objective of a scanning confocal microscope. Morphological details were revealed according to optical path differences. The combined features, namely the improved resolution along the z axis originating from the interference pattern and the optical sectioning of the confocal scanning system, enhance the resolution and contrast dramatically. These features made it possible to obtain unprecedented images of Histolytic Ameba and the parasite Candida Albicans. Because of the improved contrast, several details, such as the double-wall structure of Candida and the internal structure of the ameba, are clearly visible.

  9. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB that lets users look up information about books in real time. A photo of the book cover is taken through the GUI, the MSER algorithm then automatically detects candidate features in the input image, and non-text features are filtered out based on the morphological differences between text and non-text regions. We implemented a text-character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR engine; a post-detection algorithm and natural language processing are applied to perform word correction and suppress false detections. Finally, the detection result is linked to the Internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.
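    The pipeline above is MATLAB-based; as a rough equivalent, the sketch below shows the MSER region-detection step and a crude geometric non-text filter using OpenCV in Python. The thresholds and the filtering rule are illustrative assumptions, not the paper's morphological criteria.

```python
# MSER candidate-region detection with a simple geometric non-text filter (OpenCV sketch).
import cv2

def detect_candidate_text_regions(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)
    # Crude filtering on aspect ratio and size, standing in for morphological filtering.
    keep = []
    for (x, y, w, h) in bboxes:
        aspect = w / float(h)
        if 0.1 < aspect < 10 and 10 < w * h < 0.2 * gray.size:
            keep.append((x, y, w, h))
    return keep
```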

  10. Differential diagnosis of CT focal liver lesions using texture features, feature selection and ensemble driven classifiers.

    PubMed

    Mougiakakou, Stavroula G; Valavanis, Ioannis K; Nikita, Alexandra; Nikita, Konstantina S

    2007-09-01

    The aim of the present study is to define an optimally performing computer-aided diagnosis (CAD) architecture for the classification of liver tissue from non-enhanced computed tomography (CT) images into normal liver (C1), hepatic cyst (C2), hemangioma (C3), and hepatocellular carcinoma (C4). To this end, various CAD architectures, based on texture features and ensembles of classifiers (ECs), are comparatively assessed. A number of regions of interest (ROIs) corresponding to C1-C4 were defined by experienced radiologists in non-enhanced liver CT images. For each ROI, five distinct sets of texture features were extracted using first order statistics, spatial gray level dependence matrix, gray level difference method, Laws' texture energy measures, and fractal dimension measurements. Two different ECs were constructed and compared. The first one consists of five multilayer perceptron neural networks (NNs), each using as input one of the computed texture feature sets or its reduced version after genetic algorithm-based feature selection. The second EC comprised five different primary classifiers, namely one multilayer perceptron NN, one probabilistic NN, and three k-nearest neighbor classifiers, each fed with the combination of the five texture feature sets or their reduced versions. The final decision of each EC was extracted by using appropriate voting schemes, while bootstrap re-sampling was utilized in order to estimate the generalization ability of the CAD architectures based on the available relatively small-sized data set. The best mean classification accuracy (84.96%) is achieved by the second EC using a fused feature set and the weighted voting scheme. The fused feature set was obtained after appropriate feature selection applied to specific subsets of the original feature set. The comparative assessment of the various CAD architectures shows that combining three types of classifiers with a voting scheme, fed with identical feature sets obtained after appropriate feature selection and fusion, may result in an accurate system able to assist differential diagnosis of focal liver lesions from non-enhanced CT images.
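    To make the ensemble idea concrete, the sketch below builds a voting ensemble of several primary classifiers over a shared, placeholder texture feature matrix using scikit-learn; it uses hard voting and generic classifiers rather than the paper's probabilistic NN and weighted voting scheme, and the feature extraction itself is not shown.

```python
# Ensemble of primary classifiers combined by voting (scikit-learn sketch with placeholders).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 30)            # placeholder texture feature vectors
y = np.random.randint(0, 4, 200)       # placeholder labels for classes C1-C4

ensemble = VotingClassifier(
    estimators=[('mlp', MLPClassifier(max_iter=1000)),
                ('knn1', KNeighborsClassifier(n_neighbors=1)),
                ('knn3', KNeighborsClassifier(n_neighbors=3)),
                ('knn5', KNeighborsClassifier(n_neighbors=5))],
    voting='hard')
print(cross_val_score(ensemble, X, y, cv=5).mean())
```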

  11. Io's Sodium Cloud On-Chip Format (Clear and Green-Yellow Filters Superimposed)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image of Jupiter's moon Io and its surrounding sky is shown in false color. The solid state imaging (CCD) system on NASA's Galileo spacecraft originally took two images of this scene, one through a clear filter and one through a green-yellow filter. [Versions of these images have been released over the past 3 days.] This picture was created by: (i) adding green color to the image taken through the green-yellow filter, and red color to the image taken through the clear filter; (ii) superimposing the two resulting images. Thus features in this picture which are purely green (or purely red) originally appeared only in the green-yellow (or clear) filter image of this scene. Features which are yellowish appeared in both filters. North is at the top, and east is to the right.

    This image reveals several new things about this scene. For example:

    (1) The reddish emission south of Io came dominantly through the clear filter. It therefore probably represents scattered light from Io's lit crescent and Prometheus' plume, rather than emission from Io's Sodium Cloud (which came through both filters).

    (2) The roundish red spot in Io's southern hemisphere contains a small yellow spot. This means that some thermal emission from the volcano Pele was detected by the green-yellow filter (as well as by the clear filter).

    (3) The sky contains several concentrated yellowish spots which were thus seen at the same location on the sky through both filters (one such spot appears in the picture's northeast corner). These spots are almost certainly stars. By contrast, the eastern half of this image contains a number of green spots whose emission was thus detected by the green-yellow filter only. Since any star visible through the green-yellow filter would also be visible through the clear filter, these green spots are probably artifacts (e.g., cosmic ray hits on the CCD sensor).

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  12. X-Ray Diffraction and Imaging Study of Imperfections of Crystallized Lysozyme with Coherent X-Rays

    NASA Technical Reports Server (NTRS)

    Hu, Zheng-Wei; Chu, Y. S.; Lai, B.; Cai, Z.; Thomas, B. R.; Chernov, A. A.

    2003-01-01

    Phase-sensitive x-ray diffraction imaging and high angular-resolution diffraction combined with phase contrast radiographic imaging are employed to characterize defects and perfection of a uniformly grown tetragonal lysozyme crystal in the symmetric Laue case. The full width at half-maximum (FWHM) of a 4 4 0 rocking curve measured from the original crystal is approximately 16.7 arcseconds, and defects, which include point defects, line defects, and microscopic domains, have been clearly observed in the diffraction images of the crystal. The observed line defects carry distinct dislocation features running approximately along the <110> growth front, and they have been found to originate mostly at a central growth area and occasionally at outer growth regions. Individual point defects trapped at a crystal nucleus are resolved in the images of high sensitivity to defects. Slow dehydration has led to the broadening of the 4 4 0 rocking curve by a factor of approximately 2.4. A significant change of the defect structure and configuration with drying has been revealed, which suggests the dehydration induced migration and evolution of dislocations and lattice rearrangements to reduce overall strain energy. The rich detail of the observed defects sheds light on perfection, nucleation and growth, and properties of protein crystals.

  13. Investigating Mars: Ius Chasma

    NASA Image and Video Library

    2018-02-28

    This VIS image shows the eastern end of Ius Chasma. The southern canyon wall is at the bottom of the image, with dark sand and sand dunes. The presence of mobile sand indicates that winds are eroding, depositing and changing the canyon floor. The rest of the image is dominated by large landslide deposits. At the top of the image are two overlapping deposits from landslides originating on the northern chasma wall. The landslide deposit on the left side of the image originates from the southern chasma wall. A landslide is a failure of slope due to gravity. Landslides initiate for several reasons: a lower layer of poorly cemented/resistant material may be eroded, undermining the wall above, which then collapses; earthquake seismic waves can cause the slope to collapse; and even an impact event near the canyon wall can cause collapse. As millions of tons of material fall and slide downslope, a scalloped cavity forms at the upper part where the slope failure occurred. As the material speeds downhill it picks up more of the underlying slope, increasing the volume of material entrained in the landslide. Whereas some landslides spread across the canyon floor forming lobate deposits, very large volume slope failures will completely fill the canyon floor in a large complex region of chaotic blocks. Ius Chasma is at the western end of Valles Marineris, south of Tithonium Chasma. Valles Marineris is over 4000 kilometers long, longer than the width of the United States. Ius Chasma is almost 850 kilometers (528 miles) long, 120 kilometers wide and over 8 kilometers deep. In comparison, the Grand Canyon in Arizona is about 175 kilometers long, 30 kilometers wide, and only 2 kilometers deep. The canyons of Valles Marineris were formed by extensive fracturing and pulling apart of the crust during the uplift of the vast Tharsis plateau. Landslides have enlarged the canyon walls and created deposits on the canyon floor. Weathering of the surface and the influx of dust and sand have modified the canyon floor, both creating and modifying layered materials. There are many features that indicate that flowing and standing water played a part in the chasma's formation. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 71,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 36744 Latitude: -8.64709 Longitude: 282.235 Instrument: VIS Captured: 2010-03-27 18:32 https://photojournal.jpl.nasa.gov/catalog/PIA22285

  14. NASA Releases 'NASA App HD' for iPad

    NASA Image and Video Library

    2012-07-06

    The NASA App HD invites you to discover a wealth of NASA information right on your iPad. The application collects, customizes and delivers an extensive selection of dynamically updated mission information, images, videos and Twitter feeds from various online NASA sources in a convenient mobile package. Come explore with NASA, now on your iPad. 2012 Updated Version - HD Resolution and new features. Original version published on Sept. 1, 2010.

  15. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning pipeline including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
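
    As a rough illustration of the pipeline above, the following sketch uses scikit-learn's DictionaryLearning (with OMP-based sparse coding) as a stand-in for K-SVD, with K=4 atoms and L=2 non-zero coefficients, followed by logistic regression under leave-one-out cross-validation. The feature matrix, labels, and hyper-parameters are placeholders, not the study's data or exact implementation.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 30))      # placeholder: 60 lesions x 30 kinetic/textural/morphologic features
    y = rng.integers(0, 2, size=60)    # placeholder: high (1) vs. low (0) recurrence risk

    dico = DictionaryLearning(n_components=4,               # K = 4 dictionary atoms
                              transform_algorithm="omp",
                              transform_n_nonzero_coefs=2,  # L = 2 non-zero coefficients
                              random_state=0)
    Z = dico.fit_transform(X)          # sparse representation of the original features

    probs = []
    for train, test in LeaveOneOut().split(Z):
        clf = LogisticRegression(max_iter=1000).fit(Z[train], y[train])
        probs.append(clf.predict_proba(Z[test])[0, 1])
    print("leave-one-out AUC:", roc_auc_score(y, probs))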

  16. Multi-Wavelength Views of Messier 81

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site]

    The magnificent spiral arms of the nearby galaxy Messier 81 are highlighted in this image from NASA's Spitzer Space Telescope. Located in the northern constellation of Ursa Major (which also includes the Big Dipper), this galaxy is easily visible through binoculars or a small telescope. M81 is located at a distance of 12 million light-years.

    The main image is a composite mosaic obtained with the multiband imaging photometer for Spitzer and the infrared array camera. Thermal infrared emission at 24 microns detected by the photometer (red, bottom left inset) is combined with camera data at 8.0 microns (green, bottom center inset) and 3.6 microns (blue, bottom right inset).

    A visible-light image of Messier 81, obtained at Kitt Peak National Observatory, a ground-based telescope, is shown in the upper right inset. Both the visible-light picture and the 3.6-micron near-infrared image trace the distribution of stars, although the Spitzer image is virtually unaffected by obscuring dust. Both images reveal a very smooth stellar mass distribution, with the spiral arms relatively subdued.

    As one moves to longer wavelengths, the spiral arms become the dominant feature of the galaxy. The 8-micron emission is dominated by infrared light radiated by hot dust that has been heated by nearby luminous stars. Dust in the galaxy is bathed by ultraviolet and visible light from nearby stars. Upon absorbing an ultraviolet or visible-light photon, a dust grain is heated and re-emits the energy at longer infrared wavelengths. The dust particles are composed of silicates (chemically similar to beach sand), carbonaceous grains and polycyclic aromatic hydrocarbons and trace the gas distribution in the galaxy. The well-mixed gas (which is best detected at radio wavelengths) and dust provide a reservoir of raw materials for future star formation.

    The 24-micron multiband imaging photometer image shows emission from warm dust heated by the most luminous young stars. The infrared-bright clumpy knots within the spiral arms show where massive stars are being born in giant H II (ionized hydrogen) regions. Studying the locations of these star forming regions with respect to the overall mass distribution and other constituents of the galaxy (e.g., gas) will help identify the conditions and processes needed for star formation.

  17. Direct fusion of geostationary meteorological satellite visible and infrared images based on thermal physical properties.

    PubMed

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-05

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.

  18. Direct Fusion of Geostationary Meteorological Satellite Visible and Infrared Images Based on Thermal Physical Properties

    PubMed Central

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-01

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749
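
    A minimal sketch of the regional thermal-energy constraint described above, assuming a simple block-wise formulation: each block of the high-resolution VIS image is rescaled so that its mean matches the radiance implied by the coarse IR brightness temperature through the Stefan-Boltzmann law. The block size, arrays, and scaling rule are illustrative assumptions rather than the authors' exact correction.

    import numpy as np

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def fuse_vis_ir(vis, ir_temp_k, block=8):
        """Modulate `vis` so each block carries the IR-derived thermal energy."""
        fused = vis.astype(float).copy()
        h, w = vis.shape
        for i in range(0, h, block):
            for j in range(0, w, block):
                patch = fused[i:i + block, j:j + block]
                target = SIGMA * np.mean(ir_temp_k[i:i + block, j:j + block]) ** 4
                scale = target / max(patch.mean(), 1e-9)   # preserve VIS structure, match IR energy
                fused[i:i + block, j:j + block] = patch * scale
        return fused

    vis = np.random.rand(64, 64)                   # placeholder high-resolution VIS image
    ir_temp_k = 250 + 50 * np.random.rand(64, 64)  # placeholder IR brightness temperature (K)
    print(fuse_vis_ir(vis, ir_temp_k).shape)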

  19. Geologic Mapping of Ejecta Deposits in Oppia Quadrangle, Asteroid (4) Vesta

    NASA Technical Reports Server (NTRS)

    Garry, W. Brent; Williams, David A.; Yingst, R. Aileen; Mest, Scott C.; Buczkowski, Debra L.; Tosi, Federico; Schafer, Michael; LeCorre, Lucille; Reddy, Vishnu; Jaumann, Ralf

    2014-01-01

    Oppia Quadrangle Av-10 (288-360 deg E, +/- 22 deg) is a junction of key geologic features that preserve a rough history of Asteroid (4) Vesta and serves as a case study of using geologic mapping to define a relative geologic timescale. Clear filter images, stereo-derived topography, slope maps, and multispectral color-ratio images from the Framing Camera on NASA's Dawn spacecraft served as basemaps to create a geologic map and investigate the spatial and temporal relationships of the local stratigraphy. Geologic mapping reveals the oldest map unit within Av-10 is the cratered highlands terrain, which possibly represents original crustal material on Vesta that was then excavated by one or more impacts to form the basin Feralia Planitia. Saturnalia Fossae and Divalia Fossae ridge and trough terrains intersect the wall of Feralia Planitia, indicating that this impact basin is older than both the Veneneia and Rheasilvia impact structures, representing Pre-Veneneian crustal material. Two of the youngest geologic features in Av-10 are Lepida (approximately 45 km diameter) and Oppia (approximately 40 km diameter) impact craters, which formed on the northern and southern walls of Feralia Planitia and each cross-cut a trough terrain. The ejecta blanket of Oppia is mapped as 'dark mantle' material because it appears dark orange in the Framing Camera 'Clementine-type' color-ratio image and has a diffuse, gradational contact distributed to the south across the rim of Rheasilvia. Mapping of surface material that appears light orange in the Framing Camera 'Clementine-type' color-ratio image as 'light mantle material' supports previous interpretations of an impact ejecta origin. Some light mantle deposits are easily traced to nearby source craters, but other deposits may represent distal ejecta deposits (emplaced greater than 5 crater radii away) in a microgravity environment.

  20. Confocal Endomicroscopy: Instrumentation and Medical Applications

    PubMed Central

    Jabbour, Joey M.; Saldua, Meagan A.; Bixler, Joel N.; Maitland, Kristen C.

    2013-01-01

    Advances in fiber optic technology and miniaturized optics and mechanics have propelled confocal endomicroscopy into the clinical realm. This high resolution, non-invasive imaging technology provides the ability to microscopically evaluate cellular and sub-cellular features in tissue in vivo by optical sectioning. Because many cancers originate in epithelial tissues accessible by endoscopes, confocal endomicroscopy has been explored to detect regions of possible neoplasia at an earlier stage by imaging morphological features in vivo that are significant in histopathologic evaluation. This technique allows real-time assessment of tissue which may improve diagnostic yield by guiding biopsy. Research and development continues to reduce the overall size of the imaging probe, increase the image acquisition speed, and improve resolution and field of view of confocal endomicroscopes. Technical advances will continue to enable application to less accessible organs and more complex systems in the body. Lateral and axial resolutions down to 0.5 μm and 3 μm, respectively, field of view as large as 800×450 μm, and objective lens and total probe outer diameters down to 350 μm and 1.25 mm, respectively, have been achieved. We provide a review of the historical developments of confocal imaging in vivo, the evolution of endomicroscope instrumentation, and the medical applications of confocal endomicroscopy. PMID:21994069

  1. Juling Crater

    NASA Image and Video Library

    2017-08-25

    This high-resolution image of Juling Crater on Ceres reveals, in exquisite detail, features on the rims and crater floor. The crater is about 1.6 miles (2.5 kilometers) deep and the small mountain, seen left of the center of the crater, is about 0.6 miles (1 kilometer) high. The many features indicative of the flow of material suggest the subsurface is rich in ice. The geological structure of this region also generally suggests that ice is involved. The origin of the small depression seen at the top of the mountain is not fully understood but might have formed as a consequence of a landslide, visible on the northeastern flank. Dawn took this image during its extended mission on August 25, 2016, from its low-altitude mapping orbit at a distance of about 240 miles (385 kilometers) above the surface. The center coordinates of this image are 36 degrees south latitude, 167 degrees east longitude. Juling is named after the Sakai/Orang Asli spirit of the crops from Malaysia. NASA's Dawn spacecraft acquired this picture on August 24, 2016. The image was taken during Dawn's extended mission, from its low altitude mapping orbit at about 240 miles (385 kilometers) above the surface. The center coordinates of this image are 38 degrees south latitude, 165 degrees east longitude. https://photojournal.jpl.nasa.gov/catalog/PIA21754

  2. 'Gibson' Panorama by Spirit at 'Home Plate'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Click on the image for 'Gibson' Panorama by Spirit at 'Home Plate' (QTVR)

    NASA's Mars Exploration Rover Spirit acquired this high-resolution view of intricately layered exposures of rock while parked on the northwest edge of the bright, semi-circular feature known as 'Home Plate.' The rover was perched at a 27-degree upward tilt while creating the panorama, resulting in the 'U' shape of the mosaic. In reality, the features along the 1-meter to 2-meter (1-foot to 6.5-foot) vertical exposure of the rim of Home Plate in this vicinity are relatively level. Rocks near the rover in this view, known as the 'Gibson' panorama, include 'Barnhill,' 'Rogan,' and 'Mackey.'

    Spirit acquired 246 separate images of this scene using 6 different filters on the panoramic camera (Pancam) during the rover's Martian days, or sols, 748 through 751 (Feb. 9 through Feb. 12, 2006). The field of view covers 160 degrees of terrain around the rover. This image is an approximately true-color rendering using Pancam's 753-nanometer, 535-nanometer, and 432-nanometer filters. Image-to-image seams have been eliminated from the sky portion of the mosaic to better simulate the vista a person standing on Mars would see.

  3. Remote Sensing Image Analysis Without Expert Knowledge - A Web-Based Classification Tool On Top of Taverna Workflow Management System

    NASA Astrophysics Data System (ADS)

    Selsam, Peter; Schwartze, Christian

    2016-10-01

    Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". A lot of business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage - but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communicating structures and enabled to run on a high-power server, benefiting from the Taverna software. On top, a GIS-like and web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set for modeling large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent of the segmentation. The object definition is done completely by the software.

  4. Frankenstein Galaxy

    NASA Image and Video Library

    2016-07-11

    The galaxy UGC 1382 has been revealed to be far larger and stranger than previously thought. Astronomers relied on a combination of ground-based and space telescopes to uncover the true nature of this "Frankenstein galaxy." The composite image shows the same galaxy as viewed with different instruments. The component images are also available. In the image at left, UGC 1382 appears to be a simple elliptical galaxy, based on optical data from the Sloan Digital Sky Survey (SDSS). But spiral arms emerged when astronomers incorporated ultraviolet data from the Galaxy Evolution Explorer (GALEX) and deep optical data from SDSS, as seen in the middle image. Combining that with a view of low-density hydrogen gas (shown in green), detected at radio wavelengths by the Very Large Array, scientists discovered that UGC 1382 is a giant, and one of the largest isolated galaxies known. GALEX in particular was able to detect very faint features because it operated from space, which is necessary for UV observations because ultraviolet light is absorbed by the Earth's atmosphere. Astronomers also used Stripe 82 of SDSS, a small region of sky where SDSS imaged the sky 80 times longer than the original standard SDSS survey. This enabled optical detection of much fainter features as well. http://photojournal.jpl.nasa.gov/catalog/PIA20695

  5. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems. PMID:24964954

  6. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.

    PubMed

    Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk

    2014-06-25

    Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
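
    The following sketch illustrates, under stated assumptions, the kind of outline-oriented pipeline the abstract describes: a single Otsu segmentation (SHERPA itself tries multiple methods and keeps the best result per object), connected-component labeling, and a few shape descriptors per outline. The scikit-image calls and the sample image (a stand-in for a diatom micrograph) are illustrative; this is not SHERPA's own code.

    import numpy as np
    from skimage import data, filters, measure

    img = data.coins()                                  # stand-in for a diatom micrograph
    mask = img > filters.threshold_otsu(img)            # one candidate segmentation (SHERPA tries several)
    labels = measure.label(mask)

    for region in measure.regionprops(labels):
        if region.area < 100:                           # discard debris and noise specks
            continue
        # a few outline/shape descriptors of the kind used for diatom valves
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        print(region.label, region.area, round(region.eccentricity, 3),
              round(region.solidity, 3), round(circularity, 3))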

  7. Geomorphic evidence for an eolian contribution to the formation of the Martian northern plains

    NASA Technical Reports Server (NTRS)

    Zimbelman, J. R.

    1993-01-01

    The northern plains of Mars have many morphologic characteristics that are uncommon or absent on the rest of the planet. Mariner 9 and Viking images obtained north of latitude 30 deg N revealed 'smooth' and 'mottled' plains of an uncertain origin. Some or all of the northern plains were interpreted to consist of lava plains intermixed with eolian and volcanic materials; thick eolian mantles that buried portions of the mid-latitudes; periglacial deposits resulting from the presence of ground ice; and water-transported sediments derived from fluvial runoff, lacustrine deposition in standing bodies of water, or glacial runoff. The highest-resolution Viking images show many intriguing details that may provide clues to the origin of this complex and distinctive terrain. Some of the informative features present in the best Viking images are reviewed, comparing the observations to what may be expected from various hypotheses of formation. While the results are not conclusive for any single hypothesis, eolian processes have played a major role in the erosion (and possibly deposition) of the materials that make up the surface exposures in the Martian northern plains.

  8. Submolecular resolution in scanning probe images of Sn-phthalocyanines on Cu(1 0 0) using metal tips

    NASA Astrophysics Data System (ADS)

    Buchmann, Kristof; Hauptmann, Nadine; Foster, Adam S.; Berndt, Richard

    2017-10-01

    Single Sn-phthalocyanine (SnPc) molecules adsorb on Cu(1 0 0) with the Sn ion above (Sn-up) or below (Sn-down) the molecular plane. Here we use a combination of atomic force microscopy (AFM), scanning tunnelling microscopy (STM) and first principles calculations to understand the adsorption configuration and the origin of the observed contrast of molecules in the Sn-down state. AFM with metallic tips images the pyrrole nitrogen atoms in these molecules as attractive features, while STM reveals a chirality of the electronic structure of the molecules close to the Fermi level E_F, which is not observed in AFM. Using density functional theory calculations, the origin of the submolecular contrast is analysed and, while the electrostatic forces turn out to be negligible, the van der Waals interaction between the phenyl rings of SnPc and the substrate deforms the molecule, pushes the pyrrole nitrogen atoms away from the substrate and thus induces the observed submolecular contrast. Simulated STM images reproduce the chirality of the electronic structure near E_F.

  9. Characteristics of Neovascularization in Early Stages of Proliferative Diabetic Retinopathy by Optical Coherence Tomography Angiography.

    PubMed

    Pan, Jiandong; Chen, Ding; Yang, Xiaoling; Zou, Ruitao; Zhao, Kuo; Cheng, Dan; Huang, Shenghai; Zhou, Tingye; Yang, Ye; Chen, Feng

    2018-05-25

    To classify retinal neovascularization in untreated early stages of proliferative diabetic retinopathy (PDR) based on optical coherence tomography angiography (OCTA). A cross-sectional study. Thirty-five eyes were included. They underwent color fundus photography, fluorescein angiography (FA), and OCTA examinations. Neovascularizations elsewhere (NVEs), neovascularizations of the optic disc (NVDs), and intraretinal microvascular abnormalities (IRMAs) were scanned by OCTA. The origin and morphology of NVE/NVD/IRMA on OCTA were evaluated. Retinal nonperfusion areas (NPAs) were measured using Image J software. In the 35 eyes successfully imaged, 75 NVEs, 35 NVDs and 12 IRMAs were captured. Three proposed subtypes of NVE were identified based on their origins and morphological features. Type 1 (32 of 75, 42.67%) originated from veins, in a tree-like shape. Type 2 (30 of 75, 40.00%) originated from capillary networks, with an octopus-like appearance. Type 3 (13 of 75, 17.33%) originated from IRMAs, having a sea-fan shape. NVD originated from the retinal artery, the retinal vein, or the choroid, and arose from the bending vessels near the rim of the optic disc. IRMA originated from and drained into retinal venules, extending within the retina. The initial layer and affiliated NPA were significantly different among the 3 subtypes of NVEs (all P < 0.01). OCTA allowed identification of the origins and morphological patterns of neovascularization in PDR. The new classification of retinal neovascularization may be useful to better understand pathophysiological mechanisms and to guide efficient therapeutic strategies. Copyright © 2018. Published by Elsevier Inc.

  10. 'Blueberry' Layers Indicate Watery Origins

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This microscopic image, taken at the outcrop region dubbed 'El Capitan' near the Mars Exploration Rover Opportunity's landing site, reveals millimeter-scale (.04 inch-scale) layers in the lower portion. This same layering is hinted at by the fine notches that run horizontally across the sphere-like grain or 'blueberry' in the center left. The thin layers do not appear to deform around the blueberry, indicating that these geologic features are concretions and not impact spherules or ejected volcanic material called lapilli. Concretions are balls of minerals that form in pre-existing wet sediments. This image was taken by the rover's microscopic imager on the 29th martian day, or sol, of its mission. The observed area is about 3 centimeters (1.2 inches) across.

  11. The magic of image processing

    NASA Astrophysics Data System (ADS)

    Sulentic, Jack W.; Lorre, Jean J.

    1984-05-01

    Digital technology has been used to improve enhancement techniques in astronomical image processing. Continuous tone variations in photographs are assigned density number (DN) values which are arranged in an array. DN locations are processed by computer and turned into pixels which form a reconstruction of the original scene on a television monitor. Digitized data can be manipulated to enhance contrast and filter out gross patterns of light and dark which obscure small scale features. Separate black and white frames exposed at different wavelengths can be digitized and processed individually, then recombined to produce a final image in color. Several examples of the use of the technique are provided, including photographs of the spiral galaxy M33; four galaxies in Coma Berenices (NGC 4169, 4173, 4174, and 4175); and Stephan's Quintet.
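
    A toy illustration of the two operations mentioned above: a linear contrast stretch of a digitized frame's DN values, and the recombination of three separately processed monochrome frames into one color image. The percentile limits, array sizes, and channel assignment are placeholder choices, not those used in the article.

    import numpy as np

    def stretch(frame, low_pct=2, high_pct=98):
        """Linearly rescale DN values between two percentiles onto the full 0-255 range."""
        lo, hi = np.percentile(frame, [low_pct, high_pct])
        return np.clip((frame - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    red_frame, green_frame, blue_frame = (rng.integers(0, 4096, (512, 512)) for _ in range(3))
    color = np.dstack([stretch(red_frame), stretch(green_frame), stretch(blue_frame)])
    print(color.shape, color.dtype)       # (512, 512, 3) uint8 reconstructed colour image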

  12. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-10-31

    This image shows part of the western flank of Pavonis Mons. The linear features are faults. Faulting usually includes a change of elevation, where blocks of material slide down the fault. Paired faults are called graben. The large depression is a graben, whereas most of the other faults are not paired. The rougher looking materials perpendicular to the faults are lava flows. "Down hill" is to the upper left corner of the image. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smallest caldera deeper than the larger caldera. Like most shield volcanoes the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 14857 Latitude: 1.4859 Longitude: 245.996 Instrument: VIS Captured: 2005-04-20 17:00 https://photojournal.jpl.nasa.gov/catalog/PIA22017

  13. Generalized procrustean image deformation for subtraction of mammograms

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.

    1999-05-01

    This project is a preliminary evaluation of two simple fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] X [0,1], where the actual value of a coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This insures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axes line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector, relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After images are deformed, their grayscales are adjusted by applying linear regression to pixel value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinates method results in the most accurate registration of the transformations considered.
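
    A minimal sketch of the first transformation described above, under the assumption that the breast tissue has already been segmented into a binary mask: each tissue pixel receives coordinates in [0,1] x [0,1] given by its fractional position between the tissue boundaries along its row and column, which forces the skin lines of two mammograms into registration. The mask and helper names are illustrative, not the authors' code.

    import numpy as np

    def normalized_coordinates(tissue_mask):
        """Return (u, v) in [0, 1] for every pixel inside the breast-tissue mask."""
        h, w = tissue_mask.shape
        u = np.full((h, w), np.nan)
        v = np.full((h, w), np.nan)
        for r in range(h):                               # fractional position along each row
            cols = np.flatnonzero(tissue_mask[r])
            if cols.size > 1:
                u[r, cols] = (cols - cols[0]) / (cols[-1] - cols[0])
        for c in range(w):                               # fractional position along each column
            rows = np.flatnonzero(tissue_mask[:, c])
            if rows.size > 1:
                v[rows, c] = (rows - rows[0]) / (rows[-1] - rows[0])
        return u, v

    mask = np.zeros((64, 64), dtype=bool)
    mask[8:56, 4:40] = True                              # placeholder "tissue" region
    u, v = normalized_coordinates(mask)
    print(np.nanmin(u), np.nanmax(u))                    # 0.0 and 1.0 inside the tissue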

  14. [Perception of odor quality by Free Image-Association Test].

    PubMed

    Ueno, Y

    1992-10-01

    A method was devised for evaluating odor quality. Subjects were requested to freely describe the images elicited by smelling odors. This test was named the "Free Image-Association Test (FIT)". The test was applied to 20 flavors of various foods, five odors from the standards of the T&T olfactometer (Japanese standard olfactory test), butter of yak milk, and incense from Lamaist temples. The words used for expressing imagery were analyzed by multidimensional scaling and cluster analysis. Seven clusters of odors were obtained. The features of these clusters were quite similar to those of the primary odors suggested by previous studies. However, the clustering of odors cannot be explained on the basis of the primary-odor theory, but rather by the information processing theory originally proposed by Miller (1956). These results support the usefulness of the Free Image-Association Test for investigating odor perception based on the images associated with odors.

  15. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  16. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.
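
    To make the pointing step concrete, here is an illustrative computation of the pan and tilt angles that aim the mast cameras at a 3D target expressed in the mast frame. The frame convention (x forward, y left, z up) and the function name are assumptions; the flight software additionally folds in the rover pose change and the full mast kinematic model.

    import numpy as np

    def pan_tilt_to_target(target_xyz):
        """Return (pan, tilt) in radians for a target [x, y, z] in the mast frame."""
        x, y, z = target_xyz
        pan = np.arctan2(y, x)                   # rotation about the vertical axis
        tilt = np.arctan2(z, np.hypot(x, y))     # elevation toward the target
        return pan, tilt

    print(pan_tilt_to_target(np.array([4.0, 1.0, -0.5])))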

  17. Hot spots of Io

    NASA Technical Reports Server (NTRS)

    Pearl, J. C.; Sinton, W. M.

    1982-01-01

    The size and temperature, morphology and distribution, variability, possible absorption features, and processes of hot spots on Io are discussed, and an estimate of the global heat flux is made. Size and temperature information is deconvolved to obtain the equivalent radius and temperature of hot spots, and simultaneously obtained Voyager thermal and imaging data are used to match hot sources with specific geologic features. In addition to their thermal output, it is possible that hot spots are also characterized by production of various gases and particulate materials; the spectral signature of SO2 has been seen. Origins for relatively stable, low-temperature sources, transient high-temperature sources, and relatively stable, high-temperature sources are discussed.

  18. The Variability of Transverse Aeolian Ripples in Troughs on Mars

    NASA Technical Reports Server (NTRS)

    Bourke, M. C.; Wilson, S.A.; Zimbelman, J. R.

    2003-01-01

    A cursory glance at MGS images of the surface of Mars shows an abundance of aeolian transverse ridges. These ridges are located in a variety of geological terrains. Zimbelman and Wilson have separated the small-scale aeolian features of Syrtis Major into six categories: ripples associated with obstacles, ripple bands, ripple fields, ripple patches, isolated ripple patches and ripples associated with dunes. This paper focuses on one of these categories, that of ripple bands, which tend to accumulate within linear troughs. As the origin of these features is still being studied (i.e. ripples versus dunes), we refer to them simply as transverse aeolian ridges.

  19. Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zhang, J.; Zhao, Z.

    2018-04-01

    The imaging principle of Polarimetric Synthetic Aperture Radar (POLSAR) means that image quality is affected by speckle noise, which reduces the recognition accuracy of traditional image classification methods. Since their introduction, deep convolutional neural networks have transformed traditional image processing and brought computer vision to a new stage, owing to their strong ability to learn deep features and to fit large datasets. Based on the characteristics of polarimetric SAR images, this paper studies surface-cover classification using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, trained the convolutional GoogLeNet model iteratively on them, and then used the trained model to classify a validation dataset. First, referring to an optical image, we labeled the surface-cover types in the 8 m resolution GF-3 POLSAR image and collected samples for each category. To meet the GoogLeNet requirement of 256 × 256 pixel input, and taking the limited SAR resolution into account, the original image was resampled during pre-processing. POLSAR image slices at different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated against the verification dataset. The training accuracy of the GoogLeNet model trained with the resampled 2 m polarimetric SAR images is 94.89 %, and that of the model trained with the resampled 1 m images is 92.65 %.
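
    A hedged sketch of the training setup described above: fine-tuning an ImageNet-pretrained GoogLeNet on RGB composites of polarimetric SAR features using PyTorch/torchvision. The synthetic dataset (standing in for labeled POLSAR slices), class count, input size, and hyper-parameters are placeholder assumptions, not the authors' configuration.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    num_classes = 6                                    # assumed number of land-cover categories
    # FakeData stands in for a folder of labelled POLSAR RGB slices
    train_set = datasets.FakeData(size=64, image_size=(3, 224, 224),
                                  num_classes=num_classes, transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=16, shuffle=True)

    model = models.googlenet(weights="DEFAULT")        # ImageNet-pretrained (torchvision >= 0.13 API)
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # replace the 1000-way classifier
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for imgs, targets in loader:                       # one pass over the training slices
        out = model(imgs)
        logits = out.logits if hasattr(out, "logits") else out   # GoogLeNet may return aux outputs
        loss = loss_fn(logits, targets)
        opt.zero_grad(); loss.backward(); opt.step()
    print("finished one epoch; last batch loss:", float(loss))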

  20. Does skull shape mediate the relationship between objective features and subjective impressions about the face?

    PubMed

    Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2013-10-01

    In our previous work, we described facial features associated with a successful recognition of the sex of the face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both soft tissue (fat and muscle) and bone structure of the skull. Here, we ask whether bone structure has dissociable influences on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology using MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876; including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features mediated fully the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, there was a robust negative influence of body fat on the correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat influences negatively the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Young Debris Disks With Newly Discovered Emission Features

    NASA Astrophysics Data System (ADS)

    Ballering, N.

    2014-04-01

    We analyzed the Spitzer/IRS spectra of young A and F stars that host debris disks with previously unidentified silicate emission features. Such features probe small, warm dust grains in the inner regions of these young systems where terrestrial planet formation may be proceeding (Lisse et al. 2009). For most systems, these regions are too near their host star to be directly seen with high-contrast imaging and too warm to be imaged with submillimeter interferometers. Mid-infrared excess spectra - originating from the thermal emission of the debris disk dust - remain the best data to constrain the properties of the debris in these regions. For each target, we fit physically-motivated model spectra to the data. Typical spectra of unresolved debris disks are featureless and suffer severe degeneracies between the dust location and the grain properties; however, spectra with solid-state emission features provide significantly more information, allowing for a more accurate determination of the dust size, composition, and location (e.g. Chen et al. 2006; Olofsson et al. 2012). Our results shed light on the dynamical processes occurring in the terrestrial regions of these systems. For instance, the sizes of the smallest grains and the nature of the grain size distribution reveal whether the dust originates from steady-state collisional cascades or from stochastic collisions. The properties of the dust grains - such as their crystalline or amorphous structure - can inform us of grain processing mechanisms in the disk. The location of this debris illuminates where terrestrial planet-forming activity is occurring. We used results from Beta Pictoris - which has a well-resolved debris disk with emission features (Li et al. 2012) - to place our results in context. References: Chen et al. 2006, ApJS, 166, 351 Li et al. 2012, ApJ, 759, 81 Lisse et al. 2009, ApJ, 701, 2019 Olofsson et al. 2012, A&A, 542, A90

  2. How does increasingly plainer cigarette packaging influence adult smokers’ perceptions about brand image? An experimental study

    PubMed Central

    Wakefield, M A; Germain, D; Durkin, S J

    2008-01-01

    Background: Cigarette packaging is a key marketing strategy for promoting brand image. Plain packaging has been proposed to limit brand image, but tobacco companies would resist removal of branding design elements. Method: A 3 (brand types) × 4 (degree of plain packaging) between-subject experimental design was used, using an internet online method, to expose 813 adult Australian smokers to one randomly selected cigarette pack, after which respondents completed ratings of the pack. Results: Compared with current cigarette packs with full branding, cigarette packs that displayed progressively fewer branding design elements were perceived increasingly unfavourably in terms of smokers’ appraisals of the packs, the smokers who might smoke such packs, and the inferred experience of smoking a cigarette from these packs. For example, cardboard brown packs with the number of enclosed cigarettes displayed on the front of the pack and featuring only the brand name in small standard font at the bottom of the pack face were rated as significantly less attractive and popular than original branded packs. Smokers of these plain packs were rated as significantly less trendy/stylish, less sociable/outgoing and less mature than smokers of the original pack. Compared with original packs, smokers inferred that cigarettes from these plain packs would be less rich in tobacco, less satisfying and of lower quality tobacco. Conclusion: Plain packaging policies that remove most brand design elements are likely to be most successful in removing cigarette brand image associations. PMID:18827035

  3. How does increasingly plainer cigarette packaging influence adult smokers' perceptions about brand image? An experimental study.

    PubMed

    Wakefield, M A; Germain, D; Durkin, S J

    2008-12-01

    Cigarette packaging is a key marketing strategy for promoting brand image. Plain packaging has been proposed to limit brand image, but tobacco companies would resist removal of branding design elements. A 3 (brand types) x 4 (degree of plain packaging) between-subject experimental design was used, using an internet online method, to expose 813 adult Australian smokers to one randomly selected cigarette pack, after which respondents completed ratings of the pack. Compared with current cigarette packs with full branding, cigarette packs that displayed progressively fewer branding design elements were perceived increasingly unfavourably in terms of smokers' appraisals of the packs, the smokers who might smoke such packs, and the inferred experience of smoking a cigarette from these packs. For example, cardboard brown packs with the number of enclosed cigarettes displayed on the front of the pack and featuring only the brand name in small standard font at the bottom of the pack face were rated as significantly less attractive and popular than original branded packs. Smokers of these plain packs were rated as significantly less trendy/stylish, less sociable/outgoing and less mature than smokers of the original pack. Compared with original packs, smokers inferred that cigarettes from these plain packs would be less rich in tobacco, less satisfying and of lower quality tobacco. Plain packaging policies that remove most brand design elements are likely to be most successful in removing cigarette brand image associations.

  4. Multiple double cross-section transmission electron microscope sample preparation of specific sub-10 nm diameter Si nanowire devices.

    PubMed

    Gignac, Lynne M; Mittal, Surbhi; Bangsaruntip, Sarunya; Cohen, Guy M; Sleight, Jeffrey W

    2011-12-01

    The ability to prepare multiple cross-section transmission electron microscope (XTEM) samples from one XTEM sample of specific sub-10 nm features was demonstrated. Sub-10 nm diameter Si nanowire (NW) devices were initially cross-sectioned using a dual-beam focused ion beam system in a direction running parallel to the device channel. From this XTEM sample, both low- and high-resolution transmission electron microscope (TEM) images were obtained from six separate, specific site Si NW devices. The XTEM sample was then re-sectioned in four separate locations in a direction perpendicular to the device channel: 90° from the original XTEM sample direction. Three of the four XTEM samples were successfully sectioned in the gate region of the device. From these three samples, low- and high-resolution TEM images of the Si NW were taken and measurements of the NW diameters were obtained. This technique demonstrated the ability to obtain high-resolution TEM images in directions 90° from one another of multiple, specific sub-10 nm features that were spaced 1.1 μm apart.

  5. Gravitational lensing by ring-like structures

    NASA Astrophysics Data System (ADS)

    Lake, Ethan; Zheng, Zheng

    2017-02-01

    We study a class of gravitational lensing systems consisting of an inclined ring/belt, with and without an added point mass at the centre. We show that a common feature of such systems are so-called pseudo-caustics, across which the magnification of a point source changes discontinuously and yet remains finite. Such a magnification change can be associated with either a change in image multiplicity or a sudden change in the size of a lensed image. The existence of pseudo-caustics and the complex interplay between them and the formal caustics (which correspond to points of infinite magnification) can lead to interesting consequences, such as truncated or open caustics and a non-conservation of total image parity. The origin of the pseudo-caustics is found to be the non-differentiability of the solutions to the lens equation across the ring/belt boundaries, with the pseudo-caustics corresponding to ring/belt boundaries mapped into the source plane. We provide a few illustrative examples to understand the pseudo-caustic features, and in a separate paper consider a specific astronomical application of our results to study microlensing by extrasolar asteroid belts.

  6. A Fairy-Tale Landscape

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    Fun, fairy-tale nicknames have been assigned to features in this animated view of the workspace reachable by the robotic arm of NASA's Phoenix Mars Lander. For example, 'Sleepy Hollow' denotes a trench and 'Headless' designates a rock.

    A 'National Park,' marked by purple text and a purple arrow, has been set aside for protection until scientists and engineers have tested the operation of the robotic scoop. First touches with the scoop will be to the left of the 'National Park' line.

    Scientists use such informal names for easy identification of features of interest during the mission.

    In this view, rocks are circled in yellow, other areas of interest in green. The images were taken by the lander's 7-foot mast camera, called the Surface Stereo Imager.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. A low noise steganography method for medical images with QR encoding of patient information

    NASA Astrophysics Data System (ADS)

    Patiño-Vanegas, Alberto; Contreras-Ortiz, Sonia H.; Martinez-Santos, Juan C.

    2017-03-01

    This paper proposes an approach to facilitate the process of individualization of patients from their medical images, without compromising the inherent confidentiality of medical data. The identification of a patient from a medical image is not often the goal of security methods applied to image records. Usually, any identification data is removed from shared records, and security features are applied to determine ownership. We propose a method for embedding a QR-code containing information that can be used to individualize a patient. This is done so that the image to be shared does not differ significantly from the original image. The QR-code is distributed in the image by changing several pixels according to a threshold value based on the average value of adjacent pixels surrounding the point of interest. The results show that the code can be embedded and later fully recovered with minimal changes in the UIQI index - less than 0.1% difference.
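
    As a loose illustration of the local-average threshold idea (not the paper's exact embedding rule), the sketch below nudges each selected pixel just above or below the mean of its eight neighbours to encode one payload bit, and recovers the bits by repeating the comparison. The pixel grid spacing, margin, and payload are invented for the example.

    import numpy as np

    def neighbour_mean(img, r, c):
        """Average of the 8 neighbours of pixel (r, c)."""
        patch = img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].astype(float)
        return (patch.sum() - float(img[r, c])) / (patch.size - 1)

    def embed(img, bits, positions, delta=2):
        out = img.astype(float).copy()
        for (r, c), bit in zip(positions, bits):
            m = neighbour_mean(out, r, c)
            out[r, c] = m + delta if bit else m - delta   # push above/below the local average
        return np.clip(out, 0, 255).astype(np.uint8)

    def extract(img, positions):
        return [1 if img[r, c] > neighbour_mean(img, r, c) else 0 for r, c in positions]

    rng = np.random.default_rng(7)
    cover = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # placeholder medical image
    payload = rng.integers(0, 2, 64).tolist()                  # placeholder QR-code bits
    spots = [(r, c) for r in range(8, 128, 16) for c in range(8, 128, 16)]  # spaced pixels
    stego = embed(cover, payload, spots)
    print(extract(stego, spots) == payload)                    # True: bits recovered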

  8. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
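
    A minimal sketch of the edge-then-Gabor representation, assuming a pre-aligned grayscale face image: a Sobel edge map (so shape rather than skin texture drives the features) is filtered by a small Gabor bank and summarized by magnitude statistics. The scikit-image calls, the sample image (a stand-in for a face photograph), the bank parameters, and the pooling are illustrative; the paper's illumination normalization and classifier are omitted.

    import numpy as np
    from skimage import data, filters

    face = data.camera()                              # stand-in for an aligned grayscale face image
    edges = filters.sobel(face)                       # edge map of the significant facial components

    feature_vector = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):     # 4 orientations
        for frequency in (0.1, 0.2, 0.3):                       # 3 spatial frequencies
            real, imag = filters.gabor(edges, frequency=frequency, theta=theta)
            magnitude = np.hypot(real, imag)
            feature_vector.extend([magnitude.mean(), magnitude.std()])
    print(len(feature_vector))                        # 4 x 3 x 2 = 24 pooled Gabor features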

  9. Diagnosis of the three-phase induction motor using thermal imaging

    NASA Astrophysics Data System (ADS)

    Glowacz, Adam; Glowacz, Zygfryd

    2017-03-01

    Three-phase induction motors are used widely in industry, for example in woodworking machines, blowers, pumps, conveyors, elevators, compressors, mining, the automotive and chemical industries, and railway applications. Diagnosis of faults is essential for proper maintenance. Faults may damage a motor, and damaged motors generate economic losses caused by breakdowns in production lines. In this paper the authors develop fault diagnostic techniques for the three-phase induction motor. The described techniques are based on the analysis of thermal images of the three-phase induction motor. The authors analyse thermal images of 3 states of the three-phase induction motor: a healthy three-phase induction motor, a three-phase induction motor with 2 broken bars, and a three-phase induction motor with a faulty squirrel-cage ring. The authors develop an original method for the feature extraction of thermal images, MoASoID (Method of Areas Selection of Image Differences). This method compares many training sets together and selects the areas with the biggest changes for the recognition process. Feature vectors are obtained with the use of the mentioned MoASoID and the image histogram. Next, 3 methods of classification are used: NN (the nearest neighbour classifier), K-means, and BNN (the back-propagation neural network). The described fault diagnostic techniques are useful for the protection of three-phase induction motors and other types of rotating electrical machines such as DC motors, generators, and synchronous motors.
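
    The sketch below is a loose, simplified interpretation of that pipeline (not the authors' exact MoASoID algorithm): pixels whose values vary most across the labelled training thermal images are selected as the areas of difference, histogram features are computed over those areas, and a new image is classified with a nearest-neighbour rule. All arrays, labels, and thresholds are synthetic placeholders.

    import numpy as np

    def select_areas(images, keep_fraction=0.05):
        """Boolean mask of the pixels that vary most across the training thermal images."""
        stack = np.stack(images).astype(float)
        variance = stack.var(axis=0)
        return variance >= np.quantile(variance, 1.0 - keep_fraction)

    def histogram_feature(image, mask, bins=32):
        hist, _ = np.histogram(image[mask], bins=bins, range=(0, 255), density=True)
        return hist

    rng = np.random.default_rng(1)
    train = [rng.integers(0, 256, (120, 160)) for _ in range(6)]        # placeholder thermal images
    labels = ["healthy", "healthy", "2 broken bars", "2 broken bars",
              "faulty ring", "faulty ring"]
    mask = select_areas(train)
    features = np.array([histogram_feature(img, mask) for img in train])

    test_image = rng.integers(0, 256, (120, 160))                       # image to diagnose
    distances = np.linalg.norm(features - histogram_feature(test_image, mask), axis=1)
    print("predicted state:", labels[int(np.argmin(distances))])        # nearest-neighbour rule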

  10. The influence of stimulus format on drawing—a functional imaging study of decision making in portrait drawing

    PubMed Central

    Miall, R.C.; Nam, Se-Ho; Tchalenko, J.

    2014-01-01

    To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye–hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. PMID:25128710

  11. Ripples or Dunes?

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This approximate true-color image taken by the Mars Exploration Rover Spirit's panoramic camera shows the windblown waves of soil that characterize the rocky surface of Gusev Crater, Mars. Scientists were puzzled about whether these geologic features were 'ripples' or 'dunes.' Ripples are shaped by gentle winds that deposit coarse grains on the tops or crests of the waves. Dunes are carved by faster winds and contain a more uniform distribution of material. Images taken of these features by the rover's microscopic imager on the 41st martian sol, or day, of the rover's mission revealed their identity to be ripples. This information helps scientists better understand the winds that shape the landscape of Mars. This image was taken early in Spirit's mission.

    [Figure removed for brevity, see original site. Image credit: NASA/JPL/ASU]

    This diagram illustrates how windblown sediments travel. There are three basic types of particles that undergo different motions depending on their size. These particles are dust, sand and coarse sand, and their sizes approximate flour, sugar, and ball bearings, respectively. Sand particles move along the 'saltation' path, hitting the surface downwind. When the sand hits the surface, it sends dust into the atmosphere and gives coarse sand a little shove. Mars Exploration Rover scientists are studying the distribution of material on the surface of Mars to better understand how winds shaped the landscape.

  12. ΤND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    PubMed

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND, for the provision of a second more objective opinion to the radiologists by exploiting image evidences.

  13. Video shot boundary detection using region-growing-based watershed method

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2004-10-01

    In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method - watershed segmentation. In image processing, gray-scale pictures can be considered as topographic reliefs, in which the numerical value of each pixel of a given image represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into the topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is then treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at the point, so that the highest density values are transformed into local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuition behind our method is that frames within a shot are highly agglomerative in the feature space and are likely to be merged together, whereas frames at shot changes are not: they have lower density values and, with carefully extracted markers and a suitable stopping criterion, are less likely to be clustered.
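    A simplified sketch of the density idea is given below, assuming per-frame colour-histogram features and a Gaussian influence function evaluated over a temporal window; the paper performs watershed segmentation in the full feature space, which is not reproduced here.

```python
# Rough sketch: frame density from colour-histogram features; low-density
# frames (local minima of the density, i.e. ridges of the inverted height
# function) mark candidate shot boundaries between dense within-shot basins.
import numpy as np

def frame_features(frames, bins=16):
    """Per-frame colour histogram (frames: array of H x W x 3 uint8 images)."""
    return np.stack([
        np.concatenate([np.histogram(f[..., c], bins=bins, range=(0, 255),
                                     density=True)[0] for c in range(3)])
        for f in frames
    ])

def frame_density(feats, sigma=0.1, window=10):
    """Sum of Gaussian influence functions over temporally neighbouring frames."""
    n = len(feats)
    dens = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        d = np.linalg.norm(feats[lo:hi] - feats[i], axis=1)
        dens[i] = np.exp(-(d ** 2) / (2 * sigma ** 2)).sum()
    return dens
```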

  14. Computer-aided classification of breast masses using contrast-enhanced digital mammograms

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin

    2018-02-01

    By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a promising new imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme for CEDM images to classify between malignant and benign breast masses. A CEDM dataset of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was computed. Last, four multilayer perceptron-based machine learning classifiers integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method were built to classify mass regions depicted on LE and DES images, respectively. When the CAD scheme was applied to the original segmentations of DES and LE images, the areas under the ROC curves were 0.7585+/-0.0526 and 0.7534+/-0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC of the CAD scheme significantly increased to 0.8477+/-0.0376 (p<0.01). Because DES images eliminate the overlapping effect of dense breast tissue on lesions, segmentation accuracy was significantly improved compared to regular mammograms, and the study demonstrated that computer-aided classification of breast masses using CEDM images yielded higher performance.
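    The snippet below sketches one classifier arm under stated assumptions: a generic univariate feature selector stands in for the correlation-based subset evaluator, and an MLP is scored with leave-one-case-out cross-validation; the layer size and the number of retained features are illustrative, not the study's settings.

```python
# Hedged sketch of feature selection + MLP with leave-one-case-out CV.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

def loocv_auc(X, y, k_features=20):
    """X: (n_cases, 109) feature matrix; y: 0 = benign, 1 = malignant."""
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=k_features),
                         MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0))
    proba = cross_val_predict(pipe, X, y, cv=LeaveOneOut(), method="predict_proba")
    return roc_auc_score(y, proba[:, 1])
```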

  15. Spatiotemporal models for the simulation of infrared backgrounds

    NASA Astrophysics Data System (ADS)

    Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang

    1992-09-01

    It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources, consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: Given a real, infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, and histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first order statistics of the original image onto the synthetic one, thus the overall model is a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database must include images of all types of backgrounds obtained at all times of the day and all times of the year, the size of the database would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background. This approach can greatly reduce the number of actual IR backgrounds that are required to produce a sufficiently accurate mathematical model for synthesizing a similar IR background for different times of the day. Original and synthetic IR backgrounds will be presented. Previous research in simulating IR backgrounds was performed by Strenzwilk, et al., Botkin, et al., and Rapp. The most recent work of Strenzwilk, et al. was based on the use of one-dimensional ARMA models for synthesizing the images. Their results were able to retain the global statistical and spectral behavior of the original image, but the synthetic image was not visually very similar to the original. The research presented in this paper is the result of an attempt to improve upon their results, and represents a significant improvement in quality over previously obtained results.
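    A minimal sketch of the synthesis model is shown below, assuming the FIR kernel has already been estimated and using histogram matching as the memoryless nonlinearity; the signal-enhancement step that derives the kernel, and the time-dependent extensions proposed in the paper, are not shown.

```python
# Minimal sketch of the synthesis model: an excitation function convolved with
# a 2-D FIR kernel, then histogram modification to impose the original image's
# first-order statistics (kernel estimation is assumed to have been done).
import numpy as np
from scipy.signal import convolve2d
from skimage.exposure import match_histograms

def synthesize_background(original, fir_kernel, seed=None):
    rng = np.random.default_rng(seed)
    excitation = rng.normal(size=original.shape)           # excitation function
    synthetic = convolve2d(excitation, fir_kernel, mode="same", boundary="symm")
    # Memoryless nonlinearity: impose the original image's histogram.
    return match_histograms(synthetic, original)
```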

  16. Exploring the color feature power for psoriasis risk stratification and classification: A data mining paradigm.

    PubMed

    Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S

    2015-10-01

    A large percentage of a dermatologist's decision in psoriasis disease assessment is based on color. The current computer-aided diagnosis systems for psoriasis risk stratification and classification lack the vigor of the color paradigm. The paper presents an automated psoriasis computer-aided diagnosis (pCAD) system for classification of psoriasis skin images into psoriatic lesion and healthy skin, which solves two major challenges: (i) it fulfills the color feature requirements and (ii) it selects the powerful dominant color features while retaining high classification accuracy. Fourteen color spaces are discovered for psoriasis disease analysis, leading to 86 color features. The pCAD system is implemented in a support vector-based machine learning framework where the offline image data set is used for computing the offline color machine learning parameters. These are then used for transformation of the online color features to predict the class labels for healthy vs. diseased cases. The above paradigm uses principal component analysis for selection of the dominant color features, keeping the original color features unaltered. Using the cross-validation protocol, the above machine learning protocol is compared against the standalone grayscale feature set of 60 features and against the combined grayscale and color feature set of 146. Using a fixed data size of 540 images with an equal number of healthy and diseased cases, a 10-fold cross-validation protocol, and an SVM with a polynomial kernel of type two, the pCAD system shows an accuracy of 99.94% with sensitivity and specificity of 99.93% and 99.96%. Using a varying data size protocol, the mean classification accuracies for the color, grayscale, and combined scenarios are 92.85%, 93.83% and 93.99%, respectively. The reliability of the system in these three scenarios is 94.42%, 97.39% and 96.00%, respectively. We conclude that the pCAD system using color space alone is comparable to the grayscale space or the combined color and grayscale spaces. We validated our pCAD system against facial color databases and the results are consistent in accuracy and reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.
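    The sketch below illustrates the classification protocol under stated assumptions: colour features are reduced with PCA and classified by an SVM with a degree-2 polynomial kernel under 10-fold cross-validation. Note that the paper uses PCA to select dominant colour features while keeping the originals unaltered, whereas this sketch simply projects onto the principal components.

```python
# Illustrative protocol: PCA over colour features, degree-2 polynomial SVM,
# 10-fold cross-validation (parameter choices are assumptions).
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def psoriasis_cv_accuracy(X_color, y, n_components=20):
    """X_color: (n_images, 86) colour feature matrix; y: lesion vs. healthy labels."""
    pipe = make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVC(kernel="poly", degree=2))
    return cross_val_score(pipe, X_color, y, cv=10, scoring="accuracy").mean()
```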

  17. IEEE International Symposium on Biomedical Imaging.

    PubMed

    2017-01-01

    The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative from the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials, and a scientific program composed of plenary talks, invited special sessions, challenges, as well as oral and poster presentations of peer-reviewed papers. High-quality papers are requested containing original contributions to the topics of interest including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the symposium proceedings published by IEEE and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.

  18. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
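    The following is a compact sketch of the fusion scheme for grayscale exposures: gradient and well-exposedness weights, refinement with a box-filter guided filter in which the source image acts as the guide, and a weighted sum. The radius, regularization and well-exposedness sigma are assumptions rather than the paper's settings.

```python
# Illustrative weighted-sum exposure fusion with guided-filter weight refinement.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    a = (corr_Ip - mean_I * mean_p) / (corr_II - mean_I ** 2 + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_exposures(images, sigma=0.2):
    """images: list of float grayscale exposures in [0, 1]."""
    weights = []
    for im in images:
        gy, gx = np.gradient(im)
        gradient = np.hypot(gx, gy)                            # detail measure
        exposedness = np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2))
        w = gradient * exposedness + 1e-12                     # initial weight map
        weights.append(guided_filter(im, w))                   # source image as guide
    weights = np.clip(np.stack(weights), 1e-12, None)
    weights /= weights.sum(axis=0)                             # normalize across exposures
    return np.sum(weights * np.stack(images), axis=0)
```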

  19. Imaging features of orbital myxosarcoma in dogs.

    PubMed

    Dennis, Ruth

    2008-01-01

    Myxomas and myxosarcomas are infiltrative connective tissue tumors of fibroblastic origin that can be distinguished by the presence of abundant mucinous stroma. This paper describes the clinical and imaging features of orbital myxosarcoma in five dogs and suggests a predilection for the orbit. The main clinical signs were slowly progressive exophthalmos with soft swelling of the pterygopalatine fossa, and in two dogs, of the periorbital area. No pain was associated with the eye or orbit but one dog had pain on opening the mouth. The dogs were imaged using combinations of ultrasonography, radiography, and magnetic resonance imaging. In four dogs, extensive fluid-filled cavities in the orbit and fascial planes were seen and in the fifth dog, the tumor appeared more solid with small, peripheral cystic areas. In all dogs, the lesion extended along fascial planes to involve the temporomandibular joint, with osteolysis demonstrable in two dogs. Fluid aspirated from the cystic areas was viscous and sticky, mimicking that from a salivary mucocoele. Myxomas and myxosarcomas are known to be infiltrative and not readily amenable to surgical removal but their clinical course seems to be slow, with a reasonable survival time with palliative treatment. In humans, a juxta-articular form is recognized in which a prominent feature is the presence of dilated, cyst-like spaces filled with mucinous material. It is postulated that orbital myxosarcoma in dogs may be similar to the juxta-articular form in man, and may arise from the temporomandibular joint.

  20. Frequency compounding in multifrequency vibroacoustography

    NASA Astrophysics Data System (ADS)

    Urban, Matthew W.; Alizad, Azra; Fatemi, Mostafa

    2009-02-01

    Vibro-acoustography is a speckle-free ultrasound-based imaging modality that can visualize normal and abnormal soft tissue through mapping stimulated acoustic emission. The acoustic emission is generated by focusing two ultrasound beams of slightly different frequencies (Δf = f1-f2) to the same spatial location and vibrating the tissue as a result of ultrasound radiation force. Reverberation of the acoustic emission can create dark and bright areas in the image that affect overall image contrast and detectability of abnormal tissue. Using finite-length tonebursts yields acoustic emission at Δf and at sidebands centered about Δf that originate from the temporal toneburst gating. Separate images are formed by bandpass filtering the acoustic emission at Δf and the associated sidebands. The data at these multiple frequencies are compounded through coherent or incoherent processes to reduce the artifacts associated with reverberation of the acoustic emission. Experimental results from a urethane breast phantom and in vivo human breast scans are shown. The reduction in reverberation artifacts is analyzed using a smoothness metric which uses the variances of the gray levels of the original images and those formed through coherent and incoherent compounding of image data. This smoothness metric is minimized when the overall image background is smooth while image features are still preserved. The smoothness metric indicates that the images improved by factors of 1.23-4.33 and 1.09-2.68 in phantom and in vivo studies, respectively. The coherent and incoherent compounding of multifrequency data demonstrate, both qualitatively and quantitatively, the efficacy of this method for reduction of reverberation artifacts.

  1. Fretted Terrain Valleys

    NASA Technical Reports Server (NTRS)

    2004-01-01

    30 October 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows shallow tributary valleys in the Ismenius Lacus fretted terrain region of northern Arabia Terra. These valleys exhibit a variety of typical fretted terrain valley wall and floor textures, including a lineated, pitted material somewhat reminiscent of the surface of a brain. Origins for these features are still being debated within the Mars science community; there are no clear analogs to these landforms on Earth. This image is located near 39.9°N, 332.1°W. The picture covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the lower left.

  2. A novel deep learning-based approach to high accuracy breast density estimation in digital mammography

    NASA Astrophysics Data System (ADS)

    Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo

    2017-03-01

    Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image able to capture the X-ray characteristics, even though CNNs are well known to extract features effectively from the original image alone. The 100 test images, which were not used in training the CNN, were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those from the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.
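    A minimal PyTorch sketch of a patch-level dense-vs-fatty classifier is given below; the two-channel input (original mammogram patch plus the corresponding eigen-image patch), layer sizes and patch size are illustrative assumptions, not the authors' architecture.

```python
# Minimal PyTorch sketch of a sub-image classifier for dense vs. fatty tissue.
import torch
import torch.nn as nn

class DensityPatchCNN(nn.Module):
    def __init__(self, in_channels=2, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * (patch_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):           # x: (batch, 2, H, W) mammogram + eigen-image patches
        return self.classifier(self.features(x))

# Breast density = fraction of breast-area sub-images predicted "dense".
```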

  3. Beach Observations using Quadcopter Imagery

    NASA Astrophysics Data System (ADS)

    Yang, Yi-Chung; Wang, Hsing-Yu; Fang, Hui-Ming; Hsiao, Sung-Shan; Tsai, Cheng-Han

    2017-04-01

    Beaches are places where the interaction of land and sea takes place, under the influence of many environmental factors, including meteorological and oceanic ones. Understanding the evolution or changes of beaches may require constant monitoring. One way to monitor beach changes is to use optical cameras. With careful placement of ground control points, land-based optical cameras, which are inexpensive compared to other remote sensing apparatuses, can be used to survey a relatively large area in a short time. For example, we have used terrestrial optical cameras incorporating ground control points to monitor beaches. The images from the cameras were calibrated by applying the direct linear transformation, a projective transformation, and a Sobel edge detector to locate the shoreline. The terrestrial optical cameras can record beach images continuously, and the shorelines can be satisfactorily identified. However, terrestrial cameras have some limitations. First, the camera system must be placed at a sufficiently high vantage point so that the camera can cover the whole area of interest; such a location may not be available. Second, objects in the image have different resolutions, depending on their distance from the cameras. To overcome these limitations, the present study tested a quadcopter equipped with a down-looking camera to record video and still images of a beach. The quadcopter can be controlled to hover at one location. However, the hovering of the quadcopter can be affected by the wind, since it is not positively anchored to a structure. Although the quadcopter has a gimbal mechanism to damp out small shakes of the copter, it does not completely counter movements due to the wind. In our preliminary tests, we flew the quadcopter up to 500 m high to record 10-minute videos. We then took a 10-minute average of the video data. The averaged image of the coast was blurred because of the duration of the video and the small movements of the quadcopter as it tried, in the wind, to return to its original position. To solve this problem, the Speeded Up Robust Features (SURF) feature detection method was used on the video frames, and the resulting image was much sharper than the original. Next, we extracted the maximum and minimum RGB values of each pixel over the 10-minute videos. The beach breaker zone showed up in the maximum-RGB image as white areas. Moreover, we were also able to remove the breakers from the images and see the breaker-zone bottom features using the minimum RGB values of the images. From this test, we also identified the location of the coastline. The correlation coefficient between the coastline identified from the copter imagery and that from the ground survey was as high as 0.98. By repeating this copter flight at different times, we could measure the evolution of the coastline.
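    The per-pixel extreme-value images described above can be sketched as follows, assuming the frames have already been registered (for example with SURF, which in OpenCV lives in the contrib xfeatures2d module); file handling and colour ordering follow OpenCV conventions.

```python
# Sketch of per-pixel maximum and minimum RGB images over a beach video.
import cv2
import numpy as np

def rgb_extremes(video_path):
    cap = cv2.VideoCapture(video_path)
    max_img, min_img = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = frame.astype(np.float32)
        if max_img is None:
            max_img, min_img = frame.copy(), frame.copy()
        else:
            np.maximum(max_img, frame, out=max_img)   # breaker zone shows up as bright areas
            np.minimum(min_img, frame, out=min_img)   # suppresses breakers, reveals bottom features
    cap.release()
    return max_img.astype(np.uint8), min_img.astype(np.uint8)
```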

  4. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    PubMed

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of the normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris densities with circular Regions of Interest (ROIs) of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We also characterized the textural features of choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density, respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
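    A small sketch of the simulated density measurement is given below: the mean of the binarized choriocapillaris image inside a circular ROI, repeated with random displacements of the ROI centre. The jitter magnitude and number of repetitions are assumptions; the optimal radius is the smallest radius whose simulated standard deviation falls below 0.01.

```python
# Sketch of the simulated subfoveal density measurement with ROI-centre jitter.
import numpy as np

def roi_density(binary_img, center, radius_px):
    yy, xx = np.ogrid[:binary_img.shape[0], :binary_img.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius_px ** 2
    return binary_img[mask].mean()

def density_std(binary_img, faz_center, radius_px, jitter_px=5, n=200, seed=0):
    """Standard deviation of densities measured with randomly displaced ROI centres."""
    rng = np.random.default_rng(seed)
    faz_center = np.asarray(faz_center)
    densities = [roi_density(binary_img,
                             faz_center + rng.integers(-jitter_px, jitter_px + 1, 2),
                             radius_px)
                 for _ in range(n)]
    return np.std(densities)

# The optimal radius is the smallest radius_px for which density_std(...) < 0.01.
```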

  5. Jupiter's Moons: Family Portrait

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This montage shows the best views of Jupiter's four large and diverse 'Galilean' satellites as seen by the Long Range Reconnaissance Imager (LORRI) on the New Horizons spacecraft during its flyby of Jupiter in late February 2007. The four moons are, from left to right: Io, Europa, Ganymede and Callisto. The images have been scaled to represent the true relative sizes of the four moons and are arranged in their order from Jupiter.

    Io, 3,640 kilometers (2,260 miles) in diameter, was imaged at 03:50 Universal Time on February 28 from a range of 2.7 million kilometers (1.7 million miles). The original image scale was 13 kilometers per pixel, and the image is centered at Io coordinates 6 degrees south, 22 degrees west. Io is notable for its active volcanism, which New Horizons has studied extensively.

    Europa, 3,120 kilometers (1,938 miles) in diameter, was imaged at 01:28 Universal Time on February 28 from a range of 3 million kilometers (1.8 million miles). The original image scale was 15 kilometers per pixel, and the image is centered at Europa coordinates 6 degrees south, 347 degrees west. Europa's smooth, icy surface likely conceals an ocean of liquid water. New Horizons obtained data on Europa's surface composition and imaged subtle surface features, and analysis of these data may provide new information about the ocean and the icy shell that covers it.

    New Horizons spied Ganymede, 5,262 kilometers (3,268 miles) in diameter, at 10:01 Universal Time on February 27 from 3.5 million kilometers (2.2 million miles) away. The original scale was 17 kilometers per pixel, and the image is centered at Ganymede coordinates 6 degrees south, 38 degrees west. Ganymede, the largest moon in the solar system, has a dirty ice surface cut by fractures and peppered by impact craters. New Horizons' infrared observations may provide insight into the composition of the moon's surface and interior.

    Callisto, 4,820 kilometers (2,995 miles) in diameter, was imaged at 03:50 Universal Time on February 28 from a range of 4.2 million kilometers (2.6 million miles). The original image scale was 21 kilometers per pixel, and the image is centered at Callisto coordinates 4 degrees south, 356 degrees west. Scientists are using the infrared spectra New Horizons gathered of Callisto's ancient, cratered surface to calibrate spectral analysis techniques that will help them to understand the surfaces of Pluto and its moon Charon when New Horizons passes them in 2015.

  6. Robust Face Detection from Still Images

    DTIC Science & Technology

    2014-01-01

    significant change in false acceptance rates. Keywords— face detection; illumination; skin color variation; Haar-like features; OpenCV I. INTRODUCTION... OpenCV and an algorithm which used histogram equalization. The test is performed against 17 subjects under 576 viewing conditions from the extended Yale...original OpenCV algorithm proved the least accurate, having a hit rate of only 75.6%. It also had the lowest FAR but only by a slight margin at 25.2

  7. Facial Attractiveness Assessment using Illustrated Questionnaires

    PubMed Central

    MESAROS, ANCA; CORNEA, DANIELA; CIOARA, LIVIU; DUDEA, DIANA; MESAROS, MICHAELA; BADEA, MINDRA

    2015-01-01

    Introduction. An attractive facial appearance is nowadays considered a decisive factor in establishing successful interactions between humans. On this topic, the scientific literature states that some facial features have more impact than others, and leading authors have shown that certain proportions between different anthropometrical landmarks are mandatory for an attractive facial appearance. Aim. Our study aims to assess whether certain facial features weigh differently in people's opinion when assessing facial attractiveness, in correlation with factors such as age, gender, specific training and culture. Material and methods. A 5-item multiple-choice illustrated questionnaire was presented to 236 dental students. Photoshop CS3 software was used to obtain the sets of images for the illustrated questions. The original image was handpicked from the internet by a panel of young dentists from a series of 15 pictures of people considered to have attractive faces. For each of the questions, the images presented simulated deviations from the ideally symmetric and proportionate face. The sets of images consisted of multiple variations of deviations mixed with the original photo. Junior and sophomore year students from our dental medical school, of different nationalities, were required to participate in our questionnaire. Simple descriptive statistics were used to interpret the data. Results. Assessing the results obtained from the questionnaire, a majority of students considered overdevelopment of the lower third unattractive, while the initial image with perfect symmetry and proportion was considered the most attractive by only 38.9% of the subjects. Likewise, regarding symmetry, 36.86% considered the canting of the inter-commissural line unattractive. The interviewed subjects considered that, for a face to be attractive, it needs harmonious proportions between the different facial elements. Conclusions. When evaluating facial attractiveness it is important to keep in mind that such assessment is subjective and influenced by multiple factors, among which the most important are cultural background and specific training. PMID:26528052

  8. Chaotic Star Birth

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [Figures removed for brevity, see original site]

    Located 1,000 light years from Earth in the constellation Perseus, a reflection nebula called NGC 1333 epitomizes the beautiful chaos of a dense group of stars being born. Most of the visible light from the young stars in this region is obscured by the dense, dusty cloud in which they formed. With NASA's Spitzer Space Telescope, scientists can detect the infrared light from these objects. This allows a look through the dust to gain a more detailed understanding of how stars like our sun begin their lives.

    The young stars in NGC 1333 do not form a single cluster, but are split between two sub-groups. One group is to the north near the nebula shown as red in the image. The other group is south, where the features shown in yellow and green abound in the densest part of the natal gas cloud. With the sharp infrared eyes of Spitzer, scientists can detect and characterize the warm and dusty disks of material that surround forming stars. By looking for differences in the disk properties between the two subgroups, they hope to find hints of the star and planet formation history of this region.

    The knotty yellow-green features located in the lower portion of the image are glowing shock fronts where jets of material, spewed from extremely young embryonic stars, are plowing into the cold, dense gas nearby. The sheer number of separate jets that appear in this region is unprecedented. This leads scientists to believe that by stirring up the cold gas, the jets may contribute to the eventual dispersal of the gas cloud, preventing more stars from forming in NGC 1333.

    In contrast, the upper portion of the image is dominated by the infrared light from warm dust, shown as red.

  9. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    PubMed

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted feature based approaches built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.
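    The superpixel route can be sketched as below, with SLIC over-segmentation and a per-superpixel classifier on simple colour statistics standing in for the deep models; the number of superpixels, the features and the random-forest classifier are assumptions, and the U-shaped FCN is not reproduced.

```python
# Sketch of superpixel-level segmentation: SLIC + per-superpixel classifier.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image, segments):
    feats = []
    for label in np.unique(segments):
        region = image[segments == label]                  # pixels of one superpixel
        feats.append(np.concatenate([region.mean(axis=0), region.std(axis=0)]))
    return np.asarray(feats)

# image: H x W x 3 float biopsy image; mask: H x W integer ground-truth layer labels
# segments = slic(image, n_segments=1500, compactness=10, start_label=0)
# X = superpixel_features(image, segments)
# y = [np.bincount(mask[segments == l]).argmax() for l in np.unique(segments)]
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```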

  10. Automated detection of nerve fiber layer defects on retinal fundus images using fully convolutional network for early diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Watanabe, Ryusuke; Muramatsu, Chisako; Ishida, Kyoko; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi

    2017-03-01

    Early detection of glaucoma is important to slow down progression of the disease and to prevent total vision loss. We have been studying an automated scheme for detection of a retinal nerve fiber layer defect (NFLD), which is one of the earliest signs of glaucoma on retinal fundus images. In our previous study, we proposed a multi-step detection scheme which consists of Gabor filtering, clustering and adaptive thresholding. The problems of the previous method were that the number of false positives (FPs) was still large and that the method included too many rules. In an attempt to solve these problems, we investigated an end-to-end learning system without pre-specified features. A deep convolutional neural network (DCNN) with deconvolutional layers was trained to detect NFLD regions. In this preliminary investigation, we examined effective ways of preparing the input images and compared the detection results. The optimal result was then compared with the result obtained by the previous method. DCNN training was carried out using original images of abnormal cases, original images of both normal and abnormal cases, ellipse-based polar transformed images, and transformed half images. The results showed that use of both normal and abnormal cases increased the sensitivity as well as the number of FPs. Although NFLDs are visualized with the highest contrast in the green plane, the use of color images provided higher sensitivity than the use of the green image only. The free response receiver operating characteristic curve using the transformed color images, which was the best among the seven different sets studied, was comparable to that of the previous method. Use of a DCNN has the potential to improve the generalizability of automated NFLD detection methods and may be useful in assisting glaucoma diagnosis on retinal fundus images.

  11. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
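    The first step of the pipeline, pixel difference vector extraction, is sketched below for a 3 x 3 neighbourhood; the learned binary mapping (variance maximization, quantization loss, even bin occupancy) is only summarized by a sign binarization with a hypothetical projection W.

```python
# Sketch of pixel difference vector (PDV) extraction for CBFD-style learning.
import numpy as np

def pixel_difference_vectors(patch):
    """Return an (N, 8) array of differences between each pixel and its 8 neighbours."""
    h, w = patch.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    pdvs = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            pdvs.append([patch[i + di, j + dj] - patch[i, j] for di, dj in offsets])
    return np.asarray(pdvs, dtype=float)

# A learned projection W (hypothetical here) would map each PDV to a short code;
# codes = (pixel_difference_vectors(patch) @ W > 0) approximates the binarization,
# and pooling the codes into a histogram gives the per-face representation.
```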

  12. Portable Hyperspectral Imaging Broadens Sensing Horizons

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Broadband multispectral imaging can be very helpful in showing differences in energy being radiated and is often employed by NASA satellites to monitor temperature and climate changes. In addition, hyperspectral imaging is ideal for advanced laboratory uses, biomedical imaging, forensics, counter-terrorism, skin health, food safety, and Earth imaging. Lextel Intelligence Systems, LLC, of Jackson, Mississippi purchased Photon Industries Inc., a spinoff company of NASA's Stennis Space Center and the Institute for Technology Development dedicated to developing new hyperspectral imaging technologies. Lextel has added new features to and expanded the applicability of the hyperspectral imaging systems. It has made advances in the size, usability, and cost of the instruments. The company now offers a suite of turnkey hyperspectral imaging systems based on the original NASA groundwork. It currently has four lines of hyperspectral imaging products: the EagleEye VNIR 100E, the EagleEye SWIR 100E, the EagleEye SWIR 200E, and the EagleEye UV 100E. These Lextel instruments are used worldwide for a wide variety of applications including medical, military, forensics, and food safety.

  13. Quantifying the effect of colorization enhancement on mammogram images

    NASA Astrophysics Data System (ADS)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological displays provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250 - 1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map, which follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) is analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.

  14. Featured Image: A Galaxy Plunges Into a Cluster Core

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2015-10-01

    The galaxy that takes up most of the frame in this stunning image is NGC 1427A. This is a dwarf irregular galaxy (unlike the fortuitously-located background spiral galaxy in the lower right corner of the image), and it is currently in the process of plunging into the center of the Fornax galaxy cluster. Marcelo Mora (Pontifical Catholic University of Chile) and collaborators have analyzed observations of this galaxy made by both the Very Large Telescope in Chile and the Hubble Advanced Camera for Surveys, which produced the image shown here as a color composite in three channels. The team worked to characterize the clusters of star formation within NGC 1427A, identifiable in the image as bright knots within the galaxy, and to determine how the interactions of this galaxy with its cluster environment affect the star formation within it. For more information and the original image, see the paper below. Citation: Marcelo D. Mora et al 2015 AJ 150 93. doi:10.1088/0004-6256/150/3/93

  15. Understanding refraction contrast using a comparison of absorption and refraction computed tomographic techniques

    NASA Astrophysics Data System (ADS)

    Wiebe, S.; Rhoades, G.; Wei, Z.; Rosenberg, A.; Belev, G.; Chapman, D.

    2013-05-01

    Refraction x-ray contrast is an imaging modality used primarily in a research setting at synchrotron facilities that have a biomedical imaging research program. The most common method for exploiting refraction contrast is a technique called Diffraction Enhanced Imaging (DEI). The DEI apparatus allows the detection of refraction between two materials and produces a unique 'edge enhanced' contrast appearance, very different from the traditional absorption x-ray imaging used in clinical radiology. In this paper we aim to explain the features of x-ray refraction contrast in terms a typical clinical radiologist would understand. We then discuss what needs to be considered in the interpretation of the refraction image. Finally, we discuss the limitations of planar refraction imaging and the potential of DEI computed tomography. This is an original work that has not been submitted to any other source for publication. The authors have no commercial interests or conflicts of interest to disclose.

  16. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored images. Image edge detection has been a fundamental tool for obtaining image features. This approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on the sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Fast-moving features in the debris disk around AU Microscopii.

    PubMed

    Boccaletti, Anthony; Thalmann, Christian; Lagrange, Anne-Marie; Janson, Markus; Augereau, Jean-Charles; Schneider, Glenn; Milli, Julien; Grady, Carol; Debes, John; Langlois, Maud; Mouillet, David; Henning, Thomas; Dominik, Carsten; Maire, Anne-Lise; Beuzit, Jean-Luc; Carson, Joseph; Dohlen, Kjetil; Engler, Natalia; Feldt, Markus; Fusco, Thierry; Ginski, Christian; Girard, Julien H; Hines, Dean; Kasper, Markus; Mawet, Dimitri; Ménard, François; Meyer, Michael R; Moutou, Claire; Olofsson, Johan; Rodigas, Timothy; Sauvage, Jean-Francois; Schlieder, Joshua; Schmid, Hans Martin; Turatto, Massimo; Udry, Stephane; Vakili, Farrokh; Vigan, Arthur; Wahhaj, Zahed; Wisniewski, John

    2015-10-08

    In the 1980s, excess infrared emission was discovered around main-sequence stars; subsequent direct-imaging observations revealed orbiting disks of cold dust to be the source. These 'debris disks' were thought to be by-products of planet formation because they often exhibited morphological and brightness asymmetries that may result from gravitational perturbation by planets. This was proved to be true for the β Pictoris system, in which the known planet generates an observable warp in the disk. The nearby, young, unusually active late-type star AU Microscopii hosts a well-studied edge-on debris disk; earlier observations in the visible and near-infrared found asymmetric localized structures in the form of intensity variations along the midplane of the disk beyond a distance of 20 astronomical units. Here we report high-contrast imaging that reveals a series of five large-scale features in the southeast side of the disk, at projected separations of 10-60 astronomical units, persisting over intervals of 1-4 years. All these features appear to move away from the star at projected speeds of 4-10 kilometres per second, suggesting highly eccentric or unbound trajectories if they are associated with physical entities. The origin, localization, morphology and rapid evolution of these features are difficult to reconcile with current theories.

  18. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.

  19. Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.

    PubMed

    Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning

    2017-01-01

    Differentiation of glioblastoma multiformes (GBMs) and lymphomas using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is a challenge because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect the treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build an original feature vector and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was 96.84%, which indicates that our method is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
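    A compact sketch of the three steps is shown below, assuming the ROI features are already extracted; the t-test significance threshold and the SVM kernel are assumptions, and in practice the feature screening would be nested inside the cross-validation loop to avoid optimistic bias.

```python
# Sketch: t-test screening of candidate features, then SVM with leave-one-out CV.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

def classify_gbm_vs_lymphoma(X, y, alpha=0.05):
    """X: (n_cases, n_features) ROI features; y: numpy array, 0 = GBM, 1 = lymphoma."""
    # Keep features whose distributions differ significantly between the two classes.
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    X_sel = X[:, p < alpha]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X_sel, y, cv=LeaveOneOut()).mean()
```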

  20. An embedded face-classification system for infrared images on an FPGA

    NASA Astrophysics Data System (ADS)

    Soto, Javier E.; Figueroa, Miguel

    2014-10-01

    We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power, can recognize faces in real time and can be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 images of 81 x 150 pixels for each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second, and consumes only 309 mW.
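    A software counterpart of the pipeline is sketched below: non-rotation-invariant uniform LBP (59 codes for 8 neighbours), an 8 x 8 grid of regional histograms, LDA projection, and nearest-neighbour matching with the Manhattan distance. The FPGA-specific details are of course not represented, and the grid and neighbourhood parameters are taken from the description above.

```python
# Software sketch of uniform LBP + regional histograms + LDA + Manhattan matching.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lbp_spatial_histogram(img, grid=(8, 8), P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="nri_uniform")  # 59 codes for P=8
    n_bins = P * (P - 1) + 3                                       # = 59
    h, w = img.shape
    gh, gw = h // grid[0], w // grid[1]
    hist = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            hist.append(np.histogram(block, bins=n_bins, range=(0, n_bins))[0])
    return np.concatenate(hist).astype(float)                      # spatially enhanced histogram

# X = np.stack([lbp_spatial_histogram(im) for im in train_images]); y = np.asarray(train_labels)
# lda = LinearDiscriminantAnalysis().fit(X, y); gallery = lda.transform(X)
# query = lda.transform(lbp_spatial_histogram(test_image)[None])
# match = y[np.argmin(np.abs(gallery - query).sum(axis=1))]        # Manhattan distance
```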

  1. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-10-30

    This image shows part of the southern flank of Pavonis Mons. The linear and sinuous features mark the locations of lava tubes and graben that occur on both sides of the volcano along a regional trend that passes through Pavonis Mons, Ascraeus Mons (to the north), and Arsia Mons (to the south). The majority of the features are believed to be lava tubes where the ceiling has collapsed into the free space below. This often happens starting in a circular pit and then expanding along the length of the tube until the entire ceiling of material has collapsed into the bottom of the tube. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield-type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger one. Like most shield volcanoes the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 7245 Latitude: -0.895004 Longitude: 246.225 Instrument: VIS Captured: 2003-08-02 22:23 https://photojournal.jpl.nasa.gov/catalog/PIA22016

  2. Prediction of Occult Invasive Disease in Ductal Carcinoma in Situ Using Deep Learning Features.

    PubMed

    Shi, Bibo; Grimm, Lars J; Mazurowski, Maciej A; Baker, Jay A; Marks, Jeffrey R; King, Lorraine M; Maley, Carlo C; Hwang, E Shelley; Lo, Joseph Y

    2018-03-01

    The aim of this study was to determine whether deep features extracted from digital mammograms using a pretrained deep convolutional neural network are prognostic of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. In this retrospective study, digital mammographic magnification views were collected for 99 subjects with DCIS at biopsy, 25 of which were subsequently upstaged to invasive cancer. A deep convolutional neural network model that was pretrained on nonmedical images (eg, animals, plants, instruments) was used as the feature extractor. Through a statistical pooling strategy, deep features were extracted at different levels of convolutional layers from the lesion areas, without sacrificing the original resolution or distorting the underlying topology. A multivariate classifier was then trained to predict which tumors contain occult invasive disease. This was compared with the performance of traditional "handcrafted" computer vision (CV) features previously developed specifically to assess mammographic calcifications. The generalization performance was assessed using Monte Carlo cross-validation and receiver operating characteristic curve analysis. Deep features were able to distinguish DCIS with occult invasion from pure DCIS, with an area under the receiver operating characteristic curve of 0.70 (95% confidence interval, 0.68-0.73). This performance was comparable with the handcrafted CV features (area under the curve = 0.68; 95% confidence interval, 0.66-0.71) that were designed with prior domain knowledge. Despite being pretrained on only nonmedical images, the deep features extracted from digital mammograms demonstrated comparable performance with handcrafted CV features for the challenging task of predicting DCIS upstaging. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
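
    As a rough software illustration of this kind of transfer-learning feature extraction (not the authors' implementation: the ResNet-18 backbone, the per-channel mean/std pooling, and the logistic-regression classifier are stand-ins chosen for brevity, and ImageNet normalization is omitted), the following Python sketch pools intermediate convolutional activations of a pretrained network into a fixed-length descriptor per lesion and trains a classifier to predict upstaging.

      import numpy as np
      import torch
      import torchvision
      from sklearn.linear_model import LogisticRegression

      # Pretrained backbone used as a fixed feature extractor (no fine-tuning).
      backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
      conv_stack = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc

      def deep_features(lesion_patch):
          # lesion_patch: 2-D float array cropped around the lesion, scaled to [0, 1].
          x = torch.from_numpy(lesion_patch).float()
          x = x.expand(1, 3, *x.shape)                 # replicate grayscale into 3 channels
          with torch.no_grad():
              fmap = conv_stack(x).squeeze(0)          # (C, h, w) convolutional activations
          flat = fmap.reshape(fmap.shape[0], -1)
          # "Statistical pooling": per-channel mean and standard deviation over positions.
          return torch.cat([flat.mean(dim=1), flat.std(dim=1)]).numpy()

      def train_upstaging_classifier(lesion_patches, upstaged_labels):
          X = np.stack([deep_features(p) for p in lesion_patches])
          return LogisticRegression(max_iter=1000).fit(X, upstaged_labels)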

  3. A 'Pot of Gold' Rich with Nuggets

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This close-up image taken by the Mars Exploration Rover Spirit highlights the nodular nuggets that cover the rock dubbed 'Pot of Gold.' These nuggets appear to stand on the end of stalk-like features. The surface of the rock is dotted with fine-scale pits. Data from the rover's scientific instruments have shown that Pot of Gold contains the mineral hematite, which can be formed with or without water.

    Scientists are planning further observations of this rock, which they hope will yield more insight into the hematite's origins as well as how the enigmatic nuggets formed.

    This image was taken by Spirit's microscopic imager on sol 162 (June 17, 2004). The observed area is 3 centimeters by 3 centimeters (1.2 inches by 1.2 inches).

  4. A 'Pot of Gold' Rich with Nuggets (Sol 163-2)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This close-up image taken by the Mars Exploration Rover Spirit highlights the nodular nuggets that cover the rock dubbed 'Pot of Gold.' These nuggets appear to stand on the end of stalk-like features. The surface of the rock is dotted with fine-scale pits. Data from the rover's scientific instruments have shown that Pot of Gold contains the mineral hematite, which can be formed with or without water.

    Scientists are planning further observations of this rock, which they hope will yield more insight into the hematite's origins as well as how the enigmatic nuggets formed.

    This image was taken by Spirit's microscopic imager on sol 163 (June 18, 2004). The observed area is 3 centimeters by 3 centimeters (1.2 inches by 1.2 inches).

  5. A 'Pot of Gold' Rich with Nuggets (Sol 163)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This close-up image taken by the Mars Exploration Rover Spirit highlights the nodular nuggets that cover the rock dubbed 'Pot of Gold.' These nuggets appear to stand on the end of stalk-like features. The surface of the rock is dotted with fine-scale pits. Data from the rover's scientific instruments have shown that Pot of Gold contains the mineral hematite, which can be formed with or without water.

    Scientists are planning further observations of this rock, which they hope will yield more insight into the hematite's origins as well as how the enigmatic nuggets formed.

    This image was taken by Spirit's microscopic imager on sol 163 (June 18, 2004). The observed area is 3 centimeters by 3 centimeters (1.2 inches by 1.2 inches).

  6. 3. Credit USAF, ca. 1945. Original housed in the Records ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Credit USAF, ca. 1945. Original housed in the Records of the Defense Intelligence Agency. Record Group 373. National Archives. Cartographic and Architectural Branch. Washington, D.C. Aerial orthophoto map 16PS5M79-IV23 of Muroc Flight Test Base (North Base), north faces up with runway at the top and Rogers Dry Lake at the lower right. Ammunition huts (not extant in 1995) appear in a cluster just south of the west end of the runway. Note runway markings on lakebed. Linear feature at very top of image is rocket sled test track designed and built 1944-1945. - Edwards Air Force Base, North Base, North Base Road, Boron, Kern County, CA

  7. Mid-Infrared Observations of Possible Intergalactic Star Forming Regions in the Leo Ring

    NASA Astrophysics Data System (ADS)

    Giroux, Mark; Smith, B.; Struck, C.

    2011-05-01

    Within the Leo group of galaxies lies a gigantic loop of intergalactic gas known as the Leo Ring. The ring is not clearly associated with any particular galaxy, and its origin remains uncertain. It may be a primordial intergalactic cloud; alternatively, it may be a collision ring or have a tidal origin. Combining archival Spitzer images of this structure with published UV and optical data, we investigate the mid-infrared properties of possible knots of star formation in the ring. These sources are very faint in the mid-infrared compared to star-forming regions in the tidal features of interacting galaxies. This suggests they are either deficient in dust, or they may not be associated with the ring.

  8. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-11-03

    This image shows part of the southeastern flank of Pavonis Mons. Surface lava flows run downhill from the top left of the image to the bottom right. Perpendicular to that trend are several linear features. These are faults that encircle the volcano and also run along the linear trend through the three Tharsis volcanoes. This image illustrates how subsurface lava tubes collapse into the free space of the empty tube. Just above the deepest depression is a series of circular pits. The pits coalesce into a linear feature near the left side of the deepest depression. The formation of a lava tube starts with a surface lava flow. The sides and top of the flow cool faster than the center, eventually forming a solid, non-flowing cover over the still-flowing lava. The surface flow may have followed the deeper fault-block graben (a surface lower than the surroundings). Once the flow stops, an empty space remains beneath the solidified cover, and collapse of the top of the tube starts as small pits, which coalesce into linear features. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield-type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian Islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger one. Like most shield volcanoes, the surface has a low profile; in the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times, and holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering each entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 31330 Latitude: -1.26587 Longitude: 247.705 Instrument: VIS Captured: 2009-01-05 23:32 https://photojournal.jpl.nasa.gov/catalog/PIA22021

  9. Acidalia Planitia

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site]

    The small mounds with summit depressions in the northern portion of this image have an unknown origin. Some scientists think they may be cinder cones, while others think they may be pseudocraters, formed by the interaction of lava and ice. These features are also observed in other areas of Mars' northern plains, such as Isidis Planitia.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

    Image information: VIS instrument. Latitude XX, Longitude XX East (XX West). 19 meter/pixel resolution.

  10. A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.

    1991-01-01

    In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post-flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and in processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER-2 platform. The QVS2 is designed around an AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, a keyboard, and 4.0 Mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.

  11. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635

  12. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE.
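
    As a software-level illustration of the final fusion step (Python with a recent scikit-learn; the feature matrices and names below are placeholders rather than the paper's actual descriptors), the sketch trains an AdaBoost classifier with depth-1 decision trees on concatenated SAR and IR candidate features, so that boosting implicitly selects the most discriminative features from either sensor.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier

      def fuse_and_classify(sar_feats, ir_feats, labels):
          # sar_feats, ir_feats: (n_candidates, n_sar) and (n_candidates, n_ir) feature arrays
          # computed from the registered SAR and IR detections; labels: 1 = target, 0 = clutter.
          X = np.hstack([sar_feats, ir_feats])
          clf = AdaBoostClassifier(
              estimator=DecisionTreeClassifier(max_depth=1),   # stumps: one feature per round
              n_estimators=50,
          ).fit(X, labels)
          selected = np.nonzero(clf.feature_importances_)[0]   # features boosting actually used
          return clf, selected

      # At test time, clf.predict(np.hstack([new_sar, new_ir])) fuses both sensors per candidate.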

  13. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-11-02

    This image shows part of the two summit calderas of Pavonis Mons. The surface in the majority of the image is the floor of the larger caldera. The smaller caldera occupies the bottom of the image. In both calderas the floor is predominantly flat. The final summit flow would have pooled in the caldera and cooled, forming the flat floor. Pavonis Mons is one of the three aligned Tharsis volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield-type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layer upon layer of lava. The Hawaiian Islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14 km above the mean Mars surface level with a width of 375 km. It has a complex summit caldera, with the smaller caldera deeper than the larger one. Like most shield volcanoes, the surface has a low profile; in the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times, and holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering each entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 17590 Latitude: 1.13446 Longitude: 247.411 Instrument: VIS Captured: 2005-12-01 17:26 https://photojournal.jpl.nasa.gov/catalog/PIA22020

  14. Analysis of Radar Images of Angkor, Cambodia

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Hensley, Scott; Moore, Elizabeth

    2000-01-01

    During the 1996 AIRSAR Pacific Rim Deployment, data were collected over Angkor in Cambodia. The temples of Angkor date the succession of cities there to the 9th-13th centuries AD, but little is known of the site's prehistoric habitation. A related area of archaeological debate has been the origin, spiritual meaning, and use of the hydraulic constructions in the urban zone. The high-resolution, multi-channel capability of AIRSAR, together with the unprecedentedly accurate topography provided by TOPSAR, enables identification and delineation of these features. Examples include previously unrecorded circular earthworks around circular village sites, detection of unrecorded earthwork dykes, reservoirs, and canal features, and of temple sites located some distance from the main temple complex at Angkor.

  15. Atypical progression of multiple myeloma with extensive extramedullary disease.

    PubMed Central

    Jowitt, S N; Jacobs, A; Batman, P A; Sapherson, D A

    1994-01-01

    Multiple myeloma is a neoplastic disorder caused by the proliferation of a transformed B lymphoid progenitor cell that gives rise to a clone of immunoglobulin-secreting cells. Other plasma cell tumours include solitary plasmacytoma of bone (SPB) and extramedullary plasmacytomas (EMP). Despite an apparent common origin there exist pathological and clinical differences between these neoplasms and the association between them is not completely understood. A case of IgG multiple myeloma that presented with typical clinical and laboratory features, including a bone marrow infiltrated by well differentiated plasma cells, is reported. The tumour had an unusual evolution, with the development of extensive extramedullary disease while maintaining mature histological features. PMID:8163701

  16. Bias correction for magnetic resonance images via joint entropy regularization.

    PubMed

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms have been proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the two proposed methods can effectively remove the bias field and present performance comparable to that of the state-of-the-art methods.

  17. Tchebichef moment based restoration of Gaussian blurred images.

    PubMed

    Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C

    2016-11-10

    With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.
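
    A minimal numerical sketch of the moment computation (Python with NumPy and SciPy, my own illustration rather than the authors' code): the discrete orthonormal Tchebichef basis can be obtained by orthonormalizing the monomials on the pixel grid, and the blur-sensitive feature vector is the difference between low-order moments of the image and of a reblurred copy; the reblurring sigma and the moment order below are arbitrary placeholders.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def tchebichef_basis(N, order):
          # Columns of Q are the first `order` discrete orthonormal polynomials on {0, ..., N-1},
          # which coincide (up to sign) with the normalized Tchebichef polynomials.
          x = np.arange(N, dtype=float)
          V = np.vander(x, order, increasing=True)     # columns 1, x, x^2, ...
          Q, _ = np.linalg.qr(V)
          return Q                                      # shape (N, order)

      def tchebichef_moments(img, order=4):
          Qy = tchebichef_basis(img.shape[0], order)
          Qx = tchebichef_basis(img.shape[1], order)
          return Qy.T @ img @ Qx                        # (order, order) matrix of moments T_mn

      def blur_feature_vector(img, reblur_sigma=1.0, order=4):
          # Difference between the moments of the image and of a reblurred version,
          # used as the input feature vector for the blur-parameter regressor.
          t_orig = tchebichef_moments(img, order)
          t_blur = tchebichef_moments(gaussian_filter(img, reblur_sigma), order)
          return (t_orig - t_blur).ravel()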

  18. Yardangs: Nature's Weathervanes

    NASA Image and Video Library

    2017-11-28

    The prominent tear-shaped features in this image from NASA's Mars Reconnaissance Orbiter (MRO) are erosional features called yardangs. Yardangs are composed of sand grains that have clumped together and have become more resistant to erosion than their surrounding materials. As the winds of Mars blow and erode away at the landscape, the more cohesive rock is left behind as a standing feature. (This Context Camera image shows several examples of yardangs that overlie the darker iron-rich material that makes up the lava plains in the southern portion of Elysium Planitia.) Resistant as they may be, the yardangs are not permanent, and will eventually be eroded away by the persistence of the Martian winds. For scientists observing the Red Planet, yardangs serve as a useful indicator of regional prevailing wind direction. The sandy structures are slowly eroded down and carved into elongated shapes that point in the downwind direction, like giant weathervanes. In this instance, the yardangs are all aligned, pointing towards north-northwest. This shows that the winds in this area generally gust in that direction. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 55.8 centimeters (21 inches) per pixel (with 2 x 2 binning); objects on the order of 167 centimeters (65.7 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22119

  19. Diabetic retinopathy grading by digital curvelet transform.

    PubMed

    Hajeb Mohammad Alipour, Shirin; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2012-01-01

    One of the major complications of diabetes is diabetic retinopathy. As manual analysis and diagnosis of large amounts of images are time consuming, automatic detection and grading of diabetic retinopathy are desired. In this paper, we use fundus fluorescein angiography and color fundus images simultaneously, extract 6 features employing the curvelet transform, and feed them to a support vector machine in order to determine diabetic retinopathy severity stages. These features are the area of blood vessels; the area and regularity of the foveal avascular zone and the number of micro-aneurysms therein; the total number of micro-aneurysms; and the area of exudates. In order to extract exudates and vessels, we respectively modify curvelet coefficients of color fundus images and angiograms. The end points of extracted vessels in a predefined region of interest based on the optic disk are connected together to segment the foveal avascular zone region. To extract micro-aneurysms from the angiogram, first the extracted vessels are subtracted from the original image, and after removing the detected background by morphological operators and enhancing bright small pixels, micro-aneurysms are detected. 70 patients were involved in this study to classify diabetic retinopathy into 3 groups, that is, (1) no diabetic retinopathy, (2) mild/moderate nonproliferative diabetic retinopathy, (3) severe nonproliferative/proliferative diabetic retinopathy, and our simulations show that the proposed system has sensitivity and specificity of 100% for grading.

  20. SPECTROSCOPY ALONG MULTIPLE, LENSED SIGHT LINES THROUGH OUTFLOWING WINDS IN THE QUASAR SDSS J1029+2623

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Misawa, Toru; Inada, Naohisa; Ohsuga, Ken

    2013-02-01

    We study the origin of absorption features on the blue side of the C IV broad emission line of the large-separation lensed quasar SDSS J1029+2623 at z_em ≈ 2.197. The quasar images, produced by a foreground cluster of galaxies, have a maximum separation angle of θ ≈ 22.5 arcsec. The large angular separation suggests that the sight lines to the quasar central source can go through different regions of outflowing winds from the accretion disk of the quasar, providing a unique opportunity to study the structure of outflows from the accretion disk, a key ingredient for the evolution of quasars as well as for galaxy formation and evolution. Based on medium- and high-resolution spectroscopy of the two brightest images conducted at the Subaru telescope, we find that each image has different intrinsic levels of absorption, which can be attributed either to variability of absorption features over the time delay between the lensed images, Δt ≈ 744 days, or to the fine structure of quasar outflows probed by the multiple sight lines toward the quasar. While both these scenarios are consistent with the current data, we argue that they can be distinguished with additional spectroscopic monitoring observations.

  1. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: ultrasound (US) with 1126 cases, dynamic contrast-enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high-dimensional feature spaces to more human-interpretable lower-dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced-dimension mapped feature output as input into both linear and nonlinear classifiers: a Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large US data set, sample high-performance results include AUC(0.632+) = 0.88 with 95% empirical bootstrap interval [0.787; 0.895] for 13 ARD-selected features and AUC(0.632+) = 0.87 with interval [0.817; 0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC(0.632+) = 0.90 with interval [0.847; 0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower-dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
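
    For readers who want to experiment with this kind of pipeline, a rough Python sketch using scikit-learn follows (my own illustration; the embedding dimension and the LDA classifier are arbitrary stand-ins, and scikit-learn's t-SNE has no out-of-sample transform, so a rigorous CADx evaluation would need parametric or out-of-sample extensions rather than this transductive shortcut).

      import numpy as np
      from sklearn.manifold import TSNE, SpectralEmbedding    # SpectralEmbedding = Laplacian eigenmaps
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def reduce_and_classify(features, malignant, method="tsne", n_dims=4):
          # features:  (n_lesions, n_features) computer-extracted lesion descriptors
          # malignant: binary truth labels from pathology
          if method == "tsne":
              embedded = TSNE(n_components=n_dims, method="exact").fit_transform(features)
          else:
              embedded = SpectralEmbedding(n_components=n_dims).fit_transform(features)
          # Classify in the reduced space; LDA stands in here for the MCMC-BANN of the paper.
          aucs = cross_val_score(LinearDiscriminantAnalysis(), embedded, malignant,
                                 cv=5, scoring="roc_auc")
          return embedded, aucs.mean()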

  2. Natural image classification driven by human brain activity

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao

    2016-03-01

    Natural image classification has been a hot topic in the computer vision and pattern recognition research field. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading the feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals collected using fMRI while human subjects were viewing natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features with the guidance of fMRI signals from brain regions with active response to image viewing. Particularly, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select the image features that best predict the fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments classifying images from 4 categories, using data from two subjects, demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
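
    A compact sketch of the selection idea (Python with scikit-learn; illustrative only, with a multi-output Lasso standing in for whichever sparse regression the authors actually used): image features whose columns receive non-zero weight when regressing the fMRI responses are kept, and a conventional classifier is then trained on those features alone, so no fMRI data are needed at test time.

      import numpy as np
      from sklearn.linear_model import MultiTaskLasso
      from sklearn.svm import LinearSVC

      def fmri_guided_selection(image_feats, fmri_resp, labels, alpha=0.05):
          # image_feats: (n_images, n_feats) bag-of-words/GIST descriptors
          # fmri_resp:   (n_images, n_voxels) responses from visually active brain regions
          # labels:      (n_images,) image category labels
          lasso = MultiTaskLasso(alpha=alpha, max_iter=5000).fit(image_feats, fmri_resp)
          keep = np.any(lasso.coef_ != 0, axis=0)      # features predictive of brain activity
          clf = LinearSVC().fit(image_feats[:, keep], labels)
          return keep, clf

      # At test time only image features are required:
      # predictions = clf.predict(test_feats[:, keep])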

  3. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.

  4. Thermal Analysis of Unusual Local-scale Features on the Surface of Vesta

    NASA Technical Reports Server (NTRS)

    Tosi, F.; Capria, M. T.; DeSanctis, M. C.; Capaccioni, F.; Palomba, E.; Zambon, F.; Ammannito, E.; Blewett, D. T.; Combe, J.-Ph.; Denevi, B. W.; hide

    2013-01-01

    At 525 km in mean diameter, Vesta is the second-most massive object in the main asteroid belt of our Solar System. At all scales, pyroxene absorptions are the most prominent spectral features on Vesta and overall, Vesta mineralogy indicates a complex magmatic evolution that led to a differentiated crust and mantle [1]. The thermal behavior of areas of unusual albedo seen on the surface at the local scale can be related to physical properties that can provide information about the origin of those materials. Dawn's Visible and Infrared Mapping Spectrometer (VIR) [2] hyperspectral images are routinely used, by means of temperature-retrieval algorithms, to compute surface temperatures along with spectral emissivities. Here we present temperature maps of several local-scale features of Vesta that were observed by Dawn under different illumination conditions and different local solar times.

  5. When Closure Fails: What the Radiologist Needs to Know About the Embryology, Anatomy, and Prenatal Imaging of Ventral Body Wall Defects.

    PubMed

    Torres, Ulysses S; Portela-Oliveira, Eduardo; Braga, Fernanda Del Campo Braojos; Werner, Heron; Daltro, Pedro Augusto Nascimento; Souza, Antônio Soares

    2015-12-01

    Ventral body wall defects (VBWDs) are one of the main categories of human congenital malformations, representing a wide and heterogeneous group of defects sharing a common feature, that is, herniation of one or more viscera through a defect in the anterior body wall. Gastroschisis and omphalocele are the 2 most common congenital VBWDs. Other uncommon anomalies include ectopia cordis and pentalogy of Cantrell, limb-body wall complex, and bladder and cloacal exstrophy. Although VBWDs are associated with multiple abnormalities that have distinct embryological origins and may affect virtually any organ system, at least in relation to anterior body wall defects they are thought (except for omphalocele) to share a common embryologic mechanism, that is, a failure involving the lateral body wall folds responsible for closing the thoracic, abdominal, and pelvic portions of the ventral body wall during the fourth week of development. Additionally, many of the principles of diagnosis and management are similar for these conditions. Fetal ultrasound (US) in prenatal care allows the diagnosis of most such defects with subsequent opportunities for parental counseling and optimal perinatal management. Fetal magnetic resonance imaging may be an adjunct to US, providing global and detailed anatomical information, assessing the extent of defects, and also helping to confirm the diagnosis in equivocal cases. Prenatal imaging features of VBWDs may be complex and challenging, often requiring from the radiologist a high level of suspicion and familiarity with the imaging patterns. Because appropriate management depends on an accurate diagnosis and assessment of defects, radiologists should be able to recognize and distinguish between the different VBWDs and their associated anomalies. In this article, we review the relevant embryology of VBWDs to facilitate understanding of the pathologic anatomy and diagnostic imaging approach. Features will be illustrated with prenatal US and magnetic resonance imaging and correlated with postnatal and clinical imaging. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Two Faces of Pluto

    NASA Image and Video Library

    2015-07-01

    This pair of approximately true color images of Pluto and its big moon Charon, taken by NASA's New Horizons spacecraft, highlight the dramatically different appearance of different sides of the dwarf planet, and reveal never-before-seen details on Pluto's varied surface. The views were made by combining high-resolution black-and-white images from the Long Range Reconnaissance Imager (LORRI) with color information from the lower-resolution color camera that is part of the Ralph instrument. The left-hand image shows the side of Pluto that always faces away from Charon -- this is the side that will be seen at highest resolution by New Horizons when it makes its close approach to Pluto on July 14th. This hemisphere is dominated by a very dark region that extends along the equator and is redder than its surroundings, alongside a strikingly bright, paler-colored region which straddles the equator on the right-hand side of the disk. The opposite hemisphere, the side that faces Charon, is seen in the right-hand image. The most dramatic feature on this side of Pluto is a row of dark dots arranged along the equator. The origin of all these features is still mysterious, but may be revealed in the much more detailed images that will be obtained as the spacecraft continues its approach to Pluto. In both images, Charon shows a darker and grayer color than Pluto, and a conspicuous dark polar region. The left-hand image was obtained at 5:37 UT on June 25th 2015, at a distance from Pluto of 22.9 million kilometers (14.3 million miles) and has a central longitude of 152 degrees. The right-hand image was obtained at 23:15 UT on June 27th 2015, at a distance from Pluto of 19.7 million kilometers (12.2 million miles) with a central longitude of 358 degrees. Insets show the orientation of Pluto in each image -- the solid lines mark the equator and the prime meridian, which is defined to be the longitude that always faces Charon. The smallest visible features are about 200 km (120 miles) across. http://photojournal.jpl.nasa.gov/catalog/PIA19693

  7. The influence of stimulus format on drawing--a functional imaging study of decision making in portrait drawing.

    PubMed

    Miall, R C; Nam, Se-Ho; Tchalenko, J

    2014-11-15

    To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye-hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. Copyright © 2014. Published by Elsevier Inc.

  8. Surface mineral maps of Afghanistan derived from HyMap imaging spectrometer data, version 2

    USGS Publications Warehouse

    Kokaly, Raymond F.; King, Trude V.V.; Hoefen, Todd M.

    2013-01-01

    This report presents a new version of surface mineral maps derived from HyMap imaging spectrometer data collected over Afghanistan in the fall of 2007. This report also describes the processing steps applied to the imaging spectrometer data. The 218 individual flight lines composing the Afghanistan dataset, covering more than 438,000 square kilometers, were georeferenced to a mosaic of orthorectified Landsat images. The HyMap data were converted from radiance to reflectance using a radiative transfer program in combination with ground-calibration sites and a network of cross-cutting calibration flight lines. The U.S. Geological Survey Material Identification and Characterization Algorithm (MICA) was used to generate two thematic maps of surface minerals: a map of iron-bearing minerals and other materials, which have their primary absorption features at the shorter wavelengths of the reflected solar wavelength range, and a map of carbonates, phyllosilicates, sulfates, altered minerals, and other materials, which have their primary absorption features at the longer wavelengths of the reflected solar wavelength range. In contrast to the original version, version 2 of these maps is provided at full resolution of 23-meter pixel size. The thematic maps, MICA summary images, and the material fit and depth images are distributed in digital files linked to this report, in a format readable by remote sensing software and Geographic Information Systems (GIS). The digital files can be downloaded from http://pubs.usgs.gov/ds/787/downloads/.
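
    As a rough illustration of the kind of absorption-feature measurement that underlies such mineral maps (not the actual MICA implementation; the wavelength window in the example is a placeholder), the snippet below computes a continuum-removed spectrum over a feature and reports the depth and position of its deepest absorption.

      import numpy as np

      def band_depth(wavelengths, reflectance, lo, hi):
          # Continuum removal over [lo, hi] micrometers: divide by the straight line joining
          # the feature shoulders, then report 1 - minimum of the continuum-removed spectrum.
          sel = (wavelengths >= lo) & (wavelengths <= hi)
          w, r = wavelengths[sel], reflectance[sel]
          continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
          removed = r / continuum
          i = np.argmin(removed)
          return 1.0 - removed[i], w[i]                 # (feature depth, band-center wavelength)

      # Example: depth of the ~2.2 micrometer Al-OH feature typical of clays/phyllosilicates:
      # depth, center = band_depth(wl, spectrum, 2.1, 2.3)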

  9. To Great Depths

    NASA Image and Video Library

    2017-03-22

    Hellas is an ancient impact structure and is the deepest and broadest enclosed basin on Mars. It measures about 2,300 kilometers across and the floor of the basin, Hellas Planitia, contains the lowest elevations on Mars. The Hellas region can often be difficult to view from orbit due to seasonal frost, water-ice clouds and dust storms, yet this region is intriguing because of its diverse, and oftentimes bizarre, landforms. This image from eastern Hellas Planitia shows some of the unusual features on the basin floor. These relatively flat-lying "cells" appear to have concentric layers or bands, similar to a honeycomb. This "honeycomb" terrain exists elsewhere in Hellas, but the geologic process responsible for creating these features remains unresolved. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 157 centimeters (61.8 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21570

  10. Multiple Schwannomas of the Spine: Review of the Schwannomatosis or Congenital Neurilemmomatosis: A Case Report.

    PubMed

    Lee, Sang-Hoon; Kim, Se-Hoon; Kim, Bum-Joon; Lim, Dong-Jun

    2015-06-01

    Schwannomas are the most common benign nerve sheath tumors, originating in Schwann cells. In special conditions such as neurofibromatosis type 2 or an entity called schwannomatosis, patients develop multiple schwannomas. In the clinical setting, however, distinguishing schwannomatosis from neurofibromatosis type 2 is challenging. We describe a 58-year-old male with schwannomatosis, featuring multiple schwannomas of the spine and trunk, who presented with severe neuropathic pain and underwent surgical treatment. We present his radiologic and clinical findings and discuss important clinical features of this condition. To confirm schwannomatosis, we performed brain magnetic resonance imaging and took his family history. Staged surgery was performed for pathological confirmation and relief of pain. Schwannomatosis and neurofibromatosis type 2 are similar but distinct diseases. There are diagnostic hallmarks of these conditions, including family history, pathology, and brain imaging. Because their prognoses differ, the two diseases must be distinguished, and the diagnostic tests mentioned above should be performed with care.

  11. Multiple Schwannomas of the Spine: Review of the Schwannomatosis or Congenital Neurilemmomatosis: A Case Report

    PubMed Central

    Lee, Sang-Hoon; Kim, Bum-Joon; Lim, Dong-Jun

    2015-01-01

    Schwannomas are the most common benign nerve sheath tumors, originating in Schwann cells. In special conditions such as neurofibromatosis type 2 or an entity called schwannomatosis, patients develop multiple schwannomas. In the clinical setting, however, distinguishing schwannomatosis from neurofibromatosis type 2 is challenging. We describe a 58-year-old male with schwannomatosis, featuring multiple schwannomas of the spine and trunk, who presented with severe neuropathic pain and underwent surgical treatment. We present his radiologic and clinical findings and discuss important clinical features of this condition. To confirm schwannomatosis, we performed brain magnetic resonance imaging and took his family history. Staged surgery was performed for pathological confirmation and relief of pain. Schwannomatosis and neurofibromatosis type 2 are similar but distinct diseases. There are diagnostic hallmarks of these conditions, including family history, pathology, and brain imaging. Because their prognoses differ, the two diseases must be distinguished, and the diagnostic tests mentioned above should be performed with care. PMID:26217390

  12. Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.

    PubMed

    Haoliang Yuan; Yuan Yan Tang

    2017-04-01

    Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensionality, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
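
    To make the RLR-for-subspace-learning baseline concrete, here is a minimal NumPy sketch (my own, not the authors' SSSLR): the projection matrix is the closed-form ridge regression from pixel spectra to one-hot class indicators, and classification assigns each test pixel to the nearest class mean in the projected space.

      import numpy as np

      def rlr_projection(X, y, lam=1e-2):
          # X: (n_pixels, n_bands) training spectra; y: integer class labels.
          n, d = X.shape
          classes = np.unique(y)
          Y = (y[:, None] == classes[None, :]).astype(float)        # one-hot indicator matrix
          W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # ridge closed form
          means = np.stack([(X[y == c] @ W).mean(axis=0) for c in classes])
          return W, means, classes

      def classify(X_test, W, means, classes):
          Z = X_test @ W                                             # project to label space
          d2 = ((Z[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
          return classes[np.argmin(d2, axis=1)]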

  13. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan-line noise at fast scan rates while preserving the features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply to each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow-scan data. For atomic-scale images, an additional first step to remove strong line-by-line background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic-resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.
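
    The per-line registration step can be illustrated with a small NumPy/SciPy sketch (my own approximation of the idea, not the authors' algorithm, and it omits the rubber-band/CIAFA stages): for each scan line, the backward trace is shifted to best match the forward trace by maximizing their cross-correlation, and the two aligned traces are averaged.

      import numpy as np
      from scipy.signal import correlate

      def align_line(forward, backward, max_shift=32):
          # Find the integer shift (within +/- max_shift samples) that best aligns the backward
          # scan of a line with its forward scan, then average the two (edges wrap around,
          # a sketch-level simplification).
          f = forward - forward.mean()
          b = backward - backward.mean()
          xc = correlate(f, b, mode="full")
          lags = np.arange(-len(b) + 1, len(f))
          valid = np.abs(lags) <= max_shift
          shift = lags[valid][np.argmax(xc[valid])]
          aligned = np.roll(backward, shift)
          return 0.5 * (forward + aligned), shift

      def align_image(fwd_img, bwd_img):
          # Apply the per-line alignment to a whole frame (rows = scan lines).
          out = np.empty_like(fwd_img, dtype=float)
          for i, (f, b) in enumerate(zip(fwd_img, bwd_img)):
              out[i], _ = align_line(f, b)
          return out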

  14. Internal tide transformation across a continental slope off Cape Sines, Portugal

    NASA Astrophysics Data System (ADS)

    Small, Justin

    2002-04-01

    During the INTIFANTE 99 experiment in July 1999, observations were made of a prominent internal undular bore off Cape Sines, Portugal. The feature was always present and dominant in a collection of synthetic aperture radar (SAR) images of the area covering the period before, during and after the trial. During the trial, rapid dissemination of SAR data to the survey ship enabled assessment of the progression of the feature, and the consequent planning of a survey of the bore coincident with a new SAR image. Large amplitude internal waves of 50 m amplitude in 250 m water depth, and 40 m in 100 m depth, were observed. The images show that the position of the feature is linked to the phase of the tide, suggesting an internal tide origin. The individual packets of internal waves contain up to seven waves with wavelengths in the range of 500-1500 m, and successive packets are separated by internal tide distances of typically 16-20 km, suggesting phase speeds of 0.35-0.45 m s⁻¹. The internal waves were coherent over crest lengths of between 15 and 70 km, the longer wavefronts being due to the merging of packets. This paper uses the SAR data to detail the transformation of the wave packet as it passes across the continental slope and approaches the coast. The generation sites for the feature are discussed and reasons for its unusually large amplitude are hypothesised. It is concluded that generation at critical slopes of the bathymetry and non-linear interactions are the likely explanations for the large amplitudes.
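
    As a quick consistency check of those phase speeds (my own back-of-the-envelope arithmetic, assuming successive packets are released one semidiurnal M2 tidal period, about 12.42 h, apart): c = packet separation / tidal period, so 16,000 m / (12.42 x 3600 s) ≈ 0.36 m s⁻¹ and 20,000 m / 44,700 s ≈ 0.45 m s⁻¹, consistent with the 0.35-0.45 m s⁻¹ range quoted above.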

  15. Application of PALSAR-2 remote sensing data for structural geology and topographic mapping in Kelantan river basin, Malaysia

    NASA Astrophysics Data System (ADS)

    Beiranvand Pour, Amin; Hashim, Mazlan

    2016-06-01

    Natural hazards of geological origin are one of the major problems during the heavy monsoon rainfall in Kelantan state, peninsular Malaysia. Several landslides that occur in this region every year are clearly connected to geological and topographical features. Satellite synthetic aperture radar (SAR) data are particularly applicable for the detection of geological structural and topographical features in tropical conditions. In this study, Phased Array type L-band Synthetic Aperture Radar (PALSAR-2) remote sensing data were used to identify high-potential-risk and susceptible zones for landslides in the Kelantan river basin. An Adaptive Local Sigma filter was selected and applied to accomplish speckle reduction while preserving both edges and features in the PALSAR-2 fine-mode observation images. Different polarization images were integrated to enhance geological structures. Additionally, directional filters were applied to the PALSAR-2 Local Sigma resultant image for edge enhancement and detailed identification of linear features. Several faults, drainage patterns, and lithological contact layers were identified at the regional scale. In order to assess the results, fieldwork and a GPS survey were conducted in the landslide-affected zones of the Kelantan river basin. Results demonstrate that most of the landslides were associated with N-S, NNW-SSE and NE-SW trending faults, angulate drainage patterns, and metamorphic and Quaternary units. Consequently, a geologic structural map was produced for the Kelantan river basin using recent PALSAR-2 data, which could be broadly applicable for landslide hazard assessment and delineation of high-potential-risk and susceptible areas. Landslide mitigation programmes could be conducted in the landslide recurrence regions to reduce catastrophes leading to economic losses and death.

  16. Ultrasound based computer-aided-diagnosis of kidneys for pediatric hydronephrosis

    NASA Astrophysics Data System (ADS)

    Cerrolaza, Juan J.; Peters, Craig A.; Martin, Aaron D.; Myers, Emmarie; Safdar, Nabile; Linguraru, Marius G.

    2014-03-01

    Ultrasound is the mainstay of imaging for pediatric hydronephrosis, though its potential as a diagnostic tool is limited by its subjective assessment and lack of correlation with renal function. Therefore, all cases showing signs of hydronephrosis undergo further invasive studies, like diuretic renography, in order to assess the actual renal function. Under the hypothesis that renal morphology is correlated with renal function, a new ultrasound-based computer-aided diagnosis (CAD) tool for pediatric hydronephrosis is presented. From 2D ultrasound, a novel set of morphological features of the renal collecting systems and the parenchyma is automatically extracted using image analysis techniques. From the original set of features, including size, geometric, and curvature descriptors, a subset of ten features is selected as predictive variables, combining a feature selection technique and area-under-the-curve filtering. Using the washout half time (T1/2) as indicative of renal obstruction, two groups are defined: those cases whose T1/2 is above 30 minutes are considered to be severe, while the rest are in the safety zone, where diuretic renography could be avoided. Two different classification techniques are evaluated (logistic regression and support vector machines). Adjusting the probability decision thresholds to operate at the point of maximum sensitivity, i.e., preventing any severe case from being misclassified, specificities of 53% and 75% are achieved for the logistic regression and the support vector machine classifier, respectively. The proposed CAD system allows a link to be established between non-invasive, non-ionizing imaging techniques and renal function, limiting the need for invasive and ionizing diuretic renography.
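
    The operating-point selection described above can be sketched in a few lines of Python with scikit-learn (an illustration under assumed variable names, not the authors' code; for an honest estimate the scores should come from held-out or cross-validated predictions rather than the training fit used here for brevity): the decision threshold is lowered until every severe case (T1/2 > 30 min) is flagged, and the specificity at that threshold is reported.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import roc_curve

      def max_sensitivity_operating_point(features, severe):
          # features: (n_kidneys, n_feats) morphological descriptors; severe: 1 if T1/2 > 30 min.
          clf = SVC(probability=True).fit(features, severe)
          scores = clf.predict_proba(features)[:, 1]
          fpr, tpr, thresholds = roc_curve(severe, scores)
          idx = np.argmax(tpr >= 1.0)          # first threshold reaching 100% sensitivity
          specificity = 1.0 - fpr[idx]
          return clf, thresholds[idx], specificity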

  17. Texture analysis of aeromagnetic data for enhancing geologic features using co-occurrence matrices in Elallaqi area, South Eastern Desert of Egypt

    NASA Astrophysics Data System (ADS)

    Eldosouky, Ahmed M.; Elkhateeb, Sayed O.

    2018-06-01

    Enhancement of aeromagnetic data for qualitative purposes depends on variations of texture and amplitude to outline the various geologic features within the data. The texture of aeromagnetic data consists of the continuity, size, and pattern of adjacent anomalies. Variations in geology, or particularly in rock magnetization, across a study area cause fluctuations in texture. In the present study, the anomalous features of the Elallaqi area were extracted from aeromagnetic data. In order to delineate textures from the aeromagnetic data, Red, Green, and Blue Co-occurrence Matrices (RGBCM) were applied to the reduced-to-the-pole (RTP) grid of the Elallaqi district in the South Eastern Desert of Egypt. The RGBCM consist of sets of spatial analytical parameters that transform magnetic data into texture forms. Six texture features (parameters), i.e. Correlation, Contrast, Entropy, Homogeneity, Second Moment, and Variance, of the RGB Co-occurrence Matrices (RGBCM) are used for analyzing the texture of the RTP grid in this study. These six RGBCM texture characteristics were combined into a single image using principal component analysis. The calculated texture images present geologic characteristics and structures with much greater lateral resolution than the original RTP grid. The estimated texture images enabled us to distinguish multiple geologic regions and structures within the Elallaqi area, including geologic terranes, lithologic boundaries, cracks, and faults. Faults were better represented on the RGBCM maps than on the magnetic-derivative maps, enhancing the fine structures of the Elallaqi area, such as the NE-trending structures affecting the WNW-trending metavolcanics and metasediments in the northwestern part of the Elallaqi area.
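
    A sketch of windowed co-occurrence texture mapping in the spirit of the record above, using scikit-image's grey-level co-occurrence matrix on a single requantised grid as a stand-in for the RGBCM procedure; the window size, number of grey levels, and set of properties are assumptions, and the PCA combination is indicated in the commented usage lines (rtp_8bit is a hypothetical 8-bit rendering of the RTP grid).

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.decomposition import PCA

        def glcm_texture_maps(grid, win=15, levels=32):
            """Slide a window over an 8-bit grid and compute co-occurrence
            texture features (slow reference implementation, for illustration)."""
            q = (grid.astype(float) / 256.0 * levels).astype(np.uint8)   # requantise to `levels` bins
            h, w = q.shape
            props = ("contrast", "correlation", "homogeneity", "energy")
            maps = {p: np.zeros((h - win, w - win)) for p in props}
            for i in range(h - win):
                for j in range(w - win):
                    glcm = graycomatrix(q[i:i + win, j:j + win], distances=[1], angles=[0],
                                        levels=levels, symmetric=True, normed=True)
                    for p in props:
                        maps[p][i, j] = graycoprops(glcm, p)[0, 0]
            return maps

        # maps = glcm_texture_maps(rtp_8bit)                     # rtp_8bit: RTP grid scaled to 0-255
        # stack = np.stack([m.ravel() for m in maps.values()], axis=1)
        # pc1 = PCA(n_components=1).fit_transform(stack)         # single combined texture image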

  18. TU-G-204-05: The Effects of CT Acquisition and Reconstruction Conditions On Computed Texture Feature Values of Lung Lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, P; Young, S; Kim, G

    2015-06-15

    Purpose: Texture features have been investigated as a biomarker of response and malignancy. Because these features reflect local differences in density, they may be influenced by acquisition and reconstruction parameters. The purpose of this study was to investigate the effects of radiation dose level and reconstruction method on features derived from lung lesions. Methods: With IRB approval, 33 lung tumor cases were identified from clinically indicated thoracic CT scans in which the raw projection (sinogram) data were available. Based on a previously-published technique, noise was added to the raw data to simulate reduced-dose versions of each case at 25%, 10% and 3% of the original dose. Original and simulated reduced dose projection data were reconstructed with conventional and two iterative-reconstruction settings, yielding 12 combinations of dose/recon conditions. One lesion from each case was contoured. At the reference condition (full dose, conventional recon), 17 lesions were randomly selected for repeat contouring (repeatability). For each lesion at each dose/recon condition, 151 texture measures were calculated. A paired differences approach was employed to compare feature variation from repeat contours at the reference condition to the variation observed in other dose/recon conditions (reproducibility). The ratio of standard deviation of the reproducibility to repeatability was used as the variation measure for each feature. Results: The mean variation (standard deviation) across dose levels and kernel was significantly different with a ratio of 2.24 (±5.85) across texture features (p=0.01). The mean variation (standard deviation) across dose levels with conventional recon was also significantly different with 2.30 (7.11) (p=0.025). The mean variation across reconstruction settings of original dose has a trend in showing difference with 1.35 (2.60) among all features (p=0.09). Conclusion: Texture features varied considerably with variations in dose and reconstruction condition. Care should be taken to standardize these conditions when using texture as a quantitative feature. This effort supported in part by a grant from the National Cancer Institute’s Quantitative Imaging Network (QIN): U01 CA181156; The UCLA Department of Radiology has a Master Research Agreement with Siemens Healthcare; Dr. McNitt-Gray has previously received research support from Siemens Healthcare.
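
    A small sketch of the variation measure described in the Methods, i.e. the ratio of the standard deviation of reproducibility differences (reference vs. another dose/recon condition) to the standard deviation of repeatability differences (repeat contours at the reference condition); the arrays below are hypothetical stand-ins for one texture feature, not the study data.

        import numpy as np

        def variation_ratio(feat_ref, feat_recontour, feat_condition):
            """Reproducibility-to-repeatability ratio for a single texture feature."""
            repeatability = np.std(feat_recontour - feat_ref[:len(feat_recontour)], ddof=1)
            reproducibility = np.std(feat_condition - feat_ref, ddof=1)
            return reproducibility / repeatability

        rng = np.random.default_rng(1)
        ref = rng.normal(10.0, 1.0, 33)                          # 33 lesions at the reference condition
        recontour = ref[:17] + rng.normal(0.0, 0.5, 17)          # 17 repeat contours
        reduced_dose = ref + rng.normal(0.0, 1.0, 33)            # same feature at a reduced-dose condition
        print(round(variation_ratio(ref, recontour, reduced_dose), 2))   # >1: condition adds variation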

  19. Identification of vegetable diseases using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Tang, Jianjun; Li, Yao

    2007-02-01

    Vegetables are widely planted all over China, but they often suffer from some diseases. A method of major technical and economic importance is introduced in this paper, which explores the feasibility of fast and reliable automatic identification of vegetable diseases and their infection grades from color and morphological features of leaves. Firstly, leaves are plucked from the clustered plant and pictures of the leaves are taken with a CCD digital color camera. Secondly, color and morphological characteristics are obtained by standard image processing techniques: for example, the Otsu thresholding method segments the region of interest, a morphological opening-and-closing algorithm removes noise, and Principal Components Analysis reduces the dimension of the original features. Then, a recently proposed boosting algorithm, AdaBoost.M2, is applied to RBF networks for disease classification based on the above features, where the kernel function of the RBF networks is a Gaussian whose argument is the Euclidean distance of the input vector from a center. Our experiments are performed on a database collected by the Chinese Academy of Agricultural Sciences, and the results show that boosted RBF networks classify the 230 cucumber leaves into 2 different diseases (downy mildew and angular leaf spot) and identify the infection grades of each disease according to the infection degrees.
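
    A minimal sketch of the preprocessing chain named above (Otsu segmentation, morphological opening and closing, then PCA on simple colour/shape features), written with OpenCV and scikit-learn; the feature set, kernel size, and image_paths variable are placeholders, and the AdaBoost.M2/RBF-network classifier itself is not reproduced here.

        import cv2
        import numpy as np
        from sklearn.decomposition import PCA

        def leaf_features(bgr):
            """Otsu segmentation, opening then closing to clean the mask,
            then simple colour and shape descriptors of the leaf region."""
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
            pix = bgr[mask > 0]
            mean_bgr = pix.mean(axis=0)                 # colour features
            area = float((mask > 0).sum())              # a crude morphological feature
            return np.concatenate([mean_bgr, [area]])

        # features = np.array([leaf_features(cv2.imread(p)) for p in image_paths])   # hypothetical paths
        # reduced = PCA(n_components=3).fit_transform(features)                      # dimensionality reduction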

  20. Lamb wave detection of limpet mines on ship hulls.

    PubMed

    Bingham, Jill; Hinders, Mark; Friedman, Adam

    2009-12-01

    This paper describes the use of ultrasonic guided waves for identifying the mass loading due to underwater limpet mines on ship hulls. The Dynamic Wavelet Fingerprint Technique (DWFT) is used to render the guided wave mode information in two-dimensional binary images, because the waveform features of interest are too subtle to identify in the time domain. The use of wavelets allows both time and scale features from the original signals to be retained, and image processing can be used to automatically extract features that correspond to the arrival times of the guided wave modes. For further understanding of how the guided wave modes propagate through real structures, a parallel-processing 3D elastic wave simulation is developed using the finite integration technique (EFIT). This full-field technique models situations that are too complex for analytical solutions, such as built-up 3D structures. The simulations have produced informative visualizations of the guided wave modes in the structures, as well as mimicking directly the output from sensors placed in the simulation space for direct comparison to experiments. Results from both drydock and in-water experiments with dummy mines are also shown.

  1. Hakumyi Crater from LAMO

    NASA Image and Video Library

    2017-07-20

    This close-up view of Hakumyi crater, as seen by NASA's Dawn spacecraft, provides insight into the origin of the small crater and lobe-shaped flow next to its southern rim. The sharp edges of these features indicate they are relatively recent with respect to the more subdued Hakumyi, which is 43 miles (70 kilometers) wide. The lobate flow ends in a tongue-shaped deposit. A more discrete feature slightly west (left) of the large lobe-shaped flow suggests an ancient or partially developed lobe. These kinds of flow features, which typically are found at high latitudes on Ceres, are expressions of what is termed "mass wasting," meaning the downslope movement of material. This process is initiated by slumping or detachment of material from crater rims. Here the process seems to have been triggered by small craters whose remnant shapes can be discerned at the top of each flow. Dawn took this image from its low-altitude mapping orbit, or LAMO, at a distance of about 240 miles (385 kilometers) above the surface. The center coordinates of this image are 52 degrees North latitude and 26 degrees east longitude. https://photojournal.jpl.nasa.gov/catalog/PIA21414

  2. Texture and color features for tile classification

    NASA Astrophysics Data System (ADS)

    Baldrich, Ramon; Vanrell, Maria; Villanueva, Juan J.

    1999-09-01

    In this paper we present the results of a preliminary computer vision system to classify the production of a ceramic tile industry. We focus on the classification of a specific type of tile whose production can be affected by external factors, such as humidity, temperature, and the origin of clays and pigments. Variations in these uncontrolled factors provoke small differences in the color and the texture of the tiles that force all of the production to be classified. A constant and non-subjective classification would help avoid returns from customers and unnecessary stock fragmentation. The aim of this work is to simulate human behavior on this classification task by extracting a set of features from tile images. These features are induced by definitions from experts. To compute them we need to mix color and texture information and to define global and local measures. In this work, we do not seek a general texture-color representation; we only deal with textures formed by non-oriented colored blobs randomly distributed. New samples are classified using Discriminant Analysis functions derived from tile samples of known class. The last part of the paper is devoted to explaining the correction of acquired images in order to avoid illumination changes due to time and geometry.

  3. Hardy Objects in Saturn F Ring

    NASA Image and Video Library

    2017-02-24

    As NASA's Cassini spacecraft continues its weekly ring-grazing orbits, diving just past the outside of Saturn's F ring, it is tracking several small, persistent objects there. These images show two such objects that Cassini originally detected in spring 2016, as the spacecraft transitioned from more equatorial orbits to orbits at increasingly high inclination about the planet's equator. Imaging team members studying these objects gave them the informal designations F16QA (right image) and F16QB (left image). The researchers have observed that objects such as these occasionally crash through the F ring's bright core, producing spectacular collisional structures. While these objects may be mostly loose agglomerations of tiny ring particles, scientists suspect that small, fairly solid bodies lurk within each object, given that they have survived several collisions with the ring since their discovery. The faint retinue of dust around them is likely the result of the most recent collision each underwent before these images were obtained. The researchers think these objects originally form as loose clumps in the F ring core as a result of perturbations triggered by Saturn's moon Prometheus. If they survive subsequent encounters with Prometheus, their orbits can evolve, eventually leading to core-crossing clumps that produce spectacular features, even though they collide with the ring at low speeds. The images were obtained using the Cassini spacecraft narrow-angle camera on Feb. 5, 2017, at a distance of 610,000 miles (982,000 kilometers, left image) and 556,000 miles (894,000 kilometers, right image) from the F ring. Image scale is about 4 miles (6 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21432

  4. Improved classification accuracy by feature extraction using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.

    2003-05-01

    A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (these are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.

  5. Patient-specific cardiac phantom for clinical training and preprocedure surgical planning.

    PubMed

    Laing, Justin; Moore, John; Vassallo, Reid; Bainbridge, Daniel; Drangova, Maria; Peters, Terry

    2018-04-01

    Minimally invasive mitral valve repair procedures, including MitraClip®, are becoming increasingly common. For cases of complex or diseased anatomy, clinicians may benefit from using a patient-specific cardiac phantom for training, surgical planning, and the validation of devices or techniques. An imaging-compatible cardiac phantom was developed to simulate a MitraClip® procedure. The phantom contained a patient-specific cardiac model manufactured using tissue-mimicking materials. To evaluate accuracy, the patient-specific model was imaged using computed tomography (CT) and segmented, and the resulting point cloud dataset was compared to the original patient data using absolute distance. Comparing the molded model point cloud to the original dataset gave a maximum Euclidean distance error of 7.7 mm, an average error of 0.98 mm, and a standard deviation of 0.91 mm. The phantom was validated using a MitraClip® device to ensure that anatomical features and tools are identifiable under image guidance. Patient-specific cardiac phantoms may allow surgical complications to be accounted for during preoperative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.

  6. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified into two categories according to the nature of the output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to images, selected from the original video, that have significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization, histogram techniques, or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.

  7. CEDIMS: cloud ethical DICOM image Mojette storage

    NASA Astrophysics Data System (ADS)

    Guédon, Jeanpierre; Evenou, Pierre; Tervé, Pierre; David, Sylvain; Béranger, Jérome

    2012-02-01

    DICOM images of patients will necessarily be stored in clouds. However, ethical constraints must apply. In this paper, a method which provides the two following conditions is presented: 1) the medical information is not readable by the cloud owner, since it is distributed across several clouds; 2) the medical information can be retrieved from any sufficient subset of clouds. In order to obtain this result with real-time processing, the Mojette transform is used. This paper reviews the interesting features of the Mojette transform in terms of information theory. Since only portions of the original DICOM files are stored in each cloud, their contents are not reachable. For instance, we use 4 different public clouds to save 4 different projections of each file, with the additional condition that any 3 of the 4 projections are enough to reconstruct the original file. Thus, even if a cloud is unavailable when the user wants to load a DICOM file, the other 3 give enough information for real-time reconstruction. The paper presents an implementation on 3 actual clouds. For ethical reasons, we use a DICOM image spread over 3 public clouds to show the obtained confidentiality and possible real-time recovery.

  8. Union operation image processing of data cubes separately processed by different objective filters and its application to void analysis in an all-solid-state lithium-ion battery.

    PubMed

    Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke

    2016-04-01

    In this article, we propose a smart image-analysis method suitable for extracting target features with hierarchical dimension from original data. The method was applied to three-dimensional volume data of an all-solid lithium-ion battery obtained by the automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope to investigate the spatial configuration of voids inside the battery. To automatically fully extract the shape and location of the voids, three types of filters were consecutively applied: a median blur filter to extract relatively larger voids, a morphological opening operation filter for small dot-shaped voids and a morphological closing operation filter for small voids with concave contrasts. Three data cubes separately processed by the above-mentioned filters were integrated by a union operation to the final unified volume data, which confirmed the correct extraction of the voids over the entire dimension contained in the original data. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
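
    A sketch of the union-operation idea with scipy.ndimage: three masks obtained from differently filtered copies of the volume (median filter for larger voids, greyscale opening and closing for the two kinds of small voids) are OR-ed into one void map. Filter sizes, the threshold, the dark-void assumption, and the fib_sem_stack name are illustrative, not the values used in the paper.

        import numpy as np
        from scipy import ndimage as ndi

        def void_union(volume, thresh=60):
            """Union of three void masks obtained from differently filtered
            copies of the 3D stack (assumes voids appear dark)."""
            v = volume.astype(float)
            m_median = ndi.median_filter(v, size=5) < thresh     # relatively larger voids
            m_open = ndi.grey_opening(v, size=3) < thresh        # small dot-shaped voids
            m_close = ndi.grey_closing(v, size=3) < thresh       # small voids with concave contrast
            return m_median | m_open | m_close                   # union operation

        # mask = void_union(fib_sem_stack)    # fib_sem_stack: hypothetical (z, y, x) uint8 volume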

  9. The Origin of Ina: Evidence for Inflated Lava Flows on the Moon

    NASA Technical Reports Server (NTRS)

    Garry, W. B.; Robinson, M. S.; Zimbelman, J. R.; Bleacher, J. E.; Hawke, B. R.; Crumpler, L. S.; Braden, S. E.; Sato, H.

    2012-01-01

    Ina is an enigmatic volcanic feature on the Moon known for its irregularly shaped mounds, the origin of which has been debated since the Apollo Missions. Three main units are observed on the floor of the depression (2.9 km across, < or =64 m deep) located at the summit of a low-shield volcano: irregularly shaped mounds up to 20 m tall, a lower unit 1 to 5 m in relief that surrounds the mounds, and blocky material. Analyses of Lunar Reconnaissance Orbiter Camera images and topography show that features in Ina are morphologically similar to terrestrial inflated lava flows. Comparison of these unusual lunar mounds and possible terrestrial analogs leads us to hypothesize that features in Ina were formed through lava flow inflation processes. While the source of the lava remains unclear, this new model suggests that as the mounds inflated, breakouts along their margins served as sources for surface flows that created the lower morphologic unit. Over time, mass wasting of both morphologic units has exposed fresh surfaces observed in the blocky unit. Ina is different than the terrestrial analogs presented in this study in that the lunar features formed within a depression, no vent sources are observed, and no cracks are observed on the mounds. However, lava flow inflation processes explain many of the morphologic relationships observed in Ina and are proposed to be analogous with inflated lava flows on Earth.

  10. The nature of pulsar radio emission

    NASA Astrophysics Data System (ADS)

    Dyks, J.; Rudak, B.; Demorest, P.

    2010-01-01

    High-quality averaged radio profiles of some pulsars exhibit double, highly symmetric features both in emission and in absorption. It is shown that both types of feature are produced by a split fan beam of extraordinary-mode curvature radiation that is emitted/absorbed by radially extended streams of magnetospheric plasma. With no emissivity in the plane of the stream, such a beam produces bifurcated emission components (BFCs) when our line of sight passes through the plane. An example of a double component created in this way is present in the averaged profile of the 5-ms pulsar J1012+5307. We show that the component can indeed be very well fitted by the textbook formula for the non-coherent beam of curvature radiation in the polarization state that is orthogonal to the plane of electron trajectory. The observed width of the BFC decreases with increasing frequency at a rate that confirms the curvature origin. Likewise, the double absorption features (double notches) are produced by the same beam of the extraordinary-mode curvature radiation, when it is eclipsed by thin plasma streams. The intrinsic property of curvature radiation to create bifurcated fan beams explains the double features in terms of a very natural geometry and implies the curvature origin of pulsar radio emission. Similarly, the `double conal' profiles of class D result from a cut through a wider stream with finite extent in magnetic azimuth. Therefore, their width reacts very slowly to changes of viewing geometry resulting from geodetic precession. The stream-cut interpretation implies a highly non-orthodox origin of both the famous S-swing of polarization angle and the low-frequency pulse broadening in D profiles. The azimuthal structure of polarization modes in the curvature radiation beam provides an explanation for the polarized `multiple imaging' and the edge depolarization of pulsar profiles.

  11. Modular structural elements in the replication origin region of Tetrahymena rDNA.

    PubMed Central

    Du, C; Sanzgiri, R P; Shaiu, W L; Choi, J K; Hou, Z; Benbow, R M; Dobbs, D L

    1995-01-01

    Computer analyses of the DNA replication origin region in the amplified rRNA genes of Tetrahymena thermophila identified a potential initiation zone in the 5'NTS [Dobbs, Shaiu and Benbow (1994), Nucleic Acids Res. 22, 2479-2489]. This region consists of a putative DNA unwinding element (DUE) aligned with predicted bent DNA segments, nuclear matrix or scaffold associated region (MAR/SAR) consensus sequences, and other common modular sequence elements previously shown to be clustered in eukaryotic chromosomal origin regions. In this study, two mung bean nuclease-hypersensitive sites in super-coiled plasmid DNA were localized within the major DUE-like element predicted by thermodynamic analyses. Three restriction fragments of the 5'NTS region predicted to contain bent DNA segments exhibited anomalous migration characteristic of bent DNA during electrophoresis on polyacrylamide gels. Restriction fragments containing the 5'NTS region bound Tetrahymena nuclear matrices in an in vitro binding assay, consistent with an association of the replication origin region with the nuclear matrix in vivo. The direct demonstration in a protozoan origin region of elements previously identified in Drosophila, chick and mammalian origin regions suggests that clusters of modular structural elements may be a conserved feature of eukaryotic chromosomal origins of replication. PMID:7784181

  12. Digital shaded relief image of a carbonate platform (northern Great Bahama Bank): Scenery seen and unseen

    NASA Astrophysics Data System (ADS)

    Boss, Stephen K.

    1996-11-01

    A mosaic image of the northern Great Bahama Bank was created from separate gray-scale Landsat images using photo-editing and image analysis software that is commercially available for desktop computers. Measurements of pixel gray levels (relative scale from 0 to 255 referred to as digital number, DN) on the mosaic image were compared to bank-top bathymetry (determined from a network of single-channel, high-resolution seismic profiles), bottom type (coarse sand, sandy mud, barren rock, or reef determined from seismic profiles and diver observations), and vegetative cover (presence and/or absence and relative density of the marine angiosperm Thalassia testudinum determined from diver observations). Results of these analyses indicate that bank-top bathymetry is a primary control on observed pixel DN, bottom type is a secondary control on pixel DN, and vegetative cover is a tertiary influence on pixel DN. Consequently, processing of the gray-scale Landsat mosaic with a directional gradient edge-detection filter generated a physiographic shaded relief image resembling bank-top bathymetric patterns related to submerged physiographic features across the platform. The visibility of submerged karst landforms, Pleistocene eolianite ridges, islands, and possible paleo-drainage patterns created during sea-level lowstands is significantly enhanced on processed images relative to the original mosaic. Bank-margin ooid shoals, platform interior sand bodies, reef edifices, and bidirectional sand waves are features resulting from Holocene carbonate deposition that are also more clearly visible on the new physiographic images. Combined with observational data (single-channel, high-resolution seismic profiles, bottom observations by SCUBA divers, sediment and rock cores) across the northern Great Bahama Bank, these physiographic images facilitate comprehension of areal relations among antecedent platform topography, physical processes, and ensuing depositional patterns during sea-level rise.
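
    A minimal sketch of the directional gradient edge-detection step described above, assuming the mosaic is a greyscale NumPy array; the Sobel operator and the illumination azimuth are assumptions standing in for whichever directional filter was actually used.

        import numpy as np
        from scipy import ndimage as ndi

        def shaded_relief(mosaic, azimuth_deg=315.0):
            """Directional gradient of a greyscale mosaic, rescaled to 0-255; because
            pixel brightness tracks bank-top depth, the result reads like shaded relief."""
            g = mosaic.astype(float)
            gy, gx = ndi.sobel(g, axis=0), ndi.sobel(g, axis=1)
            az = np.deg2rad(azimuth_deg)
            directional = gx * np.cos(az) + gy * np.sin(az)      # gradient along one azimuth
            lo, hi = directional.min(), directional.max()
            return ((directional - lo) / (hi - lo + 1e-9) * 255.0).astype(np.uint8)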

  13. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high performance block-based wavelet image coder which is designed to have very low implementation complexity yet rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block only consists of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed in between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has a very good coding performance even when the block size is (16,16).

  14. Imaging articular cartilage using second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Mansfield, Jessica C.; Winlove, C. Peter; Knapp, Karen; Matcher, Stephen J.

    2006-02-01

    Sub-cellular resolution images of equine articular cartilage have been obtained using both second harmonic generation microscopy (SHGM) and two-photon fluorescence microscopy (TPFM). The SHGM images clearly map the distribution of the collagen II fibers within the extracellular matrix, while the TPFM images show the distribution of endogenous two-photon fluorophores in both the cells and the extracellular matrix, highlighting especially the pericellular matrix and bright 2-3 μm diameter features within the cells. To investigate the source of TPF in the extracellular matrix, experiments have been carried out to see if it may originate from the proteoglycans. Pure solutions of the proteoglycans hyaluronan, chondroitin sulfate and aggrecan have been imaged; only the aggrecan produced any TPF, and here the intensity was not great enough to account for the TPF in the extracellular matrix. Cartilage samples were also subjected to a process to remove proteoglycans and cellular components. After this process the TPF from the samples had decreased by a factor of two with respect to the SHG intensity.

  15. Skeletonization of gray-scale images by gray weighted distance transform

    NASA Astrophysics Data System (ADS)

    Qian, Kai; Cao, Siqi; Bhattacharya, Prabir

    1997-07-01

    In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight the significant features. There is interest in applying thinning directly to gray-scale images, motivated by the desire to process images characterized by meaningful information distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-and-white and gray-scale pictures. This algorithm is based on the gray weighted distance transformation, can be used to process any non-uniformly distributed gray-scale picture, and preserves the topology of the original picture. The process includes a preliminary phase of investigating the 'hollows' in the gray-scale image; these hollows are considered (or not) as topological constraints for the skeleton structure, depending on their statistically significant depth. This algorithm can also be executed on a parallel machine, as all the operations are executed locally. Some examples are discussed to illustrate the algorithm.

  16. Planned Visible Emission Line Space Solar Coronagraph on-board Aditya-1

    NASA Astrophysics Data System (ADS)

    Singh, Jagdev

    2012-07-01

    An imaging visible emission line internally occulted coronagraph using a 20 cm off-axis parabolic mirror has been designed and is planned to be launched in 2014. The coronagraph will have the facility to take images of the solar corona simultaneously in the green [Fe xiv] and the red [Fe x] emission lines up to 1.5 solar radii, with a frequency of about 3 Hz using 0.5 nm pass-band filters, and images in the continuum at 580 nm up to 3 solar radii. The satellite has been named Aditya-1 and the scientific objectives of this payload are: (i) to investigate the existence of intensity oscillations for the study of wave-driven coronal heating, (ii) to study the dynamics and formation of coronal loops and the temperature structure of coronal features, (iii) to study the origin, cause and acceleration of Coronal Mass Ejections (CMEs) and other solar active features, and (iv) the coronal magnetic field topology and the 3-dimensional structures of the CMEs using polarization information. The fabrication of the payload will be done in the laboratories of LEOS, SAC, ISAC, IIA and USO, and it will be launched by ISRO. Here we discuss the design and the realization of the mission.

  17. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features-such as color, motion, and orientation-by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.

  18. ARC-1989-A89-7024

    NASA Image and Video Library

    1989-08-23

    P-34679 Range: 2 million km (1.2 million miles). In this Voyager 2 wide-angle image, the two main rings of Neptune can be clearly seen. In the lower part of the frame, the originally-announced ring arc, consisting of three distinct features, is visible. This feature covers about 35 degrees of longitude and has yet to be radially resolved in Voyager images. From higher resolution images it is known that this region contains much more material than the diffuse belts seen elsewhere in its orbit, which seem to encircle the planet. This is consistent with the fact that ground-based observations of stellar occultations by the rings show them to be very broken and clumpy. The more sensitive, wide-angle camera is revealing more widely distributed but fainter material. Each of these rings of material lies just outside of the orbit of a newly discovered moon. One of these moons, 1989N2, may be seen in the upper right corner. The moon is streaked by its orbital motion, whereas the stars in the frame are less smeared. The dark areas around the bright moon and star are artifacts of the processing required to bring out the faint rings.

  19. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.

  20. A Tactile Carina Nebula

    NASA Astrophysics Data System (ADS)

    Grice, Noreen A.; Mutchler, M.

    2010-01-01

    Astronomy was once considered a science restricted to fully sighted participants. But in the past two decades, accessible books with large print/Braille and touchable pictures have brought astronomy and space science to the hands and mind's eye of students, regardless of their visual ability. A new universally-designed tactile image featuring the Hubble mosaic of the Carina Nebula is being presented at this conference. The original dataset was obtained with Hubble's Advanced Camera for Surveys (ACS) hydrogen-alpha filter in 2005. It became an instant icon after being infused with additional color information from ground-based CTIO data, and released as Hubble's 17th anniversary image. Our tactile Carina Nebula promotes multi-mode learning about the entire life-cycle of stars, which is dramatically illustrated in this Hubble mosaic. When combined with descriptive text in print and Braille, the visual and tactile components seamlessly reach both sighted and blind populations. Specific touchable features of the tactile image identify the shapes and orientations of objects in the Carina Nebula that include star-forming regions, jets, pillars, dark and light globules, star clusters, shocks/bubbles, the Keyhole Nebula, and stellar death (Eta Carinae). Visit our poster paper to touch the Carina Nebula!

  1. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.

    2015-01-01

    Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
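
    A small sketch of the first strategy above (stacking spatial features onto the original spectral features); here the spatial feature is simply the mean spectrum of the pixel's segment, and the segmentation labels are assumed to be given rather than produced by HSeg pruning.

        import numpy as np

        def stack_spectral_spatial(cube, seg_labels):
            """Concatenate each pixel's spectrum with the mean spectrum of its
            segment; cube is (rows, cols, bands), seg_labels is (rows, cols)."""
            r, c, b = cube.shape
            flat = cube.reshape(-1, b).astype(float)
            labels = seg_labels.ravel()
            seg_mean = np.zeros_like(flat)
            for lab in np.unique(labels):
                idx = labels == lab
                seg_mean[idx] = flat[idx].mean(axis=0)
            return np.concatenate([flat, seg_mean], axis=1)      # stacked feature vector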

  2. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines

    PubMed Central

    Press, William H.

    2006-01-01

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N^2 log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155

  3. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines.

    PubMed

    Press, William H

    2006-12-19

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N^2 log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.
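
    The following toy sketch illustrates the idea behind the transform described in the two records above: pixel values are combined along discrete lines over a range of slopes and intercepts, either by summation or, as in the generalization, by a median. It is a naive O(N^3) loop written for clarity, not the fast O(N^2 log N) recursive algorithm or its exact inverse.

        import numpy as np

        def line_transform(img, op=np.sum):
            """Combine pixel values along discrete lines y = round(s*x) + b
            for a range of slopes s and intercepts b (naive reference loop)."""
            n = img.shape[0]
            out = np.zeros((n, n))
            x = np.arange(n)
            for i, s in enumerate(np.linspace(-1.0, 1.0, n)):
                for b in range(n):
                    y = np.clip(np.round(s * x).astype(int) + b, 0, n - 1)
                    out[i, b] = op(img[y, x])
            return out

        img = np.zeros((32, 32))
        img[np.arange(32), np.arange(32)] = 1.0        # a single discrete line
        print(line_transform(img).max())               # sum transform peaks on that line
        print(line_transform(img, op=np.median).max()) # median variant is robust to point clutter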

  4. Investigating Mars: Arsia Mons

    NASA Image and Video Library

    2017-12-28

    This VIS image shows part of the northwestern margin of the summit caldera. Along with the faults caused by the collapse of the summit materials into the void of the emptied magma chamber, there are many small lobate lava flows and collapse features. The scalloped depressions are most likely created by collapse of the roof of lava tubes. Lava tubes originate during an eruption event, when the margins of a flow harden around a still flowing lava stream. When an eruption ends, these can become hollow tubes within the flow. With time, the roof of the tube may collapse into the empty space below. The tubes are linear, so the collapse of the roof creates a linear depression. This image illustrates the many processes that occurred in the formation of the volcano. Arsia Mons is the southernmost of the Tharsis volcanoes. It is 270 miles (450km) in diameter, almost 12 miles (20km) high, and the summit caldera is 72 miles (120km) wide. For comparison, the largest volcano on Earth is Mauna Loa. From its base on the sea floor, Mauna Loa measures only 6.3 miles high and 75 miles in diameter. A large volcanic crater known as a caldera is located at the summit of all of the Tharsis volcanoes. These calderas are produced by massive volcanic explosions and collapse. The Arsia Mons summit caldera is larger than many volcanoes on Earth. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 17117 Latitude: -8.43321 Longitude: 239.488 Instrument: VIS Captured: 2005-10-23 16:52 https://photojournal.jpl.nasa.gov/catalog/PIA22153

  5. Investigating Mars: Pavonis Mons

    NASA Image and Video Library

    2017-11-07

    This image shows part of the smaller summit caldera of Pavonis Mons. This caldera is approximately 5km deep. Near the bottom of the image is a region where part of the caldera side has collapsed into the bottom of the caldera. In shield volcanoes calderas are typically formed where the surface collapses into the void formed by an emptied magma chamber. Pavonis Mons is one of the three aligned Tharsis Volcanoes. The four Tharsis volcanoes are Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. All four are shield type volcanoes. Shield volcanoes are formed by lava flows originating near or at the summit, building up layers upon layers of lava. The Hawaiian islands on Earth are shield volcanoes. The three aligned volcanoes are located along a topographic rise in the Tharsis region. Along this trend there are increased tectonic features and additional lava flows. Pavonis Mons is the smallest of the four volcanoes, rising 14km above the mean Mars surface level with a width of 375km. It has a complex summit caldera, with the smaller caldera deeper than the larger caldera. Like most shield volcanoes the surface has a low profile. In the case of Pavonis Mons the average slope is only 4 degrees. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times. It holds the record for longest working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 32776 Latitude: 0.446561 Longitude: 247.283 Instrument: VIS Captured: 2009-05-05 03:21 https://photojournal.jpl.nasa.gov/catalog/PIA22023

  6. 3D synchrotron x-ray microtomography of paint samples

    NASA Astrophysics Data System (ADS)

    Ferreira, Ester S. B.; Boon, Jaap J.; van der Horst, Jerre; Scherrer, Nadim C.; Marone, Federica; Stampanoni, Marco

    2009-07-01

    Synchrotron based X-ray microtomography is a novel way to examine paint samples. The three dimensional distribution of pigment particles, binding media and their deterioration products, as well as other features such as voids, are made visible in their original context through a computing environment without the need for physical sectioning. This avoids manipulation-related artefacts. Experiments on paint chips (approximately 500 microns wide) were done on the TOMCAT beam line (TOmographic Microscopy and Coherent rAdiology experimenTs) at the Paul Scherrer Institute in Villigen, CH, using an x-ray energy of up to 40 keV. The x-ray absorption images are obtained at a resolution of 350 nm. The 3D dataset was analysed using the commercial 3D imaging software Avizo 5.1. Through this process, virtual sections of the paint sample can be obtained in any orientation. One of the topics currently under research is the ground layers of paintings by Cuno Amiet (1868-1961), one of the most important Swiss painters of classical modernism, whose early work is currently the focus of research at the Swiss Institute for Art Research (SIK-ISEA). This technique gives access to information such as sample surface morphology, porosity, particle size distribution and even particle identification. In the case of calcium carbonate grounds, for example, features like microfossils present in natural chalks can be reconstructed and their species identified, thus potentially providing information towards the mineral origin. One further elegant feature of this technique is that a target section can be selected within the 3D data set before exposing it to obtain chemical data. Virtual sections can then be compared with cross sections of the same samples made in the traditional way.

  7. Unsupervised feature learning for autonomous rock image classification

    NASA Astrophysics Data System (ADS)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and manually hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  8. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video image of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image base on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
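
    A hedged sketch of the per-frame step this record describes, using off-the-shelf OpenCV pieces (ORB features, brute-force matching, RANSAC PnP) rather than the patented method itself; the model 3D points, model descriptors, and camera matrix K are assumed to exist already, and the model-update step is omitted.

        import numpy as np
        import cv2

        def estimate_pose(frame_gray, model_pts3d, model_desc, K, orb, matcher):
            """Detect 2D feature points, match them to stored 3D model descriptors,
            and recover the camera pose consistent with the largest subset of
            correspondences via RANSAC PnP."""
            kps, desc = orb.detectAndCompute(frame_gray, None)
            if desc is None:
                return None
            matches = matcher.match(desc, model_desc)      # image feature -> candidate model feature
            if len(matches) < 6:
                return None
            img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
            obj_pts = np.float32([model_pts3d[m.trainIdx] for m in matches])
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
            return (rvec, tvec, inliers) if ok else None

        # orb = cv2.ORB_create(1000)
        # matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)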

  9. Late Afternoon Sun

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [figure removed for brevity, see original site]

    This image of the northern plains of Mars shows a surface texture of hundreds of small mounds and numerous small impact craters. The THEMIS imaging team is taking advantage of the late afternoon sun illumination to image places like this where the surface may contain small scale features that are 'washed-out' by higher illumination angles. As the sun dips towards the horizon (to the left side of the image), shadows are cast. The length of the shadows can be used to estimate the height of the feature casting them - or the depth of the crater that contains the shadow. In this image the craters - even very small ones - are now partially filled by shadow making it very easy to identify them. The small bumps are not casting shadows yet, but are easily seen. These small bumps were not easily identified when the sun angle was higher (earlier in the afternoon). As this image shows, late afternoon sun illumination is wonderful for making small scale morphologic features visible.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  10. How Phoenix Looks Under Itself

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site]

    This is an animation of NASA's Phoenix Mars Lander reaching with its Robotic Arm and taking a picture of the surface underneath the lander. The image at the conclusion of the animation was taken by Phoenix's Robotic Arm Camera (RAC) on the eighth Martian day of the mission, or Sol 8 (June 2, 2008). The light feature in the middle of the image below the leg is informally called 'Holy Cow.' The dust, shown in the dark foreground, has been blown off of 'Holy Cow' by Phoenix's thruster engines.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  11. Cassini/VIMS observations of the moon

    USGS Publications Warehouse

    Belluci, G.; Brown, R.H.; Formisano, V.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Miller, E.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, Christophe

    2002-01-01

    In this paper, we present preliminary scientific results obtained from the analysis of VIMS (Visible and Infrared Mapping Spectrometer) lunar images and spectra. These data were obtained during the Cassini Earth flyby in August 1999. Spectral ratios have been produced in order to derive lunar mineralogical maps. Some spectra observed at the north-east lunar limb show a few unusual absorption features located at 0.357, 0.430 and 0.452 μm, the origin of which is presently unknown. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.

  12. Crew Earth Observations (CEO) taken during Expedition 8

    NASA Image and Video Library

    2004-01-06

    ISS008-E-12107 (6 January 2004) --- Five year old icebergs near South Georgia Island are featured in this image photographed by an Expedition 8 crewmember onboard the International Space Station (ISS). This photo shows two pieces of a massive iceberg that broke off from the Antarctica Ronne Ice Shelf in October 1998. The pieces of iceberg A-38 have floated relatively close to South Georgia Island. After five years and 3 months, they are approximately 1500 nautical miles from their origin.

  13. A fuzzy clustering algorithm to detect planar and quadric shapes

    NASA Technical Reports Server (NTRS)

    Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa

    1992-01-01

    In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.
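
    As a point of reference, the sketch below shows the standard fuzzy c-means alternating optimisation applied directly to raw 2-D point coordinates, i.e. the point-prototype special case of the family this paper extends; the planar and quadric prototypes of the proposed algorithm replace the centroid update with a surface fit and are not reproduced here. The data, cluster count and fuzzifier are illustrative assumptions.

        import numpy as np

        def fuzzy_c_means(points, c=2, m=2.0, iters=100, seed=0):
            """Alternate between membership and prototype (centroid) updates."""
            rng = np.random.default_rng(seed)
            n = points.shape[0]
            u = rng.random((c, n))
            u /= u.sum(axis=0)                       # memberships of each point sum to 1
            for _ in range(iters):
                um = u ** m
                centers = um @ points / um.sum(axis=1, keepdims=True)   # weighted centroids
                d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2)
                d = np.fmax(d, 1e-10)                # guard against zero distances
                inv = d ** (-2.0 / (m - 1.0))
                u = inv / inv.sum(axis=0)            # standard FCM membership update
            return centers, u

        # Two noisy point clouds; the algorithm should recover their centres.
        rng = np.random.default_rng(1)
        pts = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(5.0, 1.0, (50, 2))])
        centers, memberships = fuzzy_c_means(pts, c=2)
        print(centers)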

  14. Metastatic Neuroendocrine Carcinoma of Unknown Origin Arising in the Femoral Nerve Sheath.

    PubMed

    Candy, Nicholas; Young, Adam; Allinson, Kieren; Carr, Oliver; McMillen, Jason; Trivedi, Rikin

    2017-08-01

    Metastatic neuroendocrine carcinoma of unknown origin is a rare condition, usually presenting with lesions in the liver and/or lung. We present the first reported case of a metastatic neuroendocrine carcinoma of unknown origin arising in the femoral nerve sheath. Magnetic resonance imaging demonstrated what was thought to be a schwannoma in the left femoral nerve sheath in the proximal femoral triangle, immediately inferior to the anterior inferior iliac spine. At the time of operation, the tumor capsule was invading surrounding tissue, as well as three trunks of the femoral nerve. The patient underwent a subtotal resection, preserving the integrity of the residual functioning femoral nerve trunks. Histologic evaluation determined that the tumor had features consistent with a metastatic neuroendocrine carcinoma of unknown primary origin. The patient recovered well postoperatively, and subsequent radiologic evaluation failed to demonstrate a potential primary site. Unfortunately, the patient re-presented with disease progression and was subsequently referred to palliative care. We conclude that there is a definite role for surgery in the management of solitary neuroendocrine carcinoma of unknown origin. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Methods in quantitative image analysis.

    PubMed

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, which converts the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, called a pixel. The information is stored in bits; eight bits are summarised in one byte, so grey values can range from 0 to 255 (2^8 = 256 levels). The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination over the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image].

    The brightness of an image, represented by its grey values, can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of its most important characteristics; however, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey-value histogram of an existing image (input image) into a new grey-value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by the gain, offset and gamma of the camera: gain defines the voltage range, offset defines the reference voltage, and gamma defines the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix; the distance between the pixels (the original pixel and its neighbouring pixel) can influence the various parameters calculated from it.

    The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening of texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters: organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image, and can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey-value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most commonly applied filters are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top-hat filter and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values. (ABSTRACT TRUNCATED)
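
    To make the processing chain above concrete, the sketch below strings together a few of the named steps (shading correction by division with a background image, histogram stretching through a look-up table, median filtering as a ranking filter, and grey-value thresholding) using NumPy and SciPy on a synthetic 8-bit image; the image contents and the threshold value are stand-ins, not values from the paper.

        import numpy as np
        from scipy import ndimage

        def shading_correct(image, background):
            """Shading correction by dividing the image by a background (white) image."""
            corrected = image.astype(float) / np.fmax(background.astype(float), 1.0)
            return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

        def stretch_lut(image):
            """Look-up table that spreads the grey-value histogram over the full 0-255 range."""
            lo, hi = int(image.min()), int(image.max())
            lut = np.clip((np.arange(256) - lo) * 255.0 / max(hi - lo, 1), 0, 255).astype(np.uint8)
            return lut[image]                        # applied as a simple table look-up

        img = (np.random.rand(128, 128) * 180 + 40).astype(np.uint8)   # synthetic specimen image
        bg = np.full_like(img, 200)                                    # synthetic background (white) image

        corrected = shading_correct(img, bg)
        stretched = stretch_lut(corrected)
        smoothed = ndimage.median_filter(stretched, size=3)            # median filter (a ranking filter)
        mask = smoothed > 128                                          # threshold-based segmentation
        print(mask.sum(), "foreground pixels")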

  16. Noninvasive imaging of three-dimensional cardiac activation sequence during pacing and ventricular tachycardia.

    PubMed

    Han, Chengzong; Pogwizd, Steven M; Killingsworth, Cheryl R; He, Bin

    2011-08-01

    Imaging cardiac excitation within ventricular myocardium is important in the treatment of cardiac arrhythmias and might help improve our understanding of arrhythmia mechanisms. This study sought to rigorously assess the imaging performance of a 3-dimensional (3D) cardiac electrical imaging (3DCEI) technique with the aid of 3D intracardiac mapping from up to 216 intramural sites during paced rhythm and norepinephrine (NE)-induced ventricular tachycardia (VT) in the rabbit heart. Body surface potentials and intramural bipolar electrical recordings were simultaneously measured in a closed-chest condition in 13 healthy rabbits. Single-site pacing and dual-site pacing were performed from ventricular walls and septum. VTs and premature ventricular complexes (PVCs) were induced by intravenous NE. Computed tomography images were obtained to construct geometry models. The noninvasively imaged activation sequence correlated well with the invasively measured counterpart, with a correlation coefficient of 0.72 ± 0.04, and a relative error of 0.30 ± 0.02 averaged over 520 paced beats as well as 73 NE-induced PVCs and VT beats. All PVCs and VT beats were initiated in the subendocardium by a nonreentrant mechanism. The average distance from the imaged site of initial activation to the pacing site or site of arrhythmias determined from intracardiac mapping was ∼5 mm. For dual-site pacing, the double origins were identified when they were located on contralateral sides of the ventricles or at the lateral wall and the apex. 3DCEI can noninvasively delineate important features of focal or multifocal ventricular excitation. It offers the potential to aid in localizing the origins and imaging activation sequences of ventricular arrhythmias, and to provide noninvasive assessment of the underlying arrhythmia mechanisms. Copyright © 2011 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
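
    The two agreement figures quoted above (a correlation coefficient and a relative error between imaged and measured activation sequences) can be computed as in the sketch below. The exact relative-error normalisation used by the authors is not stated here, so the L2-norm form is an assumption, and the activation-time arrays are synthetic stand-ins rather than study data.

        import numpy as np

        def correlation_coefficient(imaged, measured):
            """Pearson correlation between the two activation-time maps."""
            return np.corrcoef(imaged.ravel(), measured.ravel())[0, 1]

        def relative_error(imaged, measured):
            """Relative error in the L2 norm (one common convention; an assumption here)."""
            return np.linalg.norm(imaged - measured) / np.linalg.norm(measured)

        rng = np.random.default_rng(0)
        measured = np.linspace(0.0, 80.0, 216)          # activation times (ms) at 216 intramural sites
        imaged = measured + rng.normal(0.0, 5.0, 216)   # noisy "imaged" counterpart
        print(correlation_coefficient(imaged, measured), relative_error(imaged, measured))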

  17. Noninvasive Imaging of Three-dimensional Cardiac Activation Sequence during Pacing and Ventricular Tachycardia

    PubMed Central

    Han, Chengzong; Pogwizd, Steven M.; Killingsworth, Cheryl R.; He, Bin

    2011-01-01

    Background: Imaging cardiac excitation within ventricular myocardium is important in the treatment of cardiac arrhythmias and might help improve our understanding of arrhythmia mechanisms. Objective: This study aims to rigorously assess the imaging performance of a three-dimensional (3-D) cardiac electrical imaging (3-DCEI) technique with the aid of 3-D intra-cardiac mapping from up to 216 intramural sites during paced rhythm and norepinephrine (NE)-induced ventricular tachycardia (VT) in the rabbit heart. Methods: Body surface potentials and intramural bipolar electrical recordings were simultaneously measured in a closed-chest condition in thirteen healthy rabbits. Single-site pacing and dual-site pacing were performed from ventricular walls and septum. VTs and premature ventricular complexes (PVCs) were induced by intravenous NE. Computed tomography images were obtained to construct the geometry model. Results: The non-invasively imaged activation sequence correlated well with the invasively measured counterparts, with a correlation coefficient of 0.72±0.04, and a relative error of 0.30±0.02 averaged over 520 paced beats as well as 73 NE-induced PVCs and VT beats. All PVCs and VT beats were initiated in the subendocardium by a nonreentrant mechanism. The average distance from the imaged site of initial activation to the pacing site or site of arrhythmias determined from intra-cardiac mapping was ~5 mm. For dual-site pacing, the double origins were identified when they were located on contralateral sides of the ventricles or at the lateral wall and the apex. Conclusion: 3-DCEI can non-invasively delineate important features of focal or multi-focal ventricular excitation. It offers the potential to aid in localizing the origins and imaging activation sequences of ventricular arrhythmias, and to provide noninvasive assessment of the underlying arrhythmia mechanisms. PMID:21397046

  18. A study on the design and production of shadow puppets animation under the cultural background of "the Belt and Road" - Centering on the original work of shadow and image

    NASA Astrophysics Data System (ADS)

    Yang, Yuan; Jia, Yingxue; Yang, Hui

    2018-05-01

    "The Belt and Road" initiative is a national strategy of China to promote modernization construction. The Silk Road is not only a channel of business exchange, but also the artery in Sino-foreign cultural exchange. By promoting culture first, using China's shadow puppets animation production forms as a reference, using animation production techniques, depicting the two masters' pursuing of dharma to show the communication and exchange of Buddhist culture in ancient Silk Road, the original work seeks innovative expression methods in digital media production forms and animation processing and reveals the art and human style and features of China, Japan and India authentically.

  19. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.

    PubMed

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-13

    The main issue for vision-based automatic harvesting manipulators is the difficulty of correctly identifying fruit in images under natural lighting conditions. Most solutions have been based on a linear combination of color components in the multispectral images, but the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With the DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, thereby improving the classification accuracy of citrus fruit identification under natural lighting conditions.
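
    As an illustration of wavelet-domain fusion in general (not the paper's specific information-content weighting), the sketch below fuses one colour component with a registered NIR image using Daubechies wavelets in PyWavelets, averaging the approximation coefficients and keeping the larger-magnitude detail coefficients. The wavelet choice, decomposition level and fusion rule are assumptions made for the example.

        import numpy as np
        import pywt

        def fuse_channel(color_ch, nir, wavelet="db4", level=2):
            """Fuse one colour component with the NIR image in the wavelet domain."""
            ca = pywt.wavedec2(color_ch.astype(float), wavelet, level=level)
            cb = pywt.wavedec2(nir.astype(float), wavelet, level=level)
            pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
            fused = [(ca[0] + cb[0]) / 2.0]                              # average the approximations
            for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
                fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(0)
        color_ch = rng.random((128, 128))   # stand-in for one colour component
        nir = rng.random((128, 128))        # stand-in for the registered NIR image
        fused = fuse_channel(color_ch, nir)
        print(fused.shape)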

  20. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents

    PubMed Central

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-01

    The main issue for vision-based automatic harvesting manipulators is the difficulty of correctly identifying fruit in images under natural lighting conditions. Most solutions have been based on a linear combination of color components in the multispectral images, but the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With the DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, thereby improving the classification accuracy of citrus fruit identification under natural lighting conditions. PMID:28098797
