NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between the weight vectors corresponding to different hidden units of the autoencoder. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and feed these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments across eight emotional categories and performs better than conventional methods. Feature selection reduces the computational cost of global feature extraction by about 50% while improving classification performance.
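The correlation-based selection over the autoencoder's weight vectors can be pictured as a greedy de-duplication pass. The abstract does not give the exact criterion, so the function name, threshold, and first-come greedy policy below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def select_decorrelated_filters(W, threshold=0.9):
    """Greedily keep hidden-unit weight vectors (rows of W) whose absolute
    Pearson correlation with every already-kept vector stays below threshold."""
    kept = []
    for i in range(W.shape[0]):
        redundant = False
        for j in kept:
            if abs(np.corrcoef(W[i], W[j])[0, 1]) >= threshold:
                redundant = True
                break
        if not redundant:
            kept.append(i)
    return kept

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 64))                                  # 4 distinct filters
W = np.vstack([base, base + 0.01 * rng.standard_normal((4, 64))])    # plus near-duplicates
idx = select_decorrelated_filters(W, threshold=0.95)
print(idx)  # the 4 near-duplicate rows are dropped
```

Dropping near-duplicate filters before building the convolutional feature maps is what yields the roughly 50% cost reduction the abstract reports.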
A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation
NASA Astrophysics Data System (ADS)
Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava
2015-12-01
In this paper, we address the issue of over-segmented regions produced by the watershed transform by merging regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. The global feature information is further optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% over RAG-based immersion watershed merging using global and local features.
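The FCM step that supplies the global feature information follows the standard alternating update of centres and memberships. A minimal sketch (plain FCM, without the SA optimisation or the RAG merging the paper adds on top):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means on feature vectors X (n_samples, n_features).
    Returns cluster centres and the fuzzy membership matrix U (c, n_samples)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)                   # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)   # standard FCM membership update
    return centres, U

# Two well-separated 1-D intensity clusters, standing in for tissue classes
X = np.array([[0.1], [0.12], [0.09], [0.9], [0.88], [0.91]])
centres, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=0)
print(labels)  # first three samples share one label, last three the other
```

In the paper, watershed regions are mapped onto these cluster memberships, and SA then refines the cluster assignment before region merging.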
Global and Local Features Based Classification for Bleed-Through Removal
NASA Astrophysics Data System (ADS)
Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin
2016-12-01
The text on one side of historical documents often seeps through and appears on the other side, making bleed-through a common problem in historical document images. It makes the documents hard to read and the text difficult to recognize. To improve image quality and readability, the bleed-through has to be removed. This paper proposes a bleed-through removal method based on global and local feature extraction. A Gaussian mixture model is used to obtain the global features of the images. Local features are extracted from the patch around each pixel. Then, an extreme learning machine classifier is utilized to classify the scanned images into the foreground text and the bleed-through component. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
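The extreme learning machine classifier at the core of the method is a single random hidden layer with output weights solved in closed form. A minimal sketch on toy two-class data (the feature values and class setup are hypothetical, not the paper's):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, output weights
    solved by ridge-regularised least squares."""
    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # closed-form output weights; the ridge term keeps the solve stable
        self.beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

# Toy separable clusters standing in for text vs bleed-through pixel features
rng = np.random.default_rng(1)
X0 = rng.normal(0.2, 0.05, (100, 4))   # "bleed-through" features
X1 = rng.normal(0.8, 0.05, (100, 4))   # "foreground text" features
X = np.vstack([X0, X1]); y = np.r_[np.zeros(100), np.ones(100)]
clf = ELM().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

Because only the output weights are trained, fitting is a single linear solve, which is why ELMs are attractive for per-pixel classification over whole page images.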
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel
2016-03-01
Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained on the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83+/-0.04. This study demonstrated that by avoiding tumor segmentation, which is often difficult and unreliable, fusion of kinetic image features computed from global breast MR images can still generate a useful clinical marker for predicting the efficacy of chemotherapy.
NASA Astrophysics Data System (ADS)
Zargari, Abolfazl; Du, Yue; Thai, Theresa C.; Gunderson, Camille C.; Moore, Kathleen; Mannel, Robert S.; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2018-02-01
The objective of this study is to investigate the performance of global and local features for better estimating the characteristics of highly heterogeneous metastatic tumours, in order to accurately predict treatment effectiveness in advanced-stage ovarian cancer patients. To achieve this, a quantitative image analysis scheme was developed to estimate a total of 103 features from three different groups: shape and density, wavelet, and Gray Level Difference Method (GLDM) features. Shape and density features are global features, applied directly to the entire target image; wavelet and GLDM features are local features, applied to divided blocks of the target image. To assess performance, the new scheme was applied to a retrospective dataset containing 120 patients with recurrent, high-grade ovarian cancer. The results indicate that the three best-performing features are skewness, root mean square (rms), and the mean of local GLDM texture, underscoring the importance of integrating local features. In addition, the average prediction performance is comparable among the three categories. This investigation concluded that local features contain at least as much tumour heterogeneity information as global features, which may be meaningful for improving the prediction performance of quantitative image markers for the diagnosis and prognosis of ovarian cancer patients.
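GLDM texture features are statistics of the histogram of absolute gray-level differences at a fixed displacement. A minimal sketch for a horizontal displacement (the displacement, level count, and the four summary statistics chosen here are standard choices, not necessarily the paper's exact configuration):

```python
import numpy as np

def gldm_features(img, d=1, levels=16):
    """GLDM texture features: histogram of absolute gray-level differences
    between pixels d apart horizontally, summarised by four classic stats."""
    img = img.astype(int)
    diff = np.abs(img[:, d:] - img[:, :-d]).ravel()
    p = np.bincount(diff, minlength=levels).astype(float)
    p /= p.sum()
    k = np.arange(len(p))
    mean = (k * p).sum()
    contrast = (k ** 2 * p).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    idm = (p / (1.0 + k ** 2)).sum()               # inverse difference moment
    return {"mean": mean, "contrast": contrast, "entropy": entropy, "idm": idm}

flat = np.full((8, 8), 5)            # uniform block: zero difference everywhere
stripes = np.tile([0, 15], (8, 4))   # alternating columns: constant difference 15
f_flat, f_str = gldm_features(flat), gldm_features(stripes)
print(f_flat["contrast"], f_str["contrast"])  # 0.0 225.0
```

Applying such statistics per block, as the paper does, captures the local heterogeneity that a single global histogram would average away.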
Efficient and robust model-to-image alignment using 3D scale-invariant features.
Toews, Matthew; Wells, William M
2013-04-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is introduced to modify the level set formulation, which is formed by integrating the MFSPF, based on local statistical features, with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original methods and achieves desirable results in segmenting infrared images.
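The global signed pressure function that such models build on compares each pixel with the midpoint of the two region means defined by the current level set. A minimal sketch of that global term only (the paper's MFSPF adds local statistical features and adaptive weights, which are not reproduced here):

```python
import numpy as np

def spf_global(I, phi):
    """Global signed pressure function in the spirit of SBGFRLS-type models:
    compares each pixel with the midpoint of the two region means defined by
    the level set phi (convention here: phi < 0 inside the contour)."""
    in_mask = phi < 0
    c_in = I[in_mask].mean() if in_mask.any() else 0.0
    c_out = I[~in_mask].mean() if (~in_mask).any() else 0.0
    spf = I - (c_in + c_out) / 2.0
    return spf / (np.abs(spf).max() + 1e-12)      # normalise into [-1, 1]

I = np.zeros((16, 16)); I[4:12, 4:12] = 1.0      # bright square on dark background
phi = np.ones_like(I); phi[6:10, 6:10] = -1.0    # small initial contour inside it
spf = spf_global(I, phi)
print(spf[8, 8] > 0, spf[0, 0] < 0)              # sign flips across the object boundary
```

The sign of the SPF modulates whether the contour expands or shrinks at each pixel during the level set evolution.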
Automatic Sea Bird Detection from High Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
Mader, S.; Grenzdörffer, G. J.
2016-06-01
Great efforts are presently being taken in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds the 90% mark. In the context of this contribution, we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle these image sizes and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined usage of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors used for subsequent elimination of false candidates and for classification tasks.
Facial expression recognition under partial occlusion based on fusion of global and local features
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji
2018-04-01
Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion that fuses global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, principal component analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy reconstructs the image by replacing the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. Finally, the outputs of an SVM are fitted to the probabilities of the target class using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, with each block weighted adaptively by information entropy; chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion class. Finally, fusion at the decision level is employed to combine the global and local results based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
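The Dempster-Shafer fusion at the decision level combines the global and local classifiers' evidence. A minimal sketch of Dempster's rule, simplified to mass assigned only to singleton emotion classes (the full theory also allows mass on compound hypotheses, which this sketch omits):

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same singleton
    hypotheses (emotion classes). Conflicting mass is normalised away."""
    joint = np.outer(m1, m2)
    agreement = np.diag(joint).copy()         # mass where both sources agree
    K = joint.sum() - agreement.sum()         # conflict: mass on disagreeing pairs
    if K >= 1.0:
        raise ValueError("total conflict, sources cannot be combined")
    return agreement / (1.0 - K)

# Global classifier is unsure between classes 0 and 1; local classifier favours 1
m_global = np.array([0.45, 0.45, 0.10])
m_local  = np.array([0.20, 0.70, 0.10])
fused = dempster_combine(m_global, m_local)
print(fused.argmax())  # 1
```

Because conflict is renormalised away, a confident source can resolve the ambiguity of an uncertain one, which is the fault-tolerance property the abstract highlights.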
LDFT-based watermarking resilient to local desynchronization attacks.
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong
2013-12-01
Up to now, a watermarking scheme that is robust against desynchronization attacks (DAs) is still a grand challenge. Most image watermarking resynchronization schemes in literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that robust features for watermark synchronization are only globally invariable rather than locally invariable. In this paper, we present a blind image watermarking resynchronization scheme against local transform attacks. First, we propose a new feature transform named local daisy feature transform (LDFT), which is not only globally but also locally invariable. Then, the binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transform, local transform, and cropping. Lastly, the watermarking sequence is embedded bit by bit into each leaf node of the BSP tree by using the logarithmic quantization index modulation watermarking embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
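The logarithmic quantization index modulation step embeds each bit by quantising the logarithm of a coefficient onto one of two interleaved lattices. A minimal sketch for a single positive coefficient (the step size and the lattice offsets are illustrative; the paper's exact quantiser parameters are not given here):

```python
import numpy as np

def lqim_embed(x, bit, step=0.1):
    """Embed one bit into a positive coefficient with logarithmic QIM:
    quantise log(x) onto one of two interleaved lattices, offset by step/2."""
    lx = np.log(x)
    offset = step / 2.0 if bit else 0.0
    q = np.round((lx - offset) / step) * step + offset
    return np.exp(q)

def lqim_extract(y, step=0.1):
    """Recover the bit by checking which lattice log(y) lies closer to."""
    ly = np.log(y)
    d0 = np.abs(ly - np.round(ly / step) * step)
    d1 = np.abs(ly - (np.round((ly - step / 2) / step) * step + step / 2))
    return int(d1 < d0)

vals = np.array([12.3, 0.7, 55.0, 3.14])
bits = [1, 0, 1, 0]
marked = [lqim_embed(v, b) for v, b in zip(vals, bits)]
recovered = [lqim_extract(y) for y in marked]
print(recovered)  # [1, 0, 1, 0]
```

Working in the log domain makes the embedding distortion proportional to the coefficient magnitude, which generally suits perceptual masking better than uniform QIM.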
NASA Astrophysics Data System (ADS)
Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.
2018-05-01
Huge efforts have been put into developing diagnostic methods for skin cancer. In this paper, two different approaches are addressed for detecting skin cancer in dermoscopy images. The first approach uses a global method that classifies skin lesions with global features, whereas the second uses a local method that classifies skin lesions with local features. The aim of this paper is to select the best approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The global approach achieved sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97%, while the local approach achieved sensitivity, specificity, precision, and accuracy of about 100% each. These results show that the local approach achieved acceptable accuracy and outperformed the global approach for skin cancer lesion classification.
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitalization of cultural heritage based on terrestrial laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires matching images to the point cloud, acquiring corresponding feature points, registering the data, etc. However, the one-to-one correspondence between an image and its corresponding point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images to their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. First, we develop an Android-based app to take pictures and record related classification information. Second, all images are automatically grouped using the recorded information. Third, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image to its corresponding LiDAR point cloud is realized. Finally, the mapping relationship between the global image, local images, and intensity image is established from corresponding feature points. We can thus establish a data structure linking the global image, the local images within it, and the point cloud corresponding to each local image, and carry out visual management and querying of the images.
Detecting Image Splicing Using Merged Features in Chroma Space
Xu, Bo; Liu, Guangjie; Dai, Yuewei
2014-01-01
Image splicing is an image editing method that copies a part of an image and pastes it onto another image, and it is commonly followed by postprocessing such as local/global blurring, compression, and resizing. To detect this kind of forgery, the image rich models, a feature set successfully used in steganalysis, are first evaluated on the splicing image dataset, and the dominant submodel is selected as the first kind of feature. The selected feature and DCT Markov features are then used together to detect splicing forgery in the chroma channel, which has proven effective in splicing detection. The experimental results indicate that the proposed method detects splicing forgeries with a lower error rate than previous methods in the literature. PMID:24574877
Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais
2017-01-01
Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity for better patient care and disease outcomes. There is increasing interest in noninvasive cardiac imaging biomarkers to diagnose subclinical cardiac disease. Feature-tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. This technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go before it becomes a routine standard of care. PMID:28515849
Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme
NASA Astrophysics Data System (ADS)
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-08-01
The high false-positive recall rate is one of the major dilemmas that significantly reduce the efficacy of screening mammography, which harms a large fraction of women and increases healthcare cost. This study aims to investigate the feasibility of helping reduce false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of four images. Second, the computed features were given to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under a receiver operating characteristic curve (AUC). The AUC = 0.793 ± 0.026 was obtained for this four-view CAD scheme, which was significantly higher at the 5% significance level than the AUCs achieved when using only CC (p = 0.025) or MLO (p = 0.0004) view images, respectively. This study demonstrates that a quantitative assessment of global mammographic image texture and density features could provide useful and/or supplementary information to classify between malignant and benign cases among the recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
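The adaptive scoring fusion step searches for the weighting of the two view-specific ANN scores that maximises the AUC. The paper's exact search procedure is not specified in the abstract, so the grid search over a convex weight below is an illustrative stand-in:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outranks a negative one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fuse_scores(s_cc, s_mlo, labels, grid=np.linspace(0, 1, 101)):
    """Pick the weight w maximising the AUC of w*s_cc + (1-w)*s_mlo."""
    best_w = max(grid, key=lambda w: auc(w * s_cc + (1 - w) * s_mlo, labels))
    return best_w, best_w * s_cc + (1 - best_w) * s_mlo

# Synthetic scores: the MLO-view classifier is stronger than the CC-view one
rng = np.random.default_rng(2)
labels = np.r_[np.ones(50), np.zeros(50)].astype(int)
s_cc  = labels + rng.normal(0, 0.8, 100)
s_mlo = labels + rng.normal(0, 0.4, 100)
w, fused = fuse_scores(s_cc, s_mlo, labels)
```

Because the grid contains w = 0 and w = 1, the fused AUC can never fall below either single-view AUC on the data the weight was tuned on.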
Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun
2014-01-01
A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSSs) when satellite-based navigation systems are unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision position (centimeter-level) and attitude (half-degree-level) integrated solutions can be achieved in a global reference frame. PMID:25330046
NASA Astrophysics Data System (ADS)
Li, Yane; Fan, Ming; Cheng, Hu; Zhang, Peng; Zheng, Bin; Li, Lihua
2018-01-01
This study aims to develop and test a new imaging-marker-based short-term breast cancer risk prediction model. An age-matched dataset of 566 screening mammography cases was used. All 'prior' images acquired in the two screening series were negative, while in the 'current' screening images, 283 cases were positive for cancer and 283 cases remained negative. For each case, two bilateral cranio-caudal view mammograms acquired from the 'prior' negative screenings were selected and processed by a computer-aided image processing scheme, which segmented the entire breast area into nine strip-based local regions, extracted the element regions using difference-of-Gaussian filters, and computed both global and local bilateral asymmetry image features. An initial feature pool included 190 features related to the spatial distribution and structural similarity of grayscale values, as well as of the magnitude and phase responses of multidirectional Gabor filters. Next, a short-term breast cancer risk prediction model based on a generalized linear model was built using an embedded stepwise regression analysis method to select features and a leave-one-case-out cross-validation method to predict the likelihood of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) increased significantly from 0.5863 ± 0.0237, when the model was trained with features extracted from the global regions alone, to 0.6870 ± 0.0220, when trained with features from both the global and the matched local regions (p = 0.0001). The odds ratio values increased monotonically from 1.00 to 8.11, with a significantly increasing trend in slope (p = 0.0028), as the model-generated risk score increased. In addition, the AUC values were 0.6555 ± 0.0437, 0.6958 ± 0.0290, and 0.7054 ± 0.0529 for the three age groups of 37-49, 50-65, and 66-87 years old, respectively. AUC values of 0.6529 ± 0.1100, 0.6820 ± 0.0353, 0.6836 ± 0.0302, and 0.8043 ± 0.1067 were yielded for the four mammographic density sub-groups (BI-RADS 1-4), respectively. This study demonstrated that bilateral asymmetry features extracted from local regions combined with the global region in bilateral negative mammograms could be used as a new imaging marker to assist in the prediction of short-term breast cancer risk.
A new approach to develop computer-aided detection schemes of digital mammograms
NASA Astrophysics Data System (ADS)
Tan, Maxine; Qian, Wei; Pu, Jiantao; Liu, Hong; Zheng, Bin
2015-06-01
The purpose of this study is to develop a new global mammographic image feature analysis based computer-aided detection (CAD) scheme and evaluate its performance in detecting positive screening mammography examinations. A dataset that includes images acquired from 1896 full-field digital mammography (FFDM) screening examinations was used in this study. Among them, 812 cases were positive for cancer and 1084 were negative or benign. After segmenting the breast area, a computerized scheme was applied to compute 92 global mammographic tissue density based features on each of four mammograms of the craniocaudal (CC) and mediolateral oblique (MLO) views. After adding three existing popular risk factors (woman’s age, subjectively rated mammographic density, and family breast cancer history) into the initial feature pool, we applied a sequential forward floating selection feature selection algorithm to select relevant features from the bilateral CC and MLO view images separately. The selected CC and MLO view image features were used to train two artificial neural networks (ANNs). The results were then fused by a third ANN to build a two-stage classifier to predict the likelihood of the FFDM screening examination being positive. CAD performance was tested using a ten-fold cross-validation method. The computed area under the receiver operating characteristic curve was AUC = 0.779 ± 0.025 and the odds ratio monotonically increased from 1 to 31.55 as CAD-generated detection scores increased. The study demonstrated that this new global image feature based CAD scheme had a relatively higher discriminatory power to cue the FFDM examinations with high risk of being positive, which may provide a new CAD-cueing method to assist radiologists in reading and interpreting screening mammograms.
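The sequential forward floating selection (SFFS) algorithm alternates a greedy forward inclusion with a conditional backward exclusion. A simplified sketch, using a Fisher-like separation score as a stand-in for the ANN-based criterion the paper actually optimises (the synthetic data and column indices are illustrative):

```python
import numpy as np

def score(X, y, subset):
    """Toy criterion: Fisher-like class separation summed over the subset.
    In the paper, a classifier's cross-validated performance plays this role."""
    if not subset:
        return 0.0
    s = 0.0
    for j in subset:
        m0, m1 = X[y == 0, j].mean(), X[y == 1, j].mean()
        s += (m0 - m1) ** 2 / (X[:, j].var() + 1e-12)
    return s

def sffs(X, y, k):
    """Simplified SFFS: greedy forward inclusion, then a floating backward
    pass that drops any feature whose removal improves the criterion."""
    selected = []
    while len(selected) < k:
        rest = [j for j in range(X.shape[1]) if j not in selected]
        best = max(rest, key=lambda j: score(X, y, selected + [j]))
        selected.append(best)                         # forward step
        improved = True
        while improved and len(selected) > 2:         # floating backward step
            improved = False
            for j in list(selected):
                trial = [f for f in selected if f != j]
                if score(X, y, trial) > score(X, y, selected):
                    selected = trial
                    improved = True
    return selected

rng = np.random.default_rng(3)
y = np.r_[np.zeros(60), np.ones(60)].astype(int)
X = rng.standard_normal((120, 6))
X[:, 2] += 2.0 * y    # informative feature
X[:, 5] += 1.5 * y    # informative feature
picked = sffs(X, y, k=2)
print(sorted(picked))  # [2, 5]
```

The floating backward pass is what distinguishes SFFS from plain forward selection: it lets the algorithm undo an early inclusion that a later feature renders redundant.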
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Xu, Jun; Madabhushi, Anant
2015-01-01
Nuclear architecture, or the spatial arrangement of individual cancer nuclei in histopathology images, has been shown to be associated with different grades and differential risk for a number of solid tumors such as breast, prostate, and oropharyngeal. Graph-based representations of individual nuclei (with nuclei as the graph nodes) allow for the mining of quantitative metrics to describe tumor morphology. These graph features can be broadly categorized into global and local depending on the type of graph construction method. While a number of local graph features (e.g. Cell Cluster Graphs) and global graph features (e.g. Voronoi, Delaunay Triangulation, Minimum Spanning Tree) have been shown to be associated with cancer grade, risk, and outcome for different cancer types, the sensitivity of the preceding segmentation algorithms in identifying individual nuclei can have a significant bearing on the discriminability of the resultant features. This therefore begs the question of which features, while being discriminative of cancer grade and aggressiveness, are also the most resilient to segmentation errors. These properties are particularly desirable in the context of digital pathology images, where the method of slide preparation, staining, and the type of nuclear segmentation algorithm employed can all dramatically affect the quality of the nuclear graphs and the corresponding features. In this paper we evaluated the trade-off between discriminability and stability of both global and local graph-based features in conjunction with a few different segmentation algorithms and in the context of two different histopathology image datasets of breast cancer from whole-slide images (WSI) and tissue microarrays (TMA).
Specifically, we investigate several performance measures, including stability, discriminability, and the stability vs. discriminability trade-off, all based on p-values from the Kruskal-Wallis one-way analysis of variance for local and global graph features. Apart from identifying the set of local and global features that satisfied the trade-off between stability and discriminability, our most interesting finding was that a simple segmentation method was sufficient to identify the most discriminative features for invasive tumour detection in TMAs, whereas for tumour grading in WSI, the graph-based features were more sensitive to the accuracy of the segmentation algorithm employed.
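A representative global graph feature of the kind evaluated here is the total length of the Euclidean minimum spanning tree over nuclear centroids. A minimal sketch using Prim's algorithm on a dense distance matrix (suitable for the modest nucleus counts of a single tile; the point set below is a toy example):

```python
import numpy as np

def mst_total_length(centroids):
    """Total edge length of the Euclidean minimum spanning tree over nuclear
    centroids (Prim's algorithm on a dense pairwise distance matrix)."""
    n = len(centroids)
    D = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    in_tree = np.zeros(n, bool); in_tree[0] = True
    best = D[0].copy()          # cheapest connection of each node to the tree
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf  # never reconnect nodes already in the tree
        j = best.argmin()
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return total

# Four corners of a unit square: the MST is three unit-length edges
pts = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
print(mst_total_length(pts))  # 3.0
```

Summary statistics of MST edge lengths shift when segmentation errors add or merge nuclei, which is exactly the stability concern the paper quantifies.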
A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis
Rahman, M. M.; Antani, S. K.; Thoma, G. R.
2011-01-01
We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
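The local-analysis expansion based on concept co-occurrence can be sketched as blending a query's concept weights with its strongest co-occurring concepts. The association measure, blend weight, and concept names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def expand_query(q, C, top=2, alpha=0.5):
    """Expand a concept-weight query vector q: for each nonzero query concept,
    blend in the `top` concepts with the highest normalised co-occurrence."""
    # row-normalise co-occurrence counts into association strengths
    A = C / np.maximum(C.sum(axis=1, keepdims=True), 1)
    expanded = q.astype(float).copy()
    for i in np.flatnonzero(q):
        partners = np.argsort(A[i])[::-1][:top]
        for j in partners:
            if j != i:
                expanded[j] += alpha * q[i] * A[i, j]
    return expanded / np.linalg.norm(expanded)

# 4 concepts; concept 0 ("sand") co-occurs heavily with concept 2 ("sea")
C = np.array([[50,  2, 40,  1],
              [ 2, 30,  3,  5],
              [40,  3, 60,  2],
              [ 1,  5,  2, 20]])
q = np.array([1, 0, 0, 0])
eq = expand_query(q, C)
print(eq[2] > 0, eq[1] == 0)  # related concept pulled in, unrelated one untouched
```

This mirrors the abstract's local analysis: co-occurrence relaxes the feature-independence assumption by letting one detected concept vouch for its usual companions.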
Prati, Giulio; Vitrella, Giancarlo; Allocca, Giuseppe; Muser, Daniele; Buttignoni, Sonja Cukon; Piccoli, Gianluca; Morocutti, Giorgio; Delise, Pietro; Pinamonti, Bruno; Proclemer, Alessandro; Sinagra, Gianfranco; Nucifora, Gaetano
2015-11-01
Analysis of right ventricular (RV) regional dysfunction by cardiac magnetic resonance (CMR) imaging in arrhythmogenic RV cardiomyopathy (ARVC) may be inadequate because of the complex contraction pattern of the RV. The aim of this study was to determine the utility of RV strain and dyssynchrony assessment in ARVC using feature-tracking CMR analysis. Thirty-two consecutive patients with ARVC referred to CMR imaging were included. Thirty-two patients with idiopathic RV outflow tract arrhythmias and 32 control subjects, matched for age and sex to the ARVC group, were included for comparison purposes. CMR imaging was performed to assess biventricular function; feature-tracking analysis was applied to the cine CMR images to assess regional and global longitudinal, circumferential, and radial RV strains and RV dyssynchrony (defined as the SD of the time-to-peak strain of the RV segments). RV global longitudinal strain (-17±5% versus -26±6% versus -29±6%; P<0.001), global circumferential strain (-9±4% versus -12±4% versus -13±5%; P=0.001), and global radial strain (18 [12-26]% versus 22 [15-32]% versus 27 [20-39]%; P=0.015) were significantly lower, and SD of the time-to-peak RV strain in all 3 directions was significantly higher, among patients with ARVC compared with patients with RV outflow tract arrhythmias and controls. RV global longitudinal strain >-23.2%, SD of the time-to-peak RV longitudinal strain >113.1 ms, and SD of the time-to-peak RV circumferential strain >177.1 ms allowed correct identification of 88%, 75%, and 63% of ARVC patients with no or only minor CMR criteria for ARVC diagnosis. Strain analysis by feature-tracking CMR helps to objectively quantify global and regional RV dysfunction and RV dyssynchrony in patients with ARVC and provides incremental value over conventional cine CMR imaging. © 2015 American Heart Association, Inc.
Secure image retrieval with multiple keys
NASA Astrophysics Data System (ADS)
Liang, Haihua; Zhang, Xinpeng; Wei, Qiuhan; Cheng, Hang
2018-03-01
This article proposes a secure image retrieval scheme under a multiuser scenario. In this scheme, the owner first encrypts and uploads images and their corresponding features to the cloud; then, the user submits the encrypted feature of the query image to the cloud; next, the cloud compares the encrypted features and returns encrypted images with similar content to the user. To find the nearest neighbor in the encrypted features, an encryption with multiple keys is proposed, in which the query feature of each user is encrypted by his/her own key. To improve the key security and space utilization, global optimization and Gaussian distribution are, respectively, employed to generate multiple keys. The experiments show that the proposed encryption can provide effective and secure image retrieval for each user and ensure confidentiality of the query feature of each user.
Content based image retrieval using local binary pattern operator and data mining techniques.
Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan
2015-01-01
Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally define the visual content present in an image, defined by e.g., texture, colour, shape, and spatial relations between vectors. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
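A minimal sketch of the basic 3x3 LBP operator underlying the feature vectors above (the paper evaluates several LBP variants; this is only the classic 8-neighbour form, with function names assumed): each pixel is coded by thresholding its eight neighbours against the centre, and the normalized histogram of the 256 possible codes serves as the image's feature vector.

```python
# Classic 3x3 Local Binary Pattern: threshold the 8 neighbours against
# the centre pixel and pack the comparison results into an 8-bit code.

def lbp_code(img, r, c):
    """8-bit LBP code for pixel (r, c) of a 2D list of grey values."""
    centre = img[r][c]
    # clockwise neighbour offsets starting at the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin normalized histogram over all interior pixels."""
    hist = [0] * 256
    n = 0
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
            n += 1
    return [h / n for h in hist]
```

A flat patch maps every interior pixel to code 255 (all neighbours equal the centre), while an isolated bright pixel maps to code 0.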
Feature-based Alignment of Volumetric Multi-modal Images
Toews, Matthew; Zöllei, Lilla; Wells, William M.
2014-01-01
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm, that iteratively alternates between estimating a feature-based model from feature data, then realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Gong, Zhijun; Ye, Chun; Li, Yongqiang; Liang, Cheng
2007-06-01
As an important sub-system of intelligent transportation systems (ITS), the detection and recognition of traffic signs from mobile images is becoming one of the hot spots in international ITS research. Considering the problem of automatic traffic sign detection in motion images, a new self-adaptive algorithm for traffic sign detection based on color and shape features is proposed in this paper. Firstly, global statistical color features of different images are computed based on statistics theory. Secondly, self-adaptive thresholds and special segmentation rules for image segmentation are designed according to these global color features. Then, for red, yellow, and blue traffic signs, the color image is segmented into three binary images by these thresholds and rules. Thirdly, if the number of white pixels in a segmented binary image exceeds the filtering threshold, the binary image is further filtered. Fourthly, the method of gray-value projection is used to determine the top, bottom, left, and right boundaries of candidate traffic sign regions in the segmented binary image. Lastly, if the shape feature of a candidate region satisfies that of a real traffic sign, the candidate region is confirmed as a detected traffic sign region. The new algorithm was applied to actual motion images of natural scenes taken by a CCD camera of the mobile photogrammetry system in Nanjing at different times. The experimental results show that the algorithm is not only simple, robust, and adaptive to natural scene images, but also reliable and fast for real traffic sign detection.
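The gray-value projection step above can be sketched as follows (a simplified illustration with assumed names; the paper additionally applies filtering thresholds before this step): summing a binary image along rows and columns yields projection profiles whose first and last non-zero entries give the candidate region's boundaries.

```python
# Gray-value projection sketch: row/column sums of a binary image locate
# the top, bottom, left, and right boundaries of the white region.

def projection_bounds(binary):
    """Return (top, bottom, left, right) of the white (1) pixels, or None
    if the image contains no white pixels."""
    rows = [sum(row) for row in binary]                      # horizontal profile
    cols = [sum(binary[r][c] for r in range(len(binary)))
            for c in range(len(binary[0]))]                  # vertical profile
    rnz = [i for i, v in enumerate(rows) if v > 0]
    cnz = [i for i, v in enumerate(cols) if v > 0]
    if not rnz:
        return None
    return rnz[0], rnz[-1], cnz[0], cnz[-1]
```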
Imaging of Venus from Galileo: Early results and camera performance
Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.
1992-01-01
Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.
Saliency Detection for Stereoscopic 3D Images in the Quaternion Frequency Domain
NASA Astrophysics Data System (ADS)
Cai, Xingyu; Zhou, Wujie; Cen, Gang; Qiu, Weiwei
2018-06-01
Recent studies have shown that a remarkable distinction exists between human binocular and monocular viewing behaviors. Compared with two-dimensional (2D) saliency detection models, stereoscopic three-dimensional (S3D) image saliency detection is a more challenging task. In this paper, we propose a saliency detection model for S3D images. The final saliency map of this model is constructed from the local quaternion Fourier transform (QFT) sparse feature and global QFT log-Gabor feature. More specifically, the local QFT feature measures the saliency map of an S3D image by analyzing the location of a similar patch. The similar patch is chosen using a sparse representation method. The global saliency map is generated by applying the wake edge-enhanced gradient QFT map through a band-pass filter. The results of experiments on two public datasets show that the proposed model outperforms existing computational saliency models for estimating S3D image saliency.
Color Image Segmentation Based on Statistics of Location and Feature Similarity
NASA Astrophysics Data System (ADS)
Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi
The process of “image segmentation and extracting remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are rarely found. The requisite of such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean value, standard deviation, maximum value, and minimum value of pixel location, brightness, and color elements, with the statistics updated as regions grow. We also introduced a new concept of conspicuity degree and applied it to 21 various images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.
Beyer, Ross A.; Nimmo, Francis; McKinnon, William B.; Moore, Jeffrey M.; Binzel, Richard P.; Conrad, Jack W.; Cheng, Andy; Ennico, K.; Lauer, Tod R.; Olkin, C.B.; Robbins, Stuart; Schenk, Paul; Singer, Kelsi; Spencer, John R.; Stern, S. Alan; Weaver, H.A.; Young, L.A.; Zangari, Amanda M.
2017-01-01
New Horizons images of Pluto’s companion Charon show a variety of terrains that display extensional tectonic features, with relief surprising for this relatively small world. These features suggest a global extensional areal strain of order 1% early in Charon’s history. Such extension is consistent with the presence of an ancient global ocean, now frozen. PMID:28919640
An image understanding system using attributed symbolic representation and inexact graph-matching
NASA Astrophysics Data System (ADS)
Eshera, M. A.; Fu, K.-S.
1986-09-01
A powerful image understanding system using a semantic-syntactic representation scheme consisting of attributed relational graphs (ARGs) is proposed for the analysis of the global information content of images. A multilayer graph transducer scheme performs the extraction of ARG representations from images, with ARG nodes representing the global image features, and the relations between features represented by the attributed branches between corresponding nodes. An efficient dynamic programming technique is employed to derive the distance between two ARGs and the inexact matching of their respective components. Noise, distortion and ambiguity in real-world images are handled through modeling in the transducer mapping rules and through the appropriate cost of error-transformation for the inexact matching of the representation. The system is demonstrated for the case of locating objects in a scene composed of complex overlapped objects, and the case of target detection in noisy and distorted synthetic aperture radar image.
Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment
Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.
2016-01-01
We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier’s confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback. PMID:25561457
NASA Astrophysics Data System (ADS)
Tang, Chaoqing; Tian, Gui Yun; Chen, Xiaotian; Wu, Jianbo; Li, Kongjing; Meng, Hongying
2017-12-01
Active thermography provides infrared images that contain sub-surface defect information, while visible images only reveal surface information. Mapping infrared information onto visible images offers more comprehensive visualization for decision-making in rail inspection. However, the common information available for registration is limited because the modalities differ at both the local and global levels. For example, a rail track with low temperature contrast reveals rich detail in visible images but turns blurry in the infrared counterparts. This paper proposes a registration algorithm called Edge-Guided Speeded-Up-Robust-Features (EG-SURF) to address this issue. Rather than sequentially integrating local and global information in the matching stage, which suffers from the bucket effect, this algorithm adaptively integrates local and global information into a descriptor to gather more common information before matching. This adaptability has two facets: an adaptable weighting factor between local and global information, and an adaptable main-direction accuracy. The local information is extracted using SURF, while the global information is represented by shape context from edges. Meanwhile, in the shape context generation process, edges are weighted according to local scale and decomposed into bins in a vector decomposition manner to provide a more accurate descriptor. The proposed algorithm is qualitatively and quantitatively validated on eddy current pulsed thermography scenes in the experiments. In comparison with other algorithms, better performance has been achieved.
Texture-based approach to palmprint retrieval for personal identification
NASA Astrophysics Data System (ADS)
Li, Wenxin; Zhang, David; Xu, Z.; You, J.
2000-12-01
This paper presents a new approach to palmprint retrieval for personal identification. Three key issues in image retrieval are considered: feature selection, similarity measures, and dynamic search for the best match of the sample in the image database. We propose a texture-based method for palmprint feature representation. The concept of texture energy is introduced to define a palmprint's global and local features, which are characterized by high convergence of inner-palm similarities and good dispersion of inter-palm discrimination. The search is carried out in a layered fashion: first, global features are used to guide the fast selection of a small set of similar candidates from the database, and then local features are used to decide the final output within the candidate set. The experimental results demonstrate the effectiveness and accuracy of the proposed method.
Semantic image segmentation with fused CNN features
NASA Astrophysics Data System (ADS)
Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan
2017-09-01
Semantic image segmentation is the task of predicting a category label for every image pixel. Its key challenge is to design a strong feature representation. In this paper, we fuse hierarchical convolutional neural network (CNN) features and region-based features into a single feature representation. The hierarchical features contain more global information, while the region-based features contain more local information; combining the two significantly enhances the representation. The fused features are then used to train a softmax classifier that produces per-pixel label assignment probabilities, and a fully connected conditional random field (CRF) is used as a post-processing step to improve labeling consistency. We conduct experiments on the SIFT Flow dataset. The pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.
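The fusion-plus-softmax step can be sketched as below. This is an illustrative stand-in, not the paper's trained model: the weight matrix is hand-picked, and concatenation plus a linear softmax layer stands in for the full pipeline.

```python
import math

# Sketch of feature fusion: concatenate hierarchical (global) and
# region-based (local) feature vectors, then score with a softmax layer.
# W and b are illustrative, not trained.

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]          # subtract max for stability
    s = sum(e)
    return [v / s for v in e]

def predict(hier_feat, region_feat, W, b):
    """W: n_classes x n_features rows, b: per-class bias. Returns
    (predicted class index, class probabilities)."""
    fused = hier_feat + region_feat            # simple concatenation
    scores = [sum(w * f for w, f in zip(row, fused)) + bi
              for row, bi in zip(W, b)]
    p = softmax(scores)
    return max(range(len(p)), key=p.__getitem__), p
```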
A global/local affinity graph for image segmentation.
Xiaofang Wang; Yuxing Tang; Masnou, Simon; Liming Chen
2015-04-01
Construction of a reliable graph capturing perceptual grouping cues of an image is fundamental for graph-cut based image segmentation methods. In this paper, we propose a novel sparse global/local affinity graph over superpixels of an input image to capture both short- and long-range grouping cues, and thereby enabling perceptual grouping laws, including proximity, similarity, continuity, and to enter in action through a suitable graph-cut algorithm. Moreover, we also evaluate three major visual features, namely, color, texture, and shape, for their effectiveness in perceptual segmentation and propose a simple graph fusion scheme to implement some recent findings from psychophysics, which suggest combining these visual features with different emphases for perceptual grouping. In particular, an input image is first oversegmented into superpixels at different scales. We postulate a gravitation law based on empirical observations and divide superpixels adaptively into small-, medium-, and large-sized sets. Global grouping is achieved using medium-sized superpixels through a sparse representation of superpixels' features by solving a ℓ0-minimization problem, and thereby enabling continuity or propagation of local smoothness over long-range connections. Small- and large-sized superpixels are then used to achieve local smoothness through an adjacent graph in a given feature space, and thus implementing perceptual laws, for example, similarity and proximity. Finally, a bipartite graph is also introduced to enable propagation of grouping cues between superpixels of different scales. Extensive experiments are carried out on the Berkeley segmentation database in comparison with several state-of-the-art graph constructions. 
The results show the effectiveness of the proposed approach, which outperforms state-of-the-art graphs using four different objective criteria, namely, the probabilistic rand index, the variation of information, the global consistency error, and the boundary displacement error.
High resolution satellite image indexing and retrieval using SURF features and bag of visual words
NASA Astrophysics Data System (ADS)
Bouteldja, Samia; Kourgli, Assia
2017-03-01
In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
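The BoVW encoding used above can be sketched as follows (assumed simplification: the visual dictionary would normally be learned by k-means over training SURF descriptors; here it is passed in directly): each local descriptor is assigned to its nearest dictionary word, and the image becomes the normalized word histogram.

```python
# Bag-of-visual-words sketch: quantize local descriptors against a
# dictionary of visual words and build an L1-normalized histogram.

def nearest_word(desc, words):
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(words)), key=lambda i: d2(desc, words[i]))

def bovw_histogram(descriptors, words):
    """descriptors: local feature vectors of one image; words: dictionary."""
    hist = [0] * len(words)
    for d in descriptors:
        hist[nearest_word(d, words)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]           # L1-normalized histogram
```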
Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch
2011-01-01
In this paper we present a medical image integrity verification system that not only detects and approximates malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked into regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
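The geometric moments underlying the signature are standard: m_pq = Σ_x Σ_y x^p y^q I(x, y) per pixel block. A minimal sketch (function names assumed; the paper's signature combines more than these three moments):

```python
# Low-order geometric moments of a pixel block:
#   m_pq = sum over pixels of x**p * y**q * intensity.

def geometric_moment(block, p, q):
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(block)
               for x, v in enumerate(row))

def block_signature(block):
    """(m00, m10, m01): total mass and first-order moments of one block."""
    return tuple(geometric_moment(block, p, q)
                 for p, q in [(0, 0), (1, 0), (0, 1)])
```

For a uniform 2x2 block of ones, m00 is the pixel count and the first-order moments both equal 2.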
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
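The pseudo-video ordering step can be sketched greedily. This is a hedged stand-in for the paper's global prediction-cost minimization (which is solved more carefully than a greedy chain): images are chained so that each one is followed by its nearest unvisited neighbour in feature space.

```python
# Greedy sketch of ordering a photo collection into a pseudo-video:
# chain each image to its nearest unvisited neighbour in feature space.

def order_collection(features):
    """features: one vector per image; returns an index order for coding."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    order = [0]                               # start from the first image
    remaining = set(range(1, len(features)))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda i: d2(features[last], features[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Similar images end up adjacent in the coding order, so inter-image prediction has more redundancy to remove.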
Image-Optimized Coronal Magnetic Field Models
NASA Technical Reports Server (NTRS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-01-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in localization of constraints. We find that substantial improvement in the model field can be achieved with this type of constraints, even when magnetic features in the images are located outside of the image plane.
Hierarchical ensemble of global and local classifiers for face recognition.
Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen
2009-08-01
In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
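The global-feature step above (keeping low-frequency Fourier coefficients as a holistic descriptor) can be sketched with a direct 2D DFT. This is illustrative only: a real implementation would use an FFT, and the paper applies FLD on top of these features.

```python
import cmath

# Global face feature sketch: magnitudes of the k x k lowest-frequency
# coefficients of the 2D discrete Fourier transform of an image.

def dft2_lowfreq(img, k):
    """img: 2D list of grey values; returns k*k coefficient magnitudes."""
    rows, cols = len(img), len(img[0])
    feats = []
    for u in range(k):
        for v in range(k):
            c = sum(img[y][x] * cmath.exp(-2j * cmath.pi *
                                          (u * y / rows + v * x / cols))
                    for y in range(rows) for x in range(cols))
            feats.append(abs(c))
    return feats
```

For a constant image all energy sits in the DC coefficient, so every other low-frequency magnitude is (numerically) zero.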
Constraint-based stereo matching
NASA Technical Reports Server (NTRS)
Kuan, D. T.
1987-01-01
The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
Registration of ophthalmic images using control points
NASA Astrophysics Data System (ADS)
Heneghan, Conor; Maguire, Paul
2003-03-01
A method for registering pairs of digital ophthalmic images of the retina is presented, using anatomical features present in both images as control points. The anatomical features chosen are blood vessel crossings and bifurcations. These control points are identified by a combination of local contrast enhancement and morphological processing. In general, however, the matching between control points is unknown, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows. Using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the most consistently recurring set of RGT coefficients is identified. Once control point pairs are established, registration of the two images can be achieved by using linear regression to optimize an RGT, bilinear, or second-order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
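Estimating an RGT from two control-point pairs can be sketched as below. Names are assumed and a similarity transform (scale, rotation, translation) stands in for whatever exact parameterization the paper uses: the vector between the two points fixes scale and rotation, and the first pair then fixes translation.

```python
import math

# Sketch: recover a similarity transform mapping points p -> q from
# exactly two control-point correspondences.

def rgt_from_two_pairs(p1, p2, q1, q2):
    """Returns (scale, theta, tx, ty) mapping p1->q1 and p2->q2."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]
    s = math.hypot(wx, wy) / math.hypot(vx, vy)          # scale ratio
    theta = math.atan2(wy, wx) - math.atan2(vy, vx)      # rotation angle
    # translation chosen so the first pair maps exactly
    tx = q1[0] - s * (p1[0] * math.cos(theta) - p1[1] * math.sin(theta))
    ty = q1[1] - s * (p1[0] * math.sin(theta) + p1[1] * math.cos(theta))
    return s, theta, tx, ty

def apply_rgt(s, theta, tx, ty, p):
    return (s * (p[0] * math.cos(theta) - p[1] * math.sin(theta)) + tx,
            s * (p[0] * math.sin(theta) + p[1] * math.cos(theta)) + ty)
```

Running this over all combinations of candidate pairs and voting on the recovered coefficients is the consistency check the abstract alludes to.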
Online 3D Ear Recognition by Combining Global and Local Features.
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
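The equal error rate (EER) reported above is a standard biometric metric; a minimal sketch of its computation (names assumed, with the EER approximated by the threshold where the larger of FAR and FRR is smallest):

```python
# Approximate EER: sweep a decision threshold over match scores
# (higher score = more similar) and find where the false accept rate
# (FAR) and false reject rate (FRR) cross.

def eer(genuine, impostor):
    """genuine/impostor: similarity score lists; returns approximate EER."""
    best = 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        best = min(best, max(far, frr))
    return best
```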
Global Plasmaspheric Imaging: A New "Light" Focusing on Familiar Questions
NASA Technical Reports Server (NTRS)
Adrian, M. L.; Six, N. Frank (Technical Monitor)
2002-01-01
Until recently, plasmaspheric physics, and indeed magnetospheric physics as a whole, relied primarily on single-point in situ measurements, theory, modeling, and a considerable amount of extrapolation to envision the global structure of the plasmasphere. This changed with the launch of the IMAGE satellite in March 2000. Using the Extreme Ultraviolet (EUV) imager on IMAGE, we can now view the global structure of the plasmasphere bathed in the glow of resonantly scattered 30.4 nm radiation, allowing the space physics community to view the dynamics of this global structure as never before. This talk will: (1) define the plasmasphere from the perspective of plasmaspheric physics prior to March 2000; (2) present a review of EUV imaging optics and the IMAGE mission; and (3) focus on efforts to understand an old and familiar feature of plasmaspheric physics, embedded plasmaspheric density troughs, in this new global light with the assistance of forward modeling.
Medical image retrieval system using multiple features from 3D ROIs
NASA Astrophysics Data System (ADS)
Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming
2012-02-01
Compared to a retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities would benefit content-based medical image retrieval (CBMIR) systems more. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features extracted from 3D ROIs, including both geometric features, such as Shape Index (SI) and Curvedness (CV), and texture features derived from the 3D gray-level co-occurrence matrix, based on our previous 2D medical image retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could greatly improve retrieval performance. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
Traffic sign recognition based on a context-aware scale-invariant feature transform approach
NASA Astrophysics Data System (ADS)
Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye
2013-10-01
A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for the use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) the template with the maximum number of matching points selected as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to result in errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide the color distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distribution. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or images which were photographed under different weather or illumination conditions.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectrum information of multispectral (MS) images will be transferred into fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to spatial details of the PAN image and spectral features related to the spectrum information of MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and quality with no reference (QNR) indices. Results of the fusion and distortion experiments indicate that the FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are then equalized by contrast-limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively, and the result is processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).
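The pyramid decomposition at the heart of this scheme can be sketched briefly. The toy below is a deliberately simplified, non-downsampled two-level variant with a box blur standing in for the Gaussian kernel, so reconstruction is exact by construction; the paper's CLAHE and morphological stages are not reproduced.

```python
import numpy as np

def blur(img):
    """3x3 box blur with edge padding (stand-in for a Gaussian kernel)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def laplacian_pyramid(img, levels=2):
    """Decompose into detail layers plus a low-pass residual."""
    layers, cur = [], img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        layers.append(cur - low)   # detail (high-frequency) layer
        cur = low
    layers.append(cur)             # low-pass residual
    return layers

def reconstruct(layers):
    # the telescoping sum restores the original image exactly
    return sum(layers)

img = np.arange(16, dtype=float).reshape(4, 4)
pyr = laplacian_pyramid(img)
rec = reconstruct(pyr)
```

In the real pipeline each detail layer would be enhanced (e.g. by CLAHE) before the sum is taken, trading exact reconstruction for contrast gain.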
Dynamic feature analysis for Voyager at the Image Processing Laboratory
NASA Technical Reports Server (NTRS)
Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.
1978-01-01
Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the necessary support to identify atmospheric features (tiepoints) for Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and data base.
Image-optimized Coronal Magnetic Field Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Comparison of k-means related clustering methods for nuclear medicine image segmentation
NASA Astrophysics Data System (ADS)
Borys, Damian; Bzowski, Pawel; Danch-Wierzchowska, Marta; Psiuk-Maksymowicz, Krzysztof
2017-03-01
In this paper, we evaluate the performance of the SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as the SIFT and SURF descriptors can deal with large variations of scale, rotation, and illumination in the images, therefore providing better discriminative power and retrieval efficiency than global features, especially for HRSI, which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category, which results in a more discriminative image representation and boosts the image retrieval performance.
Yang, Fan; Xu, Ying-Ying; Shen, Hong-Bin
2014-01-01
Human protein subcellular location prediction can provide critical knowledge for understanding a protein's function. Since significant progress has been made on digital microscopy, automated image-based protein subcellular location classification is urgently needed. In this paper, we aim to investigate more representative image features that can be effectively used for dealing with the multilabel subcellular image samples. We prepared a large multilabel immunohistochemistry (IHC) image benchmark from the Human Protein Atlas database and tested the performance of different local texture features, including completed local binary pattern, local tetra pattern, and the standard local binary pattern feature. According to our experimental results from binary relevance multilabel machine learning models, the completed local binary pattern, and local tetra pattern are more discriminative for describing IHC images when compared to the traditional local binary pattern descriptor. The combination of these two novel local pattern features and the conventional global texture features is also studied. The enhanced performance of final binary relevance classification model trained on the combined feature space demonstrates that different features are complementary to each other and thus capable of improving the accuracy of classification.
Global-local feature attention network with reranking strategy for image caption generation
NASA Astrophysics Data System (ADS)
Wu, Jie; Xie, Si-ya; Shi, Xin-bao; Chen, Yao-wen
2017-11-01
In this paper, a novel framework, named as global-local feature attention network with reranking strategy (GLAN-RS), is presented for image captioning task. Rather than only adopting unitary visual information in the classical models, GLAN-RS explores the attention mechanism to capture local convolutional salient image maps. Furthermore, we adopt reranking strategy to adjust the priority of the candidate captions and select the best one. The proposed model is verified using the Microsoft Common Objects in Context (MSCOCO) benchmark dataset across seven standard evaluation metrics. Experimental results show that GLAN-RS significantly outperforms the state-of-the-art approaches, such as multimodal recurrent neural network (MRNN) and Google NIC, which gets an improvement of 20% in terms of BLEU4 score and 13 points in terms of CIDER score.
Kalpathy-Cramer, Jayashree; Hersh, William
2008-01-01
In 2006 and 2007, Oregon Health & Science University (OHSU) participated in the automatic image annotation task for medical images at ImageCLEF, an annual international benchmarking event that is part of the Cross Language Evaluation Forum (CLEF). The goal of the automatic annotation task was to classify 1000 test images based on the Image Retrieval in Medical Applications (IRMA) code, given a set of 10,000 training images. There were 116 distinct classes in 2006 and 2007. We evaluated the efficacy of a variety of primarily global features for this classification task. These included features based on histograms, gray level correlation matrices, and the gist technique. A multitude of classifiers including k-nearest neighbors, two-level neural networks, support vector machines, and maximum likelihood classifiers were evaluated. Our official error rate for the 1000 test images was 26% in 2006 using the flat classification structure. The error count in 2007 was 67.8 using the hierarchical classification error computation based on the IRMA code. Confusion matrices as well as clustering experiments were used to identify visually similar classes. The use of the IRMA code did not help us in the classification task, as the semantic hierarchy of the IRMA classes did not correspond well with the hierarchy based on clustering of image features that we used. Our most frequent misclassification errors were along the view axis. Subsequent experiments based on a two-stage classification system decreased our error rate to 19.8% for the 2006 dataset and our error count to 55.4 for the 2007 data. PMID:19884953
Go_LIVE! - Global Near-real-time Land Ice Velocity data from Landsat 8 at NSIDC
NASA Astrophysics Data System (ADS)
Klinger, M. J.; Fahnestock, M. A.; Scambos, T. A.; Gardner, A. S.; Haran, T. M.; Moon, T. A.; Hulbe, C. L.; Berthier, E.
2016-12-01
The National Snow and Ice Data Center (NSIDC) is developing a processing and staging system under NASA funding for near-real-time global ice velocity data derived from Landsat 8 panchromatic imagery: Global Land Ice Velocity Extraction from Landsat (Go_LIVE). The system performs repeat image feature tracking using newly developed Python Correlation (PyCorr) software applied to image pairs covering all glaciers > 5 km² as well as both ice sheets. We correlate each Landsat 8 path-row image with matching path-row images acquired within the previous 400 days. Real-Time (RT) panchromatic Landsat 8 L1T images have a geolocation accuracy of 5 meters and high radiometric sensitivity (12-bit), allowing for feature matching over low-contrast snow and ice surfaces. High-pass filters are applied to the imagery to enhance local surface texture and improve correlation returns. Despite the excellent geolocation accuracy of Landsat 8, the remaining error introduces an artificial offset in the velocity returns. To correct this error, we apply a shift to the x and y grids to bring the displacement field to zero over known stationary features such as bedrock. For ice sheet interiors where stationary features do not exist, we use near-zero (<10 m a⁻¹) or slow-moving ice areas (10-25 m a⁻¹) to refine velocities. Go_LIVE will eventually include Landsat 7, 5, and 4 imagery as well. Go_LIVE runs on the University of Colorado's supercomputer and PetaLibrary storage system to process 10,000 image pairs per hour. We are currently developing a web-based data access site at NSIDC. The data are provided in NetCDF (Network Common Data Format) as geolocated grids of x and y velocity components at 300 m spacing with accompanying error and quality parameters. Extensive data sets currently exist for Alaskan, Antarctic, and Greenlandic ice areas, and are available upon request to NSIDC. Go_LIVE's goal for 2017 is a system that updates global ice velocity at few-day or shorter latency.
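The stationary-feature correction described above reduces to estimating a robust offset over bedrock pixels and subtracting it from the whole displacement field. A toy sketch; `remove_geolocation_bias` is a hypothetical helper using a median, not necessarily the estimator Go_LIVE actually employs:

```python
from statistics import median

def remove_geolocation_bias(dx, dy, stationary_mask):
    """Shift a displacement field so motion over stationary ground is zero.

    dx, dy: per-pixel displacements (flat lists);
    stationary_mask: True for bedrock / known-stationary pixels."""
    bx = median(d for d, s in zip(dx, stationary_mask) if s)
    by = median(d for d, s in zip(dy, stationary_mask) if s)
    return [d - bx for d in dx], [d - by for d in dy]

dx = [5.2, 5.0, 4.8, 30.0]          # last pixel is moving ice
dy = [-2.1, -2.0, -1.9, 12.0]
mask = [True, True, True, False]    # bedrock pixels
cx, cy = remove_geolocation_bias(dx, dy, mask)
```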
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract the robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method by employing the DAISY descriptor and applying entropy-calculation-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
A robust method for estimating motorbike count based on visual information learning
NASA Astrophysics Data System (ADS)
Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko
2015-03-01
Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusions between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem is gradually shifted into drawing statistical inferences of target object density from their shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects. However, they are inadequate to construct an accurate model for vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations. It can achieve high accuracy in case of occlusions. Firstly, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from the local features above. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides us a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method can achieve better accuracy in comparison to others.
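The Bag-of-Words step that turns local descriptors into a global feature vector can be sketched in a few lines. Here the codebook is given directly rather than learned by clustering, and all names are illustrative, not from the paper:

```python
import math

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors against a codebook and return a
    normalized visual-word histogram (the global feature vector)."""
    hist = [0] * len(codebook)
    for d in descriptors:
        # assign each descriptor to its nearest codeword (Euclidean)
        k = min(range(len(codebook)),
                key=lambda i: math.dist(d, codebook[i]))
        hist[k] += 1
    n = sum(hist) or 1
    return [h / n for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]                 # 2 visual words
descs = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.0, 0.2)]
h = bovw_histogram(descs, codebook)                 # -> [0.5, 0.5]
```

A regressor trained on such histograms against known counts then yields the density-estimation model described in the abstract.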
NASA Astrophysics Data System (ADS)
Qin, Xulei; Lu, Guolan; Sechopoulos, Ioannis; Fei, Baowei
2014-03-01
Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated on five patient DBT images, with manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classified results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.
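The fuzzy C-means stage admits a compact sketch. The 1-D toy below implements the standard FCM updates on intensity values only; the paper's version operates on intensity plus texture features, and the L0 smoothing step is omitted.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D intensities.

    Returns cluster centers and the membership matrix U (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        W = U ** m                                 # fuzzified weights
        centers = (W * X[:, None]).sum(0) / W.sum(0)
        d = np.abs(X[:, None] - centers[None, :]) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))         # inverse-distance power
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated intensity groups
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
centers, U = fcm(X, c=2)
```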
Laser interference effect evaluation method based on character of laser-spot and image feature
NASA Astrophysics Data System (ADS)
Tang, Jianfeng; Luo, Xiaolin; Wu, Lingxia
2016-10-01
Evaluating the laser interference effect on a CCD objectively and accurately has great research value. Starting from the change in the image's features before and after interference, and considering the influence of the laser-spot distribution on the masking of image feature information, a laser interference effect evaluation method based on laser-spot character and image features is proposed. It reflects the laser-spot distribution character using the distance between the center of the laser spot and the center of the target. It reflects the change of the global image feature using changes in the image's sparse coefficient matrix, obtained by the SSIM-inspired orthogonal matching pursuit (OMP) sparse coding algorithm. Moreover, it reflects the change of the local image feature using changes in the image's edge sharpness, obtained from the image's gradient magnitude. Taken together, these measures allow the laser interference effect to be evaluated accurately. The laser interference experiment results show that the proposed method has good rationality and feasibility under the disturbing condition of different laser powers, and it can also overcome the inaccuracy caused by changes in the laser-spot position, realizing an objective and accurate evaluation of the laser interference effect.
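The edge-sharpness cue from the gradient magnitude is simple to sketch. This is a plain forward-difference version for illustration, not the authors' exact formulation (the OMP sparse-coding component is beyond a short sketch):

```python
def edge_sharpness(img):
    """Mean gradient magnitude via forward differences.

    img: 2-D list of intensities; larger values indicate sharper edges,
    so a drop after interference signals masked local features."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += (gx * gx + gy * gy) ** 0.5
    return total / ((h - 1) * (w - 1))

# vertical step edge: one nonzero gradient column
s = edge_sharpness([[0.0, 0.0, 1.0, 1.0]] * 3)
```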
NASA Technical Reports Server (NTRS)
2005-01-01
Annotated image of Tharsis Limb Cloud, 7 September 2005. This composite of red and blue Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) daily global images acquired on 6 July 2005 shows an isolated water ice cloud extending more than 30 kilometers (more than 18 miles) above the martian surface. Clouds such as this are common in late spring over the terrain located southwest of the Arsia Mons volcano. Arsia Mons is the dark, oval feature near the limb, just to the left of the 'T' in the 'Tharsis Montes' label. The dark, nearly circular feature above the 'S' in 'Tharsis' is the volcano, Pavonis Mons, and the other dark circular feature, above and to the right of 's' in 'Montes,' is Ascraeus Mons. Illumination is from the left/lower left. Season: Northern Autumn/Southern Spring
Line fitting based feature extraction for object recognition
NASA Astrophysics Data System (ADS)
Li, Bing
2014-06-01
Image feature extraction plays a significant role in image-based pattern applications. In this paper, we propose a new approach to generate hierarchical features. This new approach applies line fitting to adaptively divide regions based upon the amount of information and creates line fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet-based approach and demonstrates high performance in real applications. For gray scale images, we propose a diffusion equation approach to map information-rich pixels (pixels near edges and ridge pixels) into high values, and pixels in homogeneous regions into small values near zero, forming energy map images. After the energy map images are generated, we propose a line fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, avoiding the feature-waste problems of the wavelet approach in homogeneous regions. Finally, experiments on handwriting word recognition show that the new method provides higher performance than the regular handwriting word recognition approach.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements, and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power, and push-broom processing are important requirements.
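The decorrelating transform itself is easy to sketch. Below is one level of a 2-D Haar analysis in plain NumPy; the LWT's block-based scheduling and memory management, which are the paper's actual contribution, are not shown.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform.

    Returns the LL (approximation) band and the LH, HL, HH detail bands;
    image dimensions are assumed even."""
    a = img.astype(float)
    # rows: average / difference of adjacent column pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # columns: repeat on both half-bands
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

# a constant image decorrelates completely: all detail bands vanish
img = np.full((4, 4), 7.0)
LL, LH, HL, HH = haar2d(img)
```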
Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2015-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
NASA Astrophysics Data System (ADS)
Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.
2001-01-01
Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.
An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors
Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai
2017-01-01
RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor can substantially improve recognition performance. This article analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extract five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance is tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of partial features and the computational complexity are also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553
Rotation invariant features for wear particle classification
NASA Astrophysics Data System (ADS)
Arof, Hamzah; Deravi, Farzin
1997-09-01
This paper investigates the ability of a set of rotation invariant features to classify images of wear particles found in the used lubricating oil of machinery. The rotation invariant attribute of the features derives from the property that the magnitudes of Fourier transform coefficients do not change with a spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, the local and global texture characteristics of an image can be described. A number of input sequences are formed by the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences generates coefficients whose magnitudes are invariant to rotation. Rotation invariant features extracted from these coefficients were used to classify wear particle images obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features achieved a 91% recognition rate, which compares favorably to the 76% rate achieved by features of a 6 x 6 co-occurrence matrix.
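The claimed invariance follows because rotating the image about a neighborhood's center circularly shifts the samples on each ring, and DFT magnitudes are unchanged by circular shifts. A minimal NumPy sketch with hypothetical ring samples (the paper builds its features from such magnitudes across multiple concentric rings):

```python
import numpy as np

# Intensities sampled on one circular ring around a neighborhood center
# (values are invented for illustration).
ring = np.array([3., 7., 1., 4., 9., 2., 6., 5.])

def ring_features(samples):
    """DFT magnitudes of the ring samples: invariant to circular shift,
    i.e. to rotation of the image about the ring's center."""
    return np.abs(np.fft.fft(samples))

rotated = np.roll(ring, 3)   # a rotation circularly shifts the ring samples
assert np.allclose(ring_features(ring), ring_features(rotated))
```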
Insights into multimodal imaging classification of ADHD
Colby, John B.; Rudie, Jeffrey D.; Brown, Jesse A.; Douglas, Pamela K.; Cohen, Mark S.; Shehzad, Zarrar
2012-01-01
Attention deficit hyperactivity disorder (ADHD) currently is diagnosed in children by clinicians via subjective ADHD-specific behavioral instruments and by reports from the parents and teachers. Considering its high prevalence and large economic and societal costs, a quantitative tool that aids in diagnosis by characterizing underlying neurobiology would be extremely valuable. This provided motivation for the ADHD-200 machine learning (ML) competition, a multisite collaborative effort to investigate imaging classifiers for ADHD. Here we present our ML approach, which used structural and functional magnetic resonance imaging data, combined with demographic information, to discriminate individuals with ADHD from typically developing (TD) children across eight different research sites. Structural features included quantitative metrics from 113 cortical and non-cortical regions. Functional features included Pearson correlation functional connectivity matrices, nodal and global graph theoretical measures, nodal power spectra, voxelwise global connectivity, and voxelwise regional homogeneity. We performed feature ranking for each site and modality using the multiple support vector machine recursive feature elimination (SVM-RFE) algorithm, and feature subset selection by optimizing the expected generalization performance of a radial basis function kernel SVM (RBF-SVM) trained across a range of the top features. Site-specific RBF-SVMs using these optimal feature sets from each imaging modality were used to predict the class labels of an independent hold-out test set. A voting approach was used to combine these multiple predictions and assign final class labels. With this methodology we were able to predict diagnosis of ADHD with 55% accuracy (versus a 39% chance level in this sample), 33% sensitivity, and 80% specificity.
This approach also allowed us to evaluate predictive structural and functional features giving insight into abnormal brain circuitry in ADHD. PMID:22912605
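The final voting stage can be sketched independently of the SVMs. This is a minimal illustration with hypothetical per-modality labels, not the study's actual classifiers or data:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class labels from several per-modality classifiers by
    taking, for each subject, the most frequent predicted label."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# hypothetical labels from three modality-specific classifiers, four subjects
anat = ["ADHD", "TD",   "TD", "ADHD"]
func = ["ADHD", "ADHD", "TD", "TD"]
demo = ["TD",   "TD",   "TD", "ADHD"]
final = majority_vote([anat, func, demo])
assert final == ["ADHD", "TD", "TD", "ADHD"]
```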
Patch-based automatic retinal vessel segmentation in global and local structural context.
Cao, Shuoying; Bharath, Anil A; Parker, Kim H; Ng, Jeffrey
2012-01-01
In this paper, we extend our published work [1] and propose an automated system to segment retinal vessel bed in digital fundus images with enough adaptability to analyze images from fluorescein angiography. This approach takes into account both the global and local context and enables both vessel segmentation and microvascular centreline extraction. These tools should allow researchers and clinicians to estimate and assess vessel diameter, capillary blood volume and microvascular topology for early stage disease detection, monitoring and treatment. Global vessel bed segmentation is achieved by combining phase-invariant orientation fields with neighbourhood pixel intensities in a patch-based feature vector for supervised learning. This approach is evaluated against benchmarks on the DRIVE database [2]. Local microvascular centrelines within Regions-of-Interest (ROIs) are segmented by linking the phase-invariant orientation measures with phase-selective local structure features. Our global and local structural segmentation can be used to assess both pathological structural alterations and microemboli occurrence in non-invasive clinical settings in a longitudinal study.
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histogram play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
NASA Technical Reports Server (NTRS)
Garvin, J. B.; Sakimoto, S. E. H.; Schnetzler, C.; Frawley, J. J.
1999-01-01
Impact craters on Mars have been used to provide fundamental insights into the properties of the martian crust, the role of volatiles, the relative age of the surface, and the physics of impact cratering in the Solar System. Before the three-dimensional information provided by the Mars Orbiter Laser Altimeter (MOLA) instrument, currently operating in Mars orbit aboard the Mars Global Surveyor (MGS), became available, impact features were characterized morphologically using orbital images from Mariner 9 and Viking. Fresh-appearing craters were identified and measurements of their geometric properties were derived from various image-based methods. MOLA measurements can now provide a global sample of topographic cross-sections of martian impact features from as small as approx. 2 km in diameter up to basin-scale features. We have previously examined MOLA cross-sections of Northern Hemisphere and North Polar Region impact features, but were unable to consider the global characteristics of these ubiquitous landforms. Here we present our preliminary assessment of the geometric properties of a globally-distributed sample of martian impact craters, most of which were sampled during the initial stages of the MGS mapping mission (i.e., the first 600 orbits). Our aim is to develop a framework for reconsidering theories concerning impact cratering in the martian environment. This first global analysis is focused upon topographically-fresh impact craters, defined here on the basis of MOLA topographic profiles that cross the central cavities of craters that can be observed in Viking-based MDIM global image mosaics. We have considered crater depths, rim heights, ejecta topologies, cross-sectional "shapes", and simple physical models for ejecta emplacement. To date (May, 1999), we have measured the geometric properties of over 1300 impact craters in the 2 to 350 km diameter size interval.
A large fraction of these measured craters were sampled with cavity-center cross-sections during the first two months of MGS mapping. Many of these craters are included in Nadine Barlow's Catalogue of Martian Impact Craters, although we have treated simple craters smaller than about 7 km in greater detail than all previous investigations. Additional information is contained in the original extended abstract.
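The basic geometric measurements (depth and rim height) from a single topographic cross-section can be sketched as follows. The profile values below are invented for illustration, and real MOLA analyses fit rim crests on both cavity walls rather than taking simple extrema:

```python
import numpy as np

def crater_geometry(profile, background):
    """Depth and rim height from one topographic cross-section (meters).
    Depth = rim crest minus cavity floor; rim height = rim crest minus
    the surrounding plain's elevation."""
    rim = profile.max()
    floor = profile.min()
    return rim - floor, rim - background

# toy cross-section through a simple bowl-shaped crater (plain at 0 m)
profile = np.array([0, 5, 120, -400, -650, -420, 110, 10, 0], dtype=float)
depth, rim_height = crater_geometry(profile, background=0.0)
assert depth == 770.0 and rim_height == 120.0
```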
NASA Astrophysics Data System (ADS)
Rogers, L. D.; Valderrama Graff, P.; Bandfield, J. L.; Christensen, P. R.; Klug, S. L.; Deva, B.; Capages, C.
2007-12-01
The Mars Public Mapping Project is a web-based education and public outreach tool developed by the Mars Space Flight Facility at Arizona State University. This tool allows the general public to identify and map geologic features on Mars, utilizing Thermal Emission Imaging System (THEMIS) visible images, allowing public participation in authentic scientific research. In addition, participants are able to rate each image (based on a 1 to 5 star scale) to help build a catalog of some of the more appealing and interesting martian surface features. Once participants have identified observable features in an image, they are able to view a map of the global distribution of the many geologic features they just identified. This automatic feedback, through a global distribution map, allows participants to see how their answers compare to the answers of other participants. Participants check boxes "yes, no, or not sure" for each feature that is listed on the Mars Public Mapping Project web page, including surface geologic features such as gullies, sand dunes, dust devil tracks, wind streaks, lava flows, several types of craters, and layers. Each type of feature has a quick and easily accessible description and example image. When a participant moves their mouse over each example thumbnail image, a window pops up with a picture and a description of the feature. This provides a form of "on the job training" for the participants that can vary with their background level. For users who are more comfortable with Mars geology, there is also an advanced feature identification section accessible by a drop down menu. This includes additional features that may be identified, such as streamlined islands, valley networks, chaotic terrain, yardangs, and dark slope streaks. The Mars Public Mapping Project achieves several goals: 1) It engages the public in a manner that encourages active participation in scientific research and learning about geologic features and processes. 
2) It helps to build a mappable database that can be used by researchers (and the public in general) to quickly access image based data that contains particular feature types. 3) It builds a searchable database of images containing specific geologic features that the public deem to be visually appealing. Other education and public outreach programs at the Mars Space Flight Facility, such as the Rock Around the World and the Mars Student Imaging Project, have shown an increase in demand for programs that allow "kids of all ages" to participate in authentic scientific research. The Mars Public Mapping Project is a broadly accessible program that continues this theme by building a set of activities that is useful for both the public and scientists.
Image fusion algorithm based on energy of Laplacian and PCNN
NASA Astrophysics Data System (ADS)
Li, Meili; Wang, Hongmei; Li, Yanjun; Zhang, Ke
2009-12-01
Owing to the global coupling and pulse synchronization characteristics of pulse coupled neural networks (PCNN), they have been proved suitable for image processing and successfully employed in image fusion. However, in almost all the literature on PCNN-based image processing, the linking strength of each neuron is assigned the same value, chosen by experiment. This is not consistent with the human visual system, in which responses to regions with notable features are stronger than responses to regions with non-notable features. It is therefore more reasonable to derive the linking strength of each neuron from notable features rather than assigning the same value to all neurons. In this paper, energy of Laplacian (EOL) is used as the notable feature that determines the linking strength in PCNN. Experimental results demonstrate that the proposed algorithm outperforms Laplacian-based, wavelet-based, and PCNN-based fusion algorithms.
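Energy of Laplacian for a pixel is the windowed sum of squared Laplacian responses. A minimal NumPy sketch follows; it uses wrap-around borders for brevity, which a production implementation would handle differently, and the window size is an illustrative choice:

```python
import numpy as np

def energy_of_laplacian(img, win=1):
    """EOL map: squared 4-neighbour Laplacian response, summed over a
    (2*win+1)^2 local window around each pixel (wrap-around borders)."""
    img = img.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    e = lap ** 2
    for axis in (0, 1):   # separable box sum over the window
        e = sum(np.roll(e, k, axis) for k in range(-win, win + 1))
    return e

flat = np.ones((8, 8))                       # featureless region
edges = np.zeros((8, 8)); edges[:, 4:] = 1.0  # region with a strong edge
assert energy_of_laplacian(flat).max() == 0.0
assert energy_of_laplacian(edges).max() > 0.0
```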
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features with different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous features extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous features fusion, where the multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
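The MKL fusion step rests on the fact that a convex combination of valid kernels is itself a valid kernel. A minimal sketch with hypothetical base kernels and fixed weights (the actual method learns the weights from data):

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian RBF Gram matrix for one feature representation."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def combined_kernel(kernels, weights):
    """Convex combination of base kernels; the result is still a
    symmetric positive semi-definite Gram matrix usable by an SVM."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

rng = np.random.default_rng(1)
X_shape = rng.normal(size=(10, 5))    # hypothetical shape features
X_texture = rng.normal(size=(10, 3))  # hypothetical texture features
K = combined_kernel([rbf_kernel(X_shape, 0.1), rbf_kernel(X_texture, 0.5)],
                    [0.6, 0.4])
assert K.shape == (10, 10) and np.allclose(K, K.T) and np.allclose(np.diag(K), 1.0)
```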
2017-07-14
On July 14, 2015, NASA's New Horizons spacecraft made its historic flight through the Pluto system. This detailed, high-quality global mosaic of Pluto was assembled from nearly all of the highest-resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) on New Horizons. The mosaic is the most detailed and comprehensive global view yet of Pluto's surface using New Horizons data. It includes topography data of the hemisphere visible to New Horizons during the spacecraft's closest approach. The topography is derived from digital stereo-image mapping tools that measure the parallax -- or the difference in the apparent relative positions -- of features on the surface obtained at different viewing angles during the encounter. Scientists use these parallax displacements of high and low terrain to estimate landform heights. The global mosaic has been overlain with transparent, colorized topography data wherever on the surface stereo data is available. Terrain south of about 30°S was in darkness leading up to and during the flyby, so is shown in black. Examples of large-scale topographic features on Pluto include the vast expanse of very flat, low-elevation nitrogen ice plains of Sputnik Planitia ("P") -- note that all feature names in the Pluto system are informal -- and, on the eastern edge of the encounter hemisphere, the aligned, high-elevation ridges of Tartarus Dorsa ("T") that host the enigmatic bladed terrain, mountains, possible cryovolcanos, canyons, craters and more. https://photojournal.jpl.nasa.gov/catalog/PIA21861
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in photogrammetric and computer vision society. Epipolar images are supposed as input of this algorithm. Epipolar geometry of linear array scanners is not a straight line as in case of frame camera. Traditional epipolar resampling algorithms demands for rational polynomial coefficients (RPCs), physical sensor model or ground control points. In this paper we propose a new solution for epipolar resampling method which works without the need for these information. In proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. Also original images are divided into small tiles. In this way by omitting the need for extra information, the speed of matching algorithm increased and the need for high temporal memory decreased. Our experiments on GeoEye-1 stereo pair captured over Qom city in Iran demonstrates that the epipolar images are generated with sub-pixel accuracy.
NASA Astrophysics Data System (ADS)
Liu, Chunhui; Zhang, Duona; Zhao, Xintao
2018-03-01
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for the saliency detection task of SAR images. We extract four features of the SAR image, namely intensity, orientation, uniqueness, and global contrast, as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.
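A histogram-based global contrast feature, one of the four inputs named above, can be sketched as follows. The binning and weighting choices here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def global_contrast(img, bins=16):
    """Per-pixel global contrast: distance of each pixel's intensity from
    all other intensities in the image, weighted by how often they occur."""
    img = img.astype(float)
    hist, edges = np.histogram(img, bins=bins,
                               range=(img.min(), img.max() + 1e-9))
    centers = (edges[:-1] + edges[1:]) / 2
    prob = hist / hist.sum()
    # saliency of bin i = sum_j p(j) * |c_i - c_j|
    sal_per_bin = np.abs(centers[:, None] - centers[None, :]) @ prob
    idx = np.clip(np.digitize(img, edges) - 1, 0, bins - 1)
    return sal_per_bin[idx]

img = np.zeros((16, 16)); img[6:10, 6:10] = 255.0   # rare bright patch
sal = global_contrast(img)
assert sal[8, 8] > sal[0, 0]   # rare intensities score as more salient
```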
A flower image retrieval method based on ROI feature.
Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan
2004-07-01
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of a flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
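The Centroid-Contour Distance signature can be sketched in a few lines. The resampling length and max-normalization below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def ccd(contour, n_samples=8):
    """Centroid-Contour Distance: distances from the shape centroid to
    contour points, resampled to a fixed length and scale-normalized."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    d = d[idx]
    return d / d.max()

# a square contour: every corner is equidistant from the centroid,
# so the normalized CCD signature is constant
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert np.allclose(ccd(square, n_samples=4), 1.0)
```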
Face aging effect simulation model based on multilayer representation and shearlet transform
NASA Astrophysics Data System (ADS)
Li, Yuancheng; Li, Yan
2017-09-01
In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and texture layer, which separately establishes the aging model. First, the training samples are classified according to different age groups, and we use active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting the support vector machine regression, which is based on the radial basis function. We use AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we get the last simulated aging images of the human face by the fusion algorithm. Experiments are carried out on the FG-NET dataset, and the experimental results show that the simulated face images have less differences from the original image and have a good face aging simulation effect.
NASA Technical Reports Server (NTRS)
2005-01-01
9 April 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows patterned ground on the martian northern plains. The circular features are buried meteor impact craters; the small dark dots associated with them are boulders. The dark feature at left center is a wind streak. Location near: 75.1°N, 303.0°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Summer
Neighborhood Structural Similarity Mapping for the Classification of Masses in Mammograms.
Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree
2018-05-01
In this paper, two novel feature extraction methods, based on neighborhood structural similarity (NSS), are proposed for the characterization of mammographic masses as benign or malignant. Since the gray-level distribution of pixels differs between benign and malignant masses, with more regular and homogeneous patterns visible in benign masses than in malignant ones, the proposed method exploits the similarity between neighboring regions of masses by designing two new features, namely NSS-I and NSS-II, which capture global similarity at different scales. Complementary to these global features, uniform local binary patterns are computed to enhance the classification efficiency when combined with the proposed features. The performance of the features is evaluated using images from the mini-mammographic image analysis society (mini-MIAS) and digital database for screening mammography (DDSM) databases, where a tenfold cross-validation technique is incorporated with Fisher linear discriminant analysis, after selecting the optimal set of features using the stepwise logistic regression method. The best area under the receiver operating characteristic curve of 0.98 is achieved with the mini-MIAS database, while that for the DDSM database is 0.93.
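The uniform local binary patterns used as complementary features can be illustrated on a single 3 x 3 patch. The neighbor ordering below is one common convention, and the patch values are invented for illustration:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code for the centre pixel of a 3x3 patch: each
    neighbour >= centre contributes one bit."""
    c = patch[1, 1]
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(nbrs))

def is_uniform(code):
    """'Uniform' patterns have at most two 0/1 transitions when the
    8-bit code is read circularly."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

patch = np.array([[9, 9, 9],
                  [1, 5, 9],
                  [1, 1, 1]])
code = lbp_code(patch)   # the four neighbours >= 5 set bits 0..3
assert code == 15 and is_uniform(code)
```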
Remote consultation and diagnosis in medical imaging using a global PACS backbone network
NASA Astrophysics Data System (ADS)
Martinez, Ralph; Sutaria, Bijal N.; Kim, Jinman; Nam, Jiseung
1993-10-01
A Global PACS is a national network which interconnects several PACS networks at medical and hospital complexes using a national backbone network. A Global PACS environment enables new and beneficial operations between radiologists and physicians when they are located in different geographical locations. One operation allows the radiologist to view the same image folder at both the Local and Remote sites so that a diagnosis can be performed. The paper describes the user interface, database management, and network communication software which has been developed in the Computer Engineering Research Laboratory and the Radiology Research Laboratory. Specifically, a design for a file management system in a distributed environment is presented. In the remote consultation and diagnosis operation, a set of images is requested from the database archive system and sent to the Local and Remote workstation sites on the Global PACS network. Viewing the same images, the radiologists use pointing overlay commands, or frames, to point out features on the images. Each workstation transfers these frames to the other workstation, so that an interactive session for diagnosis takes place. In this phase, we use fixed frames and variable-size frames that outline an object. The data packets for these frames traverse the national backbone in real time. We accomplish this feature by using TCP/IP protocol sockets for communications. The remote consultation and diagnosis operation has been tested in real time between the University Medical Center and the Bowman Gray School of Medicine at Wake Forest University, over the Internet. In this paper, we show the feasibility of the operation in a Global PACS environment. Future improvements to the system will include real-time voice and interactive compressed video scenarios.
Mapping Io's Surface Topography Using Voyager and Galileo Stereo Images and Photoclinometry
NASA Astrophysics Data System (ADS)
White, O. L.; Schenk, P.
2011-12-01
No instrumentation specifically designed to measure the topography of a planetary surface has ever been deployed to any of the Galilean satellites. Available methods that exist to perform such a task in the absence of the relevant instrumentation include photoclinometry, shadow length measurement, and stereo imaging. Stereo imaging is generally the most accurate of these methods, but is subject to limitations. Io is a challenging subject for stereo imaging given that much of its surface is comprised of volcanic plains, smooth at the resolution of many of the available global images. Radiation noise in Galileo images can also complicate mapping. Paterae, mountains and a few tall shield volcanoes, the only features of any considerable relief, exist as isolated features within these plains; previous research concerning topography measurement on Io using stereo imaging has focused on these features, and has been localized in its scope [Schenk et al., 2001; Schenk et al., 2004]. With customized ISIS software developed at LPI, it is the ultimate intention of our research to use stereo and photoclinometry processing of Voyager and Galileo images to create a global topographic map of Io that will constrain the shapes of local- and regional-scale features on this volcanic moon, and which will be tied to the global shape model of Thomas et al. [1998]. Applications of these data include investigation of how global heat flow varies across the moon and its relation to mantle convection and tidal heating [Tackley et al., 2001], as well as its correlation with local geology. Initial stereo mapping has focused on the Ra Patera/Euboea Montes/Acala Fluctus area, while initial photoclinometry mapping has focused on several paterae and calderas across Io. 
The results of both stereo and photoclinometry mapping have indicated that distinct topographic areas may correlate with surface geology. To date we have obtained diameter and depth measurements for ten calderas using these DEMs, and we look forward to studying regional and latitudinal variation in caldera depth. References Schenk, P.M., et al. (2001) J. Geophys. Res., 106, pp. 33,201-33,222. Schenk, P.M., et al. (2004) Icarus, 169, pp. 98-110. Tackley, P.J., et al. (2001) Icarus, 149, pp. 79-93. Thomas, P., et al. (1998) Icarus, 135, pp. 175-180. The authors acknowledge the support of the NASA Outer Planet Research and the Planetary Geology and Geophysics research programs.
The power of Kawaii: viewing cute images promotes a careful behavior and narrows attentional focus.
Nittono, Hiroshi; Fukushima, Michiko; Yano, Akihiro; Moriya, Hiroki
2012-01-01
Kawaii (a Japanese word meaning "cute") things are popular because they produce positive feelings. However, their effect on behavior remains unclear. In this study, three experiments were conducted to examine the effects of viewing cute images on subsequent task performance. In the first experiment, university students performed a fine motor dexterity task before and after viewing images of baby or adult animals. Performance indexed by the number of successful trials increased after viewing cute images (puppies and kittens; M ± SE=43.9 ± 10.3% improvement) more than after viewing images that were less cute (dogs and cats; 11.9 ± 5.5% improvement). In the second experiment, this finding was replicated by using a non-motor visual search task. Performance improved more after viewing cute images (15.7 ± 2.2% improvement) than after viewing less cute images (1.4 ± 2.1% improvement). Viewing images of pleasant foods was ineffective in improving performance (1.2 ± 2.1%). In the third experiment, participants performed a global-local letter task after viewing images of baby animals, adult animals, and neutral objects. In general, global features were processed faster than local features. However, this global precedence effect was reduced after viewing cute images. Results show that participants performed tasks requiring focused attention more carefully after viewing cute images. This is interpreted as the result of a narrowed attentional focus induced by the cuteness-triggered positive emotion that is associated with approach motivation and the tendency toward systematic processing. For future applications, cute objects may be used as an emotion elicitor to induce careful behavioral tendencies in specific situations, such as driving and office work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengel, S.K.; Morrison, D.B.
1985-01-01
Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, land cover, hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.
Implicit integration in a case of integrative visual agnosia.
Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo
2007-05-15
We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.
Crack image segmentation based on improved DBC method
NASA Astrophysics Data System (ADS)
Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing
2017-11-01
With the development of computer vision technology, crack detection based on digital image segmentation has attracted attention from researchers and transportation agencies worldwide. Because cracks exhibit random shapes and complex textures, reliable crack detection remains a challenge. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method estimates a fractal feature for every pixel from its neighborhood, considering contributions from all possible directions in the related block. The block moves one pixel at a time so that it covers every pixel in the crack image. Unlike the classic DBC method, which describes a fractal feature only for a whole region, this method achieves crack image segmentation from the fractal feature of each pixel. Experiments show that the proposed method achieves satisfactory crack detection results.
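The per-pixel differential box counting step can be sketched as follows. This is an illustrative NumPy version of the classic DBC idea only; the window size, scales, and gray-level count are our choices, not the paper's.

```python
import numpy as np

def dbc_fractal_map(img, window=7, scales=(2, 3)):
    """Per-pixel fractal feature via differential box counting (DBC).

    For each pixel, a window x window neighbourhood is examined at box
    scales r; the box count N_r comes from the local gray-level range of
    each r x r sub-block, and the fractal feature is the least-squares
    slope of log N_r versus log(1/r).
    """
    img = img.astype(float)
    G = 256.0                          # assumed number of gray levels
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    fd = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            log_n, log_inv_r = [], []
            for r in scales:
                box_h = r * G / window          # box height at scale r
                n_r = 0
                for bi in range(0, window - r + 1, r):
                    for bj in range(0, window - r + 1, r):
                        blk = patch[bi:bi + r, bj:bj + r]
                        n_r += int(blk.max() // box_h) - int(blk.min() // box_h) + 1
                log_n.append(np.log(n_r))
                log_inv_r.append(np.log(window / r))
            fd[i, j] = np.polyfit(log_inv_r, log_n, 1)[0]  # slope
    return fd
```

Flat plains yield a fractal dimension near 2 while rough crack textures yield higher values, which is what makes a per-pixel fractal feature usable for segmentation.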
Mars Odyssey from Two Distances in One Image
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Figure 1: Why There are Two Images of Odyssey NASA's Mars Odyssey spacecraft appears twice in the same frame in this image from the Mars Orbiter Camera aboard NASA's Mars Global Surveyor. The camera's successful imaging of Odyssey and of the European Space Agency's Mars Express in April 2005 produced the first pictures of any spacecraft orbiting Mars taken by another spacecraft orbiting Mars. Mars Global Surveyor and Mars Odyssey are both in nearly circular, near-polar orbits. Odyssey is in an orbit slightly higher than that of Global Surveyor in order to preclude the possibility of a collision. However, the two spacecraft occasionally come as close together as 15 kilometers (9 miles). The images were obtained by the Mars Global Surveyor operations teams at Lockheed Martin Space System, Denver; JPL and Malin Space Science Systems. The two views of Mars Odyssey in this image were acquired a little under 7.5 seconds apart as Odyssey receded from a close flyby of Mars Global Surveyor. The geometry of the flyby (see Figure 1) and the camera's way of acquiring an image line-by-line resulted in the two views of Odyssey in the same frame. The first view (right) was taken when Odyssey was about 90 kilometers (56 miles) from Global Surveyor and moving more rapidly than Global Surveyor was rotating, as seen from Global Surveyor. A few seconds later, Odyssey was farther away -- about 135 kilometers (84 miles) -- and appeared to be moving more slowly. In this second view of Odyssey (left), the Mars Orbiter Camera's field-of-view overtook Odyssey. The Mars Orbiter Camera can resolve features on the surface of Mars as small as a few meters or yards across from Mars Global Surveyor's orbital altitude of 350 to 405 kilometers (217 to 252 miles). From a distance of 100 kilometers (62 miles), the camera would be able to resolve features substantially smaller than 1 meter or yard across. 
Mars Odyssey was launched on April 7, 2001, and reached Mars on Oct. 24, 2001. Mars Global Surveyor left Earth on Nov. 7, 1996, and arrived in Mars orbit on Sept. 12, 1997. Both orbiters are in an extended mission phase, both have relayed data from the Mars Exploration Rovers, and both are continuing to return exciting new results from Mars. JPL, a division of the California Institute of Technology, Pasadena, manages both missions for NASA's Science Mission Directorate, Washington, D.C.
Large Impact Features on Saturn's Middle-sized Icy Satellites: Global Image Mosaics and Topography
NASA Astrophysics Data System (ADS)
Schenk, P. M.; Moore, J. M.; McKinnon, W. B.
2003-03-01
New topographic maps of Saturn's middle-sized icy satellites derived from stereo imaging and 2D photoclinometry provide a sneak peek at the surprises in store when Cassini arrives at Saturn. We reexamine the morphology of large impact craters and describe their relaxation state.
On the analysis of local and global features for hyperemia grading
NASA Astrophysics Data System (ADS)
Sánchez, L.; Barreira, N.; Sánchez, N.; Mosquera, A.; Pena-Verdeal, H.; Yebra-Pimentel, E.
2017-03-01
In optometry, hyperemia is the accumulation of blood flow in the conjunctival tissue. Dry eye syndrome and allergic conjunctivitis are two of its main causes. Its main symptom is a red hue in the eye that optometrists evaluate subjectively against a grading scale. In this paper, we propose an automatic approach to the problem of hyperemia grading in the bulbar conjunctiva. We compute several image features on images of the patients' eyes, analyse the relations among them by using feature selection techniques, and map each image's feature vector to a grading value in the appropriate range by means of machine learning techniques. We analyse different areas of the conjunctiva to evaluate their importance for the diagnosis. Our results show that it is possible to mimic the experts' behaviour through the proposed approach.
New features in Saturn's atmosphere revealed by high-resolution thermal infrared images
NASA Technical Reports Server (NTRS)
Gezari, D. Y.; Mumma, M. J.; Espenak, F.; Deming, D.; Bjoraker, G.; Woods, L.; Folz, W.
1989-01-01
Observations of the stratospheric IR emission structure on Saturn are presented. The high-spatial-resolution global images show a variety of new features, including a narrow equatorial belt of enhanced emission at 7.8 micron, a prominent symmetrical north polar hotspot at all three wavelengths, and a midlatitude structure which is asymmetrically brightened at the east limb. The results confirm the polar brightening and reversal in position predicted by recent models for seasonal thermal variations of Saturn's stratosphere.
NASA Technical Reports Server (NTRS)
2005-01-01
16 May 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows cross-cutting fault scarps among graben features in northern Tempe Terra. Graben form in regions where the crust of the planet has been extended; such features are common in the regions surrounding the vast 'Tharsis Bulge' on Mars. Location near: 43.7°N, 90.2°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Summer
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior-knowledge based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussians template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method that addresses the different contributions of each feature map is proposed to calculate the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
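The difference-of-Gaussians template for the intensity channel can be sketched in plain NumPy as below. The sigmas and the min-max normalization are our illustrative choices; the abstract does not give the paper's exact parameters.

```python
import numpy as np

def _gauss1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def _blur(img, sigma):
    """Separable Gaussian blur with edge padding."""
    k = _gauss1d(sigma)
    pad = len(k) // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, tmp)

def dog_intensity_saliency(lightness, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround intensity conspicuity via a difference-of-Gaussians
    template: a fine-scale blur minus a coarse-scale blur highlights
    regions that stand out from their surround."""
    diff = np.abs(_blur(lightness, sigma_center) - _blur(lightness, sigma_surround))
    rng = diff.max() - diff.min()
    return (diff - diff.min()) / rng if rng > 0 else diff
```

Applied to the L* (lightness) channel, isolated bright or dark structures receive high saliency while uniform background is suppressed.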
Mars Global Surveyor Approach Image
1997-07-04
This image is the first view of Mars taken by the Mars Global Surveyor Orbiter Camera (MOC). It was acquired the afternoon of July 2, 1997 when the MGS spacecraft was 17.2 million kilometers (10.7 million miles) and 72 days from encounter. At this distance, the MOC's resolution is about 64 km per picture element, and the 6800 km (4200 mile) diameter planet is 105 pixels across. The observation was designed to show the Mars Pathfinder landing site at 19.4°N, 33.1°W approximately 48 hours prior to landing. The image shows the north polar cap of Mars at the top of the image, the dark feature Acidalia Planitia in the center with the brighter Chryse plain immediately beneath it, and the highland areas along the Martian equator including the canyons of the Valles Marineris (which are bright in this image owing to atmospheric dust). The dark features Terra Meridiani and Terra Sabaea can be seen at the 4 o'clock position, and the south polar hood (atmospheric fog and hazes) can be seen at the bottom of the image. Launched on November 7, 1996, Mars Global Surveyor will enter Mars orbit on Thursday, September 11 shortly after 6:00 PM PDT. After Mars Orbit Insertion, the spacecraft will use atmospheric drag to reduce the size of its orbit, achieving a circular orbit only 400 km (248 mi) above the surface in early March 1998, when mapping operations will begin. http://photojournal.jpl.nasa.gov/catalog/PIA00606
Retinal slit lamp video mosaicking.
De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael
2016-06-01
To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. The use of slit lamp images for documentation and analysis thus remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results were compared with state-of-the-art methods and rated by ophthalmologists, who showed a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
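The feathering idea in the blending stage can be sketched as follows: each frame's pixels are weighted by their distance to the frame border, so overlapping frames fade into each other instead of leaving hard seams. This is a minimal sketch of blending only; registration offsets are assumed known, and the paper's version is additionally foreground-aware (all names here are ours).

```python
import numpy as np

def feather_weights(shape):
    """Weights for a rectangular frame: 1 at the border, growing toward
    the interior (Chebyshev-style distance to the nearest edge)."""
    h, w = shape
    rows = np.minimum(np.arange(1, h + 1), np.arange(h, 0, -1))[:, None]
    cols = np.minimum(np.arange(1, w + 1), np.arange(w, 0, -1))[None, :]
    return np.minimum(rows, cols).astype(float)

def feather_blend(frames, offsets, canvas_shape):
    """Accumulate registered frames into one mosaic as a weighted mean,
    using the feather weights of each contributing frame."""
    acc = np.zeros(canvas_shape, float)
    wsum = np.zeros(canvas_shape, float)
    for frame, (dy, dx) in zip(frames, offsets):
        w = feather_weights(frame.shape)
        h, ww = frame.shape
        acc[dy:dy + h, dx:dx + ww] += frame * w
        wsum[dy:dy + h, dx:dx + ww] += w
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
```

In the overlap zone the output is a smooth mixture of both frames; away from any overlap each frame passes through unchanged.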
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
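A simplified, isotropic version of the per-pixel morphological profile can be sketched with grey openings and closings at growing structuring-element sizes. The paper's profile also varies orientation; this NumPy sketch omits that and uses square flat structuring elements of our choosing.

```python
import numpy as np

def _rank_filter(img, size, func):
    """Brute-force sliding-window min/max filter with edge padding."""
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.empty(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = func(p[i:i + size, j:j + size])
    return out

def scale_morphological_profile(img, scales=(3, 5, 7)):
    """Stack, per pixel, the grey opening (erosion then dilation) and
    grey closing (dilation then erosion) at each scale; the resulting
    vector characterizes bright/dark structure size around the pixel."""
    layers = []
    for s in scales:
        opening = _rank_filter(_rank_filter(img, s, np.min), s, np.max)
        closing = _rank_filter(_rank_filter(img, s, np.max), s, np.min)
        layers.append(opening)
        layers.append(closing)
    return np.stack(layers, axis=-1)
```

Pixels whose profile changes sharply across scales sit on structures of a definite size, which is what makes such profiles useful for selecting distinctive landmark chips.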
Semantic and topological classification of images in magnetically guided capsule endoscopy
NASA Astrophysics Data System (ADS)
Mewes, P. W.; Rennert, P.; Juloski, A. L.; Lalande, A.; Angelopoulou, E.; Kuth, R.; Hornegger, J.
2012-03-01
Magnetically-guided capsule endoscopy (MGCE) is a nascent technology whose goal is to allow the steering of a capsule endoscope inside a water-filled stomach through an external magnetic field. We developed a classification cascade for MGCE images which groups images into semantic and topological categories. Results can be used in a post-procedure review or as a starting point for algorithms classifying pathologies. The first, semantic classification step discards over-/under-exposed images as well as images with a large amount of debris. The second, topological classification step groups images with respect to their position in the upper gastrointestinal tract (mouth, esophagus, stomach, duodenum). In the third stage, two parallel classification steps distinguish topologically different regions inside the stomach (cardia, fundus, pylorus, antrum, peristaltic view). For image classification, global image features and local texture features were applied and their performance was evaluated. We show that the third classification step can be improved by bubble and debris segmentation because it limits feature extraction to discriminative areas only. We also investigated the impact of segmenting intestinal folds on the identification of different semantic camera positions. The results of classification with a support vector machine show the significance of color histogram features for the classification of corrupted images (97%). Features extracted from intestinal fold segmentation led only to a minor improvement (3%) in discriminating different camera positions.
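A global color histogram of the kind found discriminative for corrupted frames can be sketched as below; the bin count and normalization are our choices, not the paper's.

```python
import numpy as np

def color_histogram_feature(img_rgb, bins=8):
    """Global joint RGB color histogram, L1-normalized so frames of
    different sizes are comparable. Over-/under-exposed frames collapse
    mass into a few extreme-color bins, which a classifier can detect."""
    hist, _ = np.histogramdd(img_rgb.reshape(-1, 3).astype(float),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()
```

The resulting 512-dimensional vector can be fed directly to a support vector machine alongside local texture features.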
Uranus' Persistent Patterns and Features from High-SNR Imaging in 2012-2014
NASA Astrophysics Data System (ADS)
Fry, Patrick M.; Sromovsky, Lawrence A.; de Pater, Imke; Hammel, Heidi B.; Marcus, Phillip
2015-11-01
Since 2012, Uranus has been the subject of an observing campaign utilizing high signal-to-noise imaging techniques at Keck Observatory (Fry et al. 2012, Astron. J. 143, 150-161). High quality observing conditions on four observing runs of consecutive nights allowed longitudinally-complete coverage of the atmosphere over a period of two years (Sromovsky et al. 2015, Icarus 258, 192-223). Global mosaic maps made from images acquired on successive nights in August 2012, November 2012, August 2013, and August 2014, show persistent patterns, and six easily distinguished long-lived cloud features, which we were able to track for long periods that ranged from 5 months to over two years. Two at similar latitudes are associated with dark spots, and move with the atmospheric zonal flow close to the location of their associated dark spot instead of following the flow at the latitude of the bright features. These features retained their morphologies and drift rates in spite of several close interactions. A second pair of features at similar latitudes also survived several close approaches. Several of the long-lived features also exhibited equatorward drifts and latitudinal oscillations. Also persistent are a remarkable near-equatorial wave feature and global zonal band structure. We will present imagery, maps, and analyses of these phenomena. PMF and LAS acknowledge support from the NASA Planetary Astronomy Program; PMF and LAS acknowledge funding and technical support from W. M. Keck Observatory. We thank those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests. Without their generous hospitality none of our groundbased observations would have been possible.
2017-07-14
On July 14, 2015, NASA's New Horizons spacecraft made its historic flight through the Pluto system. This detailed, high-quality global mosaic of Pluto's largest moon, Charon, was assembled from nearly all of the highest-resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) on New Horizons. The mosaic is the most detailed and comprehensive global view yet of Charon's surface using New Horizons data. It includes topography data of the hemisphere visible to New Horizons during the spacecraft's closest approach. The topography is derived from digital stereo-image mapping tools that measure the parallax -- or the difference in the apparent relative positions -- of features on the surface obtained at different viewing angles during the encounter. Scientists use these parallax displacements of high and low terrain to estimate landform heights. The global mosaic has been overlain with transparent, colorized topography data wherever on the surface stereo data is available. Terrain south of about 30°S was in darkness leading up to and during the flyby, so is shown in black. All feature names on Pluto and Charon are informal. Standing out on Charon is the Caleuche Chasma ("C") in the far north, an enormous trough at least 350 kilometers (nearly 220 miles) long, and reaching 14 kilometers (8.5 miles) deep -- more than seven times as deep as the Grand Canyon. https://photojournal.jpl.nasa.gov/catalog/PIA21860
NASA Astrophysics Data System (ADS)
Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric
2011-03-01
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying the image based on features such as the complexity of the background and the visibility of the disease (lesions). Therefore, an automatic medical background classification tool for mammograms would help with such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework that was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. Classification results for one "strong classifier" show good accuracy with high true-positive rates. For the four categories the results are: TP=90.38%, TN=67.88%, FP=32.12% and FN=9.62%.
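The AdaBoost iteration that turns "weak classifiers" into a "strong classifier" can be sketched generically as follows. This is a textbook sketch using one-feature threshold stumps as the weak learners, not the MCA framework's decision-tree implementation; labels are assumed to be +/-1.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Boost threshold stumps (each barely better than chance) into a
    weighted-vote strong classifier by reweighting the training samples
    each round toward the previous round's mistakes."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # this stump's vote weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # upweight the mistakes
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    """Strong classifier: sign of the alpha-weighted vote of all stumps."""
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)
```

Each round's error is measured under the current sample weights, so later stumps specialize on the cases earlier stumps got wrong; the low-error combined vote is the "strong classifier" of the abstract.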
Modeling global scene factors in attention
NASA Astrophysics Data System (ADS)
Torralba, Antonio
2003-07-01
Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows it to adapt to the classic inverted file structure for box indexing. The inverted file, which stores each bit-vector and the ID of the box where the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
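The bit-vector-plus-inverted-file idea can be sketched as follows. The quantization here is a simple per-dimension thresholding and the descriptors are tiny, purely for illustration; the paper's binary quantization of real 128-D SIFT descriptors is more elaborate.

```python
import numpy as np

def binary_quantize(desc, thresholds):
    """Threshold each descriptor dimension to one bit and return the
    resulting bit-vector as a hashable bytes key."""
    return (np.asarray(desc) > thresholds).astype(np.uint8).tobytes()

def build_inverted_file(box_descriptors, thresholds):
    """Inverted file: each bit-vector code maps to the IDs of the boxes
    whose local features quantize to that code."""
    index = {}
    for box_id, descs in box_descriptors.items():
        for d in descs:
            index.setdefault(binary_quantize(d, thresholds), []).append(box_id)
    return index

def query(index, desc, thresholds):
    """Candidate boxes whose features share the query feature's code."""
    return index.get(binary_quantize(desc, thresholds), [])
```

Because lookup is a single hash probe per query feature, the whole index can live in main memory and searches stay fast even with many boxes per image.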
Feature selection from a facial image for distinction of sasang constitution.
Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho
2009-09-01
Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61,250 and 749,700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the nine distance, ten angle and ten distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown.
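The ANOVA-based selection criterion can be sketched per feature as follows: a one-way F statistic compares between-group and within-group variance of that feature across the constitution groups. This is the standard one-way ANOVA computation shown for illustration; the study additionally removes outliers and handles missing data first.

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for one facial feature measured in k
    groups: mean square between groups over mean square within groups.
    Large F suggests the feature separates the groups."""
    groups = [np.asarray(g, float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Ranking all candidate features by this statistic (or the associated p-value) is one way to whittle hundreds of thousands of distance, angle and ratio features down to a handful of significant ones.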
NASA Astrophysics Data System (ADS)
Beebe, R. F.; Ingersoll, A. P.; Hunt, G. E.; Mitchell, J. L.; Muller, J.-P.
1980-01-01
Voyager 1 narrow-angle images were used to obtain displacements of features down to 100 to 200 km in size over intervals of 10 hours. A global map of velocity vectors and longitudinally averaged zonal wind vectors as functions of latitude are presented and discussed.
Reconstruction of biofilm images: combining local and global structural parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk
2014-10-20
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with completely different appearances can have nearly identical structural parameters, and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.
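Why a global parameter alone cannot uniquely define a biofilm is easy to demonstrate: two binarized images can share the same areal porosity yet differ in an orientation-sensitive local measure. The two parameters below are our illustrative stand-ins, not the paper's exact parameter set.

```python
import numpy as np

def areal_porosity(img):
    """Classic global parameter: fraction of void pixels in a binarized
    top-down biofilm image (1 = biomass, 0 = void)."""
    return 1.0 - img.mean()

def mean_run_length(img, axis=1):
    """Orientation-sensitive local parameter: average length of
    contiguous biomass runs along one axis."""
    lines = img if axis == 1 else img.T
    runs = []
    for line in lines:
        length = 0
        for v in line:
            if v:
                length += 1
            elif length:
                runs.append(length)
                length = 0
        if length:
            runs.append(length)
    return float(np.mean(runs)) if runs else 0.0
```

A vertical biomass band and a set of horizontal stripes can have identical porosity while their horizontal run lengths differ sharply; that difference is the kind of local, orientational detail the expanded parameter set captures.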
Feature-Based Morphometry: Discovering Group-related Anatomical Patterns
Toews, Matthew; Wells, William; Collins, D. Louis; Arbel, Tal
2015-01-01
This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer's (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1). PMID:19853047
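FBM quantifies feature significance in terms of the false discovery rate. A generic Benjamini-Hochberg FDR procedure (a common choice, sketched here on made-up p-values; the paper's own implementation may differ) looks like this:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Indices of features declared significant at false discovery rate q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    # Find the largest k with p_(k) <= (k/m) * q, then keep the k smallest p-values.
    thresh = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresh
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.where(below)[0])
    return order[:k + 1]

# Hypothetical per-feature p-values from a group-comparison test.
pvals = [0.001, 0.20, 0.03, 0.004, 0.9, 0.5]
sig = benjamini_hochberg(pvals, q=0.05)
```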
NASA Astrophysics Data System (ADS)
Deng, S.; Katoh, M.; Takenaka, Y.; Cheung, K.; Ishii, A.; Fujii, N.; Gao, T.
2017-10-01
This study attempted to classify three coniferous and ten broadleaved tree species by combining airborne laser scanning (ALS) data and multispectral images. The study area, located in Nagano, central Japan, is within the broadleaved forests of the Afan Woodland area. A total of 235 trees were surveyed in 2016, and we recorded the species, DBH, and tree height. The geographical position of each tree was collected using a Global Navigation Satellite System (GNSS) device. Tree crowns were manually detected using GNSS position data, field photographs, true-color orthoimages with three bands (red-green-blue, RGB), 3D point clouds, and a canopy height model derived from ALS data. Then a total of 69 features, including 27 image-based and 42 point-based features, were extracted from the RGB images and the ALS data to classify tree species. Finally, the detected tree crowns were classified into two classes for the first level (coniferous and broadleaved trees), four classes for the second level (Pinus densiflora, Larix kaempferi, Cryptomeria japonica, and broadleaved trees), and 13 classes for the third level (three coniferous and ten broadleaved species), using the 27 image-based features, 42 point-based features, all 69 features, and the best combination of features identified using a neighborhood component analysis algorithm, respectively. The overall classification accuracies reached 90 % at the first and second levels but less than 60 % at the third level. The classifications using the best combinations of features had higher accuracies than those using the image-based and point-based features and the combination of all of the 69 features.
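Selecting a "best combination" of the 69 features can be illustrated with a generic wrapper-style selector: greedily add whichever feature most improves leave-one-out 1-nearest-neighbour accuracy. This is a stand-in sketch on synthetic data, not the neighborhood component analysis algorithm the study actually used.

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbour classification accuracy."""
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)          # exclude each point as its own neighbour
    return (y[D.argmin(1)] == y).mean()

def greedy_select(X, y, k):
    """Greedily add the feature that most improves 1-NN accuracy."""
    chosen = []
    for _ in range(k):
        best_j, best_acc = None, -1.0
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            acc = loo_1nn_accuracy(X[:, chosen + [j]], y)
            if acc > best_acc:
                best_j, best_acc = j, acc
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 40)                # two hypothetical tree-species classes
X = rng.normal(size=(80, 6))             # six candidate features
X[:, 2] += 3.0 * y                       # feature 2 actually separates the classes
chosen = greedy_select(X, y, k=2)
```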
A new CAD approach for improving efficacy of cancer screening
NASA Astrophysics Data System (ADS)
Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen
2015-03-01
Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of this opinion paper is to share our vision and stimulate discussion in the CAD research community of how to overcome or compensate for the limitations of current lesion-detection-based CAD schemes. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power that is supplementary to lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. A global case-based CAD scheme only warns of the risk level of a case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false-positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme's performance but also its integration with lesion-based CAD schemes in clinical practice.
Sousa, Daniel; Small, Christopher
2018-02-14
Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system.
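Question (1), how much spectral dimensionality survives in multispectral data, amounts to comparing partitions of variance. The sketch below simulates a low-dimensional "hyperspectral" scene, band-averages it into broad "multispectral" channels, and compares the variance captured by the top principal components. All data here are synthetic; the study itself used AVIRIS flight lines and coincident Landsat 8 scenes.

```python
import numpy as np

def explained_variance(X, k):
    """Fraction of total variance captured by the top-k principal components."""
    Xc = X - X.mean(0)
    ev = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending eigenvalues
    return ev[:k].sum() / ev.sum()

rng = np.random.default_rng(3)
# Hypothetical "hyperspectral" pixels: 200 bands driven by 3 latent factors + noise.
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 200))
hyper = latent @ mixing + 0.05 * rng.normal(size=(1000, 200))
# Simulated "multispectral" bands: averages of contiguous hyperspectral bands.
multi = hyper.reshape(1000, 8, 25).mean(axis=2)

hv = explained_variance(hyper, 3)
mv = explained_variance(multi, 3)
```

When the scene is genuinely low-dimensional, both sensors report nearly the same partition of variance, mirroring the paper's finding.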
The Importance of Chaos and Lenticulae on Europa for the JIMO Mission
NASA Technical Reports Server (NTRS)
Spaun, Nicole A.
2003-01-01
The Galileo Solid State Imaging (SSI) experiment provided high-resolution images of Europa's surface allowing identification of surface features barely distinguishable at Voyager's resolution. SSI revealed the visible pitting on Europa's surface to be due to large disrupted features, chaos, and smaller sub-circular patches, lenticulae. Chaos features contain a hummocky matrix material and commonly contain dislocated blocks of ridged plains. Lenticulae are morphologically interrelated and can be divided into three classes: domes, spots, and micro-chaos. Domes are broad, upwarped features that generally do not disrupt the texture of the ridged plains. Spots are areas of low albedo that are generally smooth in texture compared to other units. Micro-chaos are disrupted features with a hummocky matrix material, resembling that observed within chaos regions. Chaos and lenticulae are ubiquitous in the SSI regional map observations, which average approximately 200 meters per pixel (m/pxl) in resolution, and appear in several of the ultra-high resolution, i.e., better than 50 m/pxl, images of Europa as well. SSI also provided a number of multi-spectral observations of chaos and lenticulae. Using this dataset we have undertaken a thorough study of the morphology, size, spacing, stratigraphy, and color of chaos and lenticulae to determine their properties and evaluate models of their formation. Geological mapping indicates that chaos and micro-chaos have a similar internal morphology of in-situ degradation, suggesting that a similar process was operating during their formation. The size distribution denotes a dominant size of 4-8 km in diameter for features containing hummocky material (i.e., chaos and micro-chaos). Results indicate a dominant spacing of 15-36 km. Chaos and lenticulae are generally among the youngest features stratigraphically observed on the surface, suggesting a recent change in resurfacing style.
Also, the reddish non-icy materials on Europa's surface are concentrated in many chaos and lenticulae features. Nonetheless, a complete global map of the distribution of chaos and lenticulae is not possible with the SSI dataset. Only <20% of the surface has been imaged at 200 m/pxl or better resolution, mostly in the near-equatorial regions. Color and ultra-high-resolution images have much less surface coverage. Thus we suggest that full global imaging of Europa at 200 m/pxl or better resolution, preferably in multi-spectral wavelengths, should be a high priority for the JIMO mission.
NASA Technical Reports Server (NTRS)
2005-01-01
14 August 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a circular depression and a suite of eroding mesas of carbon dioxide. These features occur in the south polar residual cap of Mars. The eroding carbon dioxide creates landforms reminiscent of 'Swiss cheese.' The circular feature might indicate the location of a filled, buried impact crater. Location near: 86.8oS, 111.0oW Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Spring
From Pluto Mountains to Its Plains
2015-09-24
Images of Pluto taken by NASA's New Horizons spacecraft before closest approach on July 14, 2015, reveal features as small as 270 yards (250 meters) across, from craters to faulted mountain blocks to the textured surface of the vast basin informally called Sputnik Planum. Enhanced color has been added from the global color image. This image is about 330 miles (530 kilometers) across. http://photojournal.jpl.nasa.gov/catalog/PIA19955
MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification
NASA Astrophysics Data System (ADS)
Lin, Daoyu; Fu, Kun; Wang, Yang; Xu, Guangluan; Sun, Xian
2017-11-01
With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional neural networks (CNNs). However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we proposed an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model G and a discriminative model D. We treat D as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge the mid-level and global features. G can produce numerous images that are similar to the training data; therefore, D can learn better representations of remotely sensed images using the training data provided by G. The classification results on two widely used remote sensing image databases show that the proposed method significantly improves the classification performance compared with other state-of-the-art methods.
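The fusion-layer idea, merging mid-level and top-level (global) feature maps from the discriminator into a single descriptor, reduces in its simplest form to pooling each map and concatenating. A minimal numpy sketch with made-up layer shapes (the real model is a trained convolutional network):

```python
import numpy as np

def global_avg_pool(fmap):
    """Collapse a (C, H, W) feature map to a C-dimensional vector."""
    return fmap.mean(axis=(1, 2))

def fusion_features(mid_fmap, top_fmap):
    """Concatenate pooled mid-level and top-level feature maps, mimicking a
    discriminator used as a feature extractor with a fusion layer."""
    return np.concatenate([global_avg_pool(mid_fmap), global_avg_pool(top_fmap)])

mid = np.random.default_rng(4).normal(size=(64, 16, 16))   # mid-level maps
top = np.random.default_rng(5).normal(size=(256, 4, 4))    # global maps
feat = fusion_features(mid, top)                           # 64 + 256 = 320 dims
```

The fused vector would then feed a conventional classifier (e.g. an SVM or logistic regression).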
Surveying the Newly Digitized Apollo Metric Images for Highland Fault Scarps on the Moon
NASA Astrophysics Data System (ADS)
Williams, N. R.; Pritchard, M. E.; Bell, J. F.; Watters, T. R.; Robinson, M. S.; Lawrence, S.
2009-12-01
The presence and distribution of thrust faults on the Moon have major implications for lunar formation and thermal evolution. For example, thermal history models for the Moon imply that most of the lunar interior was initially hot. As the Moon cooled over time, some models predict global-scale thrust faults should form as stress builds from global thermal contraction. Large-scale thrust fault scarps with lengths of hundreds of kilometers and maximum relief of up to a kilometer or more, like those on Mercury, are not found on the Moon; however, relatively small-scale linear and curvilinear lobate scarps with maximum lengths typically around 10 km have been observed in the highlands [Binder and Gunga, Icarus, v63, 1985]. These small-scale scarps are interpreted to be thrust faults formed by contractional stresses with relatively small maximum (tens of meters) displacements on the faults. These narrow, low relief landforms could only be identified in the highest resolution Lunar Orbiter and Apollo Panoramic Camera images and under the most favorable lighting conditions. To date, the global distribution and other properties of lunar lobate faults are not well understood. The recent micron-resolution scanning and digitization of the Apollo Mapping Camera (Metric) photographic negatives [Lawrence et al., NLSI Conf. #1415, 2008; http://wms.lroc.asu.edu/apollo] provides a new dataset to search for potential scarps. We examined more than 100 digitized Metric Camera image scans, and from these identified 81 images with favorable lighting (incidence angles between about 55 and 80 deg.) to manually search for features that could be potential tectonic scarps. 
Previous surveys based on Panoramic Camera and Lunar Orbiter images found fewer than 100 lobate scarps in the highlands; in our Apollo Metric Camera image survey, we have found additional regions with one or more previously unidentified linear and curvilinear features on the lunar surface that may represent lobate thrust fault scarps. In this presentation we review the geologic characteristics and context of these newly-identified, potentially tectonic landforms. The lengths and relief of some of these linear and curvilinear features are consistent with previously identified lobate scarps. Most of these features are in the highlands, though a few occur along the edges of mare and/or crater ejecta deposits. In many cases the resolution of the Metric Camera frames (~10 m/pix) is not adequate to unequivocally determine the origin of these features. Thus, to assess if the newly identified features have tectonic or other origins, we are examining them in higher-resolution Panoramic Camera (currently being scanned) and Lunar Reconnaissance Orbiter Camera Narrow Angle Camera images [Watters et al., this meeting, 2009].
Doppler Imaging of Exoplanets and Brown Dwarfs
NASA Astrophysics Data System (ADS)
Crossfield, I.; Biller, B.; Schlieder, J.; Deacon, N.; Bonnefoy, M.; Homeier, D.; Allard, F.; Buenzli, E.; Henning, T.; Brandner, W.; Goldman, Bertr; Kopytova, T.
2014-03-01
Doppler Imaging produces 2D global maps. When applied to cool planets or more massive brown dwarfs, it can map atmospheric features and track global weather patterns. The first substellar map, of the 2 pc-distant brown dwarf Luhman 16B (Crossfield et al. 2014), revealed patchy regions of thin & thick clouds. Here, I investigate the feasibility of future Doppler Imaging of additional objects. Searching the literature, I find that all 3 of P, v sin i, and variability are published for 22 brown dwarfs. At least one datum exists for 333 targets. The sample is very incomplete below ~L5; we need more surveys to find the best targets for Doppler Imaging! I estimate limiting magnitudes for Doppler Imaging with various high-resolution near-infrared spectrographs. Only a handful of objects - at the M/L and L/T transitions - can be mapped with current tools. Large telescopes such as TMT and GMT will allow Doppler Imaging of many dozens of brown dwarfs and the brightest exoplanets. More targets beyond type L5 likely remain to be found. Future observations will let us probe the global atmospheric dynamics of many diverse objects.
Salient region detection by fusing bottom-up and top-down features extracted from a single image.
Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng
2014-10-01
Recently, some global contrast-based salient region detection models have been proposed based only on the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of the final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree of scattering and eccentricities of the feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention in the image, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms state-of-the-art models on three publicly available data sets.
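The adaptive-weight idea can be sketched generically: weight each feature map inversely to the spatial scatter of its mass, so a compact (confident) map dominates a diffuse one. This is an illustrative simplification on toy maps, not the paper's exact weighting or its depth-from-focus term.

```python
import numpy as np

def scatter(feat_map):
    """Spatial spread of a map's mass: mean squared distance to its centroid."""
    h, w = feat_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m = feat_map / feat_map.sum()
    cy, cx = (m * ys).sum(), (m * xs).sum()
    return (m * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum()

def fuse(maps):
    """Weight each feature map inversely to its scatter, then combine."""
    w = np.array([1.0 / (scatter(m) + 1e-9) for m in maps])
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, maps))

# Hypothetical color-contrast and orientation-contrast maps (both unit mass).
a = np.zeros((32, 32)); a[14:18, 14:18] = 1.0   # compact map -> trusted more
b = np.ones((32, 32))                           # diffuse map -> down-weighted
s = fuse([a / a.sum(), b / b.sum()])
```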
The First Global Geological Map of Mercury
NASA Astrophysics Data System (ADS)
Prockter, L. M.; Head, J. W., III; Byrne, P. K.; Denevi, B. W.; Kinczyk, M. J.; Fassett, C.; Whitten, J. L.; Thomas, R.; Ernst, C. M.
2015-12-01
Geological maps are tools with which to understand the distribution and age relationships of surface geological units and structural features on planetary surfaces. Regional and limited global mapping of Mercury has already yielded valuable science results, elucidating the history and distribution of several types of units and features, such as regional plains, tectonic structures, and pyroclastic deposits. To date, however, no global geological map of Mercury exists, and there is currently no commonly accepted set of standardized unit descriptions and nomenclature. With MESSENGER monochrome image data, we are undertaking the global geological mapping of Mercury at the 1:15M scale, applying standard U.S. Geological Survey mapping guidelines. This map will enable the development of the first global stratigraphic column of Mercury, will facilitate comparisons among surface units distributed discontinuously across the planet, and will provide guidelines for mappers so that future mapping efforts will be consistent and broadly interpretable by the scientific community. To date we have incorporated three major datasets into the global geological map: smooth plains units, tectonic structures, and impact craters and basins >20 km in diameter. We have classified most of these craters by relative age on the basis of the state of preservation of morphological features and standard classification schemes first applied to Mercury by the Mariner 10 imaging team. Additional datasets to be incorporated include intercrater plains units and crater ejecta deposits. In some regions, MESSENGER color data are used to supplement the monochrome data to help elucidate different plains units. The final map will be published online, together with a peer-reviewed publication. Further, a digital version of the map, containing individual map layers, will be made publicly available for use within geographic information systems (GISs).
2015-07-25
Four images from NASA's New Horizons' Long Range Reconnaissance Imager (LORRI) were combined with color data from the Ralph instrument to create this global view of Pluto. (The lower right edge of Pluto in this view currently lacks high-resolution color coverage.) The images, taken when the spacecraft was 280,000 miles (450,000 kilometers) away, show features as small as 1.4 miles (2.2 kilometers), twice the resolution of the single-image view taken on July 13. http://photojournal.jpl.nasa.gov/catalog/PIA19857
NASA Technical Reports Server (NTRS)
2004-01-01
9 September 2004 Northeastern Arabia Terra is a heavily eroded portion of the martian cratered highlands. Layered rock, containing filled and buried valleys and ancient impact craters, has been eroded such that these once-buried features are now partially exposed at the martian surface. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an example of a field of circular and somewhat circular features that once were impact craters that were subsequently filled, buried, then exhumed to form the patterns exhibited here. The image is located near 25.6oN, 290.2oW. The image covers an area approximately 3 km (1.9 mi) across and is illuminated by sunlight from the lower left.
Travel time tomography with local image regularization by sparsity constrained dictionary learning
NASA Astrophysics Data System (ADS)
Bianco, M.; Gerstoft, P.
2017-12-01
We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches, and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch-level solutions that fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch-level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous seismic studies in which dictionaries of wavelet functions regularized the inversion. We further exploit the redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely but irregularly sampled synthetic seismic images.
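Step 2 of the algorithm, fitting a patch as a sparse combination of dictionary atoms, is commonly done with orthogonal matching pursuit. The sketch below uses a random (unlearned) dictionary and a synthetic two-atom patch purely for illustration; it is not the authors' implementation.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x with up to k dictionary atoms."""
    residual, idx = x.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

rng = np.random.default_rng(6)
D = rng.normal(size=(25, 40))          # atoms for 5x5 slowness patches
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x = 2.0 * D[:, 7] - 1.0 * D[:, 19]     # a patch built from two atoms
code = omp(D, x, k=2)
```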
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database, highlighting the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
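The core of a bag-of-words retrieval backend is: quantize each image's local descriptors against a visual vocabulary, build a normalized word histogram, and rank the database by histogram similarity. A toy numpy sketch (random vocabulary and descriptors standing in for a trained vocabulary over SIFT-like features; real systems add inverted files, tf-idf, and geometric verification):

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    """Quantize local descriptors to nearest visual word; return an L2-normalized
    term-frequency histogram."""
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def rank(query_hist, db_hists):
    """Rank database images by cosine similarity (histograms are unit norm)."""
    return np.argsort(-(db_hists @ query_hist))

rng = np.random.default_rng(7)
vocab = rng.normal(size=(50, 8))                    # 50 visual words, 8-D descriptors
db = [rng.normal(size=(100, 8)) for _ in range(5)]  # 5 database "images"
db_hists = np.stack([bow_histogram(d, vocab) for d in db])
query = db[3] + 0.01 * rng.normal(size=(100, 8))    # noisy view of image 3
order = rank(bow_histogram(query, vocab), db_hists)
```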
NASA Technical Reports Server (NTRS)
2004-01-01
This map of the Mars Exploration Rover Opportunity's new neighborhood at Meridiani Planum, Mars, shows the surface features used to locate the rover. By imaging these 'bumps' on the horizon from the perspective of the rover, mission members were able to pin down the rover's precise location. The image consists of data from the Mars Global Surveyor orbiter, the Mars Odyssey orbiter and the descent image motion estimation system located on the bottom of the rover.
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
Visualizing Vector Fields Using Line Integral Convolution and Dye Advection
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu
1996-01-01
We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
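A bare-bones line integral convolution, averaging a noise texture along streamlines with unit Euler steps, can be written in a few dozen lines. This toy version (no dye, no volume rendering, naive nearest-pixel sampling) only illustrates the principle behind the paper's technique.

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Minimal line integral convolution: average a noise texture along
    streamlines of the (vx, vy) field, tracing both directions."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, n = 0.0, 0
            for sign in (1.0, -1.0):
                y, x = float(i), float(j)
                for _ in range(length):
                    yi, xi = int(round(y)), int(round(x))
                    if not (0 <= yi < h and 0 <= xi < w):
                        break
                    acc += noise[yi, xi]; n += 1
                    v = np.hypot(vx[yi, xi], vy[yi, xi])
                    if v == 0:
                        break
                    x += sign * vx[yi, xi] / v   # unit step along the field
                    y += sign * vy[yi, xi] / v
            out[i, j] = acc / max(n, 1)
    return out

rng = np.random.default_rng(8)
noise = rng.random((40, 40))
vx = np.ones((40, 40)); vy = np.zeros((40, 40))   # uniform horizontal flow
img = lic(vx, vy, noise)                          # streaks smear along rows
```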
Mars Global Surveyor Approach Image
NASA Technical Reports Server (NTRS)
1997-01-01
This image is the first view of Mars taken by the Mars Global Surveyor Orbiter Camera (MOC). It was acquired the afternoon of July 2, 1997 when the MGS spacecraft was 17.2 million kilometers (10.7 million miles) and 72 days from encounter. At this distance, the MOC's resolution is about 64 km per picture element, and the 6800 km (4200 mile) diameter planet is 105 pixels across. The observation was designed to show the Mars Pathfinder landing site at 19.4 N, 33.1 W approximately 48 hours prior to landing. The image shows the north polar cap of Mars at the top of the image, the dark feature Acidalia Planitia in the center with the brighter Chryse plain immediately beneath it, and the highland areas along the Martian equator including the canyons of the Valles Marineris (which are bright in this image owing to atmospheric dust). The dark features Terra Meridiani and Terra Sabaea can be seen at the 4 o'clock position, and the south polar hood (atmospheric fog and hazes) can be seen at the bottom of the image. Launched on November 7, 1996, Mars Global Surveyor will enter Mars orbit on Thursday, September 11 shortly after 6:00 PM PDT. After Mars Orbit Insertion, the spacecraft will use atmospheric drag to reduce the size of its orbit, achieving a circular orbit only 400 km (248 mi) above the surface in early March 1998, when mapping operations will begin.
The Mars Global Surveyor is operated by the Mars Surveyor Operations Project managed for NASA by the Jet Propulsion Laboratory, Pasadena CA. The Mars Orbiter Camera is a duplicate of one of the six instruments originally developed for the Mars Observer mission. It was built and is operated under contract to JPL by an industry/university team led by Malin Space Science Systems, San Diego, CA.
Feature-aided multiple target tracking in the image plane
NASA Astrophysics Data System (ADS)
Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.
2006-05-01
Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
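The moving-target detection step described above can be illustrated with a deliberately simplified sketch: after registration, difference the two frames, threshold, and take the centroid of the changed-pixel blob as a target location measurement. This is a stand-in for the paper's (unspecified) novel change-detection algorithm, not a reconstruction of it; real systems would add morphological cleanup and connected-component labeling.

```python
def change_mask(prev, curr, thresh=30):
    """Binary change mask from two registered grayscale frames.

    prev, curr : 2-D lists of ints (0-255), already aligned.
    Marks pixels whose intensity changed by more than `thresh`.
    """
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def blob_centroid(mask):
    """Centroid of all changed pixels -- a minimal target-location measurement."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Centroids extracted this way would then feed a data-association stage such as the JPDA filter the abstract mentions.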
The Tectonics of Mercury: The View from Orbit
NASA Astrophysics Data System (ADS)
Watters, T. R.; Byrne, P. K.; Klimczak, C.; Enns, A. C.; Banks, M. E.; Walsh, L. S.; Ernst, C. M.; Robinson, M. S.; Gillis-Davis, J. J.; Solomon, S. C.; Strom, R. G.; Gwinner, K.
2011-12-01
Flybys of Mercury by the Mariner 10 and MESSENGER spacecraft revealed a broad distribution of contractional tectonic landforms, including lobate scarps, high-relief ridges, and wrinkle ridges. Among these, lobate scarps were seen as the dominant features and have been interpreted as having formed as a result of global contraction in response to interior cooling. Extensional troughs and graben, where identified, were generally confined to intermediate- to large-scale impact basins. However, the true global spatial distribution of tectonic landforms remained poorly defined because the flyby observations were limited in coverage and spatial resolution, and many flyby images were obtained under lighting geometries far from ideal for the detection and identification of morphologic features. With the successful insertion of MESSENGER into orbit in March 2011, we are exploiting the opportunity to characterize the tectonics of Mercury in unprecedented detail using images at high resolution and optimum lighting, together with topographic data obtained from Mercury Laser Altimeter (MLA) profiles and stereo imaging. We are digitizing all of Mercury's major tectonic landforms in a standard geographic information system format from controlled global monochrome mosaics (mean resolution 250 m/px), complemented by high-resolution targeted images (up to ~10 m/px), obtained by the Mercury Dual Imaging System (MDIS) cameras. On the basis of an explicit set of diagnostic criteria, we are mapping wrinkle ridges, high-relief ridges, lobate scarps, and extensional troughs and graben in separate shapefiles and cataloguing the segment endpoint positions, length, and orientation for each landform. The versatility of digital mapping facilitates the merging of this tectonic information with other MESSENGER-derived map products, e.g., volcanic units, surface color, geochemical variations, topography, and gravity. 
Results of this mapping work to date include the identification of extensional features in the northern plains and elsewhere on Mercury in the form of troughs, which commonly form polygonal patterns, in some two dozen volcanically flooded impact craters and basins.
Microscale Effects from Global Hot Plasma Imagery
NASA Technical Reports Server (NTRS)
Moore, T. E.; Fok, M.-C.; Perez, J. D.; Keady, J. P.
1995-01-01
We have used a three-dimensional model of recovery phase storm hot plasmas to explore the signatures of pitch angle distributions (PADs) in global fast atom imagery of the magnetosphere. The model computes mass-, energy-, and position-dependent PADs based on drift effects, charge exchange losses, and Coulomb drag. The hot plasma PAD strongly influences both the storm current system carried by the hot plasma and its time evolution. In turn, the PAD is strongly influenced by plasma waves through pitch angle diffusion, a microscale effect. We report the first simulated neutral atom images that account for anisotropic PADs within the hot plasma. They exhibit spatial distribution features that correspond directly to the PADs along the lines of sight. We investigate the use of image brightness distributions along tangent-shell field lines to infer equatorial PADs. In tangent-shell regions with minimal spatial gradients, reasonably accurate PADs are inferred from simulated images. They demonstrate the importance of modeling PADs for image inversion and show that comparisons of models with real storm plasma images will reveal the global effects of these microscale processes.
Global map of eolian features on Mars.
Ward, A.W.; Doyle, K.B.; Helm, P.J.; Weisman, M.K.; Witbeck, N.E.
1985-01-01
Ten basic categories of eolian features on Mars were identified from a survey of Mariner 9 and Viking orbiter images. The ten features mapped are 1) light streaks (including frost streaks), 2) dark streaks, 3) sand sheets or splotches, 4) barchan dunes, 5) transverse dunes, 6) crescentic dunes, 7) anomalous dunes, 8) yardangs, 9) wind grooves, and 10) deflation pits. The features were mapped in groups, not as individual landforms, and recorded according to their geographic positions and orientations on maps of 1:12.5 million or 1:25 million scale. -from Authors
NASA Technical Reports Server (NTRS)
2006-01-01
26 February 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows gullies formed in the wall of a depression located on the floor of Rabe Crater west of the giant impact basin, Hellas Planitia. Gullies such as these are common features on Mars, but the process by which they are formed is not fully understood. The debate centers on the role and source of fluids in the genesis of these features. Location near: 44.1°S, 325.9°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer
Atmosphere-based image classification through luminance and hue
NASA Astrophysics Data System (ADS)
Xu, Feng; Zhang, Yujin
2005-07-01
In this paper a novel image classification system is proposed. Atmosphere plays an important role in generating a scene's topic or in conveying the message behind the scene's story, and it belongs to the abstract attribute level among semantic levels. First, five atmosphere semantic categories are defined according to rules of photo and film grammar, together with corresponding global luminance and hue features. Then hierarchical SVM classifiers are applied: in each classification stage, the corresponding features are extracted and a trained linear SVM splits the input into two classes. After three stages of classification, the five atmosphere categories are obtained. Finally, text annotation of the atmosphere semantics and the corresponding features is defined with Extensible Markup Language (XML) in MPEG-7, which can be integrated into further multimedia applications (such as searching, indexing and accessing of multimedia content). Experiments were performed on Corel images and film frames. The classification results demonstrate the effectiveness of the defined atmosphere semantic classes and the corresponding features.
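The staged routing idea (each stage makes one binary split until a leaf category remains) can be sketched generically. The cascade below uses hypothetical threshold rules and category names purely for illustration; the paper's actual features, splits, and five category labels are not specified here, and the stand-in lambdas would be trained linear SVMs in practice.

```python
def classify_atmosphere(features, stages):
    """Route a feature vector through a cascade of binary classifiers.

    stages : dict mapping a node name to (classifier, left, right);
             names absent from the dict are leaf categories.
    """
    node = "root"
    while node in stages:
        clf, left, right = stages[node]
        node = left if clf(features) else right
    return node

# Hypothetical three-stage cascade over global luminance/hue features;
# the real splits and category names in the paper differ.
stages = {
    "root":    (lambda f: f["luminance"] < 0.3, "dark", "bright"),
    "dark":    (lambda f: f["hue"] < 0.5, "mysterious", "gloomy"),
    "bright":  (lambda f: f["hue"] < 0.2, "warm", "bright2"),
    "bright2": (lambda f: f["hue"] < 0.6, "vigorous", "peaceful"),
}
```

With this structure, three binary decisions at most are needed to reach one of five leaves, which mirrors the "three stages of classification, five categories" design.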
Deep supervised dictionary learning for no-reference image quality assessment
NASA Astrophysics Data System (ADS)
Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin
2018-03-01
We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network, an encoding module with an inherent dictionary that aggregates local features to output a fixed-length global quality-aware image representation, and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are simultaneously learned from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.
Wind Tunnel Measurements of Shuttle Orbiter Global Heating with Comparisons to Flight
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Merski, N. Ronald; Blanchard, Robert C.
2002-01-01
An aerothermodynamic database of global heating images of the Shuttle Orbiter was acquired in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel. These results were obtained for comparison to the global infrared images of the Orbiter in flight from the infrared sensing aeroheating flight experiment (ISAFE). The most recent ISAFE results, from STS-103, consisted of port side images, at hypersonic conditions, of the surface features that result from the strake vortex scrubbing along the side of the vehicle. The wind tunnel results were obtained with the phosphor thermography system, which also provides global information and thus is ideally suited for comparison to the global flight results. The aerothermodynamic database includes both windward and port side heating images of the Orbiter for a range of angles of attack (20 to 40 deg), freestream unit Reynolds numbers (1 x 10^6/ft to 8 x 10^6/ft), body flap deflections (0, 5, and 10 deg), and speed brake deflections (0 and 45 deg), as well as results with boundary layer trips for forced transition to turbulent heating. Sample global wind tunnel heat transfer images were extrapolated to flight conditions for comparison to Orbiter flight data. A windward laminar case for an angle of attack of 40 deg was extrapolated to Mach 11.6 flight conditions for comparison to STS-2 flight thermocouple results. A portside wind tunnel image for an angle of attack of 25 deg was extrapolated to Mach 5 flight conditions for comparison to STS-103 global surface temperatures. The comparisons showed excellent qualitative agreement; however, the extrapolated wind tunnel results over-predicted the flight surface temperatures by on the order of 5% on the windward surface and slightly more on the portside.
NASA Astrophysics Data System (ADS)
Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian
2008-04-01
Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of the left image are computed based on statistics theory. Then, for red, yellow and blue traffic signs, the left image is segmented into three binary images by a self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is used to locate the corresponding traffic signs in the right image. Thirdly, self-adaptive image segmentation is used to extract binary inner core shapes of the detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, recognition results in the left image are compared with recognition results in the right image; if the results for the stereo pair are identical, they are confirmed as final recognition results. The new algorithm was applied to 220 real images of natural scenes taken by the vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%. The algorithm is thus not only simple but also reliable and fast for real traffic sign detection and recognition, and it can obtain geometrical information of traffic signs while recognizing their types.
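The central projection transformation step (turning a binary inner-core shape into a 1-D feature vector) can be sketched as a radial signature: from the shape's centroid, record the farthest foreground pixel along each of N angular bins and normalize. This is one plausible reading of the transformation, offered as an assumption rather than the paper's exact formulation.

```python
import math

def central_projection(mask, n_rays=36):
    """1-D radial signature of a binary shape mask.

    For each of n_rays angular bins around the centroid, keep the
    largest distance to a foreground pixel, normalized by the maximum,
    giving a fixed-length shape descriptor for a PNN classifier.
    """
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    sig = [0.0] * n_rays
    for x, y in pts:
        r = math.hypot(x - cx, y - cy)
        a = int((math.atan2(y - cy, x - cx) % (2 * math.pi))
                / (2 * math.pi) * n_rays) % n_rays
        sig[a] = max(sig[a], r)      # farthest pixel in this direction
    m = max(sig) or 1.0
    return [r / m for r in sig]      # scale-normalized signature
```

Normalizing by the maximum radius makes the descriptor scale-invariant, which matters when sign size varies with distance from the camera.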
Global Mosaics of Pluto and Charon
2017-07-14
Global mosaics of Pluto and Charon projected at 300 meters (985 feet) per pixel that have been assembled from most of the highest resolution images obtained by the Long-Range Reconnaissance Imager (LORRI) and the Multispectral Visible Imaging Camera (MVIC) onboard New Horizons. Transparent, colorized stereo topography data generated for the encounter hemispheres of Pluto and Charon have been overlain on the mosaics. Terrain south of about 30°S on Pluto and Charon was in darkness leading up to and during the flyby, so is shown in black. "S" and "T" respectively indicate Sputnik Planitia and Tartarus Dorsa on Pluto, and "C" indicates Caleuche Chasma on Charon. All feature names on Pluto and Charon are informal. https://photojournal.jpl.nasa.gov/catalog/PIA21862
Association between mammogram density and background parenchymal enhancement of breast MRI
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Danala, Gopichandh; Wang, Yunzhi; Zarafshani, Ali; Qian, Wei; Liu, Hong; Zheng, Bin
2018-02-01
Breast density has been widely considered an important risk factor for breast cancer. The purpose of this study is to examine the association between mammographic density results and background parenchymal enhancement (BPE) of breast MRI. A dataset of breast MR images was acquired from 65 high-risk women. Based on mammographic density (BIRADS) results, the dataset was divided into two groups of low and high breast density cases. The low-density group has 15 cases with mammographic density BIRADS 1 and 2, while the high-density group includes 50 cases, which were rated by radiologists as mammographic density BIRADS 3 and 4. A computer-aided detection (CAD) scheme was applied to segment and register breast regions depicted on sequential images of breast MRI scans. The CAD scheme computed 20 global BPE features from the two breast regions as a whole, separately from the left and right breast regions, and from the bilateral difference between left and right breast regions. An image feature selection method, namely the CFS method, was applied to remove the most redundant features and select optimal features from the initial feature pool. Then, a logistic regression classifier was built using the optimal features to predict the mammographic density from the BPE features. Using a leave-one-case-out validation method, the classifier yields an accuracy of 82% and an area under the ROC curve of AUC = 0.81 ± 0.09. A box-plot based analysis also shows a negative association between mammographic density results and BPE features in the MRI images. This study demonstrated a negative association between mammographic density and BPE of breast MRI images.
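The leave-one-case-out protocol used above is simple to state but easy to get subtly wrong (e.g., selecting features on the full dataset before holding out). A minimal sketch of the validation loop follows, with a trivial nearest-class-mean rule standing in for the paper's CFS-selected logistic regression; the 1-D feature and labels are illustrative only.

```python
def nearest_mean_predict(train, x):
    """Predict the label whose training-set class mean is closest to x."""
    means = {}
    for xi, yi in train:
        means.setdefault(yi, []).append(xi)
    return min(means, key=lambda y: abs(sum(means[y]) / len(means[y]) - x))

def leave_one_case_out_accuracy(cases):
    """cases : list of (feature, label); each case is held out exactly once,
    and the classifier is rebuilt from the remaining cases every time."""
    correct = 0
    for i, (x, y) in enumerate(cases):
        train = cases[:i] + cases[i + 1:]   # everything except case i
        correct += nearest_mean_predict(train, x) == y
    return correct / len(cases)
```

The key point the sketch encodes is that every model-fitting step happens inside the loop, so the held-out case never influences its own prediction.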
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin
2018-03-01
Although breast magnetic resonance imaging (MRI) has been used as a breast cancer screening modality for high-risk women, its cancer detection yield remains low (i.e., <= 3%). Thus, increasing breast MRI screening efficacy and cancer detection yield is an important clinical issue in breast cancer screening. In this study, we investigated the association between the background parenchymal enhancement (BPE) of breast MRI and the change of diagnostic (BIRADS) status in the next subsequent breast MRI screening. A dataset with 65 breast MRI screening cases was retrospectively assembled. All cases were rated BIRADS-2 (benign findings). In the subsequent screening, 4 cases were malignant (BIRADS-6), 48 remained BIRADS-2, and 13 were downgraded to negative (BIRADS-1). A computer-aided detection scheme was applied to process images of the first set of breast MRI screenings. A total of 33 features was computed, including texture features and global BPE features. Texture features were computed from either a gray-level co-occurrence matrix or a gray-level run length matrix. Ten global BPE features were also initially computed from the two breast regions and the bilateral difference between the left and right breasts. Box-plot based analysis shows a positive association between texture features and BIRADS rating levels in the second screening. Furthermore, a logistic regression model was built using optimal features selected by a CFS based feature selection method. Using a leave-one-case-out cross-validation method, classification yielded an overall 75% accuracy in predicting the improvement (or downgrade) of diagnostic status (to BIRADS-1) in the subsequent breast MRI screening. This study demonstrated the potential of developing a new quantitative imaging marker to predict diagnostic status change in the short term, which may help eliminate a high fraction of unnecessary repeated breast MRI screenings and increase the cancer detection yield.
High contrast imaging through adaptive transmittance control in the focal plane
NASA Astrophysics Data System (ADS)
Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake
2016-05-01
High contrast imaging in the presence of a bright background is a challenging problem encountered in diverse applications ranging from the daily chore of driving into a sun-drenched scene to in vivo use of biomedical imaging in various types of keyhole surgeries. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, corresponding to low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems where the light sources illuminating the object are unavoidably strong, typically masking the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal plane transmittance, to selectively remove object domain sources of background light from the image plane, resulting in local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly orientated, small specular surfaces. The latter is more representative of real-world practical imaging systems. Thus, the background signal comprises groups of oblique rays corresponding to distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. Thus, the transmittance of a spatial light modulator, positioned in the focal plane, can be adaptively controlled to block a particular source of background light. Consequently, the image plane intensity is entirely due to the object features. Experimental image data are presented to verify the efficacy of the methodology.
NASA Technical Reports Server (NTRS)
2004-01-01
8 January 2004 This is how Mars appeared to the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle system on 25 December 2003, the day that Beagle 2 and Mars Express reached the red planet. The large, dark region just left of center is Syrtis Major, a persistent low albedo terrain known to astronomers for nearly four centuries before the first spacecraft went to Mars. Immediately to the right (east) of Syrtis Major is the somewhat circular plain, Isidis Planitia. Beagle 2 arrived in Isidis Planitia only about 18 minutes before Mars Global Surveyor flew over the region and acquired a portion of this global view. Relative to other global images of Mars acquired by MGS over the past several martian years, the surface features were not as sharp and distinct on 25 December 2003 because of considerable haze kicked up by large dust storms in the western and southern hemispheres during the previous two weeks. The picture is a composite of several MGS MOC red and blue daily global images that have been map-projected and digitally wrapped to a sphere. Although the effect here is minor, inspection of this mosaic shows zones that appear smudged or blurry. The high dust opacity on 25 December impacted MOC's oblique viewing geometry toward the edges of each orbit's daily global mapping image, thus emphasizing the 'blurry' zones between images acquired on successive orbits.
Global Patterns of Tectonism on Titan from Mountain Chains and Virgae
NASA Technical Reports Server (NTRS)
Cook, C.; Barnes, J. W.; Radebaugh, J.; Hurford, T.; Kattenhorn, S. A.
2012-01-01
This research is based on the exploration of tectonic patterns on Titan from a global perspective. Several moons in the outer solar system display patterns of surface tectonic features that imply global stress fields driven or modified by global forces. Patterns such as these are seen in Europa's tidally induced fracture patterns, Enceladus's tiger stripes, and Ganymede's global-expansion-induced normal fault bands. Given its proximity to Saturn, as well as its eccentric orbit, tectonic features and global stresses may be present on Titan as well. Titan displays possible tectonic structures, such as mountain chains along its equator (Radebaugh et al. 2007), as well as the unexplored dark linear streaks termed virgae by the IAU. Imaged by Cassini with the RADAR instrument, mountain chains near the equator are observed with a predominant east-west orientation (Liu et al. 2012, Mitri et al. 2010). Orientations such as these can be explained by modifications in the global tidal stress field induced by global contraction followed by rotational spin-up. Also, due to Titan's eccentric orbit, its current rotation rate may be in an equilibrium between tidal spin-up near periapsis and spin-down near apoapsis (Barnes and Fortney 2003). Additional stress from rotational spin-up provides an asymmetry to the stress field. This, combined with an isotropic stress from radial contraction, favors the formation of equatorial mountain chains in an east-west direction. The virgae, which have been imaged by Cassini with both the Visual and Infrared Mapping Spectrometer (VIMS) and Imaging Science Subsystem (ISS) instruments, are located predominantly near 30 degrees latitude in either hemisphere.
Oriented with a pronounced elongation in the east-west direction, all observed virgae display similar characteristics: relative albedos similar to the surrounding terrain but darkened by an apparent neutral absorber; broken-linear or rounded sharp edges; and connected, angular elements with distinct, linear edges. Virgae imaged during northern latitude passes are oriented with their long dimensions toward Titan's anti-Saturn point. If the virgae are of tectonic origin, for instance if they turn out to be grabens, they could serve as markers of Titan's global stress field. Using them in this way allows for a mapping of global tectonic patterns. These patterns will be tested for consistency against the various sources of global stress and the orientations of mountain chains. By determining what drives Titan's tectonics globally, we will be able to place Titan within the context of the other outer-planet icy satellites.
Whistlers observed outside the plasmasphere: Correlation to plasmaspheric/plasmapause features
NASA Astrophysics Data System (ADS)
Adrian, M. L.; Fung, S. F.; Gallagher, D. L.; Green, J. L.
2015-09-01
Whistlers observed outside the plasmasphere by Cluster have been correlated with the global plasmasphere using Imager for Magnetopause-to-Aurora Global Exploration-Extreme Ultraviolet Imager (IMAGE-EUV) observations. Of the 12 Cluster-observed whistler events reported, EUV is able to provide global imaging of the plasmasphere for every event and demonstrates a direct correlation between the detection of lightning-generated whistlers beyond the plasmapause and the presence of a global perturbation of the local plasmapause. Of these 12 correlated events, seven of the Cluster-observed whistlers (or 58%) are associated with the Cluster spacecraft lying radially outward from a plasmaspheric notch. Two of the Cluster-observed whistlers (17%) are associated with the low-density region between the late afternoon plasmapause and the western wall of a plasmaspheric drainage plume. The final three Cluster-observed whistler events (25%) are associated with a nonradial, nonazimuthal depletion in plasmaspheric He+ emission that are termed "notch-like" crenulations. In one of these cases, the notch-like crenulations appear to be manifestations entrained within the plasmasphere boundary layer of a standing wave on the surface of the plasmasphere. The correlated Cluster/IMAGE-EUV observations suggest that the depleted flux tubes that connect the ionosphere to the low-density regions of plasmaspheric trough and inner magnetosphere facilitate the escape of whistler waves from the plasmasphere.
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
NASA Astrophysics Data System (ADS)
Szu, Harold H.
1993-09-01
Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing real-time medical image diagnosis. An algorithm known as the self-reference matched filter, which emulates the spatio-temporal integration ability of the human visual system, might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple-modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixel relationships; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for intra-cluster and inter-cluster segregation, useful for top-down ANN designs.
Quantitative diagnosis of bladder cancer by morphometric analysis of HE images
NASA Astrophysics Data System (ADS)
Wu, Binlin; Nebylitsa, Samantha V.; Mukherjee, Sushmita; Jain, Manu
2015-02-01
In clinical practice, histopathological analysis of biopsied tissue is the main method for bladder cancer diagnosis and prognosis. The diagnosis is performed by a pathologist based on the morphological features in the image of a hematoxylin and eosin (HE) stained tissue sample. This manuscript proposes algorithms to perform morphometric analysis on the HE images, quantify the features in the images, and discriminate bladder cancers of different grades, i.e. high grade and low grade. The nuclei are separated from the background and other types of cells such as red blood cells (RBCs) and immune cells using manual outlining, color deconvolution and image segmentation. A mask of nuclei is generated for each image for quantitative morphometric analysis. The features of the nuclei in the mask image, including size, shape, orientation, and their spatial distributions, are measured. To quantify local clustering and alignment of nuclei, we propose a 1-nearest-neighbor (1-NN) algorithm which measures nearest neighbor distance and nearest neighbor parallelism. The global distributions of the features are measured using statistics of the proposed parameters. A linear support vector machine (SVM) algorithm is used to classify the high grade and low grade bladder cancers. The results show that using a particular group of nuclei, such as large ones, and combining multiple parameters can achieve better discrimination. This study shows the proposed approach can potentially help expedite pathological diagnosis by triaging potentially suspicious biopsies.
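The 1-NN measures above admit a compact sketch: for each nucleus, find its nearest neighbor, record the distance, and compare orientations. The parallelism measure below (|cos| of the orientation difference, 1 = parallel) is one plausible reading of the paper's definition, offered as an assumption; inputs are hypothetical `(x, y, theta)` tuples rather than segmented-mask output.

```python
import math

def one_nn_features(nuclei):
    """Mean nearest-neighbor distance and parallelism over all nuclei.

    nuclei : list of (x, y, theta) -- centroid coordinates and
             orientation angle in radians for each segmented nucleus.
    """
    dists, paras = [], []
    for i, (xi, yi, ti) in enumerate(nuclei):
        best, bj = float("inf"), -1
        for j, (xj, yj, _) in enumerate(nuclei):
            if j == i:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if d < best:
                best, bj = d, j      # track the nearest other nucleus
        dists.append(best)
        # |cos| of orientation difference: 1 = parallel, 0 = perpendicular
        paras.append(abs(math.cos(ti - nuclei[bj][2])))
    return (sum(dists) / len(dists), sum(paras) / len(paras))
```

Per-image statistics of these two quantities (means, variances, etc.) would then join the size/shape features as inputs to the linear SVM.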
NASA Technical Reports Server (NTRS)
2005-01-01
16 October 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows streamlined landforms carved by catastrophic floods that occurred in the eastern Cerberus region, some time in the distant martian past. Location near: 15.1°N, 193.5°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Autumn
NASA Astrophysics Data System (ADS)
Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.
2015-12-01
The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. 
EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
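The sequential floating forward selection step used above can be sketched generically. This is a minimal illustration of the SFFS wrapper idea, not the authors' modified algorithm: the criterion function and feature names below are toy placeholders, where the real criterion would be a classifier performance score.

```python
def sffs(features, criterion, k):
    """Minimal Sequential Floating Forward Selection (SFFS) sketch.

    features  -- list of candidate feature names
    criterion -- callable scoring a list of features (higher is better)
    k         -- target number of features
    """
    selected = []
    best = {0: (0.0, [])}  # best (score, subset) seen at each subset size
    while len(selected) < k:
        # Forward step: add the feature that maximises the criterion.
        f = max((f for f in features if f not in selected),
                key=lambda f: criterion(selected + [f]))
        selected = selected + [f]
        score = criterion(selected)
        if score > best.get(len(selected), (float("-inf"),))[0]:
            best[len(selected)] = (score, list(selected))
        # Floating step: drop the least significant feature while doing so
        # beats the best subset previously found at that smaller size.
        while len(selected) > 2:
            g = max(selected,
                    key=lambda g: criterion([x for x in selected if x != g]))
            trial = [x for x in selected if x != g]
            score = criterion(trial)
            if score > best.get(len(trial), (float("-inf"),))[0]:
                best[len(trial)] = (score, list(trial))
                selected = trial
            else:
                break
    return best[k][1]
```

Tracking the best subset per size is what makes the backward "floating" step terminate: a feature is only dropped when the smaller subset strictly improves on anything previously seen at that size.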
Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.
2014-01-01
An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974
NASA Astrophysics Data System (ADS)
Tan, Maxine; Leader, Joseph K.; Liu, Hong; Zheng, Bin
2015-03-01
We recently investigated a new mammographic image feature based risk factor to predict near-term breast cancer risk after a woman has a negative mammographic screening. We hypothesized that, unlike conventional epidemiology-based long-term (or lifetime) risk factors, the mammographic image feature based risk factor value will increase as the time lag between the negative and positive mammography screenings decreases. The purpose of this study is to test this hypothesis. From a large and diverse full-field digital mammography (FFDM) image database with 1278 cases, we collected all available sequential FFDM examinations for each case, including the "current" and the 1 to 3 most recent "prior" examinations. All "prior" examinations were interpreted as negative, and "current" ones were either malignant or recalled negative/benign. We computed 92 global mammographic texture and density based features and included three clinical risk factors (woman's age, family history, and subjective breast density BIRADS ratings). On this initial feature set, we applied a fast and accurate Sequential Forward Floating Selection (SFFS) feature selection algorithm to reduce feature dimensionality. The features computed on the two mammographic views were used to train two artificial neural network (ANN) classifiers separately. The classification scores of the two ANNs were then merged with a sequential ANN. The results show that the maximum adjusted odds ratios were 5.59, 7.98, and 15.77 when using the 3rd, 2nd, and 1st "prior" FFDM examinations, respectively, which demonstrates a stronger association between mammographic image feature change and the risk of developing breast cancer in the near term after a negative screening.
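The odds ratios reported above are adjusted values derived from the trained models. For intuition only, the unadjusted odds ratio from a hypothetical 2×2 table (high vs. low risk score against cancer vs. negative outcome; the counts below are illustrative, not the study's data) reduces to the familiar cross-product:

```python
def odds_ratio(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Unadjusted odds ratio from a 2x2 table:
    OR = (a/b) / (c/d) = (a*d) / (b*c), where
    a = exposed & positive, b = exposed & negative,
    c = unexposed & positive, d = unexposed & negative."""
    return (exposed_pos * unexposed_neg) / (exposed_neg * unexposed_pos)
```

An OR above 1 indicates that the "exposed" group (here, cases with a high image-feature risk score) has higher odds of a positive outcome.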
Coupled binary embedding for large-scale image retrieval.
Zheng, Liang; Wang, Shengjin; Tian, Qi
2014-08-01
Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
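The verification idea the paper incorporates (Hamming embedding) can be sketched minimally: a candidate SIFT match is kept only if the two keypoints share a visual word and their binary signatures agree within a bit threshold. The threshold value below is illustrative; signatures are typically 32 or 64 bits.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length binary signatures packed as ints."""
    return bin(a ^ b).count("1")

def verified_match(word_a: int, sig_a: int, word_b: int, sig_b: int,
                   max_dist: int = 12) -> bool:
    """Accept a keypoint match only if both fall in the same visual word AND
    their binary signatures agree to within max_dist bits."""
    return word_a == word_b and hamming(sig_a, sig_b) <= max_dist
```

The second condition is what filters out the false positives that survive quantization alone.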
NASA Technical Reports Server (NTRS)
2006-01-01
17 July 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a bright plain west of Schiaparelli Crater, Mars, which is host to several features, some of them long-lived and others transient. The circular features scattered somewhat randomly throughout the scene are impact craters, all of which are in a variety of states of degradation. In the lower left (southwest) corner of the image, there is a small hill surrounded by ripples of windblown sediment, and near the center of the image, there is an active dust devil casting a shadow to the east as it makes its way across the plain. Location near: 5.9°S, 348.2°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Autumn
Meaning of Interior Tomography
Wang, Ge; Yu, Hengyong
2013-01-01
The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation that is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it has been elevated as a general imaging principle. Finally, a novel framework known as “omni-tomography” is being developed for grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. PMID:23912256
NASA Technical Reports Server (NTRS)
2005-01-01
27 February 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows wind streaks and a thick mantling of dust in the summit region of the martian volcano Pavonis Mons. The surface texture gives the impression that the MOC image is blurry, but several very small, sharp impact craters reveal that the picture is not. Location near: 1.1°N, 113.2°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Northern Summer
NASA Technical Reports Server (NTRS)
Limaye, Sanjay S.
1996-01-01
The objective of this research was to investigate the temporal behavior of the impact features on Jupiter created by the fragments of comet Shoemaker-Levy 9, which collided with the planet in July 1994. The primary observations used in the study were ground-based images of Jupiter acquired from the Swedish Solar Vacuum Tube on the island of La Palma in the Canary Islands. Measurement of the positions of the impact features in images acquired immediately after the impacts, over a period of a few days, revealed that the apparent drift rates were too high and that a repetitive pattern could be seen in the longitude position on successive rotations. This could be explained only by the fact that the measured longitudes of the impact sites were being affected by parallax due to a significant elevation of the impact debris above the nominal cloud-top altitude used for image navigation. Once the apparent positions are analyzed as a function of the meridian angle, the parallax equation can be used to infer the height of the impact features above the cloud deck, provided the true impact position (longitude) for the feature is known. Due to their inherent high spatial resolution, the HST measurements of the impact site locations have been widely accepted. However, these suffer from parallax themselves, since few of them were obtained at the central meridian. Ground-based imaging has the potential to improve this knowledge, despite its degraded resolution, because it observes most of the impact sites on either side of the central meridian. Measurements over a large number of images enable us to minimize the position error through regression and thus estimate both the actual impact site location, devoid of parallax bias, and the altitude level of the impact debris above the cloud deck. With rapid imaging there is the potential to examine the time evolution of the altitude level.
Several hundred ground based images were processed, navigated and subjected to the impact site location measurements. HST images were also acquired and used to calibrate the results and to improve the sample. The resources available enabled an in-depth study only of impact site A, however, many more images have since become available through the global network observations through Lowell Observatory.
Pneumothorax detection in chest radiographs using local and global texture signatures
NASA Astrophysics Data System (ADS)
Geva, Ofer; Zimmerman-Moreno, Gali; Lieberman, Sivan; Konen, Eli; Greenspan, Hayit
2015-03-01
A novel framework for automatic detection of pneumothorax abnormality in chest radiographs is presented. The suggested method is based on a texture analysis approach combined with supervised learning techniques. The proposed framework consists of two main steps: first, a texture analysis process is performed for detection of local abnormalities. Labeled image patches are extracted in the texture analysis procedure, following which local analysis values are incorporated into a novel global image representation. The global representation is used for training and detection of the abnormality at the image level. The presented global representation is designed based on the distinctive shape of the lung, taking into account the characteristics of typical pneumothorax abnormalities. A supervised learning process was performed on both the local and global data, leading to a trained detection system. The system was tested on a dataset of 108 upright chest radiographs. Several state-of-the-art texture feature sets were experimented with (Local Binary Patterns, Maximum Response filters). The optimal configuration yielded sensitivity of 81% with specificity of 87%. The results of the evaluation are promising, establishing the current framework as a basis for additional improvements and extensions.
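One of the texture feature sets mentioned, Local Binary Patterns, can be sketched in a few lines. This is a plain 8-neighbour variant operating on a nested-list grayscale image, not the authors' exact configuration:

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for interior pixel (r, c).
    Each bit is 1 when the corresponding neighbour >= the centre pixel."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels:
    a simple global texture signature for a patch."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

The per-patch histogram is what a supervised classifier would then consume as a feature vector.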
NASA Astrophysics Data System (ADS)
Ahern, A.; Radebaugh, J.; Christiansen, E. H.; Harris, R. A.
2015-12-01
Paterae and mountains are some of the most distinguishing and well-distributed surface features on Io, and they reveal the role of tectonism in Io's crust. Paterae, similar to calderas, are volcano-tectonic collapse features that often have straight margins. Io's mountains are some of the highest in the solar system and contain linear features that reveal crustal stresses. Paterae and mountains are often found adjacent to one another, suggesting possible genetic relationships. We have produced twelve detailed regional structural maps from high-resolution images of relevant features, where available, as well as a global structural map from the Io Global Color Mosaic. The regional structural maps identify features such as fractures, lineations, folds, faults, and mass wasting scarps, which are then interpreted in the context of global and regional stress regimes. A total of 1048 structural lineations have been identified globally. Preliminary analyses show that major thrust and normal fault orientations are dominantly offset 90° from each other, suggesting that the maximum contractional stresses leading to large mountain formation are not a direct result of tidal extension. Rather, these results corroborate the model of volcanic loading of the crust and global shortening, leading to thrust faulting and uplift of coherent crustal blocks. Several paterae, such as Hi'iaka and Tohil, are found adjacent to mountains inside extensional basins where lava has migrated up normal faults to erupt onto patera floors. Over time, mass wasting and volcanic resurfacing can change mountains from young, steep, and angular peaks to older, gentler, and more rounded hills. Mass wasting scarps make up 53% of all features identified. The structural maps highlight the significant effect of mass wasting on Io's surface, the evolution of mountains through time, the role of tectonics in the formation of paterae, and the formation of mountains through global contraction due to volcanism.
The Power of Kawaii: Viewing Cute Images Promotes a Careful Behavior and Narrows Attentional Focus
Nittono, Hiroshi; Fukushima, Michiko; Yano, Akihiro; Moriya, Hiroki
2012-01-01
Kawaii (a Japanese word meaning “cute”) things are popular because they produce positive feelings. However, their effect on behavior remains unclear. In this study, three experiments were conducted to examine the effects of viewing cute images on subsequent task performance. In the first experiment, university students performed a fine motor dexterity task before and after viewing images of baby or adult animals. Performance indexed by the number of successful trials increased after viewing cute images (puppies and kittens; M ± SE = 43.9±10.3% improvement) more than after viewing images that were less cute (dogs and cats; 11.9±5.5% improvement). In the second experiment, this finding was replicated by using a non-motor visual search task. Performance improved more after viewing cute images (15.7±2.2% improvement) than after viewing less cute images (1.4±2.1% improvement). Viewing images of pleasant foods was ineffective in improving performance (1.2±2.1%). In the third experiment, participants performed a global–local letter task after viewing images of baby animals, adult animals, and neutral objects. In general, global features were processed faster than local features. However, this global precedence effect was reduced after viewing cute images. Results show that participants performed tasks requiring focused attention more carefully after viewing cute images. This is interpreted as the result of a narrowed attentional focus induced by the cuteness-triggered positive emotion that is associated with approach motivation and the tendency toward systematic processing. For future applications, cute objects may be used as an emotion elicitor to induce careful behavioral tendencies in specific situations, such as driving and office work. PMID:23050022
Albedo Study of the Depositional Fans Associated with Martian Gullies
NASA Astrophysics Data System (ADS)
Craig, J.; Sears, D. W. G.
2005-03-01
This work is a two-part investigation of the albedo of the depositional aprons or fans associated with Martian gully features. Using Adobe Systems Photoshop 5.0 software we analyzed numerous Mars Global Surveyor MOC and Mars Odyssey THEMIS images.
Content-based image retrieval by matching hierarchical attributed region adjacency graphs
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.
2004-05-01
Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available retrieval systems of general purpose, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context, or that they are only suitable for specific applications. For a general purpose content-based comparison of medical images, local, i.e. regional features that are collected on multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible, while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material built from simple geometric objects, all visually similar regions were matched accordingly, showing the framework's general applicability to content-based image retrieval of medical images.
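A HARAG extends a plain region adjacency graph with hierarchy and per-region attributes; the flat RAG at its foundation can be sketched as follows, assuming a 2-D label image produced by a prior segmentation step:

```python
def region_adjacency_graph(labels):
    """Build a region adjacency graph from a 2-D label image.
    Nodes are region labels; an edge joins two labels wherever their
    regions touch horizontally or vertically (4-connectivity)."""
    nodes = {lab for row in labels for lab in row}
    edges = set()
    rows, cols = len(labels), len(labels[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):       # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols and labels[r][c] != labels[rr][cc]:
                    edges.add(frozenset((labels[r][c], labels[rr][cc])))
    return nodes, edges
```

Graph matching then operates on these nodes and edges (plus region attributes such as size, shape, or mean intensity in the attributed variant).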
Efficient Method for Scalable Registration of Remote Sensing Images
NASA Astrophysics Data System (ADS)
Prouty, R.; LeMoigne, J.; Halem, M.
2017-12-01
The goal of this project is to build a prototype of a resource-efficient pipeline that will provide registration within subpixel accuracy of multitemporal Earth science data. Accurate registration of Earth-science data is imperative to proper data integration and seamless mosaicing of data from multiple times, sensors, and/or observation geometries. Modern registration methods make use of many arithmetic operations and sometimes require complete knowledge of the image domain. As such, while sensors become more advanced and are able to provide higher-resolution data, the memory resources required to properly register these data become prohibitive. The proposed pipeline employs a region of interest extraction algorithm in order to extract image subsets with high local feature density. These image subsets are then used to generate local solutions to the global registration problem. The local solutions are then 'globalized' to determine the deformation model that best solves the registration problem. The region of interest extraction and globalization routines are tested for robustness among the variety of scene-types and spectral locations provided by Earth-observing instruments such as Landsat, MODIS, or ASTER.
NASA Technical Reports Server (NTRS)
2004-01-01
23 October 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows light-toned rock outcrops, possibly sedimentary rocks, in the Arsinoes Chaos region east of the Valles Marineris trough system. These rocky materials were once below the martian surface. These features are located near 7.2°S, 27.9°W. The image covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the upper left.
Lunar geodesy and cartography: a new era
NASA Astrophysics Data System (ADS)
Duxbury, Thomas; Smith, David; Robinson, Mark; Zuber, Maria T.; Neumann, Gregory; Danton, Jacob; Oberst, Juergen; Archinal, Brent; Glaeser, Philipp
The Lunar Reconnaissance Orbiter (LRO) ushers in a new era in precision lunar geodesy and cartography. LRO was launched in June, 2009, completed its Commissioning Phase in September 2009 and is now in its Primary Mission Phase on its way to collecting high precision, global topographic and imaging data. Aboard LRO are the Lunar Orbiter Laser Altimeter (LOLA; Smith et al., 2009) and the Lunar Reconnaissance Orbiter Camera (LROC; Robinson et al., in press). LOLA is a derivative of the successful MOLA at Mars that produced the global reference surface being used for all precision cartographic products. LOLA produces 5 altimetry spots having footprints of 5 m at a frequency of 28 Hz, significantly bettering MOLA, which produced 1 spot having a footprint of 150 m at a frequency of 10 Hz. LROC has twin narrow angle cameras (NACs) having pixel resolutions of 0.5 meters from a 50 km orbit and a wide-angle camera having a pixel resolution of 75 m in up to 7 color bands. One of the two NACs looks to the right of nadir and the other looks to the left, with a few hundred pixels of overlap in the nadir direction. LOLA is mounted on the LRO spacecraft to look nadir, in the overlap region of the NACs. The LRO spacecraft has the ability to look nadir and build up global coverage, as well as looking off-nadir to provide stereo coverage and fill in data gaps. The LROC wide-angle camera builds up global stereo coverage naturally from its large field-of-view overlap from orbit to orbit during nadir viewing. To date, the LROC WAC has already produced global stereo coverage of the lunar surface. This report focuses on the registration of LOLA altimetry to the LROC NAC images. LOLA has a dynamic range of tens of km while producing elevation data at sub-meter precision. LOLA also has good return in off-nadir attitudes. Over the LRO mission, multiple LOLA tracks will be in each of the NAC images at the lunar equator and even more tracks in the NAC images nearer the poles.
The registration of LOLA altimetry to NAC images is aided by the 5 spots showing regional and local slopes, along and cross-track, that are easily correlated visually to features within the images. One can precisely register each of the 5 LOLA spots to specific pixels in LROC images of distinct features such as craters and boulders. This can be performed routinely for features at the 100 m level and larger. However, even features at the several-meter level can be registered if a single LOLA spot probes the depth of a small crater while the other 4 spots are on the surrounding surface, or one spot returns from the top of a small boulder seen by a NAC. The automatic registration of LOLA tracks with NAC stereo digital terrain models should provide even higher accuracy. Also, the LOLA pulse spread of the returned signal, which is sensitive to slopes and roughness, is an additional source of information to help match the LOLA tracks to the images. As the global coverage builds, LOLA will provide absolute coordinates in latitude, longitude and radius of surface features with accuracy at the meter level or better. The NAC images will then be registered to the LOLA reference surface in the production of precision, controlled photomosaics having spatial resolutions as good as 0.5 m/pixel. For hundreds of strategic sites viewed in stereo, even higher precision and more complete surface coverage is possible for the production of digital terrain models and mosaics. LRO, with LOLA and LROC, will improve the relative and absolute accuracy of geodesy and cartography by orders of magnitude, ushering in a new era for lunar geodesy and cartography. Robinson, M., et al., Space Sci. Rev., DOI 10.1007/s11214-010-9634-2, Date: 2010-02-23, in press. Smith, D., et al., Space Sci. Rev., DOI 10.1007/s11214-009-9512-y, published online 16 May 2009.
Face recognition algorithm based on Gabor wavelet and locality preserving projections
NASA Astrophysics Data System (ADS)
Liu, Xiaojie; Shen, Lin; Fan, Honghui
2017-07-01
In order to reduce the effects of illumination changes and differences in personal features on the face recognition rate, this paper presents a new face recognition algorithm based on Gabor wavelets and Locality Preserving Projections (LPP). The problem of the high dimensionality of Gabor filter banks is solved effectively, and the weakness of LPP under illumination changes is overcome. First, global image features are extracted, exploiting the good spatial locality and orientation selectivity of Gabor wavelet filters. Then the dimensionality is reduced with LPP, which preserves the local information of the image well. The experimental results show that this algorithm can effectively extract features relating to facial expression, pose, and other information. Moreover, it effectively reduces the influence of illumination changes and of differences in personal features, improving the face recognition rate to 99.2%.
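A single real-valued Gabor kernel of the kind used in such filter banks can be generated as follows. The parameter values are illustrative; a full bank would sample several orientations and scales:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel: a Gaussian envelope modulating a
    cosine carrier oriented at angle theta (radians). size must be odd.

    sigma -- Gaussian envelope width   lambd -- carrier wavelength
    gamma -- spatial aspect ratio      psi   -- phase offset
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                                / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xp / lambd + psi))
        kernel.append(row)
    return kernel
```

Convolving an image with a bank of such kernels at multiple orientations and wavelengths yields the high-dimensional Gabor feature maps that LPP then projects down.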
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
A hot and difficult topic in computer vision, binocular stereo vision is an important form of machine perception with broad application prospects in many fields, such as aerial mapping, vision-based navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points is established using the calibrated camera parameters, yielding the 3D information.
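Once disparities are available from the matching stage and the cameras are calibrated, the final reconstruction step for a rectified stereo pair reduces to triangulation. A minimal sketch, with hypothetical intrinsics (focal length in pixels, principal point, and baseline are made-up example values, not from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation for a rectified pair: Z = f * B / d.
    f_px -- focal length in pixels; baseline_m -- camera separation in
    metres; disparity_px -- horizontal pixel offset between the views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

def triangulate(u, v, cx, cy, f_px, baseline_m, disparity_px):
    """Back-project pixel (u, v) of the left image into camera coordinates,
    given the principal point (cx, cy) and the measured disparity."""
    z = depth_from_disparity(f_px, baseline_m, disparity_px)
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z
```

For example, with a 700 px focal length and a 12 cm baseline, a 35 px disparity corresponds to a depth of 2.4 m.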
Observational Tests of the Mars Ocean Hypothesis: Selected MOC and MOLA Results
NASA Technical Reports Server (NTRS)
Parker, T. J.; Banerdt, W. B.
1999-01-01
We have begun a detailed analysis of the evidence for, and topography of, features identified as potential shorelines that have been imaged by the Mars Orbiter Camera (MOC) during the Aerobraking Hiatus and Science Phasing Orbit periods of the Mars Global Surveyor (MGS) mission. MOC images, comparable in resolution to high-altitude terrestrial aerial photographs, are particularly well suited to address the morphological expressions of these features at scales comparable to known shore morphologies on Earth. Particularly useful are examples of detailed relationships between potential shore features, such as erosional (and depositional) terraces that have been cut into "familiar" pre-existing structures and topography in a fashion that points to a shoreline interpretation as the most likely mechanism for their formation. Additional information is contained in the original extended abstract.
SDL: Saliency-Based Dictionary Learning Framework for Image Similarity.
Sarkar, Rituparna; Acton, Scott T
2018-02-01
In image classification, obtaining adequate data to learn a robust classifier has often proven to be difficult in several scenarios. Classification of histological tissue images for health care analysis is a notable application in this context due to the necessity of surgery, biopsy or autopsy. To adequately exploit limited training data in classification, we propose a saliency guided dictionary learning method and subsequently an image similarity technique for histo-pathological image classification. Salient object detection from images aids in the identification of discriminative image features. We leverage the saliency values for the local image regions to learn a dictionary and respective sparse codes for an image, such that the more salient features are reconstructed with smaller error. The dictionary learned from an image gives a compact representation of the image itself and is capable of representing images with similar content, with comparable sparse codes. We employ this idea to design a similarity measure between a pair of images, where local image features of one image, are encoded with the dictionary learned from the other and vice versa. To effectively utilize the learned dictionary, we take into account the contribution of each dictionary atom in the sparse codes to generate a global image representation for image comparison. The efficacy of the proposed method was evaluated using three tissue data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer, and colon cancer tissue images. From the experiments, we observe that our methods outperform the state of the art with an increase of 14.2% in the average classification accuracy over all data sets.
Global gray-level thresholding based on object size.
Ranefall, Petter; Wählby, Carolina
2016-04-01
In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
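The paper achieves quasi-linear time via the component tree; the underlying idea — choose the gray level that yields the most objects within the expected size interval — can be sketched brute-force (O(levels × pixels), for illustration only, with a plain count standing in for the recall/precision criterion):

```python
def components_in_size_range(binary, lo, hi):
    """Count 4-connected foreground components whose pixel count is in [lo, hi]."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood fill to measure this component.
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < rows and 0 <= xx < cols
                                and binary[yy][xx] and not seen[yy][xx]):
                            seen[yy][xx] = True
                            stack.append((yy, xx))
                if lo <= size <= hi:
                    count += 1
    return count

def size_based_threshold(img, lo, hi):
    """Pick the gray-level threshold that maximises the number of objects
    whose sizes fall inside the expected interval [lo, hi]."""
    best_t, best_n = 0, -1
    for t in range(256):
        binary = [[1 if p > t else 0 for p in row] for row in img]
        n = components_in_size_range(binary, lo, hi)
        if n > best_n:
            best_t, best_n = n and t or t, n
    return best_t
```

Thresholding too low merges separate objects into oversized blobs, thresholding too high erases them; the size interval penalises both failure modes.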
The tectonics of Venus: An overview
NASA Technical Reports Server (NTRS)
Solomon, Sean C.
1992-01-01
While the Pioneer Venus altimeter, Earth-based radar observatories, and the Venera 15-16 orbital imaging radars provided views of large-scale tectonic features on Venus at ever-increasing resolution, the radar images from Magellan constitute an improvement in resolution of at least an order of magnitude over the best previously available. A summary of early Magellan observations of tectonic features on Venus was published, but data available at that time were restricted to the first month of mapping and represented only about 15 percent of the surface of the planet. Magellan images and altimetry are now available for more than 95 percent of the Venus surface. Thus a more global perspective may be taken on the styles and distribution of lithospheric deformation on Venus and their implications for the tectonic history of the planet.
NASA Technical Reports Server (NTRS)
2006-01-01
10 August 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows two mesas on the northern plains of Mars. 'Mesa' is the Spanish word for 'table,' and that is a very good description of the two elliptical features captured in this MOC image. In both cases, the mesa tops and the material beneath them, down to the level of the surrounding, rugged plain, are remnants of a once more extensive layer (or layers) of material that has been largely eroded away. The circular feature near the center of the larger mesa is the site of a filled and buried impact crater. Location near: 53.5°N, 153.5°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Spring
Obokata, Masaru; Nagata, Yasufumi; Wu, Victor Chien-Chia; Kado, Yuichiro; Kurabayashi, Masahiko; Otsuji, Yutaka; Takeuchi, Masaaki
2016-05-01
Cardiac magnetic resonance (CMR) feature tracking (FT) with steady-state free precession (SSFP) has advantages over traditional myocardial tagging to analyse left ventricular (LV) strain. However, direct comparisons of CMRFT and 2D/3D echocardiography speckle tracking (2/3DEST) for measurement of LV strain are limited. The aim of this study was to investigate the feasibility and reliability of CMRFT and 2D/3DEST for measurement of global LV strain. We enrolled 106 patients who agreed to undergo both CMR and 2D/3DE on the same day. SSFP images at multiple short-axis and three apical views were acquired. 2DE images from three levels of short-axis, three apical views, and 3D full-volume datasets were also acquired. Strain data were expressed as absolute values. Feasibility was highest in CMRFT, followed by 2DEST and 3DEST. Analysis time was shortest in 3DEST, followed by CMRFT and 2DEST. There was good global longitudinal strain (GLS) correlation between CMRFT and 2D/3DEST (r = 0.83 and 0.87, respectively) with limits of agreement (LOA) ranging from ±3.6% to ±4.9%. Excellent global circumferential strain (GCS) correlation between CMRFT and 2D/3DEST was observed (r = 0.90 and 0.88) with LOA of ±6.8% to ±8.5%. Global radial strain showed fair correlations (r = 0.69 and 0.82, respectively) with LOA ranging from ±12.4% to ±16.3%. CMRFT GCS showed the least observer variability with the highest intra-class correlation. Although not interchangeable, the high GLS and GCS correlation between CMRFT and 2D/3DEST makes CMRFT a useful modality for quantification of global LV strain in patients, especially those with suboptimal echo image quality. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2017-04-01
In computerized detection of clustered microcalcifications (MCs) from mammograms, the traditional approach is to apply a pattern detector to locate the presence of individual MCs, which are subsequently grouped into clusters. Such an approach is often susceptible to the occurrence of false positives (FPs) caused by local image patterns that resemble MCs. We investigate the feasibility of a direct detection approach to determining whether an image region contains clustered MCs or not. Toward this goal, we develop a deep convolutional neural network (CNN) as the classifier model to which the input consists of a large image window ([Formula: see text] in size). The multiple layers in the CNN classifier are trained to automatically extract image features relevant to MCs at different spatial scales. In the experiments, we demonstrated this approach on a dataset consisting of both screen-film mammograms and full-field digital mammograms. We evaluated the detection performance both on classifying image regions of clustered MCs using a receiver operating characteristic (ROC) analysis and on detecting clustered MCs from full mammograms by a free-response receiver operating characteristic analysis. For comparison, we also considered a recently developed MC detector with FP suppression. In classifying image regions of clustered MCs, the CNN classifier achieved 0.971 in the area under the ROC curve, compared to 0.944 for the MC detector. In detecting clustered MCs from full mammograms, at 90% sensitivity, the CNN classifier obtained an FP rate of 0.69 clusters/image, compared to 1.17 clusters/image by the MC detector. These results indicate that using global image features can be more effective in discriminating clustered MCs from FPs caused by various sources, such as linear structures, thereby providing a more accurate detection of clustered MCs on mammograms.
3D shape recovery from image focus using Gabor features
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid
2018-04-01
Recovering an accurate and precise depth map from a set of acquired 2-D images of the target object, each having different focus information, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts the corresponding color information into focus information, which is then utilized for recovering the depth map. This article introduces Gabor features as a focus measure approach for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to that of the human visual system and is normally applied for texture representation. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is thoroughly investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
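A minimal shape-from-focus sketch in the spirit of this record (parameter values, window handling, and function names are our own assumptions, not the authors' implementation): convolve each slice of the focus stack with the real part of a Gabor filter, take the squared response as the per-pixel focus measure, and read depth as the index of the best-focused slice.

```python
import numpy as np

def gabor_kernel(ksize=7, sigma=2.0, theta=0.0, lam=4.0):
    """Real (cosine) part of a Gabor filter."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def focus_volume(stack, kernel):
    """Per-pixel squared Gabor response for each slice of the focus stack."""
    k = kernel.shape[0] // 2
    out = np.zeros_like(stack, dtype=float)
    for i, img in enumerate(stack):
        padded = np.pad(img.astype(float), k, mode="edge")
        for yy in range(img.shape[0]):
            for xx in range(img.shape[1]):
                patch = padded[yy:yy + kernel.shape[0], xx:xx + kernel.shape[1]]
                out[i, yy, xx] = (patch * kernel).sum() ** 2
    return out

def depth_map(stack, kernel):
    """Depth index = slice with maximum Gabor focus energy at each pixel."""
    return focus_volume(stack, kernel).argmax(axis=0)
```

On a synthetic stack where only the middle slice carries texture, the Gabor energy singles out that slice as the in-focus depth.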
True polar wander on Europa from global-scale small-circle depressions.
Schenk, Paul; Matsuyama, Isamu; Nimmo, Francis
2008-05-15
The tectonic patterns and stress history of Europa are exceedingly complex and many large-scale features remain unexplained. True polar wander, involving reorientation of Europa's floating outer ice shell about the tidal axis with Jupiter, has been proposed as a possible explanation for some of the features. This mechanism is possible if the icy shell is latitudinally variable in thickness and decoupled from the rocky interior. It would impose high stress levels on the shell, leading to predictable fracture patterns. No satisfactory match to global-scale features has hitherto been found for polar wander stress patterns. Here we describe broad arcuate troughs and depressions on Europa that do not fit other proposed stress mechanisms in their current position. Using imaging from three spacecraft, we have mapped two global-scale organized concentric antipodal sets of arcuate troughs up to hundreds of kilometres long and 300 m to approximately 1.5 km deep. An excellent match to these features is found with stresses caused by an episode of approximately 80 degrees true polar wander. These depressions also appear to be geographically related to other large-scale bright and dark lineaments, suggesting that many of Europa's tectonic patterns may also be related to true polar wander.
A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images
Tang, Yunwei; Jing, Linhai; Ding, Haifeng
2017-01-01
The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
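The Mahalanobis-based combination of indicators described above can be illustrated generically (the paper's exact indicator definitions and ideal point are not reproduced here; this is only the distance-combination step, with names of our choosing): each segmentation yields a pair of indicator values, and its quality score is the Mahalanobis distance from an ideal point, which accounts for the trade-off (covariance) between the two indicators.

```python
import numpy as np

def mahalanobis_score(indicators, ideal):
    """Combine per-segmentation indicator pairs into one quality score:
    the Mahalanobis distance from an 'ideal' point in indicator space.
    Lower score = closer to ideal = better segmentation."""
    X = np.asarray(indicators, dtype=float)
    cov = np.cov(X, rowvar=False)          # indicator covariance (trade-off)
    inv = np.linalg.inv(cov)
    d = X - np.asarray(ideal, dtype=float)
    # quadratic form d^T * inv(cov) * d, row-wise
    return np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d))
```

A candidate segmentation sitting exactly at the ideal point scores zero, and scores grow with covariance-weighted distance rather than raw Euclidean distance.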
Simultaneous binary hash and feature learning for image retrieval
NASA Astrophysics Data System (ADS)
Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.
2016-05-01
Content-based image retrieval systems have many applications in the modern world. The most important is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval still remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach provides a mapping from a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach, in contrast to existing ones, is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The presented framework for data-dependent image hashing is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results in comparison to other state-of-the-art methods.
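The feature-to-hash mapping can be illustrated with a far simpler baseline than the autoencoder this record describes (the median rule below is our stand-in, not the authors' network): threshold each learned feature dimension at its dataset median, so every bit splits the collection in half, and compare codes by Hamming distance.

```python
import numpy as np

def binary_hash(features):
    """Binarize real-valued descriptors: bit = 1 where a feature exceeds
    its dataset-wide median, giving balanced (high-entropy) bits."""
    F = np.asarray(features, dtype=float)
    return (F > np.median(F, axis=0)).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int((a != b).sum())
```

With any reasonable feature space, images with similar descriptors land on nearby codes, which is the property the learned hash is trained to strengthen.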
Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model
Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal
2016-01-01
In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769
Deformable templates guided discriminative models for robust 3D brain MRI segmentation.
Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen
2013-10-01
Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.
Landmark-based deep multi-instance learning for brain disease diagnosis.
Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang
2018-01-01
In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liang, Yu-Li
Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on ice is hampered by diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. 
We propose SafeVchat, the first solution that achieves a satisfactory detection rate, using facial features and a skin color model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors along with a Bag-of-Visual-Words framework. In addition, an investigation of a new contour feature for detecting obscene content is presented.
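The skin color model used by SafeVchat is not specified in this record; as an illustration of the kind of rule such detectors build on, here is the classic explicit RGB skin rule of Peer et al. (the function name and its use here are our own, not the thesis's model):

```python
def is_skin(r, g, b):
    """Classic explicit RGB skin-colour rule (Peer et al.) for uniform
    daylight illumination; a per-pixel stand-in for a learned skin model."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)
```

A detector would apply this per pixel and flag frames whose skin-pixel fraction exceeds a threshold; learned color models replace the hand-set constants with distributions fitted to labeled data.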
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has been given extensive attention due to its high subjective image quality and low bit rates. But the estimation of object motion parameters is still a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition
NASA Astrophysics Data System (ADS)
Hafizhelmi Kamaru Zaman, Fadhlan
2018-03-01
Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally applied Local Orthogonal Least Squares (LOLS) model as initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy compared to when either LLE or OLS alone is used. Comparison against several other feature extraction methods and more recent feature-learning methods such as state-of-the-art Convolutional Neural Networks (CNN) also reveals the superiority of the proposed method under the SSPP constraint.
Yoo, Youngjin; Tang, Lisa Y W; Brosch, Tom; Li, David K B; Kolind, Shannon; Vavasour, Irene; Rauscher, Alexander; MacKay, Alex L; Traboulsee, Anthony; Tam, Roger C
2018-01-01
Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at an early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches are selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then utilized to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier. 
Using the myelin and T1w images of 55 relapse-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
GPS Imaging of Global Vertical Land Motion for Sea Level Studies
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Blewitt, G.; Hamlington, B. D.
2015-12-01
Coastal vertical land motion contributes to the signal of local relative sea level change. Moreover, understanding global sea level change requires understanding local sea level rise at many locations around Earth. It is therefore essential to understand the regional secular vertical land motion attributable to mantle flow, tectonic deformation, glacial isostatic adjustment, postseismic viscoelastic relaxation, groundwater basin subsidence, elastic rebound from groundwater unloading or other processes that can change the geocentric height of tide gauges anchored to the land. These changes can affect inferences of global sea level rise and should be taken into account for global projections. We present new results of GPS imaging of vertical land motion across most of Earth's continents including its ice-free coastlines around North and South America, Europe, Australia, Japan, parts of Africa and Indonesia. These images are based on data from many independent open access globally distributed continuously recording GPS networks including over 13,500 stations. The data are processed in our system to obtain solutions aligned to the International Terrestrial Reference Frame (ITRF08). To generate images of vertical rate we apply the Median Interannual Difference Adjusted for Skewness (MIDAS) algorithm to the vertical time series to obtain robust non-parametric estimates with realistic uncertainties. We estimate the vertical land motion at 1420 tide gauge locations using Delaunay-based geographic interpolation with an empirically derived distance weighting function and median spatial filtering. The resulting image is insensitive to outliers and steps in the GPS time series, omits short wavelength features attributable to unstable stations or unrepresentative rates, and emphasizes long-wavelength mantle-driven vertical rates.
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery.
Zhao, Yi; Ma, Jiale; Li, Xiaohui; Zhang, Jie
2018-02-27
An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide direct georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification is the lack of unified identification marks: the fire features of color, shape, texture (smoke, flame, or both) and background can vary significantly from one scene to another. Deep learning (e.g., DCNN for Deep Convolutional Neural Network) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used in data augmentation and the formation of a standard fire image dataset 'UAV_Fire'. A 15-layered self-learning DCNN architecture named 'Fire_Net' is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with regard to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, 'Fire_Net' guaranteed an average processing speed of 41.5 ms per image for real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports, and all of them were accurately identified.
The tectonics of Titan: Global structural mapping from Cassini RADAR
Liu, Zac Yung-Chun; Radebaugh, Jani; Harris, Ron A.; Christiansen, Eric H.; Neish, Catherine D.; Kirk, Randolph L.; Lorenz, Ralph D.; ,
2016-01-01
The Cassini RADAR mapper has imaged elevated mountain ridge belts on Titan with a linear-to-arcuate morphology indicative of a tectonic origin. Systematic geomorphologic mapping of the ridges in Synthetic Aperture RADAR (SAR) images reveals that the orientation of ridges is globally E–W and the ridges are more common near the equator than the poles. Comparison with a global topographic map reveals the equatorial ridges are found to lie preferentially at higher-than-average elevations. We conclude the most reasonable formation scenario for Titan’s ridges is that contractional tectonism built the ridges and thickened the icy lithosphere near the equator, causing regional uplift. The combination of global and regional tectonic events, likely contractional in nature, followed by erosion, aeolian activity, and enhanced sedimentation at mid-to-high latitudes, would have led to regional infilling and perhaps covering of some mountain features, thus shaping Titan’s tectonic landforms and surface morphology into what we see today.
Toward a Global Bundle Adjustment of SPOT 5 - HRS Images
NASA Astrophysics Data System (ADS)
Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.
2012-07-01
The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with two forward- and backward-looking telescopes observing the Earth at an angle of 20° ahead of and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacity for bundle adjustment of SPOT 5 - HRS spatial images has largely improved. Today a global single block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: the first computed blocks were composed of only 40 images, then bigger blocks were computed. Finally, only one global block is now computed. At the same time, calculation tools have improved: for example, the adjustment of 2,000 images of North Africa takes about 2 minutes, whereas 8 hours were needed two years ago. To reach such a result, new independent software was developed to compute fast and efficient bundle adjustments. Likewise, the equipment - GCPs (Ground Control Points) and tie points - and techniques have also evolved over the last 10 years. Studies were made to derive recommendations about the equipment needed to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to a few meters, whereas non-adjusted images are only accurate to about 15 m. This paper describes the methods used in IGN Espace to compute a global single block composed of almost 20,000 HRS images, 500 GCPs and several million tie points in reasonable calculation time. Many advantages can be found in using such a block. 
Because the global block is unique, it becomes easier to manage the history and the successive evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts at the borders of DSMs (Digital Surface Models) and orthoimages historically calculated from different blocks. Extrapolation far from GCPs at the limits of the images is no longer necessary. Using the global block as a reference will allow new images from other sources to be easily located against this reference.
NASA Astrophysics Data System (ADS)
Negahdar, Mohammadreza; Zacarias, Albert; Milam, Rebecca A.; Dunlap, Neal; Woo, Shiao Y.; Amini, Amir A.
2012-03-01
Treatment plan evaluation for lung cancer patients involves pre-treatment and post-treatment volume CT imaging of the lung. However, radiation treatment of the tumor results in structural changes to the lung during the course of treatment. In order to register the pre-treatment volume to the post-treatment volume, there is a need for robust, homologous features that are not affected by the radiation treatment, along with a smooth deformation field. Since airways are well distributed throughout the entire lung, in this paper we propose the use of airway tree bifurcations for registration of the pre-treatment volume to the post-treatment volume. A dedicated, automated algorithm has been developed that finds corresponding airway bifurcations in both images. To derive the 3-D deformation field, a B-spline transformation model guided by a mutual information similarity metric was used to guarantee the smoothness of the transformation while incorporating global information from the bifurcation points. The approach therefore combines global statistical intensity information with local image feature information. Since the lung undergoes large nonlinear deformations during normal breathing, the proposed method is also expected to be applicable to large-deformation registration between maximum-inhale and maximum-exhale images of the same subject. The method has been evaluated by registering the 3-D CT volume at maximum exhale to all the other temporal volumes in the POPI-model data.
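Since the registration above is driven by a mutual information metric, it may help to see how that quantity is computed. The sketch below, assuming nothing beyond NumPy, estimates mutual information from a joint intensity histogram; the function name and bin count are illustrative, not the paper's implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram.

    a, b: arrays of identical shape (e.g. fixed and warped moving volume).
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image is maximally informative about itself; independent noise is not.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img) > mutual_information(img, rng.random((64, 64))))
```

Maximizing this quantity over the B-spline parameters rewards transformations that make the two intensity distributions statistically dependent, without requiring the images to have matching intensities.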
Lexical Processing in Toddlers with ASD: Does Weak Central Coherence Play a Role?
Ellis Weismer, Susan; Haebig, Eileen; Edwards, Jan; Saffran, Jenny; Venker, Courtney E
2016-12-01
This study investigated whether vocabulary delays in toddlers with autism spectrum disorders (ASD) can be explained by a cognitive style that prioritizes processing of detailed, local features of input over global contextual integration, as claimed by the weak central coherence (WCC) theory. Thirty toddlers with ASD and 30 younger, cognition-matched typical controls participated in a looking-while-listening task that assessed whether perceptual or semantic similarities among named images disrupted word recognition relative to a neutral condition. Overlap of perceptual features invited local processing, whereas semantic overlap invited global processing. With the possible exception of a subset of toddlers who had very low vocabulary skills, these results provide no evidence that WCC is characteristic of lexical processing in toddlers with ASD.
Registration and Fusion of Multiple Source Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline
2004-01-01
Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g., in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWiFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
Diagnostic features of Alzheimer's disease extracted from PET sinograms
NASA Astrophysics Data System (ADS)
Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.
2002-01-01
Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task, due to the poor signal-to-noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD patients and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results, when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with sensitivity of 94% and specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992), using regional metabolic activity.
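The triple features above compose three functionals: a trace functional applied along each scan line, a diametric functional over line offsets, and a circus functional over angles. A minimal sketch of that composition, sampling only the four right-angle rotations for brevity; the functional choices here are illustrative, not those of the study.

```python
import numpy as np

def triple_feature(img, trace=np.sum, diametric=np.max, circus=np.mean):
    """Trace-transform 'triple feature': compose three functionals.

    For each rotation of the image, apply `trace` along every scan line,
    reduce the resulting profile with `diametric`, then reduce over all
    angles with `circus`, yielding a single scalar descriptor. Only the
    four right-angle rotations are sampled here for brevity; a full
    implementation would sweep many angles (e.g. via interpolation).
    """
    per_angle = []
    for k in range(4):                       # 0, 90, 180, 270 degrees
        rot = np.rot90(img, k)
        profile = trace(rot, axis=1)         # trace functional along lines
        per_angle.append(diametric(profile)) # diametric functional over offsets
    return float(circus(np.asarray(per_angle)))  # circus functional over angles

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
print(triple_feature(img))  # 16.0 for this centered square
```

Different (trace, diametric, circus) combinations yield different scalar features with different invariance properties, which is how both the invariant and the rotation-sensitive feature sets can be generated from the same transform.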
HOTEX: An Approach for Global Mapping of Human Built-Up and Settlement Extent
NASA Technical Reports Server (NTRS)
Wang, Panshi; Huang, Chengquan; Tilton, James C.; Tan, Bin; Brown De Colstoun, Eric C.
2017-01-01
Understanding the impacts of urbanization requires accurate and updatable urban extent maps. Here we present an algorithm for mapping urban extent at global scale using Landsat data. An innovative hierarchical object-based texture (HOTex) classification approach was designed to overcome spectral confusion between urban and nonurban land cover types. VIIRS nightlights data and MODIS vegetation index datasets are integrated as high-level features under an object-based framework. We applied the HOTex method to the GLS-2010 Landsat images to produce a global map of human built-up and settlement extent. As shown by visual assessments, our method could effectively map urban extent and generate consistent results using images with inconsistent acquisition time and vegetation phenology. Using scene-level cross-validation on results in Europe, we assessed the performance of HOTex and achieved a kappa coefficient of 0.91, compared to 0.74 from a baseline per-pixel classification using spectral information alone.
Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio
2018-02-01
Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust from the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
NASA Technical Reports Server (NTRS)
2004-01-01
15 May 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows the results of a small landslide off of a hillslope in the Aureum Chaos region of Mars. Mass movement occurred from right (the slope) to left (the lobate feature pointed left). Small dark dots in the landslide area are large boulders. This feature is located near 2.6°S, 24.5°W. This picture covers an area approximately 3 km (1.9 mi) across and is illuminated by sunlight from the left/upper left.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin
2016-03-01
Current commercialized CAD schemes have high false-positive (FP) detection rates and also have high correlations with radiologists in positive lesion detection. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global-feature-based CAD approach/scheme that can cue a warning sign on cases with a high risk of being positive. In this study, we investigate the possibility of fusing global-feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from global feature extraction (features extracted from the whole breast region) differs from and can provide supplementary information to the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection score (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (<= 1 FP per image). Namely, increases in sensitivity (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI- and case-based results, respectively.
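The abstract does not give the exact adjustment rule, so the sketch below illustrates adaptive cueing with a hypothetical linear blend: lesion scores Sorg are shifted up or down according to the case-based score Scase. Function name, blend rule, and the 0.5 pivot are assumptions for illustration only.

```python
import numpy as np

def adaptive_cueing(s_org, s_case, alpha=0.5):
    """Adjust lesion-based CAD scores using the case-based (global) score.

    s_org:  original region detection scores in [0, 1]
    s_case: case-based score in [0, 1] for the whole case
    alpha:  cueing strength; the actual adjustment rule in the paper is
            not specified in the abstract, so a linear blend is used here.
    Regions in cases flagged as high risk (s_case > 0.5) are boosted,
    while regions in low-risk cases are suppressed.
    """
    s_org = np.asarray(s_org, dtype=float)
    shift = alpha * (s_case - 0.5)           # signed global adjustment
    return np.clip(s_org + shift, 0.0, 1.0)

scores = adaptive_cueing([0.4, 0.7], s_case=0.9)
print(scores)  # both region scores raised by 0.2
```

The intent matches the stated hypothesis: because global and local features carry complementary information, modulating local scores by the case-level score can push true detections above the operating threshold while pulling down FPs in low-risk cases.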
Treelets Binary Feature Retrieval for Fast Keypoint Recognition.
Zhu, Jianke; Wu, Chenxia; Chen, Chun; Cai, Deng
2015-10-01
Fast keypoint recognition is essential to many vision tasks. In contrast to the classification-based approaches, we directly formulate the keypoint recognition as an image patch retrieval problem, which enjoys the merit of finding the matched keypoint and its pose simultaneously. To effectively extract the binary features from each patch surrounding the keypoint, we make use of treelets transform that can group the highly correlated data together and reduce the noise through the local analysis. Treelets is a multiresolution analysis tool, which provides an orthogonal basis to reflect the geometry of the noise-free data. To facilitate the real-world applications, we have proposed two novel approaches. One is the convolutional treelets that capture the image patch information locally and globally while reducing the computational cost. The other is the higher-order treelets that reflect the relationship between the rows and columns within image patch. An efficient sub-signature-based locality sensitive hashing scheme is employed for fast approximate nearest neighbor search in patch retrieval. Experimental evaluations on both synthetic data and the real-world Oxford dataset have shown that our proposed treelets binary feature retrieval methods outperform the state-of-the-art feature descriptors and classification-based approaches.
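The sub-signature-based locality sensitive hashing step described above can be sketched as follows: each binary descriptor is split into fixed-width bands (sub-signatures), and patches colliding in at least one band become retrieval candidates. Band count and code length below are illustrative, not the paper's settings.

```python
import numpy as np
from collections import defaultdict

def build_lsh_index(codes, n_bands=4):
    """Index binary codes by splitting each into fixed-width sub-signatures.

    codes: (n, d) array of 0/1 bits. Two codes collide in a band when that
    sub-signature matches exactly, so near-duplicates (small Hamming
    distance) are likely to share at least one band.
    """
    n, d = codes.shape
    width = d // n_bands
    tables = [defaultdict(list) for _ in range(n_bands)]
    for i, code in enumerate(codes):
        for b in range(n_bands):
            key = tuple(code[b * width:(b + 1) * width])
            tables[b][key].append(i)
    return tables, width

def query(tables, width, code):
    """Return candidate indices sharing at least one sub-signature."""
    cands = set()
    for b, table in enumerate(tables):
        cands.update(table.get(tuple(code[b * width:(b + 1) * width]), []))
    return cands

rng = np.random.default_rng(1)
codes = rng.integers(0, 2, size=(100, 32))
tables, w = build_lsh_index(codes)
near = codes[7].copy(); near[0] ^= 1          # flip one bit of stored code 7
print(7 in query(tables, w, near))            # True: bands 1-3 still collide
```

Only the candidate set then needs exact Hamming comparison, which is what makes the patch retrieval approximate-nearest-neighbor search fast.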
Zhou, Yongxia; Yu, Fang; Duong, Timothy
2014-01-01
This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (caudate volume, caudate-cortical functional connectivity and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy when compared with the single imaging features. This approach could potentially serve as a biomarker in prognosis, diagnosis, and monitoring disease progression.
Brain tumor classification and segmentation using sparse coding and dictionary learning.
Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo
2016-08-01
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
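Once dictionaries are learned with K-SVD, encoding a new sample is typically done with Orthogonal Matching Pursuit. A small self-contained sketch of that sparse coding step (the toy dictionary below is hand-built, not learned by K-SVD, and the greedy loop is a simplified OMP):

```python
import numpy as np

def omp(D, x, n_nonzero=3):
    """Orthogonal Matching Pursuit: greedy sparse code of x over dictionary D.

    D: (m, k) dictionary with unit-norm columns (atoms).
    Returns a length-k coefficient vector with at most n_nonzero entries.
    """
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected atoms, then update the residual
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

# Toy dictionary: 4 orthonormal atoms plus 2 extra normalized atoms
D = np.array([[1., 0., 0., 0., 0.5,  0.5],
              [0., 1., 0., 0., 0.5, -0.5],
              [0., 0., 1., 0., 0.5,  0.5],
              [0., 0., 0., 1., 0.5, -0.5]])
x = 2.0 * D[:, 0] - 1.5 * D[:, 2]              # exactly 2-sparse signal
code = omp(D, x, n_nonzero=2)
print(np.allclose(D @ code, x))                # exact recovery here
```

In the paper's framework, such sparse codes over the feature dictionary drive the classification, while codes over the coupled gray-value/label dictionaries drive the voxel-wise segmentation.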
Extensions of algebraic image operators: An approach to model-based vision
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morelli, Michael V.
1990-01-01
Researchers extend their previous research on a highly structured and compact algebraic representation of grey-level images, which can be viewed as fuzzy sets. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, researchers devised an innovative, efficient edge detection scheme. An accurate method for deriving gradient component information from this edge detector is presented. Based upon this new edge detection system, researchers developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints are derived by the feature extraction algorithm and employed during model matching. The algebraic operators are global operations which are easily reconfigured to operate on any size or shape of region. This provides a natural platform from which to pursue dynamic scene analysis. A method for optimizing the linear feature extractor which capitalizes on the spatial reconfigurability of the edge detector/gradient component operator is discussed.
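The Hough transform half of the described line extractor can be sketched as vote accumulation in (theta, rho) space; the parameterization below is the standard normal form rho = x cos(theta) + y sin(theta), not necessarily the authors' exact implementation.

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Accumulate votes in (theta, rho) space for the set pixels of a
    binary edge image, using the normal-form line parameterization."""
    h, w = binary.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))       # max possible |rho|
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # each edge pixel votes for one rho bin per theta
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, thetas, rhos

img = np.zeros((40, 40), dtype=bool)
img[:, 12] = True                             # vertical line x = 12
acc, thetas, rhos = hough_lines(img)
ri, ti = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[ri], int(round(np.degrees(thetas[ti]))))  # 12 0
```

Peaks in the accumulator give candidate lines; a line follower can then walk each candidate to recover segment endpoints and lengths, as the abstract describes.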
Evolution of regional to global paddy rice mapping methods
NASA Astrophysics Data System (ADS)
Dong, J.; Xiao, X.
2016-12-01
Paddy rice agriculture plays an important role in various environmental issues including food security, water use, climate change, and disease transmission. However, regional and global paddy rice maps are surprisingly scarce and sporadic despite numerous efforts in paddy rice mapping algorithms and applications. In this presentation we review the existing paddy rice mapping methods in the literature from the 1980s to 2015. In particular, we illustrate the evolution of these paddy rice mapping efforts, looking specifically at the future trajectory of paddy rice mapping methodologies. The biophysical features and growth phases of paddy rice are analyzed first, and feature selections for paddy rice mapping are examined from spectral, polarimetric, temporal, spatial, and textural aspects. We sort paddy rice mapping algorithms into four categories: 1) reflectance data and image statistic-based approaches; 2) vegetation index (VI) data and enhanced image statistic-based approaches; 3) VI or RADAR backscatter-based temporal analysis approaches; and 4) phenology-based approaches through remote sensing recognition of key growth phases. Phenology-based approaches, which exploit unique features of paddy rice (e.g., transplanting) for mapping, have been increasingly used. Based on the literature review, we discuss a series of issues for large-scale operational paddy rice mapping.
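As an illustration of the fourth category, many phenology-based algorithms flag the flooding/transplanting phase when a water index approaches or exceeds the vegetation index. The sketch below uses the commonly cited form of this rule; the index formulas are the standard EVI and LSWI definitions, and the 0.05 buffer is the convention widely used in the literature rather than a value taken from this presentation.

```python
import numpy as np

def flooding_signal(nir, red, blue, swir):
    """Phenology-based flooding/transplanting flag used by many VI-based
    paddy rice algorithms: a pixel is flagged when the water signal
    (LSWI) approaches or exceeds the vegetation signal (EVI).
    Inputs are surface reflectances in [0, 1]."""
    evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
    lswi = (nir - swir) / (nir + swir)
    return lswi + 0.05 >= evi                # 0.05 buffer per convention

# A flooded-paddy-like pixel (SWIR strongly absorbed by standing water)
print(flooding_signal(nir=0.15, red=0.08, blue=0.04, swir=0.05))  # True
# A dense-vegetation pixel: vegetation signal dominates
print(flooding_signal(nir=0.45, red=0.05, blue=0.03, swir=0.25))  # False
```

Applied to a dense image time series, this per-date flag combined with the expected timing of transplanting yields the paddy rice mask, which is what distinguishes the phenology-based approaches from the purely statistical ones.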
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
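The key computational idea above, target-modulated responses kept in check by divisive normalization, can be sketched with a sliding-window correlation; the window and normalization pool below are simplifications for illustration, not the model's actual retinotopic architecture.

```python
import numpy as np

def priority_map(image, target, eps=1e-6):
    """Target-modulated feature map with divisive normalization.

    Correlate every image patch with the target template (top-down
    modulation), then divide by the local patch energy so that a very
    salient but target-unlike region cannot monopolize the map.
    """
    th, tw = target.shape
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    t = target - target.mean()               # zero-mean template
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            drive = np.sum(patch * t)                    # top-down match
            energy = np.sqrt(np.sum(patch ** 2)) + eps   # local pool
            out[i, j] = drive / energy                   # divisive inhibition
    return out

rng = np.random.default_rng(0)
scene = rng.random((30, 30)) * 0.2                 # low-contrast background
target = np.ones((5, 5)); target[1:4, 1:4] = 0.0   # ring-shaped target
scene[20:25, 10:15] += target                      # embed target in scene
pmap = priority_map(scene, target)
print(np.unravel_index(np.argmax(pmap), pmap.shape))
```

The maximum of the map lands on the embedded target, mimicking the model's selection of the next locus of attention.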
Global detection of large lunar craters based on the CE-1 digital elevation model
NASA Astrophysics Data System (ADS)
Luo, Lei; Mu, Lingli; Wang, Xinyuan; Li, Chao; Ji, Wei; Zhao, Jinjin; Cai, Heng
2013-12-01
Craters, one of the most significant features of the lunar surface, have been widely researched because they offer the relative age of a surface unit as well as crucial geological information. Research on crater detection algorithms (CDAs) for the Moon and other planetary bodies has concentrated on detecting craters from imagery data, but the computational cost of detecting large craters using images makes these CDAs impractical. This paper presents a new approach to crater detection that utilizes a digital elevation model instead of images; this enables fully automatic global detection of large craters. Craters were delineated by terrain attributes; thresholded maps of these attributes were then used to transform the topographic data into a binary image, and craters were finally detected from the binary image using the Hough transform. Using the proposed algorithm, we produced a catalog of all craters ⩾10 km in diameter on the lunar surface and analyzed their distribution and population characteristics.
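The terrain-attribute thresholding step that produces the binary image can be sketched as follows; the attributes (slope and a Laplacian-like curvature) and the thresholds are illustrative choices, not the paper's exact ones.

```python
import numpy as np

def crater_candidate_mask(dem, slope_thresh=0.5, curv_thresh=0.05):
    """Binarize a DEM by terrain attributes, as a precursor to circle
    detection: crater rims and walls show high slope and strongly
    convex or concave curvature."""
    dz_dy, dz_dx = np.gradient(dem)
    slope = np.hypot(dz_dx, dz_dy)              # gradient magnitude
    d2y = np.gradient(dz_dy, axis=0)
    d2x = np.gradient(dz_dx, axis=1)
    curvature = d2x + d2y                       # Laplacian-like curvature
    return (slope > slope_thresh) | (np.abs(curvature) > curv_thresh)

# Synthetic bowl-shaped crater (depth 5, radius 10) in a flat DEM
y, x = np.mgrid[-20:21, -20:21].astype(float)
r = np.hypot(x, y)
dem = np.where(r < 10, 0.05 * (r ** 2 - 100), 0.0)
mask = crater_candidate_mask(dem)
print(mask.any() and not mask[0, 0])  # crater flagged, flat corner is not
```

A circular Hough transform applied to such a mask then votes for candidate (center, radius) pairs, which is far cheaper than image-based detection for large craters.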
Mercer, David
2018-02-01
A notable feature in the public framing of debates involving the science of Anthropogenic Global Warming is the appeal to uncritical 'positivist' images of the ideal scientific method. Versions of Sir Karl Popper's philosophy of falsification appear most frequently, featuring on many Web sites and in the broader media. This use of pop philosophy of science forms part of strategies used by critics, mainly from conservative political backgrounds, to manufacture doubt in the veracity of Anthropogenic Global Warming science by setting unrealistic standards for sound science. It will be shown, nevertheless, that prominent supporters of Anthropogenic Global Warming science also often use similar references to Popper to support their claims. It will also be suggested that this pattern reflects longer traditions of the use of Popperian philosophy of science in controversial settings, particularly in the United States, where appeals to the authority of science to legitimize policy have been most common. It will be concluded that studies of the Anthropogenic Global Warming debate would benefit from taking greater interest in questions raised by un-reflexive and politically expedient public understanding(s) of the philosophy of science among both critics and supporters of the science of Anthropogenic Global Warming.
Hargrave, Catriona; Deegan, Timothy; Poulsen, Michael; Bednarz, Tomasz; Harden, Fiona; Mengersen, Kerrie
2018-05-17
To develop a method for scoring online cone-beam CT (CBCT)-to-planning CT image feature alignment to inform prostate image-guided radiotherapy (IGRT) decision-making. The feasibility of incorporating volume variation metric thresholds predictive of delivering planned dose into weighted functions was investigated. Radiation therapists and radiation oncologists participated in workshops where they reviewed prostate CBCT-IGRT case examples and completed a paper-based survey of image feature matching practices. For 36 prostate cancer patients, one daily CBCT was retrospectively contoured then registered with their plan to simulate delivered dose if (a) no online setup corrections and (b) online image alignment and setup corrections, were performed. Survey results were used to select variables for inclusion in classification and regression tree (CART) and boosted regression tree (BRT) modeling of volume variation metric thresholds predictive of delivering planned dose to the prostate, proximal seminal vesicles (PSV), bladder, and rectum. Weighted functions incorporating the CART and BRT results were used to calculate a score of individual tumor and organ-at-risk image feature alignment (FAS_TV_OAR). Scaled and weighted FAS_TV_OAR scores were then used to calculate a score of overall treatment compliance (FAS_global) for a given CBCT-planning CT registration. The FAS_TV_OAR scores were assessed for sensitivity, specificity, and predictive power. FAS_global thresholds indicative of high, medium, or low overall treatment plan compliance were determined using coefficients from multiple linear regression analysis. Thirty-two participants completed the prostate CBCT-IGRT survey.
While responses demonstrated consensus of practice for preferential ranking of planning CT and CBCT match features in the presence of deformation and rotation, variation existed in the specified thresholds for observed volume differences requiring patient repositioning or repeat bladder and bowel preparation. The CART and BRT modeling indicated that for a given registration, a Dice similarity coefficient >0.80 and >0.60 for the prostate and PSV, respectively, and a maximum Hausdorff distance <8.0 mm for both structures were predictive of delivered dose ± 5% of planned dose. A normalized volume difference <1.0 and a CBCT anterior rectum wall >1.0 mm anterior to the planning CT anterior rectum wall were predictive of delivered dose >5% of planned rectum dose. A normalized volume difference <0.88, and a CBCT bladder wall >13.5 mm inferior and >5.0 mm posterior to the planning CT bladder were predictive of delivered dose >5% of planned bladder dose. A FAS_TV_OAR >0 is indicative of delivery of planned dose. For the calculated FAS_TV_OAR for the prostate, PSV, bladder, and rectum using test data, sensitivity was 0.56, 0.75, 0.89, and 1.00, respectively; specificity 0.90, 0.94, 0.59, and 1.00, respectively; positive predictive power 0.90, 0.86, 0.53, and 1.00, respectively; and negative predictive power 0.56, 0.89, 0.91, and 1.00, respectively. Thresholds for the calculated FAS_global were low (<60), medium (60-80), and high (>80), with a 27% misclassification rate for the test data. A FAS_global incorporating nested FAS_TV_OAR scores and volume variation metric thresholds predictive of treatment plan compliance was developed, offering an alternative to pretreatment dose calculations to assess treatment delivery accuracy. © 2018 American Association of Physicists in Medicine.
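The volume variation metrics underpinning these thresholds, the Dice similarity coefficient and the Hausdorff distance, can be computed directly from binary masks. A minimal NumPy sketch in pixel units (real use would scale distances by voxel spacing to get mm, as in the thresholds above):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the point sets of two binary
    masks (brute force; fine for small structures)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Planning contour vs. a CBCT contour shifted by one pixel
plan = np.zeros((20, 20), dtype=bool); plan[5:15, 5:15] = True
cbct = np.zeros((20, 20), dtype=bool); cbct[6:16, 5:15] = True
print(round(dice(plan, cbct), 3), hausdorff(plan, cbct))  # 0.9 1.0
```

Comparing such per-structure values against the CART/BRT-derived thresholds (e.g., prostate Dice >0.80, Hausdorff <8.0 mm) is what feeds the FAS scoring.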
Flow Ejecta and Slope Landslides in Small Crater - High Resolution Image
NASA Technical Reports Server (NTRS)
1998-01-01
This high resolution picture of a moderately small impact crater on Mars was taken by the Mars Global Surveyor Orbiter Camera (MOC) on October 17, 1997 at 4:11:07 PM PST, during MGS orbit 22. The image covers an area 2.9 by 48.4 kilometers (1.8 by 30 miles) at 9.6 m (31.5 feet) per picture element, and is centered at 21.3 degrees N, 179.8 degrees W, near Orcus Patera. The MOC image is a factor of 15X better than previous Viking views of this particular crater.
The unnamed crater is one of three closely adjacent impact features that display the ejecta pattern characteristic of one type of 'flow-ejecta' crater. Such patterns are considered evidence of fluidized movement of the materials ejected during the cratering event, and are believed to indicate the presence of subsurface ice or liquid water. Long, linear features of different brightness values can be seen on the steep slopes inside and outside the crater rim. This type of feature, first identified in Viking Orbiter images acquired over 20 years ago, is more clearly seen in this new view (about 3 times better than the best previous observations). The most likely explanation is that small land or dirt slides, initiated by seismic or wind action, have flowed down the steep slopes. Initially dark because of the nature of the surface disturbance, these features get lighter with time as the ubiquitous fine, bright dust settles onto them from the martian atmosphere. Based on estimates of the dust fall-out rate, many of these features are probably only a few tens to hundreds of years old. Thus, they are evidence of a process that is active on Mars today. Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for achieving multilevel annotation of large-scale images automatically. To achieve a richer representation of the various visual properties of the images, both global and local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn correlated classifiers for sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have also obtained very positive results.
Studying the Surfaces of the Icy Galilean Satellites With JIMO
NASA Astrophysics Data System (ADS)
Prockter, L.; Schenk, P.; Pappalardo, R.
2003-12-01
The Geology subgroup of the Jupiter Icy Moons Orbiter (JIMO) Science Definition Team (SDT) has been working with colleagues within the planetary science community to determine the key outstanding science goals that could be met by the JIMO mission. Geological studies of the Galilean satellites will benefit from the spacecraft's long orbital periods around each satellite, lasting from one to several months. This mission plan allows us to select the optimal viewing conditions to complete global compositional and morphologic mapping at high resolution, and to target geologic features of key scientific interest at very high resolution. Community input to this planning process suggests two major science objectives, along with corresponding measurements proposed to meet them. Objective 1: Determine the origins of surface features and their implications for geological history and evolution. This encompasses investigations of magmatism (intrusion, extrusion, and diapirism), tectonism (isostatic compensation, and styles of faulting, flexure and folding), impact cratering (morphology and distribution), and gradation (erosion and deposition) processes (impact gardening, sputtering, mass wasting and frosts). Suggested measurements to meet this goal include (1) two-dimensional global topographic mapping sufficient to discriminate features at a spatial scale of 10 m, and with better than or equal to 1 m relative vertical accuracy, (2) nested images of selected target areas at a range of resolutions down to the submeter pixel scale, (3) global (albedo) mapping at better than or equal to 10 m/pixel, and (4) multispectral global mapping in at least 3 colors at better than or equal to 100 m/pixel, with some subsets at better than 30 m/pixel. Objective 2: Identify and characterize potential landing sites for future missions.
A primary component to the success of future landed missions is full characterization of potential sites in terms of their relative age, geological interest, and engineering safety. Measurement requirements suggested to meet this goal (in addition to the requirements of Objective 1) include the acquisition of super-high resolution images of selected target areas (with intermediate context imaging) down to 25 cm/pixel scale. The Geology subgroup passed these recommendations to the full JIMO Science Definition Team, to be incorporated into the final science recommendations for the JIMO mission.
Robust feature matching via support-line voting and affine-invariant ratios
NASA Astrophysics Data System (ADS)
Li, Jiayuan; Hu, Qingwu; Ai, Mingyao; Zhong, Ruofei
2017-10-01
Robust image matching is crucial for many applications of remote sensing and photogrammetry, such as image fusion, image registration, and change detection. In this paper, we propose a robust feature matching method based on support-line voting and affine-invariant ratios. We first use popular feature matching algorithms, such as SIFT, to obtain a set of initial matches. A support-line descriptor based on multiple adaptive binning gradient histograms is subsequently applied in the support-line voting stage to filter outliers. In addition, we use affine-invariant ratios computed by a two-line structure to refine the matching results and estimate the local affine transformation. The local affine model is more robust to distortions caused by elevation differences than the global affine transformation, especially for high-resolution remote sensing images and UAV images. Thus, the proposed method is suitable for both rigid and non-rigid image matching problems. Finally, we extract as many high-precision correspondences as possible based on the local affine extension and build a grid-wise affine model for remote sensing image registration. We compare the proposed method with six state-of-the-art algorithms on several data sets and show that our method significantly outperforms the other methods. The proposed method achieves 94.46% average precision on 15 challenging remote sensing image pairs, while the second-best method, RANSAC, only achieves 70.3%. In addition, the number of detected correct matches of the proposed method is approximately four times the number of initial SIFT matches.
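The affine invariance that the two-line ratio check exploits can be illustrated with collinear points: ratios of lengths along a common line survive any affine map. A toy sketch of that property (hypothetical helper names, not the paper's code):

```python
import math

def affine_apply(A, b, p):
    # Apply x -> A x + b to a 2-D point p.
    return (A[0][0] * p[0] + A[0][1] * p[1] + b[0],
            A[1][0] * p[0] + A[1][1] * p[1] + b[1])

def length_ratio(p, q, r):
    # Ratio |pq| / |pr| for three collinear points; ratios of lengths
    # along a common line are preserved by any affine transformation,
    # which makes them usable as a matching consistency check.
    d = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    return d(p, q) / d(p, r)
```

Matches whose ratios change under a candidate transformation can be flagged as outliers, which is the intuition behind the refinement stage.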
Experimental study of canvas characterization for paintings
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2010-02-01
The work described here fits in the context of a larger project on the objective and relevant characterization of paintings and painting canvas through the analysis of multimodal digital images. We captured, amongst others, X-ray images of different canvas types, characterized by a variety of textures and weave patterns (fine and rougher texture; single thread and multiple threads per weave), including raw canvas as well as canvas processed with different primers. In this paper, we study how to characterize the canvas by extracting global features such as average thread width, average distance between successive threads (i.e. thread density) and the spatial distribution of primers. These features are then used to construct a generic model of the canvas structure. Secondly, we investigate whether we can identify different pieces of canvas coming from the same bolt. This is an important element for dating, authentication and identification of restorations. Both the global characteristics mentioned earlier and some local properties (such as deviations from the average pattern model) are used to compare the "fingerprint" of different pieces of cloth coming from the same or different bolts.
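Average thread spacing (the inverse of thread density) can be estimated from a 1-D brightness profile taken across the weave by averaging the distance between intensity peaks. A minimal sketch under that assumption (not the authors' actual pipeline):

```python
def mean_thread_spacing(profile):
    # Locate local intensity maxima (candidate thread centers) in a
    # 1-D brightness profile perpendicular to the weave, then average
    # the gaps between successive peaks. Thread density is 1/spacing.
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]
    if len(peaks) < 2:
        return None  # not enough threads visible to estimate spacing
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps)
```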
NASA Technical Reports Server (NTRS)
2006-01-01
21 July 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a small portion of the floor of Kaiser Crater in the Noachis Terra region, Mars. The terrain in the upper (northern) half of the image is covered by large windblown ripples and a few smoother-surfaced sand dunes. The dominant winds responsible for these features blew from the west/southwest (left/lower left). Location near: 47.2oS, 341.3oW Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Winter
Small domes on Venus: Probable analogs of Icelandic lava shields
Garvin, James B.; Williams, Richard S.
1990-01-01
On the basis of observed shapes and volumetric estimates, we interpret small, dome-like features on radar images of Venus to be analogs of Icelandic lava-shield volcanoes. Using morphometric data for venusian domes in Aubele and Slyuta (in press), as well as our own measurements of representative dome volumes and areas from Tethus Regio, we demonstrate that the characteristic aspect ratios and flank slopes of these features are consistent with a subclass of low Icelandic lava-shield volcanoes (LILS). LILS are slightly convex in cross-section with typical flank slopes of ∼3°. Plausible lava-shield-production rates for the venusian plains suggest formation of ∼53 million shields over the past 0.25 Ga. The cumulative global volume of lava that would be associated with this predicted number of lava shields is only a factor of 3–4 times that of a single oceanic composite shield volcano such as Mauna Loa. The global volume of all venusian lava shields in the 0.5–20-km size range would only contribute a meter of resurfacing over geologically significant time scales. Thus, venusian analogs to LILS may represent the most abundant landform on the globally dominant plains of Venus, but would be insignificant with regard to the global volume of lava extruded. As in Iceland, associated lavas from fissure eruptions probably dominate plains volcanism and should be evident on the higher resolution Magellan radar images.
NASA Technical Reports Server (NTRS)
2004-01-01
12 November 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows light-toned, sedimentary rock outcrops in the Aureum Chaos region of Mars. On the brightest and steepest slope in this scene, dry talus shed from the outcrop has formed a series of dark fans along its base. These outcrops are located near 3.4oS, 27.5oW. The image covers an area approximately 3 km (1.9 mi) across and sunlight illuminates the scene from the upper left.
NASA Technical Reports Server (NTRS)
2005-01-01
17 July 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows channels carved by catastrophic floods in the Tharsis region of Mars. This area is located northwest of the volcano, Jovis Tholus, and east of the large martian volcano, Olympus Mons. The terrain is presently mantled with fine dust. Location near: 20.8oN, 118.8oW Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Autumn
NASA Technical Reports Server (NTRS)
2006-01-01
27 April 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an array of gullies in the north-northwest wall of a crater in Terra Cimmeria. These features may have been formed through the interaction of several processes including, but not limited to, mass wasting and/or seepage and runoff of groundwater. Location near: 33.5oS, 207.2oW Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer
NASA Technical Reports Server (NTRS)
2006-01-01
2 June 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows material on the floor of a crater in Noachis Terra, west of Hellas Planitia. Windblown features, both the large, dark-toned sand dunes and smaller, light-toned ripples, obscure and, perhaps, protect portions of the crater floor from further modification by erosional processes. Location near: 45.4oS, 331.2oW Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer
NASA Technical Reports Server (NTRS)
2004-01-01
1 February 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows large windblown ripples (or, some might say, small dunes) in troughs between mesas of the Tempe Mensa region. The ripples are generally perpendicular to the trough walls, indicating that the winds that formed the features blew through these canyons. The image is located near 33.5oN, 69.2oW. The picture covers an area 3 km (1.9 mi) wide; sunlight illuminates the scene from the lower left.
NASA Astrophysics Data System (ADS)
Huang, Fuqing; Lei, Jiuhou; Dou, Xiankang; Luan, Xiaoli; Zhong, Jiahao
2018-01-01
In this study, coordinated airglow imager, GPS total electron content (TEC), and Beidou geostationary orbit (GEO) TEC observations are used for the first time to investigate the characteristics of nighttime medium-scale traveling ionospheric disturbances (MSTIDs) over central China. The results indicate that the features of nighttime MSTIDs from the three types of observations are generally consistent, whereas the nighttime MSTID features from the Beidou GEO TEC agree better with those from airglow images than the GPS TEC does, given that the nighttime MSTID characteristics from GPS TEC are significantly affected by the Doppler effect due to satellite movement. It is also found that there are three peaks in the seasonal variation of the occurrence rate of nighttime MSTIDs in 2016. Our study reveals that the Beidou GEO satellites provide high-fidelity TEC observations for studying ionospheric variability.
NASA Technical Reports Server (NTRS)
Phillips, M. S.; Moersch, J. E.; Cabrol, N. A.; Davila, A. F.
2018-01-01
The guiding theme of Mars exploration is shifting from global and regional habitability assessment to biosignature detection. To locate features likely to contain biosignatures, it is useful to focus on the reliable identification of specific habitats with high biosignature preservation potential. Proposed chloride deposits on Mars may represent evaporitic environments conducive to the preservation of biosignatures. Analogous chloride-bearing, salt-encrusted playas (salars) are a habitat for life in the driest parts of the Atacama Desert, and are also environments with a taphonomic window. The specific geologic features that harbor and preserve microorganisms in Atacama salars are sub-meter to meter scale salt protuberances, or halite nodules. This study focuses on the ability to recognize and map halite nodules using images acquired from an unmanned aerial vehicle (UAV) at spatial resolutions ranging from mm/pixel to that of the highest resolution orbital images available for Mars.
Transient bright "halos" on the South Polar Residual Cap of Mars: Implications for mass-balance
NASA Astrophysics Data System (ADS)
Becerra, Patricio; Byrne, Shane; Brown, Adrian J.
2015-05-01
Spacecraft imaging of Mars' south polar region during mid-southern summer of Mars year 28 (2007) observed bright halo-like features surrounding many of the pits, scarps and slopes of the heavily eroded carbon dioxide ice of the South Polar Residual Cap (SPRC). These features had not been observed before, and have not been observed since. We report on the results of an observational study of these halos, and spectral modeling of the SPRC surface at the time of their appearance. Image analysis was performed using data from MRO's Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), as well as images from Mars Global Surveyor's (MGS) Mars Orbiter Camera (MOC). Data from MRO's Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) were used for the spectral analysis of the SPRC ice at the time of the halos. These data were compared with a Hapke reflectance model of the surface to constrain their formation mechanism. We find that the unique appearance of the halos is intimately linked to a near-perihelion global dust storm that occurred shortly before they were observed. The combination of vigorous summertime sublimation of carbon dioxide ice from sloped surfaces on the SPRC and simultaneous settling of dust from the global storm, resulted in a sublimation wind that deflected settling dust particles away from the edges of these slopes, keeping these areas relatively free of dust compared to the rest of the cap. The fact that the halos were not exhumed in subsequent years indicates a positive mass-balance for flat portions of the SPRC in those years. A net accumulation mass-balance on flat surfaces of the SPRC is required to preserve the cap, as it is constantly being eroded by the expansion of the pits and scarps that populate its surface.
The essential role of amateur astronomers in enabling the Juno mission interaction with the public
NASA Astrophysics Data System (ADS)
Orton, G. S.; Hansen, C. J.; Tabataba-Vakili, F.; Bolton, S.; Jensen, E.
2017-09-01
JunoCam was added to the payload of the Juno mission largely to function in the role of education and public outreach. For the first time, the public is able to engage in the discussion and choice of targets for a major NASA mission. The discussion about which features to image is enabled by a bi-weekly updated map of Jupiter's cloud system, thereby engaging the community of amateur astronomers as a vast network of co-investigators, whose products stimulate conversation and global public awareness of Jupiter and Juno's investigative role. The contributed images provide the focus for ongoing discussion about various planetary features over a long time frame. Approximately two weeks before Juno's closest approach to Jupiter on each orbit, the atmospheric features that have been under discussion and are available to JunoCam on that perijove are nominated for voting, and the public at large votes on what to image at low latitudes, with the camera always taking images of the poles in each perijove. Public voting was tested for the first time on three regions for PJ3 and has continued since then for nearly all non-polar images. The results of public processing of JunoCam images range all the way from artistic renditions up to professional-equivalent analysis. All aspects of this effort are available on: https://www.missionjuno.swri.edu/junocam/.
Jiang, Jun; Wu, Yao; Huang, Meiyan; Yang, Wei; Chen, Wufan; Feng, Qianjin
2013-01-01
Brain tumor segmentation is a clinical requirement for brain tumor diagnosis and radiotherapy planning. Automating this process is a challenging task due to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this paper, we propose a method to construct a graph by learning the population- and patient-specific feature sets of multimodal magnetic resonance (MR) images and by utilizing the graph-cut to achieve a final segmentation. The probabilities of each pixel that belongs to the foreground (tumor) and the background are estimated by global and custom classifiers that are trained through learning population- and patient-specific feature sets, respectively. The proposed method is evaluated using 23 glioma image sequences, and the segmentation results are compared with other approaches. The encouraging evaluation results obtained, i.e., DSC (84.5%), Jaccard (74.1%), sensitivity (87.2%), and specificity (83.1%), show that the proposed method can effectively make use of both population- and patient-specific information. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
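One common way to feed two classifiers into a graph cut is to average their per-pixel tumor probabilities and use negative log-likelihoods as the unary terms. A hedged sketch of that idea (the paper's exact combination of population- and patient-specific classifiers may differ):

```python
import math

def unary_costs(p_global, p_patient, eps=1e-6):
    # Average the tumor probabilities from the population-level and
    # patient-specific classifiers, then convert to graph-cut unary
    # costs via the negative log-likelihood of each label.
    p = 0.5 * (p_global + p_patient)
    p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
    cost_tumor = -math.log(p)          # cost of labeling the pixel foreground
    cost_background = -math.log(1 - p) # cost of labeling it background
    return cost_tumor, cost_background
```

A graph-cut solver then minimizes these unary costs plus pairwise smoothness terms over neighboring pixels.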
Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.
Zhao, Liya; Jia, Kebin
2016-01-01
Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
Lexical Processing in Toddlers with ASD: Does Weak Central Coherence Play a Role?
Weismer, Susan Ellis; Haebig, Eileen; Edwards, Jan; Saffran, Jenny; Venker, Courtney E.
2016-01-01
This study investigated whether vocabulary delays in toddlers with autism spectrum disorders (ASD) can be explained by a cognitive style that prioritizes processing of detailed, local features of input over global contextual integration – as claimed by the weak central coherence (WCC) theory. Thirty toddlers with ASD and 30 younger, cognition-matched typical controls participated in a looking-while-listening task that assessed whether perceptual or semantic similarities among named images disrupted word recognition relative to a neutral condition. Overlap of perceptual features invited local processing whereas semantic overlap invited global processing. With the possible exception of a subset of toddlers who had very low vocabulary skills, these results provide no evidence that WCC is characteristic of lexical processing in toddlers with ASD. PMID:27696177
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
NASA Astrophysics Data System (ADS)
Dennison, P. E.; Kokaly, R. F.; Daughtry, C. S. T.; Roberts, D. A.; Thompson, D. R.; Chambers, J. Q.; Nagler, P. L.; Okin, G. S.; Scarth, P.
2016-12-01
Terrestrial vegetation is dynamic, expressing seasonal, annual, and long-term changes in response to climate and disturbance. Phenology and disturbance (e.g. drought, insect attack, and wildfire) can result in a transition from photosynthesizing "green" vegetation to non-photosynthetic vegetation (NPV). NPV cover can include dead and senescent vegetation, plant litter, agricultural residues, and non-photosynthesizing stem tissue. NPV cover is poorly captured by conventional remote sensing vegetation indices, but it is readily separable from substrate cover based on spectral absorption features in the shortwave infrared. We will present past research motivating the need for global NPV measurements, establishing that mapping seasonal NPV cover is critical for improving our understanding of ecosystem function and carbon dynamics. We will also present new research that helps determine a best achievable accuracy for NPV cover estimation. To test the sensitivity of different NPV cover estimation methods, we simulated satellite imaging spectrometer data using field spectra collected over mixtures of NPV, green vegetation, and soil substrate. We incorporated atmospheric transmittance and modeled sensor noise to create simulated spectra with spectral resolutions ranging from 10 to 30 nm. We applied multiple methods of NPV estimation to the simulated spectra, including spectral indices, spectral feature analysis, multiple endmember spectral mixture analysis, and partial least squares regression, and compared the accuracy and bias of each method. These results prescribe sensor characteristics for an imaging spectrometer mission with NPV measurement capabilities, as well as a "Quantified Earth Science Objective" for global measurement of NPV cover. Copyright 2016, all rights reserved.
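Spectral mixture analysis of the kind compared above reduces, in the two-endmember case, to a closed-form least-squares projection. An illustrative sketch (endmember names and spectra are hypothetical, not from the study):

```python
def unmix_two_endmembers(mixed, e_npv, e_soil):
    # Closed-form least-squares fraction f for the linear mixing model
    #   mixed = f * e_npv + (1 - f) * e_soil
    # obtained by projecting (mixed - e_soil) onto (e_npv - e_soil).
    diff = [a - b for a, b in zip(e_npv, e_soil)]
    resid = [m - b for m, b in zip(mixed, e_soil)]
    num = sum(d * r for d, r in zip(diff, resid))
    den = sum(d * d for d in diff)
    return num / den
```

Real NPV mapping uses more endmembers and constrained solvers, but the projection above is the core operation they generalize.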
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas have a valley-like appearance, a novel method is proposed to extract the center and width of palm veins based on multi-directional gradients, which is easy to compute, quick and stable. On this basis, an encoding method was designed to determine the gray value distribution of the texture image. This algorithm effectively overcomes texture-extraction errors at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in grain extraction efficiency, matching accuracy and algorithm efficiency.
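The bidirectional gray projection idea sums intensities along rows and columns and keeps the span where both projections exceed a threshold. A minimal sketch (thresholds and names are illustrative, not the paper's values):

```python
def projection_roi(img, row_thresh, col_thresh):
    # Sum gray values along each row and each column; the ROI is the
    # bounding box of rows/columns whose projection exceeds the
    # thresholds (i.e. where the finger actually is).
    rows = [sum(r) for r in img]
    cols = [sum(img[i][j] for i in range(len(img))) for j in range(len(img[0]))]
    r_idx = [i for i, v in enumerate(rows) if v > row_thresh]
    c_idx = [j for j, v in enumerate(cols) if v > col_thresh]
    if not r_idx or not c_idx:
        return None  # nothing above threshold
    return (min(r_idx), max(r_idx), min(c_idx), max(c_idx))
```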
Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images
Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina
2017-01-01
We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes theory to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of contrast, standard deviation, and sharpness quality attributes. A quality index for a quality attribute is the posterior probability of an entropy region given any corresponding region in a feature image where quality attribute is observed. Prior belief in each entropy region is determined from normalized total clique potential (TCP) energy of the slice. For TCP below the predefined threshold, the prior probability for a region is determined by deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volume data provided by Alzheimer’s Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP–noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and the global image. Experimental results demonstrate good correlation with subjective opinions of radiologists for different types and levels of quality distortions. PMID:28630885
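At its core, the quality index described above is a textbook application of Bayes' theorem over the two entropy regions. A schematic sketch (the paper derives its likelihoods and priors from the attribute feature images and TCP energy; the numbers here are placeholders):

```python
def region_posterior(likelihood_low, likelihood_high, prior_low):
    # Posterior probability of the low-entropy region given an observed
    # quality attribute, via Bayes' theorem with two exhaustive regions:
    #   P(low | obs) = P(obs | low) P(low) / P(obs)
    prior_high = 1.0 - prior_low
    evidence = likelihood_low * prior_low + likelihood_high * prior_high
    return likelihood_low * prior_low / evidence
```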
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapped regions, which provide a lot of redundant geometry and radiation information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used in order to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and from the POS data.
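The cross-correlation at the heart of matching stages like MIG3C scores how similar two intensity windows are, independently of brightness and contrast offsets. A 1-D sketch of plain normalized cross-correlation (the paper's matcher adds multi-image geometric constraints on top of this):

```python
import math

def ncc(a, b):
    # Normalized cross-correlation of two equal-length intensity
    # windows: +1 for identical structure, near 0 for uncorrelated
    # patches, invariant to additive and multiplicative gray changes.
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```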
Mars Digital Image Mosaic Globe
NASA Technical Reports Server (NTRS)
2000-01-01
The photomosaic that forms the base for this globe was created by merging two global digital image models (DIM's) of Mars-a medium-resolution monochrome mosaic processed to emphasize topographic features and a lower resolution color mosaic emphasizing color and albedo variations.
The medium-resolution (1/256 or roughly 231 m/pixel) monochromatic image model was constructed from about 6,000 images having resolutions of 150-350 m/pixel and oblique illumination (Sun 20 o -45 o above the horizon). Radiometric processing was intended to suppress or remove the effects of albedo variations through the use of a high-pass divide filter, followed by photometric normalization so that the contrast of a given topographic slope would be approximately the same in all images.The global color mosaic was assembled at 1/64 or roughly 864 m/pixel from about 1,000 red- and green-filter images having 500-1,000 m/pixel resolution. These images were first mosaiced in groups, each taken on a single orbit of the Viking spacecraft. The orbit mosaics were then processed to remove spatially and temporally varying atmospheric haze in the overlap regions. After haze removal, the per-orbit mosaics were photometrically normalized to equalize the contrast of albedo features and mosaiced together with cosmetic seam removal. The medium-resolution DIM was used for geometric control of this color mosaic. A green-filter image was synthesized by weighted averaging of the red- and violet-filter mosaics. Finally, the product seen here was obtained by multiplying each color image by the medium-resolution monochrome image. The color balance selected for images in this map series was designed to be close to natural color for brighter, redder regions, such as Arabia Terra and the Tharsis region, but the data have been stretched so that the relatively dark regions appear darker and less red than they actually are.The images are presented in a projection that portrays the entire surface of Mars in a manner suitable for the production of a globe; the number, size, and placement of text annotations were chosen for a 12-inch globe. Prominent features are labeled with names approved by the International Astronomical Union. 
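The high-pass divide filter mentioned above suppresses albedo by dividing each pixel by a local mean, flattening slowly varying brightness while preserving fine topographic shading. A 1-D sketch of the principle (window size and names are illustrative, not the actual mosaic-processing parameters):

```python
def highpass_divide(signal, window=3):
    # Divide each sample by a local boxcar mean: slowly varying albedo
    # is flattened to ~1 while fine, high-frequency shading survives.
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        local_mean = sum(signal[lo:hi]) / (hi - lo)
        out.append(signal[i] / local_mean)
    return out
```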
A specialized program was used to create the 'flower petal' appearance of the images; the area of each petal from 0 to 75 degrees latitude is in the Transverse Mercator projection, and the area from 75 to 90 degrees latitude is in the Lambert Azimuthal Equal-Area projection. The northern hemisphere of Mars is shown on the left, and the southern hemisphere on the right.
Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.
Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2016-03-01
Blind quality assessment of 3D images encounters more new challenges than its 2D counterparts. In this paper, we propose a blind quality assessment for stereoscopic images by learning the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructing quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal LRF and GRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.
NASA Technical Reports Server (NTRS)
Sakimoto, S. E. H.; Gregg, T. K. P.; Hughes, S. S.; Chadwick, J.
2003-01-01
Prior to the Mars Global Surveyor (MGS) and Mars Odyssey (MO) missions, the Syria Planum region of Mars was noted for several clusters of small (5-100 km) shield volcanoes and collapse craters, long tube- and fissure-fed lava flows, and possible volcanic vents that were thought to be nearly contemporaneous with the volcanism in the Tempe-Mareotis province, which has long been known for volcanic shields and vents analogous to those of the Eastern Snake River Plains (ESRP) in Idaho. Recent MGS-based work on regional and global populations of martian small shields has revealed significant global trends in edifice attributes that are well explained by eruption models with latitudinal variations in subsurface water/ice abundance, consistent with recent MO evidence for significant amounts of subsurface water whose abundance varies with latitude, and with topographic and morphologic evidence for more geologically recent lava-ice relationships. However, while the global trends in small-volcano data can be at least partially explained by volatile interactions with volcanism, some global and regional characteristics appear to be better explained by possible variations in composition, crystallinity, or eruption style. This study expands the sampling of shields from the initial global martian studies for the Syria Planum and Tempe-Mareotis regions, which display a newly visible breadth and number of features in image and topography data. We compare these features to a similar range of features visible in the ESRP, where both compositional and eruption style variations can quantitatively be shown to contribute to morphologic and topographic differences.
NASA Astrophysics Data System (ADS)
Janesick, James; Elliott, Tom; Andrews, James; Tower, John; Bell, Perry; Teruya, Alan; Kimbrough, Joe; Bishop, Jeanne
2014-09-01
Our paper describes a recently designed Mk x Nk x 10 um pixel CMOS gated imager intended to be first employed at the LLNL National Ignition Facility (NIF). Fabrication involves stitching M x N 1024 x 1024 x 10 um pixel blocks together into a monolithic imager (where M = 1, 2, ..., 10 and N = 1, 2, ..., 10). The imager has been designed for either NMOS or PMOS pixel fabrication using a base 0.18 um/3.3 V CMOS process. Details behind the design are discussed, with emphasis on a custom global reset feature which erases the imager of unwanted charge in ~1 us during the fusion ignition process, followed by an exposure to obtain useful data. Performance data generated by prototype imagers of similar design to the Mk x Nk sensor are presented.
2012-08-01
dominates the global market for idealized media images. The world’s largest film industry today is actually India’s Bollywood, and exports of Bollywood...Nigeria’s “Nollywood” is the world’s second most productive film industry. Like Bollywood films, Nollywood movies feature love stories
Age and gender classification in the wild with unsupervised feature learning
NASA Astrophysics Data System (ADS)
Wan, Lihong; Huo, Hong; Fang, Tao
2017-03-01
Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form the global face representation. Finally, linear discriminant analysis with a part-based strategy is presented to reduce the dimensions of the global representation and to further improve classification performance. Experiments on three challenging databases, namely, Labeled Faces in the Wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
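The filter-learning stage described above (whitening followed by spherical k-means on random patches) can be sketched as follows; the patch dimensionality, filter count, and iteration budget are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def zca_whiten(X, eps=1e-2):
    """ZCA-whiten patch rows so that features are decorrelated."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

def spherical_kmeans(X, k=4, iters=20, seed=0):
    """Learn k unit-norm filters: assign patches by dot product, update
    centroids, and renormalize to the unit sphere each iteration."""
    rng = np.random.default_rng(seed)
    D = X[rng.choice(len(X), k, replace=False)]
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                c = members.sum(axis=0)
                D[j] = c / (np.linalg.norm(c) + 1e-12)
    return D

patches = np.random.default_rng(1).normal(size=(500, 16))  # toy 4x4 patches
filters = spherical_kmeans(zca_whiten(patches), k=4)
```

The learnt rows of `filters` play the role of the selective receptive fields that are subsequently convolved with the input image.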
Qu, Yufu; Zou, Zhaofan
2017-10-16
Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of the scattering and attenuation of light caused by suspended particles, and therefore image dehazing has attracted considerable research attention. Current polarization-based dehazing algorithms rely strongly on the presence of a "sky area", so the selection of model parameters is susceptible to external interference from high-brightness objects and strong light sources. In addition, the restored images are noisy. In order to solve these problems, we propose a polarization-based dehazing algorithm that does not rely on the sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a sky area and calculate the infinite atmospheric light value. Next, using the global features of the image, and based on the assumption that the airlight and object radiance are uncorrelated, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance; the optimal solution is obtained by setting the right-hand side of the equation to zero. The hazy image is then dehazed. Finally, a filtering denoising algorithm, which combines the polarization difference information with block-matching and 3D filtering (BM3D), is designed to smooth the image. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models.
Moreover, except in scenarios with specular objects, the dehazed images are superior to those obtained by the methods of Tarel, Fattal, Ren, and Berman according to the no-reference quality assessment (NRQA), blind/referenceless image spatial quality evaluator (BRISQUE), blind anisotropic quality index (AQI), and e criteria.
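The inversion this abstract builds on is the classical polarization-difference haze model (Schechner-style). The sketch below is not the authors' full algorithm, which estimates the DPA and the infinite atmospheric light automatically; here both are assumed known, and only the final dehazing step is shown:

```python
import numpy as np

def dehaze(i_max, i_min, dpa, a_inf, t_min=0.1):
    """Polarization-difference dehazing (classical model):
    airlight A = (Imax - Imin)/p_A, transmission t = 1 - A/A_inf,
    scene radiance L = (Itotal - A)/t."""
    total = i_max + i_min
    airlight = (i_max - i_min) / dpa
    t = np.clip(1.0 - airlight / a_inf, t_min, 1.0)
    return (total - airlight) / t

# Synthetic check: forward-simulate a hazy pixel, then invert it.
L_true, a_inf, dpa, t = 0.6, 1.0, 0.5, 0.7
A = a_inf * (1.0 - t)                 # airlight for this transmission
i_total = L_true * t + A              # haze image-formation model
i_max = (i_total + dpa * A) / 2.0     # polarized component split
i_min = (i_total - dpa * A) / 2.0
L_rec = dehaze(np.array([i_max]), np.array([i_min]), dpa, a_inf)
```

The `t_min` floor is a common practical guard against division blow-up where the estimated transmission approaches zero.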
Texture analysis applied to second harmonic generation image data for ovarian cancer classification
NASA Astrophysics Data System (ADS)
Wen, Bruce L.; Brewer, Molly A.; Nadiarnykh, Oleg; Hocker, James; Singh, Vikas; Mackie, Thomas R.; Campagnola, Paul J.
2014-09-01
Remodeling of the extracellular matrix has been implicated in ovarian cancer. To quantitate the remodeling, we implement a form of texture analysis to delineate the collagen fibrillar morphology observed in second harmonic generation microscopy images of human normal and high-grade malignant ovarian tissues. In the learning stage, a dictionary of "textons" (frequently occurring texture features, identified by measuring the image response to a filter bank of various shapes, sizes, and orientations) is created. By calculating a representative model based on the texton distribution for each tissue type using a training set of respective second harmonic generation images, we then perform classification between images of normal and high-grade malignant ovarian tissues. By optimizing the number of textons and nearest neighbors, we achieved classification accuracy of up to 97% based on the area under receiver operating characteristic curves (true positives versus false positives). The local analysis algorithm is a more general method to probe rapidly changing fibrillar morphologies than global analyses such as FFT. It is also more versatile than other texture approaches, as the filter bank can be highly tailored to specific applications (e.g., different disease states) by creating customized libraries based on common image features.
Texture analysis with statistical methods for wheat ear extraction
NASA Astrophysics Data System (ADS)
Bakhouche, M.; Cointault, F.; Gouton, P.
2007-01-01
In the agronomic domain, simplifying crop counting, which is necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to design a mobile robot for natural image acquisition directly in the field, Arvalis has first proposed that we detect wheat ears in images by image processing before counting them, which will provide the first component of the yield. In this paper, we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods, applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is thus implemented before choosing a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
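The first-order-statistics stage described above can be sketched as local mean/variance features feeding unsupervised K-means pixel classification; the window size, cluster count, and synthetic "field" are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def first_order_features(img, size=5):
    """Per-pixel first-order statistics: local mean and local variance."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2
    return np.stack([mean, var], axis=-1).reshape(-1, 2)

# Synthetic "field": a brighter "ear" block on a darker background.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.02, (40, 40))
img[10:20, 10:20] += 0.6

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    first_order_features(img)).reshape(img.shape)
```

A threshold on cluster mean intensity would then pick out the ear class, mirroring the thresholding step mentioned in the abstract.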
Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images
Raza, Shan-e-Ahmed; Prince, Gillian; Clarkson, John P.; Rajpoot, Nasir M.
2015-01-01
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a plant affected by disease is known to be influenced by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission. PMID:25861025
Winds at the Phoenix Landing Site
NASA Astrophysics Data System (ADS)
Holstein-Rathlou, C.; Gunnlaugsson, H. P.; Taylor, P.; Lange, C.; Moores, J.; Lemmon, M.
2008-12-01
Local wind speeds and directions have been measured at the Phoenix landing site using the Telltale wind indicator. The Telltale is mounted on top of the meteorological mast at roughly 2 meters height above the surface. It is a mechanical anemometer consisting of a lightweight cylinder suspended by Kevlar fibers, which is deflected under the action of wind. Images taken with the Surface Stereo Imager (SSI) of the Telltale deflection allow the wind speed and direction to be quantified. Winds aloft have been estimated using image series (10 images ~50 s apart) taken of the zenith ("Zenith Movies"). In contrast-enhanced images, cloud-like features are seen to move through the image field and give an indication of direction and angular speed. Wind speeds depend on the height at which these features originate, while directions are unambiguously determined. The wind data show dominant wind directions and diurnal variations, likely caused by slope winds. Recent nighttime measurements show frost formation on the Telltale mirror. The results will be discussed in terms of global and slope wind modeling, along with the current calibration of the data. It will also be illustrated how wind data can aid in interpreting temperature fluctuations seen on the lander.
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor which couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage process, which allows the graph-based descriptor to be utilized in many scenarios and thus allows the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
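Once feature correspondences are fixed, estimating the global transformation (the step the paper compares against traditional feature-based methods) reduces to a least-squares fit. A minimal 2D affine sketch, not the paper's descriptor-based pipeline:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst.
    src, dst: (N, 2) arrays of matched feature coordinates, N >= 3."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])              # rows [x y 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # shape (3, 2)
    return params.T                                    # 2x3 affine matrix

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]

# Recover a known rotation + translation from noiseless matches.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src @ R.T + np.array([2.0, -1.0])
M = estimate_affine(src, dst)
```

Robust variants would wrap this estimate in an outlier-rejection loop (e.g., RANSAC) before trusting the parameters.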
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
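The PCA-based normality modeling at the core of this method can be sketched as reconstruction error relative to a normative subspace; the fixed component count below is an assumption standing in for the paper's sample-size-dependent "estimability" criterion:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_normative(X, n_components=2):
    """Fit a PCA model to feature vectors from healthy individuals."""
    return PCA(n_components=n_components).fit(X)

def abnormality(model, x):
    """Deviation from normality: residual norm after projecting x onto
    the normative PCA subspace (larger residual -> more abnormal)."""
    recon = model.inverse_transform(model.transform(x[None, :]))[0]
    return float(np.linalg.norm(x - recon))

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))                        # normative directions
healthy = rng.normal(size=(200, 2)) @ basis + rng.normal(0.0, 0.01, (200, 5))
model = fit_normative(healthy)

typical = rng.normal(size=2) @ basis                   # in the normative subspace
outlier = typical + 5.0 * rng.normal(size=5)           # off-subspace deviation
```

The paper applies this idea per subspace, across many iteratively sampled lower-dimensional subspaces, rather than once globally as here.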
Grouping of optic flow stimuli during binocular rivalry is driven by monocular information.
Holten, Vivian; Stuit, Sjoerd M; Verstraten, Frans A J; van der Smagt, Maarten J
2016-10-01
During binocular rivalry, perception alternates between two dissimilar images, presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images, primarily affected by its eye-of-origin. Furthermore, global motion can affect grouping durations, but only under specific conditions, namely when the two full optic flow patterns were presented locally. These results suggest that grouping during rivalry is primarily driven by monocular information even for motion stimuli thought to rely on higher-level motion areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Liu, Dan; Hu, Kai; Nordbeck, Peter; Ertl, Georg; Störk, Stefan; Weidemann, Frank
2016-05-10
Despite substantial advances in imaging techniques and pathophysiological understanding over the last decades, identification of the underlying causes of left ventricular hypertrophy by means of echocardiographic examination remains a challenge in current clinical practice. The longitudinal strain bull's eye plot derived from 2D speckle tracking imaging offers an intuitive visual overview of the global and regional left ventricular myocardial function in a single diagram. The bull's eye mapping is clinically feasible, and the plot patterns could provide clues to the etiology of cardiomyopathies. The present review summarizes the longitudinal strain bull's eye plot features in patients with various cardiomyopathies and concentric left ventricular hypertrophy; these bull's eye plot features might serve as one of the cardiac workup steps in evaluating patients with left ventricular hypertrophy.
Tan, Chun-Wei; Kumar, Ajay
2014-07-10
Accurate iris recognition from distantly acquired face or eye images requires the development of effective strategies that can account for significant variations in segmented iris image quality. Such variations can be highly correlated with the consistency of encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed to simultaneously account for both the local consistency of iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. Such Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by providing comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy can achieve significant improvement in iris matching accuracy over the competing approaches in the literature, i.e., average improvements of 54.3%, 32.7%, and 42.6% in equal error rates for UBIRIS.v2, FRGC, and CASIA.v4-distance, respectively.
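The Zernike-moment phase encoding mentioned above can be sketched on a single square patch; the order (n, m) = (2, 2), the disk mapping, and the 2-bit sign quantization are illustrative assumptions, not the paper's actual encoding:

```python
import numpy as np
from math import factorial

def zernike_moment(patch, n, m):
    """Complex Zernike moment A_nm of a square patch mapped onto the
    unit disk, normalized by the number of in-disk pixels."""
    N = patch.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0
    R = np.zeros_like(rho)                   # radial polynomial R_nm
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V_conj = R * np.exp(-1j * m * theta)     # conjugated basis function
    return (n + 1) / np.pi * np.sum(patch[mask] * V_conj[mask]) / mask.sum()

def phase_bits(patch, n=2, m=2):
    """Illustrative 2-bit code: signs of the moment's real/imaginary parts."""
    A = zernike_moment(np.asarray(patch, float), n, m)
    return (int(A.real >= 0), int(A.imag >= 0))

ys, xs = np.mgrid[0:33, 0:33]
x = (2 * xs - 32) / 32.0
bits = phase_bits(x ** 2)    # a patch with genuine 2nd-order angular content
```

A constant patch has no angular structure, so its m = 2 moment vanishes; patches with real second-order content yield stable sign bits of the kind an iris code could use.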
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Zhang, Luming; Gao, Yue; Zimmermann, Roger; Tian, Qi; Li, Xuelong
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet under-addressed task in the computer vision and image processing fields. Conventional approaches are frustrated by the following two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics, yet existing rules, e.g., visual balance, only heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of local image regions, we construct graphlets (small-sized connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets distribute closely in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo's global spatial layout into graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
Surface conversion techniques for low energy neutral atom imagers
NASA Technical Reports Server (NTRS)
Quinn, J. M.
1995-01-01
This investigation has focused on development of key technology elements for low energy neutral atom imaging. More specifically, we have investigated the conversion of low energy neutral atoms to negatively charged ions upon reflection from specially prepared surfaces. This 'surface conversion' technique appears to offer a unique capability of detecting, and thus imaging, neutral atoms at energies of 0.01 - 1 keV with high enough efficiencies to make practical its application to low energy neutral atom imaging in space. Such imaging offers the opportunity to obtain the first instantaneous global maps of macroscopic plasma features and their temporal variation. Through previous in situ plasma measurements, we have a statistical picture of large scale morphology and local measurements of dynamic processes. However, with in situ techniques it is impossible to characterize or understand many of the global plasma transport and energization processes. A series of global plasma images would greatly advance our understanding of these processes and would provide the context for interpreting previous and future in situ measurements. Fast neutral atoms, created from ions that are neutralized in collisions with exospheric neutrals, offer the means for remotely imaging plasma populations. Energy and mass analysis of these neutrals provides critical information about the source plasma distribution. The flux of neutral atoms available for imaging depends upon a convolution of the ambient plasma distribution with the charge exchange cross section for the background neutral population. Some of the highest signals are at relatively low energies (well below 1 keV). This energy range also includes some of the most important plasma populations to be imaged, for example the base of the cleft ion fountain.
Automatic classification of tissue malignancy for breast carcinoma diagnosis.
Fondón, Irene; Sarmiento, Auxiliadora; García, Ana Isabel; Silvestre, María; Eloy, Catarina; Polónia, António; Aguiar, Paulo
2018-05-01
Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. We provide four malignancy levels as the output of the system: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures, considering local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as an input to an SVM (Support Vector Machine) classifier with a quadratic kernel. The method has been rigorously evaluated: first with 5-fold cross-validation on an initial set of 120 images, second with an external set of 30 different images, and third with images containing artefacts. Accuracy was 75.8% under 5-fold cross-validation, 75% on the external set of new images, and 61.11% when the extremely difficult images were added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer. Copyright © 2018 Elsevier Ltd. All rights reserved.
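The final classification stage described above, an SVM with quadratic kernel over per-image feature vectors, corresponds to a polynomial kernel of degree 2; a minimal scikit-learn sketch on synthetic stand-in features (the data below are not from the paper):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in 8-D feature vectors for the four malignancy levels
# (normal, benign, in situ, invasive), 30 samples each.
X = np.vstack([rng.normal(loc=3.0 * i, scale=0.5, size=(30, 8)) for i in range(4)])
y = np.repeat(np.arange(4), 30)

clf = SVC(kernel="poly", degree=2, coef0=1.0)   # inhomogeneous quadratic kernel
scores = cross_val_score(clf, X, y, cv=5)       # stratified 5-fold CV
mean_acc = scores.mean()
```

The 5-fold cross-validation here mirrors the paper's evaluation protocol, though the real features come from nuclei, colour-region, and texture analysis rather than Gaussian blobs.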
Disappearance of the Propontis Regional Dark Albedo Feature on Mars
NASA Astrophysics Data System (ADS)
Lee, Steven W.; Thomas, P. C.; Cantor, B. A.
2013-10-01
The appearance of Propontis, one of many distinct classical dark albedo features on Mars, has been documented by ground-based observers for well over a century; Propontis was once thought to be the location of a “typical Martian canal”. The roughly circular feature (centered at 38°N, 179°W) covers about 500 km in north-south extent. Modern spacecraft observations have shown that the northern plains in which Propontis is located include many subdued craters, knobs, and troughs. Observations by the Mars Color Imager (MARCI) onboard the Mars Reconnaissance Orbiter (MRO) have documented dramatic changes in the Propontis feature during August 2009. Daily MARCI mosaics (spatial resolution of 1 km/pixel) revealed extensive dust storm activity in this region over a ten-day period (August 16-25, Ls ~ 322°-327°). At this time, the north polar seasonal ice cap was at maximum extent (reaching southward to about 55°N), and dust storm activity was frequently observed southward of the seasonal cap. These storms apparently led to sufficient deposition of bright dust to effectively “erase” the dark Propontis feature, yielding one of the most significant changes in regional albedo since Mars Global Surveyor began routine global mapping in 1997. Only minor changes have been detected over the course of repeated MARCI observations of this region since late 2009; Propontis has not yet “recovered” to its previous extent and appearance. MRO is expected to provide ongoing MARCI mapping, enhanced with regular Context Camera (CTX, spatial resolution of 6 m/pixel) monitoring. An overview of the accumulated observations to date will be presented, along with an interpretation of the magnitude of sediment transport required to account for the observed changes in Propontis.
Special feature on imaging systems and techniques
NASA Astrophysics Data System (ADS)
Yang, Wuqiang; Giakos, George
2013-07-01
The IEEE International Conference on Imaging Systems and Techniques (IST'2012) was held in Manchester, UK, on 16-17 July 2012. The participants came from 26 countries or regions: Austria, Brazil, Canada, China, Denmark, France, Germany, Greece, India, Iran, Iraq, Italy, Japan, Korea, Latvia, Malaysia, Norway, Poland, Portugal, Sweden, Switzerland, Taiwan, Tunisia, UAE, UK and USA. The technical program of the conference consisted of a series of scientific and technical sessions, exploring physical principles, engineering and applications of new imaging systems and techniques, as reflected by the diversity of the submitted papers. Following a rigorous review process, a total of 123 papers were accepted, and they were organized into 30 oral presentation sessions and a poster session. In addition, six invited keynotes were arranged. The conference not only provided the participants with a unique opportunity to exchange ideas and disseminate research outcomes but also paved the way to establishing global collaboration. Following IST'2012, a total of 55 papers, which were technically and substantially extended from their versions in the conference proceedings, were submitted as regular papers to this special feature of Measurement Science and Technology. Following a rigorous reviewing process, 25 papers have been finally accepted for publication in this special feature, and they are organized into three categories: (1) industrial tomography, (2) imaging systems and techniques and (3) image processing. These papers not only present the latest developments in the field of imaging systems and techniques but also offer potential solutions to existing problems. We hope that this special feature provides a good reference for researchers who are active in the field and will serve as a catalyst to trigger further research. It has been our great pleasure to be the guest editors of this special feature.
We would like to thank the authors for their contributions, without which it would not be possible to have this special feature published. We are grateful to all reviewers, who devoted their time and effort, on a voluntary basis, to ensure that all submissions were reviewed rigorously and fairly. The publishing staff of Measurement Science and Technology are particularly acknowledged for giving us timely advice on guest-editing this special feature.
Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered,
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight Sunday morning (5 April 1998, 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday and retrieved from the mission computer database Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher in resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps: The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated); this removes the vertical streaking. The contrast and brightness of the image were adjusted, and 'filters' were applied to enhance detail at several scales. The image was then geometrically warped to match the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason Greenland looks larger than Africa on a Mercator map of the Earth. A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately. See PIA01440-1442 for additional processing steps, and PIA01236 for the raw image. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA.
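The first two processing steps described above (flat-field calibration of per-column sensitivity differences, then a linear contrast stretch) can be sketched in miniature. The column gains and pixel values below are hypothetical toy data, not the actual MOC pipeline:

```python
def flat_field(image, column_gain):
    """Divide each column by its sensitivity gain to remove vertical streaking."""
    return [[px / column_gain[j] for j, px in enumerate(row)] for row in image]

def contrast_stretch(image, lo=0.0, hi=255.0):
    """Linearly rescale pixel values so they span the full [lo, hi] range."""
    flat = [px for row in image for px in row]
    mn, mx = min(flat), max(flat)
    scale = (hi - lo) / (mx - mn) if mx > mn else 0.0
    return [[lo + (px - mn) * scale for px in row] for row in image]

raw = [[10.0, 22.0], [12.0, 20.0]]   # hypothetical 2x2 detector readout
gains = [1.0, 2.0]                   # column 2 is twice as sensitive
calibrated = flat_field(raw, gains)
stretched = contrast_stretch(calibrated)
```

After calibration the vertical streak (the uniformly brighter second column) is gone, and the stretch maps the remaining dynamic range onto 0-255.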
The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
NASA Astrophysics Data System (ADS)
Orton, Glenn; Hansen, Candice; Momary, Thomas; Bolton, Scott
2017-04-01
Among the many "firsts" of the Juno mission is the open enlistment of the public in the operation of its visible camera, JunoCam. Although the scientific thrust of the Juno mission is largely focused on innovative approaches to understanding the structure and composition of Jupiter's interior, JunoCam was added to the payload largely to serve education and public outreach (E/PO). For the first time, the public has been able to engage in the discussion and choice of targets for a major NASA mission. The discussion about which features to image is enabled by a continuously updated map of Jupiter's cloud system, maintained while Jupiter is far enough from the Sun to be observable by non-professional astronomers. Contributors range from very devoted astrophotographers to telescope and video 'hobbyists'. Juno therefore engages the worldwide amateur-astronomy community as a vast network of co-investigators, whose products stimulate conversation and global public awareness of Jupiter and Juno's investigative role. Contributed images also provide a temporal context that informs the Juno atmospheric investigation team of the state and evolution of the atmosphere. The contributed images are used to create a global map on a bi-weekly basis. These bi-weekly maps provide the focus for ongoing discussion about various planetary features over a long time frame. Approximately two weeks before Juno's closest approach to Jupiter on each orbit ("perijove" or PJ), starting in mid-November of 2016 in preparation for PJ3 on December 11, the atmospheric features that had been under discussion and would be available to JunoCam on that perijove were nominated for voting, and the public at large voted on which of these "elective" features JunoCam would image. In addition, JunoCam provides the first close-up, non-oblique images of Jupiter's polar regions since the passage of Pioneer 11 over Jupiter's north pole more than 40 years ago.
The Juno mission science team also provides additional comments on features from their various points of view, but Juno's science team has no greater weight in the voting process than the public at large, short of an extraordinary event such as an impact or a sudden atmospheric outburst. Public voting was tested for the first time on three regions for PJ3 and has continued for PJ4 and PJ5, with voting on nearly all non-polar images. One of the big challenges in this process was accurately predicting which features would be in the field of view at the time of the perijove, some 10 days after the end of voting, given Jupiter's differential rotation. The results of public processing and re-posting of JunoCam images have ranged from artistic renditions to analysis equivalent to anything JunoCam team members could have produced. All aspects of this effort are available on the Mission Juno web site, linked to the JunoCam instrument (https://www.missionjuno.swri.edu/junocam/).
Evolution of regional to global paddy rice mapping methods: A review
NASA Astrophysics Data System (ADS)
Dong, Jinwei; Xiao, Xiangming
2016-09-01
Paddy rice agriculture plays an important role in various environmental issues including food security, water use, climate change, and disease transmission. However, regional and global paddy rice maps are surprisingly scarce and sporadic despite numerous efforts in paddy rice mapping algorithms and applications. With the increasing need for regional to global paddy rice maps, this paper reviewed the existing paddy rice mapping methods in the literature from the 1980s to 2015. In particular, we illustrated the evolution of these paddy rice mapping efforts, looking specifically at the future trajectory of paddy rice mapping methodologies. The biophysical features and growth phases of paddy rice were analyzed first, and feature selections for paddy rice mapping were analyzed from spectral, polarimetric, temporal, spatial, and textural aspects. We sorted paddy rice mapping algorithms into four categories: (1) reflectance data and image statistic-based approaches, (2) vegetation index (VI) data and enhanced image statistic-based approaches, (3) VI or RADAR backscatter-based temporal analysis approaches, and (4) phenology-based approaches through remote sensing recognition of key growth phases. The phenology-based approaches, which use unique features of paddy rice (e.g., transplanting) for mapping, have been increasingly adopted. Current applications of these phenology-based approaches generally use coarse-resolution MODIS data, which introduces mixed-pixel issues in Asia, where smallholders comprise the majority of paddy rice agriculture. The free release of the Landsat archive and the launch of Landsat 8 and Sentinel-2 are providing unprecedented opportunities to map paddy rice in fragmented landscapes at higher spatial resolution. Based on the literature review, we discussed a series of issues for large scale operational paddy rice mapping.
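Phenology-based classifiers of the kind described in category (4) commonly flag the transplanting flood as a time step where a water-sensitive index (LSWI) rises above a vegetation index (EVI), followed by rapid canopy closure. The sketch below assumes that rule; the 0.05 margin echoes thresholds reported in the phenology literature, and the time series and cut-offs are purely illustrative:

```python
def is_paddy_pixel(evi, lswi, margin=0.05, low=0.35, closed=0.55):
    """Flag a pixel as paddy rice if a flooding/transplanting signal
    (LSWI + margin >= EVI) occurs while the canopy is still sparse
    (EVI < low) and the canopy later closes (EVI > closed)."""
    for t in range(len(evi)):
        if lswi[t] + margin >= evi[t] and evi[t] < low:
            if any(e > closed for e in evi[t + 1:]):
                return True
    return False

# synthetic seasonal trajectories: flooded-then-growing vs. evergreen forest
paddy = is_paddy_pixel([0.20, 0.25, 0.50, 0.65], [0.30, 0.20, 0.30, 0.20])
forest = is_paddy_pixel([0.50, 0.60, 0.65], [0.20, 0.25, 0.20])
```

The forest pixel never shows the flooding inversion (LSWI exceeding EVI at low canopy cover), so only the first trajectory is classified as paddy.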
Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.
Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou
2017-05-10
Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
Galván-Tejada, Carlos E.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Celaya-Padilla, José M.; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L.
2017-01-01
Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, using mammography for early detection has been demonstrated to be a very important tool increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using a computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using: Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost function in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions. PMID:28216571
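The search strategy described above (a genetic algorithm whose fitness is a classifier's accuracy on candidate feature subsets) can be sketched with the nearest-centroid variant. The dataset, population size, and genetic operators below are illustrative stand-ins, not the BCDR setup:

```python
import random

def centroid_accuracy(X, y, mask):
    """Nearest-centroid classification accuracy using only masked features."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    hits = sum(
        lab == min(cents, key=lambda c: sum((x[i] - v) ** 2
                   for i, v in zip(feats, cents[c])))
        for x, lab in zip(X, y))
    return hits / len(y)

def ga_select(X, y, n_feat, gens=20, pop=12, seed=1):
    """Evolve binary feature masks, using classifier accuracy as fitness."""
    rng = random.Random(seed)
    masks = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        masks.sort(key=lambda m: centroid_accuracy(X, y, m), reverse=True)
        masks = masks[:pop // 2]                 # elitist survival
        while len(masks) < pop:                  # crossover + mutation
            a, b = rng.sample(masks[:4], 2)
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:
                child[rng.randrange(n_feat)] ^= 1
            masks.append(child)
    best = max(masks, key=lambda m: centroid_accuracy(X, y, m))
    return best, centroid_accuracy(X, y, best)

# toy data: feature 0 separates the classes; features 1-2 are noise
X = [[0.10, 5, 1], [0.20, 1, 9], [0.15, 7, 3],
     [0.90, 6, 2], [1.00, 2, 8], [0.85, 4, 4]]
y = [0, 0, 0, 1, 1, 1]
best, best_acc = ga_select(X, y, n_feat=3)
```

On this toy problem the GA tends to converge on a small mask containing the informative feature, mirroring the paper's finding that a two-feature model can match the all-feature model.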
Fine-Granularity Functional Interaction Signatures for Characterization of Brain Conditions
Hu, Xintao; Zhu, Dajiang; Lv, Peili; Li, Kaiming; Han, Junwei; Wang, Lihong; Shen, Dinggang; Guo, Lei; Liu, Tianming
2014-01-01
In the human brain, functional activity occurs at multiple spatial scales. Current studies on functional brain networks and their alterations in brain diseases via resting-state functional magnetic resonance imaging (rs-fMRI) are generally either at local scale (regionally confined analysis and inter-regional functional connectivity analysis) or at global scale (graph theoretic analysis). In contrast, inferring functional interaction at the fine-granularity sub-network scale has not been adequately explored yet. Here our hypothesis is that functional interaction measured at fine-granularity sub-network scale can provide new insight into the neural mechanisms of neurological and psychological conditions, thus offering complementary information for healthy and diseased population classification. In this paper, we derived fine-granularity functional interaction (FGFI) signatures in subjects with Mild Cognitive Impairment (MCI) and Schizophrenia by diffusion tensor imaging (DTI) and rs-fMRI, and used patient-control classification experiments to evaluate the distinctiveness of the derived FGFI features. Our experimental results have shown that the FGFI features alone can achieve classification performance comparable to the commonly used inter-regional connectivity features. However, the classification performance can be substantially improved when FGFI features and inter-regional connectivity features are integrated, suggesting that the FGFI signatures carry complementary information. PMID:23319242
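The inter-regional connectivity features used as the baseline above are commonly computed as pairwise Pearson correlations between regional BOLD time series, with the upper triangle of the correlation matrix serving as the feature vector. A minimal sketch under that assumption (the three short series are synthetic, and the series are assumed non-constant):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length, non-constant series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def connectivity_features(ts):
    """Upper triangle of the region-by-region correlation matrix."""
    n = len(ts)
    return [pearson(ts[i], ts[j]) for i in range(n) for j in range(i + 1, n)]

# three synthetic "regional" time series: regions 1-2 co-activate,
# region 3 is anti-correlated with both
feats = connectivity_features([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]])
```

For R regions this yields R(R-1)/2 features per subject, which is the vector a patient-control classifier would consume.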
NASA Technical Reports Server (NTRS)
Saunders, R. S.; Spear, A. J.; Allin, P. C.; Austin, R. S.; Berman, A. L.; Chandlee, R. C.; Clark, J.; Decharon, A. V.; De Jong, E. M.; Griffith, D. G.
1992-01-01
Magellan started mapping the planet Venus on September 15, 1990, and after one cycle (one Venus day or 243 earth days) had mapped 84 percent of the planet's surface. This returned an image data volume greater than all past planetary missions combined. Spacecraft problems were experienced in flight. Changes in operational procedures and reprogramming of onboard computers minimized the amount of mapping data lost. Magellan data processing is the largest planetary image-processing challenge to date. Compilation of global maps of tectonic and volcanic features, as well as impact craters and related phenomena and surface processes related to wind, weathering, and mass wasting, has begun. The Magellan project is now in an extended mission phase, with plans for additional cycles out to 1995. The Magellan project will fill in mapping gaps, obtain a global gravity data set between mid-September 1992 and May 1993, acquire images at different view angles, and look for changes on the surface from one cycle to another caused by surface activity such as volcanism, faulting, or wind activity.
Bednarkiewicz, Artur; Whelan, Maurice P
2008-01-01
Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.
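The "good initial parameters" idea can be illustrated for a mono-exponential decay I(t) = A·exp(-t/τ): a fast log-linear least-squares fit gives a lifetime estimate suitable as the starting point for the first iteration of a full nonlinear fit. The time base and counts below are synthetic, not DMD-acquired data:

```python
import math

def lifetime_gip(t, counts):
    """Log-linear least-squares estimate of a mono-exponential lifetime.
    ln(I) = ln(A) - t/tau, so the fitted slope equals -1/tau."""
    logs = [math.log(c) for c in counts]
    n = len(t)
    mt, ml = sum(t) / n, sum(logs) / n
    slope = (sum((x - mt) * (y - ml) for x, y in zip(t, logs))
             / sum((x - mt) ** 2 for x in t))
    return -1.0 / slope

t = [0.0, 1.0, 2.0, 3.0]                            # ns
counts = [1000.0 * math.exp(-x / 2.5) for x in t]   # ideal 2.5 ns decay
tau0 = lifetime_gip(t, counts)
```

On noisy photon-counting data this estimate is only approximate, which is exactly why it serves as an initial guess rather than the final fit, cutting the iterations the nonlinear solver needs.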
One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.
Biswas, Sujoy Kumar; Milanfar, Peyman
2016-03-01
One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve the global structure of feature space, we seek a linear approximation to the Laplacian eigenmap that permits a locality-preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of the Fourier transform combined with the integral image to achieve superior runtime efficiency, which allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Moreover, the low computation cost of the proposed (codebook-free) object detector facilitates straightforward query detection in large data sets, including movie videos.
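Matrix cosine similarity, the decision rule named above, is the Frobenius inner product of two feature matrices normalized by their Frobenius norms. A direct (non-accelerated) sketch on toy 2×2 matrices, without the paper's Fourier/integral-image speed-up:

```python
def matrix_cosine(A, B):
    """Frobenius inner product of A and B over the product of their norms."""
    dot = sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
    na = sum(a * a for row in A for a in row) ** 0.5
    nb = sum(b * b for row in B for b in row) ** 0.5
    return dot / (na * nb)

A = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.0, 1.0], [1.0, 0.0]]
same = matrix_cosine(A, A)    # identical feature matrices
ortho = matrix_cosine(A, B)   # no overlapping support
```

The value lies in [-1, 1]; a candidate window is accepted when its similarity to the query's feature matrix exceeds a detection threshold.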
Mars Orbiter Camera Views the 'Face on Mars' - Best View from Viking
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility 9:15 AM and the raw image immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher in resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. This Viking Orbiter image is one of the best Viking pictures of the Cydonia area where the 'Face' is located. Marked on the image are the 'footprint' of the high resolution (narrow angle) Mars Orbiter Camera image and the area seen in enlarged views (dashed box). See PIA01440-1442 for these images in raw and processed form. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Multithreaded hybrid feature tracking for markerless augmented reality.
Lee, Taehee; Höllerer, Tobias
2009-01-01
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
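The synchronized multi-threaded pipeline described above can be sketched with a producer-consumer queue: one thread "captures" frames while another tracks a feature between consecutive frames. The 1-D feature positions below are hypothetical stand-ins for real optical-flow output, and the bounded queue models the synchronization between capture and tracking:

```python
import threading
import queue

def capture(frames, q):
    """Producer: push frames into the pipeline, then a sentinel."""
    for f in frames:
        q.put(f)
    q.put(None)

def track(q, results):
    """Consumer: toy 'optical flow' = feature displacement between frames."""
    prev = None
    while True:
        frame = q.get()
        if frame is None:
            break
        results.append(0 if prev is None else frame - prev)
        prev = frame

frames = [10, 12, 15, 15, 20]      # hypothetical 1-D feature positions
q = queue.Queue(maxsize=2)         # small buffer forces the threads to interleave
results = []
t1 = threading.Thread(target=capture, args=(frames, q))
t2 = threading.Thread(target=track, args=(q, results))
t1.start()
t2.start()
t1.join()
t2.join()
```

In the real system the producer is the camera capture loop and further worker threads run invariant-feature detection and rendering, but the hand-off pattern is the same.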
Cloud Detection by Fusing Multi-Scale Convolutional Features
NASA Astrophysics Data System (ADS)
Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang
2018-04-01
Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow-covered regions and other areas covered by bright non-cloud objects. Moreover, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.
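Fusing multi-scale convolutional features generally means concatenating, per pixel, activations computed at different resolutions. A toy sketch using average pooling and nearest-neighbour upsampling on an even-sized grid; the actual MSCN layers and learned filters are not reproduced here:

```python
def downsample2(img):
    """Average-pool 2x2 blocks (assumes even height and width)."""
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

def upsample2(img):
    """Nearest-neighbour upsampling back to the finer grid."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(img):
    """Per-pixel (fine, coarse) feature pairs, concatenated across scales."""
    coarse = upsample2(downsample2(img))
    return [[(img[i][j], coarse[i][j]) for j in range(len(img[0]))]
            for i in range(len(img))]

fused = fuse([[0.0, 4.0], [8.0, 4.0]])
```

Each pixel now carries both its fine-scale value and the coarse-scale context, which is what lets a segmentation head distinguish bright snow (locally bright, contextually ground-like) from cloud.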
Sub-ice volcanoes and ancient oceans/lakes: A Martian challenge
Chapman, M.G.
2003-01-01
New instruments on board the Mars Global Surveyor (MGS) spacecraft began providing accurate, high-resolution image and topography data from the planet in 1997. Though data from the Mars Orbiter Laser Altimeter (MOLA) are consistent with hypotheses that suggest large standing bodies of water/ice in the northern lowlands in the planet's past history, Mars Orbiter Camera (MOC) images acquired to test these hypotheses have provided negative or ambiguous results. In the absence of classic coastal features to test the paleo-ocean hypothesis, other indicators need to be examined. Tuyas and hyaloclastic ridges are sub-ice volcanoes of unique appearance that form in ponded water conditions on Earth. Features with similar characteristics occur on Mars. MOLA analyses of these Martian features provide estimates of the height of putative ice/water columns at the edge of the Utopia Planitia basin and within Ophir Chasma of Valles Marineris, and support the hypotheses of a northern ocean on Mars. © 2003 Elsevier Science B.V. All rights reserved.
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded part of both query samples and training samples before performing the recognition process. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitation of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area in facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and fed into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and outperforms several similar state-of-the-art methods under partial contiguous occlusion.
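Iterative recovery of occluded pixels can be sketched as repeatedly filling masked pixels from their already-known 4-neighbours until the occlusion map is exhausted. This diffusion-style fill is an illustration of the iterative idea, not the authors' exact recovery scheme:

```python
def recover(img, mask):
    """Iteratively fill occluded pixels (mask == 1) with the mean of their
    currently known 4-neighbours, until no progress is possible."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    known = [[not m for m in row] for row in mask]
    progress = True
    while progress:
        progress = False
        for i in range(h):
            for j in range(w):
                if known[i][j]:
                    continue
                nb = [img[a][b]
                      for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= a < h and 0 <= b < w and known[a][b]]
                if nb:
                    img[i][j] = sum(nb) / len(nb)
                    known[i][j] = True
                    progress = True
    return img

face = [[1.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
occ = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # hypothetical occlusion map: centre pixel
filled = recover(face, occ)
```

Each pass shrinks the occluded region from its border inward, which is the sense in which the recovery is "iterative"; the recovered image can then be passed to the recognizer.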
A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement
NASA Astrophysics Data System (ADS)
Vornicu, I.; Carmona-Galán, R.; Rodríguez-Vázquez, Á.
2015-03-01
The design and measurements of a CMOS 64 × 64 Single-Photon Avalanche-Diode (SPAD) array with in-pixel Time-to-Digital Converter (TDC) are presented. This paper thoroughly describes the imager at the architectural and circuit level, with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low-light environments. It has been fabricated in a standard 0.18 μm CMOS process, i.e., without high-voltage or low-noise features. In these circumstances, we face a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: i) a time-gated SPAD front-end with a fast active-quenching/recharge circuit featuring tunable dead time, ii) a reverse start-stop scheme, iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage-controlled ring oscillator with fast start-up, and iv) a global calibration scheme against temperature and process variation. Measurement results on individual SPAD-TDC ensemble jitter, array uniformity, and time resolution programmability are also provided.
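The headline timing figure translates directly into depth: a round-trip time of flight t corresponds to a distance c·t/2, so the 145 ps accuracy in the title maps to roughly 2 cm of depth. A back-of-the-envelope check:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(t_flight_s):
    """Convert a round-trip time of flight into target distance (metres)."""
    return C * t_flight_s / 2.0

# depth step corresponding to the 145 ps timing accuracy: ~2.2 cm
step = tof_distance(145e-12)
```

The same conversion gives the unambiguous range for a given TDC full-scale span, which is why the time-gated front-end and reverse start-stop scheme matter for 3D reconstruction.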
Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios
2013-08-01
Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computation complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses with satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
Managing biomedical image metadata for search and retrieval of similar images.
Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris
2011-08-01
Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service and parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics such as disease prevalence. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
A statistical parts-based appearance model of inter-subject variability.
Toews, Matthew; Collins, D Louis; Arbel, Tal
2006-01-01
In this article, we present a general statistical parts-based model for representing the appearance of an image set, applied to the problem of inter-subject MR brain image matching. In contrast with global image representations such as active appearance models, the parts-based model consists of a collection of localized image parts whose appearance, geometry and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case where one-to-one correspondence does not exist between subjects due to anatomical differences, as parts are not expected to occur in all subjects. The model can be learned automatically, discovering structures that appear with statistical regularity in a large set of subject images, and can be robustly fit to new images, all in the presence of significant inter-subject variability. As parts are derived from generic scale-invariant features, the framework can be applied in a wide variety of image contexts, in order to study the commonality of anatomical parts or to group subjects according to the parts they share. Experimentation shows that a parts-based model can be learned from a large set of MR brain images, and used to determine parts that are common within the group of subjects. Preliminary results indicate that the model can be used to automatically identify distinctive features for inter-subject image registration despite large changes in appearance.
Wood texture classification by fuzzy neural networks
NASA Astrophysics Data System (ADS)
Gonzaga, Adilson; de Franca, Celso A.; Frere, Annie F.
1999-03-01
The majority of scientific papers focusing on wood classification for pencil manufacturing take into account defects and visual appearance. Traditional methodologies are based on texture analysis by co-occurrence matrix, by image modeling, or by tonal measures over the plate surface. In this work, we propose to classify plates of wood without biological defects such as insect holes, nodes, and cracks by analyzing their texture. With this methodology we divide the plate image into several rectangular windows, or local areas, and reduce the number of gray levels. From each local area, we compute the histogram of differences and extract texture features, giving them as input to a Local Neuro-Fuzzy Network (LNN). These features are computed from the histogram of differences instead of the image pixels because of their better performance and illumination independence. Among several candidate features, such as mean, contrast, second moment, entropy, and IDN, the last three showed the best results for network training. Each LNN output is taken as input to a Partial Neuro-Fuzzy Network (PNFN) that classifies a pencil region on the plate. Finally, the outputs from the PNFN are taken as input to a Global Fuzzy Logic stage that performs the plate classification. Each pencil classification within the plate takes into account each quality index.
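The histogram-of-differences features named above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the 16-gray-level quantization and the single (0, 1) pixel offset are assumptions made for the example.

```python
import numpy as np

def difference_histogram(patch, offset=(0, 1), levels=16):
    """Normalized histogram of absolute gray-level differences between
    pixel pairs. The patch is first requantized to `levels` gray levels,
    mirroring the gray-level reduction step described in the abstract."""
    q = (patch.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dy, dx = offset
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]
    b = q[dy:, dx:]
    diff = np.abs(a - b)
    hist = np.bincount(diff.ravel(), minlength=levels).astype(float)
    return hist / hist.sum()

def texture_features(p):
    """Second moment, entropy, and contrast of a difference histogram p."""
    nz = p[p > 0]
    d = np.arange(len(p))
    return {
        "second_moment": float(np.sum(p ** 2)),
        "entropy": float(-np.sum(nz * np.log2(nz))),
        "contrast": float(np.sum(d ** 2 * p)),
    }
```

A perfectly uniform patch yields zero entropy and zero contrast, which is one quick sanity check on such an extractor.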
Fast and robust segmentation of the striatum using deep convolutional neural networks.
Choi, Hongyoon; Jin, Kyong Hwan
2016-12-01
Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNNs). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global and a Local CNN: the Global CNN determined the approximate location of the striatum by performing a regression of input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted. The cropped MR volumes and the output volumes of the Global CNN were used as inputs to the Local CNN, which predicted the accurate label of all voxels. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested on another independent dataset, where it showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. Segmentation performance of our proposed method was thus comparable with that of FreeSurfer, and the running time of our approach was approximately three seconds. We suggest a fast and accurate deep CNN-based segmentation for small brain structures that can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
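The two evaluation metrics quoted above, the Dice Similarity Coefficient and the precision score, are standard and easy to reproduce; a minimal sketch (not the authors' pipeline) over binary label volumes is:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def precision(pred, truth):
    """Fraction of predicted voxels that are true structure voxels."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    return np.logical_and(pred, truth).sum() / pred.sum()
```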
Riffel, Johannes H; Keller, Marius G P; Aurich, Matthias; Sander, Yannick; Andre, Florian; Giusca, Sorin; Aus dem Siepen, Fabian; Seitz, Sebastian; Galuschky, Christian; Korosoglou, Grigorios; Mereles, Derliz; Katus, Hugo A; Buss, Sebastian J
2015-07-01
Myocardial deformation measurement is superior to left ventricular ejection fraction in identifying early changes in myocardial contractility and prediction of cardiovascular outcome. The lack of standardization hinders its clinical implementation. The aim of the study is to investigate a novel standardized deformation imaging approach based on the feature tracking algorithm for the assessment of global longitudinal (GLS) and global circumferential strain (GCS) in echocardiography and cardiac magnetic resonance imaging (CMR). 70 subjects undergoing CMR were consecutively investigated with echocardiography within a median time of 30 min. GLS and GCS were analyzed with a post-processing software incorporating the same standardized algorithm for both modalities. Global strain was defined as the relative shortening of the whole endocardial contour length and calculated according to the strain formula. Mean GLS values were -16.2 ± 5.3 and -17.3 ± 5.3 % for echocardiography and CMR, respectively. GLS did not differ significantly between the two imaging modalities, which showed strong correlation (r = 0.86), a small bias (-1.1 %) and narrow 95 % limits of agreement (LOA ± 5.4 %). Mean GCS values were -17.9 ± 6.3 and -24.4 ± 7.8 % for echocardiography and CMR, respectively. GCS was significantly underestimated by echocardiography (p < 0.001). A weaker correlation (r = 0.73), a higher bias (-6.5 %) and wider LOA (± 10.5 %) were observed for GCS. GLS showed a strong correlation (r = 0.92) when image quality was good, while correlation dropped to r = 0.82 with poor acoustic windows in echocardiography. GCS assessment revealed only a strong correlation (r = 0.87) when echocardiographic image quality was good. No significant differences for GLS between two different echocardiographic vendors could be detected. Quantitative assessment of GLS using a standardized software algorithm allows the direct comparison of values acquired irrespective of the imaging modality. 
GLS may, therefore, serve as a reliable parameter for the assessment of global left ventricular function in clinical routine besides standard evaluation of the ejection fraction.
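Global strain as defined above, the relative shortening of the whole endocardial contour length, reduces to a simple formula once the contour is available as a point list. The following sketch assumes 2-D contours sampled at end-diastole and end-systole; it illustrates the strain formula only, not the feature tracking that produces the contours.

```python
import numpy as np

def contour_length(points):
    """Total length of a polyline given as an (N, 2) array of x,y points."""
    seg = np.diff(points, axis=0)
    return float(np.sum(np.hypot(seg[:, 0], seg[:, 1])))

def global_strain(contour_ed, contour_es):
    """Global strain (%) as relative shortening of the endocardial contour:
    strain = (L_es - L_ed) / L_ed * 100, negative for shortening."""
    l_ed = contour_length(np.asarray(contour_ed, dtype=float))
    l_es = contour_length(np.asarray(contour_es, dtype=float))
    return (l_es - l_ed) / l_ed * 100.0
```

A contour that shortens from 10 to 8 length units gives a strain of -20 %, matching the sign convention in the reported GLS/GCS values.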
Phenotype detection in morphological mutant mice using deformation features.
Roy, Sharmili; Liang, Xi; Kitamoto, Asanobu; Tamura, Masaru; Shiroishi, Toshihiko; Brown, Michael S
2013-01-01
Large-scale global efforts are underway to knock out each of the approximately 25,000 mouse genes and interpret their roles in shaping the mammalian embryo. Given the tremendous amount of data generated by imaging mutated prenatal mice, high-throughput image analysis systems are indispensable for characterizing mammalian development and disease. Current state-of-the-art computational systems offer only differential volumetric analysis of pre-defined anatomical structures between various gene-knockout mouse strains. For subtle anatomical phenotypes, embryo phenotyping still relies on laborious histological techniques that are clearly unsuitable in such a big-data environment. This paper presents a system that automatically detects known phenotypes and assists in discovering novel phenotypes in muCT images of mutant mice. Deformation features obtained from non-linear registration of a mutant embryo to a normal consensus average image are extracted and analyzed to compute phenotypic and candidate phenotypic areas. The presented system is evaluated using C57BL/10 embryo images. All cases of ventricular septal defect and polydactyly, well known to be present in this strain, are successfully detected. The system predicts potential phenotypic areas in the liver that are under active histological evaluation for a possible phenotype of this mouse line.
Grabens on Io: Evidence for Extensional Tectonics
NASA Astrophysics Data System (ADS)
Hoogenboom, T.; Schenk, P.
2012-12-01
Io may well be the most geologically active body in the solar system. A variety of volcanic features have been identified, including a few fissure eruptions, but tectonism is generally assumed to be limited to compression-driven mountain formation (Schenk et al., 2001). A wide range of structural features can also be identified, including scarps, lineaments, faults, and circular depressions (pits and patera rims). Narrow curvilinear grabens (elongated, relatively depressed crustal blocks bounded by faults on their sides) are also scattered across Io's volcanic plains. These features are dwarfed by the more prominent neighboring volcanoes and mountains, and have been largely ignored in the literature. Although they are likely to be extensional in origin, their relationship to local or global stress fields is unknown. We have mapped the locations, lengths and widths of grabens on Io using all available Voyager and Galileo images with a resolution better than 5 km. We compare the locations of grabens with existing volcanic centers, paterae and mountain data to determine the degree of correlation between these geologic features and major topographic variations (basins/swells) in our global topographic map of Io (White et al., 2011). Grabens are best observed in > 1-2 km low-sun angle images. Approximately 300 images were converted from ISIS to ArcMap format to allow easy comparison with the geological map of Io (Williams et al., 2012) along with previous higher resolution structural mapping of local areas (e.g. Crown et al., 1992). We have located >45 grabens to date. Typically 1-3 kilometers across, some of these features can stretch for over 500 kilometers in length. Their formation may be related to global tidal stresses or local deformation. Io's orbit is eccentric and its solid surface experiences daily tides of up to ˜0.1 km, leading to repetitive surface strains of 10^-4 or greater.
These tides flex and stress the lithosphere and can cause it to fracture (as also occurs extensively on neighboring Europa). The record can be confused if the features formed at different times or if the stress pattern shifts due to nonsynchronous rotation of the lithosphere (Milazzo et al., 2001). Alternatively, curvilinear or concentric extensional fractures (graben) could be related to local loading of planetary lithospheres. On Io, this could be the result of construction of volcanic edifices or global convection patterns forming localized sites of upwelling and downwelling (e.g., Tackley et al., 2001). However, constructional volcanic edifices are quite rare on Io (Schenk et al., 2004a) and convective stresses on Io are likely to be quite small (Kirchoff and McKinnon, 2009). An obvious caveat to stress analyses is the possibility of resurfacing locally erasing tectonic signatures of graben, in part or entirely. Despite resurfacing, erosional and tectonic scarps, lineaments and grabens are relatively abundant at all latitudes and longitudes on Io, given the limited global mapping. Grabens are typically not found on the younger units, suggesting that tectonic forces on Io were of greater magnitude in the past, that much of the surface is very young and has not yet undergone deformation, or that only with age do the surface materials become strong enough to deform by brittle failure rather than ductile flow (Whitford-Stark et al., 1990).
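As a rough plausibility check on the strain figure quoted above, the fractional deformation implied by a ~0.1 km radial tide on a body of Io's size can be computed directly. This is only an order-of-magnitude proxy (the actual strain pattern depends on the full tidal potential), and Io's mean radius is an external figure supplied for the example, not taken from the abstract.

```python
# Order-of-magnitude estimate of Io's tidal surface strain.
# Assumed values: tidal amplitude ~0.1 km (from the text);
# Io's mean radius ~1821.6 km (external figure, not from the abstract).
tide_km = 0.1
radius_km = 1821.6
strain = tide_km / radius_km  # crude fractional deformation, dR / R
print(f"surface strain ~ {strain:.1e}")
```

The result is a few times 10^-5, within about a factor of two of the quoted 10^-4, which is as close as such a back-of-the-envelope proxy can be expected to get.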
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which was different from the distractors (white circles) in color (e.g., a red circle target), local features (e.g., a white square target), a global feature (e.g., a white ring with a hole as the target), or their combinations (e.g., a red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., a white ring with a squared hole).
These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than local features. They also provide the behavioral constraints for identifying the underlying neural substrates.
NASA Astrophysics Data System (ADS)
Liu, Xiaoqi; Wang, Chengliang; Bai, Jianying; Liao, Guobin
2018-02-01
Portal hypertensive gastropathy (PHG) is common in gastrointestinal (GI) diseases, and the severe stage of PHG (S-PHG) is a source of active gastrointestinal bleeding. Generally, the diagnosis of PHG is made visually during endoscopic examination; compared with traditional endoscopy, wireless capsule endoscopy (WCE), which is noninvasive and painless, has become a prevalent tool for visual observation of PHG. However, accurate assessment of WCE images with PHG is a difficult task for physicians due to faint contrast and confusing variations in the background gastric mucosal tissue. Therefore, this paper proposes a comprehensive methodology to automatically detect S-PHG images in WCE video to help physicians accurately diagnose S-PHG. First, a rough dominant-color-tone extraction approach is proposed to better describe the global color distribution of the gastric mucosa. Second, a hybrid two-layer texture acquisition model is designed by integrating a co-occurrence matrix into local binary patterns to depict the complex and unique local variation of the gastric mucosal microstructure. Finally, the mucosal color and microstructure texture features are merged into a linear support vector machine to accomplish this automatic classification task. Experiments were conducted on an annotated data set of 1,050 S-PHG and 1,370 normal images collected from 36 real patients of different nationalities, ages, and genders. In comparison with three traditional texture extraction methods, our method performs best in detecting S-PHG images in WCE video: the maxima of accuracy, sensitivity, and specificity reach 0.90, 0.92, and 0.92, respectively.
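A basic local binary pattern extractor of the kind underlying the two-layer texture model can be sketched as follows. This is a plain 8-neighbour LBP over interior pixels only; the co-occurrence layer and the hybrid integration described in the abstract are beyond this illustration.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels."""
    img = np.asarray(img)
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # Neighbours in clockwise order; each contributes one bit of the code.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img):
    """Normalized 256-bin LBP histogram used as a texture descriptor."""
    h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

On a constant image every neighbour ties with the centre, so every code is 255, which is a convenient correctness check.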
Detection of Unilateral Hearing Loss by Stationary Wavelet Entropy.
Zhang, Yudong; Nayak, Deepak Ranjan; Yang, Ming; Yuan, Ti-Fei; Liu, Bin; Lu, Huimin; Wang, Shuihua
2017-01-01
Sensorineural hearing loss is correlated with massive neurological or psychiatric disease. T1-weighted volumetric images were acquired from fourteen subjects with right-sided hearing loss (RHL), fifteen subjects with left-sided hearing loss (LHL), and twenty healthy controls (HC). We treated this as a three-class classification problem: HC, LHL, and RHL. Stationary wavelet entropy was employed to extract global features from the magnetic resonance images of each subject. Those stationary wavelet entropy features were used as input to a single-hidden-layer feedforward neural network classifier. The results of 10 repetitions of 10-fold cross validation show that the accuracies for HC, LHL, and RHL are 96.94%, 97.14%, and 97.35%, respectively. Our developed system is promising and effective in detecting hearing loss. Copyright © Bentham Science Publishers.
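Stationary (undecimated) wavelet entropy can be illustrated with a one-level Haar transform on a 1-D signal. The circular-shift Haar filters and single decomposition level here are simplifications assumed for the example; the paper would apply multiple levels to 2-D MR slices.

```python
import numpy as np

def haar_swt_level(x):
    """One level of an undecimated (stationary) Haar wavelet transform,
    using circular shifts so the output keeps the input length."""
    rolled = np.roll(x, -1)
    approx = (x + rolled) / np.sqrt(2.0)
    detail = (x - rolled) / np.sqrt(2.0)
    return approx, detail

def wavelet_entropy(x, eps=1e-12):
    """Shannon entropy of the normalized energy of the detail coefficients."""
    _, d = haar_swt_level(np.asarray(x, dtype=float))
    energy = d ** 2
    p = energy / (energy.sum() + eps)
    p = p[p > eps]
    return float(-np.sum(p * np.log2(p)))
```

A constant signal has zero detail energy and hence zero entropy; a maximally oscillating signal spreads detail energy uniformly and maximizes it.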
Iliyasu, Abdullah M; Fatichah, Chastine
2017-12-19
A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smear (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles), a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the all-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria into classification accuracy based on the choice of best features and accuracy across the different categories of cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared their classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between QPSO and fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifest in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.
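The fuzzy k-NN classification stage can be sketched as below: a textbook inverse-distance-weighted fuzzy k-NN with crisp training labels. The QPSO feature-selection stage is omitted, and the fuzzifier m=2 and k=3 are assumptions for the example.

```python
import numpy as np

def fuzzy_knn(train_x, train_y, x, k=3, m=2.0, n_classes=2):
    """Fuzzy k-NN: class memberships weighted by inverse distance raised
    to 2/(m-1), as in the classic fuzzy k-NN formulation. Returns the
    predicted label and the membership vector."""
    train_x = np.asarray(train_x, dtype=float)
    d = np.linalg.norm(train_x - np.asarray(x, dtype=float), axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + 1e-12)  # guard exact hits
    u = np.zeros(n_classes)
    for label, weight in zip(np.asarray(train_y)[idx], w):
        u[label] += weight
    u /= u.sum()
    return int(np.argmax(u)), u
```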
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref forms a realistic truth set and can therefore be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, calculating and delineating differences between DVFs, two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA was evaluated using the head and neck case. © The Author(s) 2014.
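The two error-analysis tools, per-voxel error magnitudes and a per-structure cumulative error histogram, can be sketched as follows. This is an illustrative reconstruction, not the authors' software; DVFs are assumed stored as arrays with a trailing component axis, and the structure is given as a boolean mask.

```python
import numpy as np

def dvf_error_magnitude(dvf_ref, dvf_test):
    """Per-voxel magnitude of the difference between two deformation
    vector fields, each shaped (..., 3) with displacement components."""
    return np.linalg.norm(np.asarray(dvf_test) - np.asarray(dvf_ref), axis=-1)

def error_histogram(err, mask, bins):
    """Cumulative probability of deformation error within one anatomical
    structure, selected by a boolean mask over the error volume."""
    vals = err[mask]
    hist, edges = np.histogram(vals, bins=bins)
    cum = np.cumsum(hist) / vals.size
    return cum, edges
```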
Visions of Our Planet's Atmosphere, Land and Oceans: Electronic-Theater 2000
NASA Technical Reports Server (NTRS)
Hasler, A. F.
2000-01-01
The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the Delaware Bay and Philadelphia area. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer tropical cyclones and tornadic thunderstorms. See the latest spectacular images from NASA, NOAA, and EUMETSAT remote sensing missions such as GOES, Meteosat, NOAA, TRMM, SeaWiFS, Landsat 7, and the new Terra, visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on the covers of magazines like Newsweek, TIME, National Geographic, and Popular Science and on national and international network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images, including Landsat tours of the US and Africa with drill-downs of major global cities using 1 m resolution commercialized spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales, and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane graphics supercomputer with dual CPUs, 5 gigabytes of RAM, and a terabyte disk, using two projectors across a super-sized panoramic screen.
Global Observation Information Networking: Using the Distributed Image Spreadsheet (DISS)
NASA Technical Reports Server (NTRS)
Hasler, Fritz
1999-01-01
The DISS and many other tools will be used to present visualizations spanning the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI Onyx graphics supercomputers are NASA's visualizations of hurricanes Mitch, Georges, Fran, and Linda. These storms have recently been featured on the covers of National Geographic, Time, Newsweek, and Popular Science and used repeatedly this season on national and international network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS.
2017-01-20
This new, detailed global mosaic color map of Pluto is based on a series of three color filter images obtained by the Ralph/Multispectral Visual Imaging Camera aboard New Horizons during the NASA spacecraft's close flyby of Pluto in July 2015. The mosaic shows how Pluto's large-scale color patterns extend beyond the hemisphere facing New Horizons at closest approach, which was imaged at the highest resolution. North is up; Pluto's equator roughly bisects the band of dark red terrains running across the lower third of the map. Pluto's giant, informally named Sputnik Planitia glacier, the left half of Pluto's signature "heart" feature, is at the center of this map. http://photojournal.jpl.nasa.gov/catalog/PIA11707
NASA Astrophysics Data System (ADS)
Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan
2017-03-01
Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist human operators in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image; therefore, acquiring accurate 3D data from 2D images is an open issue. FAST, SURF, and SIFT are well-known spatial-domain techniques for feature extraction and hence image registration to find correspondences between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in hierarchical fashion using a quad-tree structure. The registration process works from the global to the local level, yielding robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection; the same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 x 370 stereo pair is approximately 500 ms on a low-cost DSP.
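Normalized phase correlation, the core of the registration described above, can be illustrated for a pure integer translation. The hierarchical quad-tree and non-rigid extensions are not shown; this is a generic sketch, not the authors' modified variant.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) translation that maps image `a`
    onto image `b` via normalized (phase-only) cross correlation."""
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.real(np.fft.ifft2(cross))   # delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices past the midpoint to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

For a circularly shifted copy of an image, the phase-only correlation surface is an exact delta at the shift, so the recovered translation is exact.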
Globally scalable generation of high-resolution land cover from multispectral imagery
NASA Astrophysics Data System (ADS)
Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.
2017-05-01
We present an automated method of generating high-resolution (~2 meter) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model
NASA Astrophysics Data System (ADS)
Coetzer, J.; Herbst, B. M.; du Preez, J. A.
2004-12-01
We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
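The equal error rate reported above is the operating point where the false rejection rate (genuine signatures rejected) equals the false acceptance rate (forgeries accepted). A minimal threshold-sweep sketch (not the authors' DRT/HMM scoring) is:

```python
import numpy as np

def equal_error_rate(genuine_scores, forgery_scores):
    """Sweep a decision threshold over the observed scores and return the
    rate where FRR (genuine below threshold) and FAR (forgeries at or
    above threshold) are closest. Higher score = more genuine."""
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    forgery_scores = np.asarray(forgery_scores, dtype=float)
    scores = np.concatenate([genuine_scores, forgery_scores])
    best = (1.0, None)
    for t in np.sort(scores):
        frr = np.mean(genuine_scores < t)
        far = np.mean(forgery_scores >= t)
        gap = abs(frr - far)
        if gap < best[0]:
            best = (gap, (frr + far) / 2.0)
    return best[1]
```

Perfectly separated score distributions give an EER of 0; fully overlapping ones push it toward 0.5.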
Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation
NASA Astrophysics Data System (ADS)
Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill
2012-06-01
Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models prior knowledge. This prior knowledge, together with the inter-frame difference, serves as a global constraint driven by the underlying observations of each WCE video, which are fitted by a Gaussian distribution to constrain the transition probabilities of a hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
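The constrained hidden Markov model stage can be illustrated with a plain Viterbi decoder over a left-to-right transition matrix, reflecting the fact that the capsule traverses the four GI segments in order. The Poisson segment-length prior and Gaussian inter-frame term described above would enter through the transition and emission log-probabilities; the fixed probabilities below are stand-ins for the example.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence. log_emit is (T, S) per-frame emission
    log-probabilities, log_trans (S, S), log_init (S,)."""
    T, S = log_emit.shape
    dp = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + log_trans          # [prev, next]
        back[t] = np.argmax(cand, axis=0)
        dp = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(dp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a left-to-right transition matrix (no backward transitions) the decoded path is forced into ordered segments, which is the behavior the global constraint is meant to enforce.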
Image-Guided Rendering with an Evolutionary Algorithm Based on Cloud Model
2018-01-01
The process of creating nonphotorealistic rendering images and animations can be enjoyable if a useful method is involved. We use an evolutionary algorithm to generate painterly styles of images. Given an input image as the reference target, a cloud model-based evolutionary algorithm rerenders the target image with nonphotorealistic effects. The resulting animations have an interesting characteristic in which the target slowly emerges from a set of strokes. A number of experiments are performed, as well as visual comparisons, quantitative comparisons, and user studies. The average normalized feature-similarity scores for standard pixel-wise peak signal-to-noise ratio, mean structural similarity, feature similarity, and the gradient-similarity-based metric are 0.486, 0.628, 0.579, and 0.640, respectively. The average normalized aesthetic-measure scores for Benford's law, fractal dimension, global contrast factor, and Shannon's entropy are 0.630, 0.397, 0.418, and 0.708, respectively. Compared with those of a similar method, the average scores of the proposed method, except peak signal-to-noise ratio, are higher by approximately 10%. The results suggest that the proposed method can generate appealing images and animations with different styles by choosing different strokes, and it may inspire graphic designers who are interested in computer-based evolutionary art. PMID:29805440
Comprehension of concrete and abstract words in semantic dementia
Jefferies, Elizabeth; Patterson, Karalyn; Jones, Roy W.; Lambon Ralph, Matthew A.
2009-01-01
The vast majority of brain-injured patients with semantic impairment have better comprehension of concrete than abstract words. In contrast, several patients with semantic dementia (SD), who show circumscribed atrophy of the anterior temporal lobes bilaterally, have been reported to show reverse imageability effects, i.e., relative preservation of abstract knowledge. Although these reports largely concern individual patients, some researchers have recently proposed that superior comprehension of abstract concepts is a characteristic feature of SD. This would imply that the anterior temporal lobes are particularly crucial for processing sensory aspects of semantic knowledge, which are associated with concrete not abstract concepts. However, functional neuroimaging studies of healthy participants do not unequivocally predict reverse imageability effects in SD because the temporal poles sometimes show greater activation for more abstract concepts. We examined a case-series of eleven SD patients on a synonym judgement test that orthogonally varied the frequency and imageability of the items. All patients had higher success rates for more imageable as well as more frequent words, suggesting that (a) the anterior temporal lobes underpin semantic knowledge for both concrete and abstract concepts, (b) more imageable items – perhaps due to their richer multimodal representations – are typically more robust in the face of global semantic degradation and (c) reverse imageability effects are not a characteristic feature of SD. PMID:19586212
3D cloud detection and tracking system for solar forecast using multiple sky imagers
Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...
2015-06-23
We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
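The "improvement over the persistence model" reported above can be made concrete with a small sketch; the abstract does not specify the error metric, so the RMSE-based skill score below is an assumption, not the paper's definition:

```python
import numpy as np

def persistence_forecast(series, horizon):
    # Persistence baseline: irradiance `horizon` steps ahead is assumed
    # to equal the current value.
    return series[:-horizon]

def skill_vs_persistence(obs, forecast, horizon):
    # Percentage RMSE improvement of a forecast over persistence
    # (an assumed metric; 26% in the paper's terms is analogous).
    target = obs[horizon:]
    rmse_f = np.sqrt(np.mean((forecast - target) ** 2))
    rmse_p = np.sqrt(np.mean((persistence_forecast(obs, horizon) - target) ** 2))
    return 100.0 * (1.0 - rmse_f / rmse_p)
```

A perfect forecast scores 100%; matching persistence scores 0%.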
Guo, Hao; Zhang, Fan; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
Exploring functional interactions among various brain regions is helpful for understanding the pathological underpinnings of neurological disorders. Brain networks provide an important representation of those functional interactions, and thus are widely applied in the diagnosis and classification of neurodegenerative diseases. Many mental disorders involve a sharp decline in cognitive ability as a major symptom, which can be caused by abnormal connectivity patterns among several brain regions. However, conventional functional connectivity networks are usually constructed based on pairwise correlations among different brain regions. This approach ignores higher-order relationships, and cannot effectively characterize the high-order interactions of many brain regions working together. Recent neuroscience research suggests that higher-order relationships between brain regions are important for brain network analysis. Hyper-networks have been proposed that can effectively represent the interactions among brain regions. However, this method extracts the local properties of brain regions as features, but ignores the global topology information, which affects the evaluation of network topology and reduces the performance of the classifier. This problem can be compensated by a subgraph feature-based method, but it is not sensitive to change in a single brain region. Considering that both of these feature extraction methods result in the loss of information, we propose a novel machine learning classification method that combines multiple features of a hyper-network based on functional magnetic resonance imaging in Alzheimer's disease. The method combines the brain region features and subgraph features, and then uses a multi-kernel SVM for classification. This retains not only the global topological information, but also the sensitivity to change in a single brain region. 
To validate the proposed method, 28 normal control subjects and 38 Alzheimer's disease patients were selected to participate in an experiment. The proposed method achieved satisfactory classification accuracy, with an average of 91.60%. The abnormal brain regions included the bilateral precuneus, right parahippocampal gyrus/hippocampus, right posterior cingulate gyrus, and other regions that are known to be important in Alzheimer's disease. Machine learning classification combining multiple features of a hyper-network of functional magnetic resonance imaging data in Alzheimer's disease obtains better classification performance. PMID:29209156
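The multi-kernel combination at the heart of the method above can be sketched as a weighted sum of base kernel matrices, one per feature type. This is a minimal illustration, not the authors' code: the kernel weights are assumptions, and a dual-form kernel perceptron stands in for the multi-kernel SVM solver so the sketch stays self-contained:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between row-vectors of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    # Multi-kernel combination: weighted sum of base kernels, e.g. one
    # built from brain-region features and one from subgraph features.
    return sum(w * K for w, K in zip(weights, kernels))

def kernel_perceptron(K, y, epochs=30):
    # Dual-form perceptron: a lightweight stand-in for an SVM that works
    # directly on the combined kernel matrix.
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            s = np.dot(alpha * y, K[:, i])
            if (1 if s >= 0 else -1) != y[i]:
                alpha[i] += 1.0
    return alpha
```

In practice one would replace the perceptron with `SVC(kernel='precomputed')` or a proper multi-kernel learning solver; the kernel-summation step is the part the paper relies on.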
Ross, Nicholas E; Pritchard, Charles J; Rubin, David M; Dusé, Adriano G
2006-05-01
Malaria is a serious global health problem, and rapid, accurate diagnosis is required to control the disease. An image processing algorithm to automate the diagnosis of malaria on thin blood smears is developed. The image classification system is designed to positively identify malaria parasites present in thin blood smears, and differentiate the species of malaria. Images are acquired using a charge-coupled device camera connected to a light microscope. Morphological and novel threshold selection techniques are used to identify erythrocytes (red blood cells) and possible parasites present on microscopic slides. Image features based on colour, texture and the geometry of the cells and parasites are generated, as well as features that make use of a priori knowledge of the classification problem and mimic features used by human technicians. A two-stage tree classifier using backpropagation feedforward neural networks distinguishes between true and false positives, and then diagnoses the species (Plasmodium falciparum, P. vivax, P. ovale or P. malariae) of the infection. Malaria samples obtained from the Department of Clinical Microbiology and Infectious Diseases at the University of the Witwatersrand Medical School are used for training and testing of the system. Infected erythrocytes are positively identified with a sensitivity of 85% and a positive predictive value (PPV) of 81%, which makes the method highly sensitive at diagnosing a complete sample provided many views are analysed. Species were correctly determined for 11 out of 15 samples.
NASA Astrophysics Data System (ADS)
Kim, H. O.; Yeom, J. M.
2014-12-01
Space-based remote sensing in agriculture is particularly relevant to issues such as global climate change, food security, and precision agriculture. Recent satellite missions have opened up new perspectives by offering high spatial resolution, various spectral properties, and fast revisit rates to the same regions. Here, we examine the utility of broadband red-edge spectral information in multispectral satellite image data for classifying paddy rice crops in South Korea. Additionally, we examine how object-based spectral features affect the classification of paddy rice growth stages. For the analysis, two seasons of RapidEye satellite image data were used. The results showed that the broadband red-edge information slightly improved the classification accuracy of the crop condition in heterogeneous paddy rice crop environments, particularly when single-season image data were used. This positive effect appeared to be offset by the multi-temporal image data. Additional texture information brought only a minor improvement or a slight decline, although it is well known to be advantageous for object-based classification in general. We conclude that broadband red-edge information derived from conventional multispectral satellite data has the potential to improve space-based crop monitoring. Because the positive or negative effects of texture features for object-based crop classification could barely be interpreted, the relationships between the textural properties and paddy rice crop parameters at the field scale should be further examined in depth.
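One common way to fold broadband red-edge information into crop classification features is a normalized difference red-edge index (NDRE). The abstract does not name a specific index, so this is an illustrative assumption rather than the authors' feature:

```python
import numpy as np

def ndre(red_edge, nir, eps=1e-9):
    # Normalized difference red-edge index computed per pixel from the
    # red-edge and near-infrared bands (e.g. RapidEye bands 4 and 5).
    red_edge = red_edge.astype(float)
    nir = nir.astype(float)
    return (nir - red_edge) / (nir + red_edge + eps)
```

Healthy vegetation typically yields higher NDRE because near-infrared reflectance rises faster than red-edge reflectance.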
Beyond Correlation: Do Color Features Influence Attention in Rainforest?
Frey, Hans-Peter; Wirz, Kerstin; Willenbockel, Verena; Betz, Torsten; Schreiber, Cornell; Troscianko, Tomasz; König, Peter
2011-01-01
Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red–green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red–green color-contrast. The effects of blue–yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red–green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red–green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion. PMID:21519395
Dey, Susmita; Sarkar, Ripon; Chatterjee, Kabita; Datta, Pallab; Barui, Ananya; Maity, Santi P
2017-04-01
Habitual smokers are known to be at higher risk of developing oral cancer, which is increasing at an alarming rate globally. Oral cancer is conventionally associated with high mortality rates, although recent reports show improved survival outcomes with early diagnosis of the disease. An effective prediction system that can identify the probability of cancer development amongst habitual smokers is thus expected to benefit a sizable population. The present work describes a non-invasive, integrated method for early detection of cellular abnormalities based on analysis of different cyto-morphological features of exfoliative oral epithelial cells. Differential interference contrast (DIC) microscopy provides a potential optical tool, as this mode yields a pseudo three-dimensional (3-D) image with detailed morphological and textural features obtained from noninvasive, label-free epithelial cells. For segmentation of DIC images, a gradient vector flow snake active contour model has been adopted. To evaluate cellular abnormalities amongst habitual smokers, the selected morphological and textural features of epithelial cells are compared with those of a non-smoker group (-ve control) and clinically diagnosed pre-cancer patients (+ve control) using a support vector machine (SVM) classifier. The accuracy of the developed SVM-based classification has been found to be 86%, with 80% sensitivity and 89% specificity, in classifying the features from volunteers with a smoking habit. Copyright © 2017 Elsevier Ltd. All rights reserved.
Godey, S.; Snieder, R.; Villasenor, A.; Benz, H.M.
2003-01-01
We present phase velocity maps of fundamental mode Rayleigh waves across the North American and Caribbean plates. Our data set consists of 1846 waveforms from 172 events recorded at 91 broad-band stations operating in North America. We compute phase velocity maps in four narrow period bands between 50 and 150 s using a non-linear waveform inversion method that solves for phase velocity perturbations relative to a reference Earth model (PREM). Our results show a strong velocity contrast between high velocities beneath the stable North American craton and lower velocities in the tectonically active western margin, in agreement with other regional and global surface wave tomography studies. We perform detailed comparisons with global model results, which display good agreement between phase velocity maps in the location and amplitude of the anomalies. However, forward modelling shows that regional maps are more accurate for predicting waveforms. In addition, at long periods, the amplitude of the velocity anomalies imaged in our regional phase velocity maps is three times larger than in global phase velocity models. This amplitude factor is necessary to explain the data accurately, showing that regional models provide a better image of velocity structures. Synthetic tests show that the raypath coverage used in this study enables one to resolve velocity features of the order of 800-1000 km. However, only larger length-scale features are observed in the phase velocity maps. The limitation in resolution of our maps can be attributed to the wave propagation theory used in the inversion. Ray theory does not account for off-great-circle ray propagation effects, such as ray bending or scattering. For wavelengths less than 1000 km, scattering effects are significant and may need to be considered.
Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M
2011-01-01
Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.
Global Interior Robot Localisation by a Colour Content Image Retrieval System
NASA Astrophysics Data System (ADS)
Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben
2007-12-01
We propose a new global localisation approach to determine a coarse position of a mobile robot in a structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour palette combining spatial and vicinity-related information with the colourimetric aspect of the original image. We conceive several retrieval approaches built on a specific similarity measure integrating the space organisation of colours in the palette. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image, whereas the similarity measure provides partial invariance to translation, small changes in viewpoint, and scale factor. In addition to this study, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves our system's performance. Results are then compared with those obtained using colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images; a finalised system must obviously integrate other types of signature, such as shape and texture.
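The colour-histogram baseline against which the system is compared can be sketched as follows; the bin count and the histogram-intersection similarity are illustrative choices, not parameters from the paper:

```python
import numpy as np

def colour_histogram(img, bins=8):
    # Joint RGB histogram of an (H, W, 3) image with values in [0, 256),
    # normalised to sum to 1 so images of different sizes are comparable.
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def histogram_intersection(h1, h2):
    # Similarity in [0, 1]; 1 means identical colour distributions.
    return float(np.minimum(h1, h2).sum())
```

Unlike the baker's-transformation palette, a joint histogram discards all spatial organisation of colours, which is exactly the information the paper's similarity measure tries to retain.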
Dawn- Dusk Auroral Oval Oscillations Associated with High- Speed Solar Wind
NASA Technical Reports Server (NTRS)
Liou, Kan; Sibeck, David G.
2018-01-01
We report evidence of global-scale auroral oval oscillations in the millihertz range, using global auroral images acquired from the Ultraviolet Imager on board the decommissioned Polar satellite and concurrent solar wind measurements. On the basis of two events (15 January 1999 and 6 January 2000) studied, it is found that (1) quasi-periodic auroral oval oscillations (approximately 3 millihertz) can occur when solar wind speeds are high at northward or southward interplanetary magnetic field turning, (2) the oscillation amplitudes range from a few to more than 10 degrees in latitude, (3) the oscillation frequency is the same for each event irrespective of local time and without any azimuthal phase shift (i.e., propagation), (4) the auroral oscillations occur in phase within both the dawn and dusk sectors but 180 degrees out of phase between the dawn and dusk sectors, and (5) no micropulsations on the ground match the auroral oscillation periods. While solar wind conditions favor the growth of the Kelvin-Helmholtz (K-H) instability on the magnetopause as often suggested, the observed wave characteristics are not consistent with predictions for K-H waves. The in-phase and out-of-phase features found in the dawn-dusk auroral oval oscillations suggest that wiggling motions of the magnetotail associated with fast solar winds might be the direct cause of the global-scale millihertz auroral oval oscillations. Plain Language Summary: We utilize global auroral image data to infer the motion of the magnetosphere and show, for the first time, the entire magnetospheric tail can move east-west in harmony like a windsock flapping in wind. The characteristic period of the flapping motion may be a major source of global long-period ULF (Ultra Low Frequency) waves, adding an extra source of the global mode ULF waves.
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
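For orientation, the log-domain decomposition that Retinex algorithms build on can be sketched as a classic single-scale Retinex; note this is a generic sketch with an assumed Gaussian-surround illumination estimate, not the paper's sparse-source-separation method:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma=3.0):
    # Separable Gaussian blur used as a crude illumination estimate.
    k = gaussian_kernel1d(sigma, int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def single_scale_retinex(img, sigma=3.0, eps=1e-6):
    # Reflectance ~ log(image) - log(estimated illumination): the
    # decomposition underlying Retinex-based enhancement.
    img = img.astype(float) + eps
    return np.log(img) - np.log(blur(img, sigma) + eps)
```

The paper's contribution is replacing the blur-based illumination estimate with a sparse source separation that respects the locality of the original Retinex theory.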
Forensic detection of noise addition in digital images
NASA Astrophysics Data System (ADS)
Cao, Gang; Zhao, Yao; Ni, Rongrong; Ou, Bo; Wang, Yongbin
2014-03-01
We propose a technique to detect the global addition of noise to a digital image. As an anti-forensics tool, noise addition is typically used to disguise the visual traces of image tampering or to remove the statistical artifacts left behind by other operations. As such, the blind detection of noise addition has become imperative as well as beneficial for authenticating image content and recovering the image processing history, which is the goal of general forensics techniques. Specifically, special image blocks, including constant and strip blocks, are used to construct the features for identifying noise addition. The influence of noising on the blockwise pixel value distribution is formulated and analyzed formally. A methodology of detectability recognition followed by binary decision is proposed to ensure the applicability and reliability of noising detection. Extensive experimental results demonstrate the efficacy of our proposed noising detector.
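The intuition behind the constant-block feature can be sketched as follows; this is a simplified illustration of the idea (noise destroys perfectly constant blocks), not the paper's full feature set, and the block size is an assumption:

```python
import numpy as np

def constant_block_fraction(img, bs=8):
    # Fraction of bs x bs blocks whose pixels are all identical. Natural
    # images contain some such blocks (sky, saturated regions); global
    # noise addition wipes them out, so a near-zero fraction can hint at
    # noising.
    h, w = img.shape
    img = img[:h - h % bs, :w - w % bs]
    blocks = img.reshape(img.shape[0] // bs, bs, img.shape[1] // bs, bs)
    flat = blocks.transpose(0, 2, 1, 3).reshape(-1, bs * bs)
    return float((flat.max(axis=1) == flat.min(axis=1)).mean())
```

A detector would compare this statistic (and its strip-block analogue) against thresholds learned from unmodified images.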
NASA Technical Reports Server (NTRS)
2005-01-01
3 September 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows polygons enhanced by subliming seasonal frost in the martian south polar region. Polygons similar to these occur in frozen ground at high latitudes on Earth, suggesting that perhaps their presence on Mars is also a sign that there is or once was ice in the shallow subsurface. The circular features are degraded meteor impact craters. Location near: 72.2°S, 310.3°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Spring
NASA Technical Reports Server (NTRS)
2006-01-01
21 September 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows cracked, layered plains-forming material in the western part of Utopia Planitia, Mars. Investigators have speculated that ice might be -- or might once have been -- present in the ground, and changes in temperature and the amount of ice over time may have led to the formation of these cracks. But no one is certain just how these features formed. Location near: 45.0°N, 276.1°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Spring
NASA Astrophysics Data System (ADS)
Rzhanov, Y.; Mayer, L.; Fornari, D.; Shank, T.; Humphris, S.; Scheirer, D.; Kinsey, J.; Whitcomb, L.
2003-12-01
The Rosebud hydrothermal vent field was discovered in May 2002 in the Galapagos Rift near 86W during a series of Alvin dives and ABE autonomous vehicle surveys. Vertical-incidence digital imaging using a 3.1 Mpixel digital camera and strobe illumination from altitudes of 3-5 m was carried out during the Alvin dives. A complete survey of the Rosebud vent site was carried out on Alvin Dive 3790. Submersible position was determined by post-cruise integration of 1.2 MHz bottom-lock Doppler sonar velocity data logged at 5 Hz, combined with heading and attitude data from a north-seeking fiber-optic gyroscope logged at 10 Hz, and initialized with a surveyed-in long-baseline transponder navigation system providing geodetic position fixes at 15 s intervals. The photo-mosaicing process consisted of three main stages: pre-processing, pair-wise image co-registration, and global alignment. Excellent image quality allowed us to avoid lens distortion correction, so images only underwent histogram equalization. Pair-wise co-registration of sequential frames was done partially automatically (where overlap exceeded 70 percent, we employed a frequency-domain technique) and partially manually (where overlap did not exceed 15 percent and manual feature extraction was the only way to find the transformations relating the frames). Partial mosaics allowed us to determine which non-sequential frames had substantial overlap, and the corresponding transformations were found via feature extraction. Global alignment of the images consisted of constructing a sparse, non-linear, over-constrained system of equations reflecting the positions of the frames in real-world coordinates. This system was solved using least squares, and the solution provided globally optimal positions of the frames in the overall mosaic. Over 700 images were mosaiced, resulting in a resolution of ~3 mm per pixel.
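The global-alignment step described above can be sketched in miniature: pairwise offsets between overlapping frames form an over-constrained linear system that least squares resolves into globally consistent frame positions. This sketch uses 1-D positions for brevity (the real system solves for 2-D transformations) and a single anchored frame to remove the translational ambiguity:

```python
import numpy as np

def global_alignment(n_frames, pairwise, anchor=0):
    # pairwise: list of (i, j, offset) constraints meaning
    # pos[j] - pos[i] ~ offset, from pair-wise co-registration.
    rows, rhs = [], []
    for i, j, off in pairwise:
        r = np.zeros(n_frames)
        r[j], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(off)
    # Fix the anchor frame at the origin to make the system well-posed.
    r = np.zeros(n_frames)
    r[anchor] = 1.0
    rows.append(r)
    rhs.append(0.0)
    pos, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return pos
```

With inconsistent measurements (e.g. three frames whose offsets do not sum exactly), least squares distributes the error across frames instead of accumulating it along the chain.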
The mosaiced area covers approximately 50 m x 60 m and clearly shows several biological zonations and distribution of lava flow morphologies, including what is interpreted as the contact between older lobate lava and the young sheet flow that hosts Rosebud vent communities. Recruitment of tubeworms, mussels, and clams is actively occurring at more than five locations oriented on a NE-SW trend where vent emissions occur through small cracks in the sheet flow. Large-scale views of seafloor hydrothermal vent sites, such as the one produced for Rosebud, are critical to properly understanding spatial relationships between hydrothermal biological communities, sites of focused and diffuse fluid flow, and the complex array of volcanic and tectonic features at mid-ocean ridge crests. These high-resolution perspectives are also critical to time-series studies where quantitative documentation of changes can be related to variations in hydrothermal, magmatic and tectonic processes.
Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen
2014-04-01
In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
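As background for the scale space above, the classic Perona-Malik edge-stopping diffusion can be sketched; this is the generic nonlinear diffusion that such methods extend, with the paper's saliency-driven weighting omitted, and the parameters below are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    # Nonlinear diffusion: the conductance g(d) shrinks where gradients
    # are large, so edges are preserved while flat regions are smoothed.
    u = img.astype(float).copy()

    def g(d):
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u   # neighbour differences (wrap at borders)
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Running the diffusion to convergence gives the coarse scale; the paper fuses it with the original image and a midscale for classification.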
Cluster Method Analysis of K. S. C. Image
NASA Technical Reports Server (NTRS)
Rodriguez, Joe, Jr.; Desai, M.
1997-01-01
Information obtained from satellite-based systems has moved to the forefront as a method for identifying many land cover types. Identification of different land features through remote sensing is an effective tool for regional and global assessment of geometric characteristics. Classification data acquired from remote sensing images have a wide variety of applications. In particular, analysis of remote sensing images has special applications in the classification of various types of vegetation. Results obtained from classification studies of a particular area or region serve towards a greater understanding of what parameters (ecological, temporal, etc.) affect the region being analyzed. In this paper, we make a distinction between the two types of classification approaches, although focus is given to the unsupervised classification method using 1987 Thematic Mapper (TM) images of Kennedy Space Center.
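Unsupervised classification of a multispectral image typically means clustering pixels in spectral space with no training labels; plain k-means is the standard starting point. A minimal sketch (cluster count, deterministic initialisation, and features are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    # X: (n_pixels, n_bands) spectral vectors, e.g. TM band values.
    # Deterministic initialisation: evenly spaced rows of X.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(n_iter):
        # Assign each pixel to the nearest cluster centre.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Each resulting cluster is then interpreted by an analyst as a land-cover or vegetation class.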
Online Visualization and Analysis of Global Half-Hourly Infrared Satellite Data
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, Dana; Leptoukh, Gregory
2011-01-01
Infrared (IR) images (approximately 11-micron channel) recorded by satellite sensors have been widely used in weather forecasting, research, and classroom education since the Nimbus program. Unlike visible images, IR imagery can reveal cloud features without sunlight illumination; therefore, they can be used to monitor weather phenomena day and night. With geostationary satellites deployed around the globe, it is possible to monitor weather events 24/7 at a temporal resolution that polar-orbiting satellites cannot achieve at the present time. When IR data from multiple geostationary satellites are merged to form a single product--also known as a merged product--it allows for observing weather on a global scale. Its high temporal resolution (e.g., every half hour) also makes it an ideal ancillary dataset for supporting other satellite missions, such as the Tropical Rainfall Measuring Mission (TRMM), etc., by providing additional background information about weather system evolution.
On analyzing colour constancy approach for improving SURF detector performance
NASA Astrophysics Data System (ADS)
Zulkiey, Mohd Asyraf; Zaki, Wan Mimi Diyana Wan; Hussain, Aini; Mustafa, Mohd. Marzuki
2012-04-01
A robust key point detector plays a crucial role in obtaining good tracking features. The main challenge in outdoor tracking is illumination change due to various causes, such as weather fluctuation and occlusion. This paper approaches the illumination change problem by transforming the input image with a colour constancy algorithm before applying the SURF detector. The masked grey world approach is chosen because of its ability to perform well under local as well as global illumination change. Every image is transformed to imitate the canonical illuminant, and a Gaussian distribution is used to model the global change. The simulation results show that the average number of detected key points has increased by 69.92%. Moreover, the cases of improved performance far outweigh the cases of degradation, with the former improving by 215.23%. The approach is suitable for tracking implementations where sudden illumination change occurs frequently and robust key point detection is needed.
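The grey world assumption underlying the chosen colour constancy method can be sketched as follows; note this is the plain grey world correction, whereas the paper's masked variant additionally excludes certain pixels (e.g. saturated ones) from the channel statistics:

```python
import numpy as np

def grey_world(img):
    # Grey world assumption: the average scene colour is achromatic.
    # Scale each channel so its mean matches the global mean intensity,
    # approximating the image under a canonical (neutral) illuminant.
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means
    return img * (means.mean() / means)              # broadcast over (H, W, 3)
```

After correction, all three channel means are equal, which removes a global colour cast before SURF key point detection.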
Columbus State University Global Observation and Outreach for the 2012 Transit of Venus
NASA Astrophysics Data System (ADS)
Perry, Matthew; McCarty, C.; Bartow, M.; Hood, J. C.; Lodder, K.; Johnson, M.; Cruzen, S. T.; Williams, R. N.
2013-01-01
Faculty, staff and students from Columbus State University’s (CSU’s) Coca-Cola Space Science Center presented a webcast of the 2012 Transit of Venus from three continents to a global audience of 1.4 million unique viewers. Team members imaged the transit with telescopes using white-light, hydrogen-alpha, and calcium filters, from Alice Springs, Australia; the Gobi Desert, Mongolia; Bryce Canyon, UT; and Columbus, GA. Images were webcast live during the transit in partnership with NASA’s Sun-Earth Day program, and Science Center staff members were featured on NASA TV. Local members of the public were brought in for a series of outreach initiatives, in both Georgia and Australia, before and during the transit. The data recorded from the various locations have been archived for use in demonstrating principles such as the historical measurement of the astronomical unit.
Adaptive elastic segmentation of brain MRI via shape-model-guided evolutionary programming.
Pitiot, Alain; Toga, Arthur W; Thompson, Paul M
2002-08-01
This paper presents a fully automated segmentation method for medical images. The goal is to localize and parameterize a variety of types of structure in these images for subsequent quantitative analysis. We propose a new hybrid strategy that combines a general elastic template matching approach and an evolutionary heuristic. The evolutionary algorithm uses prior statistical information about the shape of the target structure to control the behavior of a number of deformable templates. Each template, modeled in the form of a B-spline, is warped in a potential field which is itself dynamically adapted. Such a hybrid scheme proves to be promising: by maintaining a population of templates, we cover a large domain of the solution space under the global guidance of the evolutionary heuristic, and thoroughly explore interesting areas. We address key issues of automated image segmentation systems. The potential fields are initially designed based on the spatial features of the edges in the input image, and are subjected to spatially adaptive diffusion to guarantee the deformation of the template. This also improves its global consistency and convergence speed. The deformation algorithm can modify the internal structure of the templates to allow a better match. We investigate in detail the preprocessing phase that the images undergo before they can be used more effectively in the iterative elastic matching procedure: a texture classifier, trained via linear discriminant analysis of a learning set, is used to enhance the contrast of the target structure with respect to surrounding tissues. We show how these techniques interact within a statistically driven evolutionary scheme to achieve a better tradeoff between template flexibility and sensitivity to noise and outliers. We focus on understanding the features of template matching that are most beneficial in terms of the achieved match. 
Examples from simulated and real image data are discussed, with considerations of algorithmic efficiency.
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1.
Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
Deep Learning in Label-free Cell Classification
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram
2016-01-01
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells. PMID:26975219
Deep Learning in Label-free Cell Classification
NASA Astrophysics Data System (ADS)
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram
2016-03-01
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
Palm Vein Verification Using Multiple Features and Locality Preserving Projections
Bu, Wei; Wu, Xiangqian; Zhao, Qiushi
2014-01-01
Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein pattern forms a dense, ganglion-like structure with a huge number of vessels. The layout of the palm vein vessels stays in the same location for the whole life, and its pattern is definitely unique. In our work, a matched filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods are proposed: a global feature based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%. PMID:24693230
Palm vein verification using multiple features and locality preserving projections.
Al-Juboori, Ali Mohsin; Bu, Wei; Wu, Xiangqian; Zhao, Qiushi
2014-01-01
Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein pattern forms a dense, ganglion-like structure with a huge number of vessels. The layout of the palm vein vessels stays in the same location for the whole life, and its pattern is definitely unique. In our work, a matched filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods are proposed: a global feature based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%.
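The local descriptor above builds on local binary patterns. The following sketches only the basic 8-neighbour LBP coding and its histogram; the paper's variance weighting (LBPV) and the LPP projection are omitted, and the neighbourhood ordering is an assumption:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern codes and their normalised
    256-bin histogram, a basic local texture signature."""
    c = img[1:-1, 1:-1]  # interior pixels (the LBP centres)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        # neighbour plane aligned with the centre plane
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return code, hist / hist.sum()
```

A dimensionality reduction such as LPP would then be trained on these histograms before nearest neighbour matching.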
3D shape recovery from image focus using gray level co-occurrence matrix
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid
2018-04-01
Recovering a precise and accurate 3-D shape of a target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the Gray Level Co-occurrence Matrix (GLCM), along with its statistical features, for computing the focus information of the image dataset. The GLCM quantifies the texture present in the image using statistical features computed from the joint probability distribution of gray level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise, and accuracy, it is a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is deeply investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
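The GLCM-based focus measure can be sketched as follows. This uses a single horizontal offset and the standard contrast feature as the focus value; the paper's Gaussian-mixture fitting step is omitted, and the offset and level count are illustrative assumptions:

```python
import numpy as np

def glcm(q, levels):
    """Co-occurrence counts of horizontally adjacent grey levels in a
    quantised patch q, normalised into a joint probability table."""
    i, j = q[:, :-1].ravel(), q[:, 1:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (i, j), 1.0)  # unbuffered accumulation of pair counts
    return P / P.sum()

def glcm_contrast(q, levels=8):
    """GLCM contrast feature used as a focus value: a sharp patch puts
    probability mass off the diagonal, a defocused one does not."""
    P = glcm(q, levels)
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

In a shape-from-focus pipeline this value would be computed per pixel neighbourhood across the image stack, and the depth taken where the focus curve peaks.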
A new medical image segmentation model based on fractional order differentiation and level set
NASA Astrophysics Data System (ADS)
Chen, Bo; Huang, Shan; Xie, Feifei; Li, Lihong; Chen, Wensheng; Liang, Zhengrong
2018-03-01
Segmenting medical images remains a challenging task for both traditional local and global methods because of image intensity inhomogeneity. In this paper, two contributions are made: (i) a new hybrid model is proposed for medical image segmentation, built on fractional order differentiation, level set description, and curve evolution; and (ii) three popular definitions of fractional order differentiation, the Fourier-domain, Grünwald-Letnikov (G-L), and Riemann-Liouville (R-L) forms, are investigated and compared through experimental results. Because fractional order differentiation enhances high-frequency image features while preserving low-frequency features in a nonlinear manner, one of these definitions is used in our hybrid model to segment inhomogeneous images. The proposed hybrid model also integrates fractional order differentiation, fractional order gradient magnitude, and difference image information. The widely used Dice similarity coefficient metric is employed to evaluate the segmentation results quantitatively. Firstly, experimental results demonstrated that only a slight difference exists among the three expressions of Fourier-domain, G-L, and R-L fractional order differentiation. This outcome supports our selection of one of the three definitions in our hybrid model. Secondly, further experiments were performed for comparison between our hybrid segmentation model and other existing segmentation models. A noticeable gain was seen by our hybrid model in segmenting intensity-inhomogeneous images.
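Of the three definitions, the G-L form is the most direct to discretise. The sketch below computes the standard G-L coefficients and applies a truncated fractional difference along image rows; the truncation length n is an assumption, and the level-set machinery the model wraps around this operator is not shown:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """First n Grünwald-Letnikov coefficients (-1)^k * C(alpha, k),
    via the stable recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def gl_derivative_rows(img, alpha, n=5):
    """Order-alpha G-L fractional difference applied along image rows."""
    c = gl_coeffs(alpha, n)
    out = np.zeros_like(img, dtype=np.float64)
    for k in range(n):
        # shift the image right by k columns and accumulate c_k * img
        out[:, k:] += c[k] * img[:, :img.shape[1] - k]
    return out
```

For alpha between 0 and 1 this behaves as a tunable edge enhancer: it sharpens high-frequency structure while attenuating low frequencies less aggressively than an integer-order derivative.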
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
3D/2D image registration using weighted histogram of gradient directions
NASA Astrophysics Data System (ADS)
Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang
2015-03-01
Three dimensional (3D) to two dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to +/- 90° rotation offset from the globally optimal solution, which minimizes the need for human interaction to initialize the algorithm.
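The core feature can be sketched as a magnitude-weighted histogram of gradient directions plus a similarity between two such histograms. The bin count and the normalised-correlation comparison below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def gradient_direction_histogram(img, bins=36):
    """Histogram of gradient directions weighted by gradient magnitude,
    so strong edges dominate the orientation signature."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # directions in (-pi, pi]
    h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return h / (h.sum() + 1e-12)

def histogram_similarity(h1, h2):
    """Normalised correlation between two direction histograms; a
    registration search would maximise this over pose parameters."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In the registration setting, one histogram comes from the DRR at a candidate pose and the other from the fluoroscopic image; rotation offsets shift the histogram cyclically, which is what makes the measure tolerant to a poor initial guess.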
NASA Astrophysics Data System (ADS)
Jones, S. I.; Uritsky, V. M.; Davila, J. M.
2017-12-01
In the absence of reliable coronal magnetic field measurements, solar physicists have worked for several decades to develop techniques for extrapolating photospheric magnetic field measurements into the solar corona and/or heliosphere. The products of these efforts tend to be very sensitive to variation in the photospheric measurements, such that the uncertainty in the photospheric measurements introduces significant uncertainty into the coronal and heliospheric models needed to predict such things as solar wind speed, IMF polarity at Earth, and CME propagation. Ultimately, the reason for the sensitivity of the model to the boundary conditions is that the model is trying to extract a great deal of information from a relatively small amount of data. We have published in recent years about a new method we are developing to use morphological information gleaned from coronagraph images to constrain models of the global coronal magnetic field. In our approach, we treat the photospheric measurements as approximations and use an optimization algorithm to iteratively find a global coronal model that best matches both the photospheric measurements and quasi-linear features observed in polarization brightness coronagraph images. Here we summarize the approach we have developed and present recent progress in optimizing PFSS models based on GONG magnetograms and MLSO K-Cor images.
Pedersen, Mangor; Curwood, Evan K; Archer, John S; Abbott, David F; Jackson, Graeme D
2015-11-01
Lennox-Gastaut syndrome, and the similar but less tightly defined Lennox-Gastaut phenotype, describe patients with severe epilepsy, generalized epileptic discharges, and variable intellectual disability. Our previous functional neuroimaging studies suggest that abnormal diffuse association network activity underlies the epileptic discharges of this clinical phenotype. Herein we use a data-driven multivariate approach to determine the spatial changes in local and global networks of patients with severe epilepsy of the Lennox-Gastaut phenotype. We studied 9 adult patients and 14 controls. In 20 min of task-free blood oxygen level-dependent functional magnetic resonance imaging data, two metrics of functional connectivity were studied: Regional homogeneity or local connectivity, a measure of concordance between each voxel to a focal cluster of adjacent voxels; and eigenvector centrality, a global connectivity estimate designed to detect important neural hubs. Multivariate pattern analysis of these data in a machine-learning framework was used to identify spatial features that classified disease subjects. Multivariate pattern analysis was 95.7% accurate in classifying subjects for both local and global connectivity measures (22/23 subjects correctly classified). Maximal discriminating features were the following: increased local connectivity in frontoinsular and intraparietal areas; increased global connectivity in posterior association areas; decreased local connectivity in sensory (visual and auditory) and medial frontal cortices; and decreased global connectivity in the cingulate cortex, striatum, hippocampus, and pons. Using a data-driven analysis method in task-free functional magnetic resonance imaging, we show increased connectivity in critical areas of association cortex and decreased connectivity in primary cortex. 
This supports previous findings of a critical role for these association cortical regions as a final common pathway in generating the Lennox-Gastaut phenotype. Abnormal function of these areas is likely to be important in explaining the intellectual problems characteristic of this disorder. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
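The global connectivity estimate used in this study, eigenvector centrality, can be sketched with power iteration on a toy adjacency matrix; voxelwise fMRI connectivity matrices are handled the same way, just at much larger scale:

```python
import numpy as np

def eigenvector_centrality(adj, iters=200):
    """Hub scores from the principal eigenvector of a non-negative
    connectivity matrix, computed by plain power iteration."""
    v = np.ones(adj.shape[0]) / np.sqrt(adj.shape[0])
    for _ in range(iters):
        v = adj @ v
        v /= np.linalg.norm(v)
    return v
```

A node scores highly when it is strongly connected to other high-scoring nodes, which is why the measure is designed to detect important neural hubs rather than merely high-degree voxels.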
Minutia Tensor Matrix: A New Strategy for Fingerprint Matching
Fu, Xiang; Feng, Jufu
2015-01-01
Establishing correspondences between two minutia sets is a fundamental issue in fingerprint recognition. This paper proposes a new tensor matching strategy. First, the concept of the minutia tensor matrix (simplified as MTM) is proposed. It describes the first-order and second-order features of a matching pair. In the MTM, the diagonal elements indicate similarities of minutia pairs and the non-diagonal elements indicate pairwise compatibilities between minutia pairs. Correct minutia pairs are likely to establish both large similarities and large compatibilities, so they form a dense sub-block. Minutia matching is then formulated as recovering the dense sub-block in the MTM. This is a new tensor matching strategy for fingerprint recognition. Second, as fingerprint images show both local rigidity and global nonlinearity, we design two different kinds of MTMs: a local MTM and a global MTM. Meanwhile, a two-level matching algorithm is proposed. At the local matching level, the local MTM is constructed and a novel local similarity calculation strategy is proposed, making full use of local rigidity in fingerprints. At the global matching level, the global MTM is constructed to calculate similarities of entire minutia sets, making full use of global compatibility in fingerprints. The proposed method has stronger descriptive ability and better robustness to noise and nonlinearity. Experiments conducted on the Fingerprint Verification Competition databases (FVC2002 and FVC2004) demonstrate its effectiveness and efficiency. PMID:25822489
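The abstract does not specify how the dense sub-block is recovered; spectral matching in the style of Leordeanu and Hebert is one plausible sketch, where the principal eigenvector of the (non-negative) MTM concentrates its mass on mutually compatible minutia pairs:

```python
import numpy as np

def recover_dense_subblock(M, k):
    """Recover a dense sub-block of a similarity/compatibility matrix M:
    power-iterate to the principal eigenvector, whose largest entries
    mark the mutually compatible pairs; keep the k strongest."""
    v = np.ones(M.shape[0])
    for _ in range(100):
        v = M @ v
        v /= np.linalg.norm(v)
    return np.argsort(-np.abs(v))[:k]
```

On a matrix with a planted block of large similarities and compatibilities, the eigenvector's top entries coincide with the block, which is the behaviour the matching formulation relies on.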
A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.
Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong
2018-01-01
The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks in computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the risk level of malignancy among the nodules detected in lung cancer screening. However, the existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Inspired by the currently prevailing learning ability of convolutional neural networks (CNN), which simulate human neural networks for target recognition, and by our previous research on texture features, we present a hybrid model that takes both global and local features into consideration for pulmonary nodule differentiation, using the largest public database, established by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best capacity for discriminating the malignancy risk of the nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme using 3D texture feature analysis, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
Automated railroad reconstruction from remote sensing image based on texture filter
NASA Astrophysics Data System (ADS)
Xiao, Jie; Lu, Kaixia
2018-03-01
Techniques of remote sensing have improved incredibly in recent years, and very accurate results and high resolution images can be acquired. Such data offer a possible way to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. First, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using a Gabor filter. Second, two response images with filtering orientations perpendicular to each other are fused to suppress noise and acquire long, smooth stripe regions of railroads. Third, a set of smooth regions is extracted by first computing a global threshold for the previous result image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and was found to deliver very accurate results in a quick and highly automated manner.
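The three steps can be sketched as below. The Gabor kernel parameters, the FFT-based filtering, and the pointwise-maximum fusion rule are all illustrative assumptions (the abstract does not specify them); Otsu's threshold is implemented directly:

```python
import numpy as np

def gabor_kernel(theta, lam=4.0, sigma=2.0, size=9):
    """Real Gabor kernel at orientation theta (assumed parameterisation)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean so flat regions give no response

def gabor_response(img, theta):
    """Magnitude of FFT-based filtering with one Gabor kernel."""
    K = np.fft.fft2(gabor_kernel(theta), s=img.shape)
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * K))

def otsu_threshold(values, bins=64):
    """Otsu's global threshold: maximise the between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def railroad_mask(img):
    """Steps 2-3: fuse two perpendicular responses, then binarise."""
    fused = np.maximum(gabor_response(img, 0.0),
                       gabor_response(img, np.pi / 2))
    return fused > otsu_threshold(fused)
```

The binary mask would then feed a region-extraction step that keeps only long stripe-shaped components.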
NASA Technical Reports Server (NTRS)
Perez, J. D.; Goldstein, J.; McComas, D. J.; Valek, P.; Fok, Mei-Ching; Hwang, Kyoung-Joo
2016-01-01
A unique view of the trapped particles in the inner magnetosphere provided by energetic neutral atom (ENA) imaging is used to observe the dynamics of the spatial structure and the pitch angle anisotropy on a global scale during the last 6 h of the main phase of a large geomagnetic storm (minimum SYM-H = -230 nT) that began on 17 March 2015. Ion flux and pressure anisotropy obtained from Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) ENA images are shown. The ion flux shows two peaks, an inner one at radii of approximately 3-4 RE in the dusk-to-midnight sector and an outer peak at radii of 8-9 RE prior to midnight. The inner peak is relatively stationary during the entire period, with some intensification during the final steep decline in SYM-H to its minimum. The outer peak shows significant temporal variation, brightening and dimming, and finally disappearing at the end of the main phase. The pressure anisotropy shows the expected perpendicular pitch angles inside of L = 6 but shows parallel pitch angles at greater L values. This is interpreted as consistent with pitch angle-dependent drift as modeled with the Tsy05 magnetic field and Comprehensive Inner Magnetosphere-Ionosphere simulations. The TWINS results are compared directly with Radiation Belt Storm Probes Ion Composition Experiment (RBSPICE)-A measurements. Using 15 min snapshots of flux and pressure anisotropy from TWINS along the path of RBSPICE-A during the 6 h focused upon in this study, the essential features displayed in the TWINS global images are supported.
BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju
2009-01-01
We demonstrate a novel Google Earth-based visualization system for continuous monitoring of biomass at regional and global scales. The system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high temporal resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provide insights into the current conditions of the landscape.
The Along Track Scanning Radiometer (ATSR) for ERS1
NASA Astrophysics Data System (ADS)
Delderfield, J.; Llewellyn-Jones, D. T.; Bernard, R.; de Javel, Y.; Williamson, E. J.
1986-01-01
The ATSR is an infrared imaging radiometer which has been selected to fly aboard the ESA Remote Sensing Satellite No. 1 (ERS1) with the specific objective of accurately determining global Sea Surface Temperature (SST). Novel features, including the technique of 'along track' scanning, a closed Stirling cycle cooler, and the precision on-board blackbodies are described. Instrument subsystems are identified and their design trade-offs discussed.
The Atlases of Vesta derived from Dawn Framing Camera images
NASA Astrophysics Data System (ADS)
Roatsch, T.; Kersten, E.; Matz, K.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.
2013-12-01
The Dawn Framing Camera acquired, during its two HAMO (High Altitude Mapping Orbit) phases in 2011 and 2012, about 6,000 clear filter images with a resolution of about 60 m/pixel. We combined these images into a global ortho-rectified mosaic of Vesta (60 m/pixel resolution). Only very small areas near the northern pole were still in darkness and are missing in the mosaic. The Dawn Framing Camera also acquired about 10,000 high-resolution clear filter images (about 20 m/pixel) of Vesta during its Low Altitude Mapping Orbit (LAMO). Unfortunately, the northern part of Vesta was still in darkness during this phase; good illumination (incidence angle < 70°) was available for only 66.8% of the surface [1]. We used the LAMO images to calculate another global mosaic of Vesta, this time with 20 m/pixel resolution. Both global mosaics were used to produce atlases of Vesta: a HAMO atlas with 15 tiles at a scale of 1:500,000 and a LAMO atlas with 30 tiles at a scale between 1:200,000 and 1:225,180. The nomenclature used in these atlases is based on names and places historically associated with the Roman goddess Vesta, and is compliant with the rules of the IAU. 65 names for geological features were already approved by the IAU; 39 additional names are currently under review. Selected examples of both atlases will be shown in this presentation. Reference: [1] Roatsch, Th., et al., High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images. Planetary and Space Science (2013), http://dx.doi.org/10.1016/j.pss.2013.06.024i
South Polar Cap Erosion and Aprons
NASA Technical Reports Server (NTRS)
2000-01-01
This scene is illuminated by sunlight from the upper left.
While Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images have shown that the north and south polar cap surfaces are very different from each other, one thing that the two have in common is that they both seem to have been eroded. Erosion in the north appears mostly to come in the form of pits from which ice probably sublimed to vapor and was transported away from the polar cap by wind. Erosion in the south takes on a wider range of possible processes that include collapse, slumping and mass-movement on slopes, and probably sublimation. Among the landforms created by these processes on the south polar cap are the 'aprons' that surround mesas and buttes of remnant layers such as the two almost triangular features in the lower quarter of this image. The upper slopes of the two triangular features show a stair-stepped pattern that suggests these hills are layered. This image shows part of the south polar residual cap near 86.9oS, 78.5oW, and covers an area approximately 1.2 by 1.0 kilometers (0.7 x 0.6 miles) in size. The image has a resolution of 2.2 meters per pixel. The picture was taken on September 11, 1999. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
2017-12-01
This image taken by NASA's Dawn spacecraft shows Duginavi Crater, a large (96 miles, 155 kilometers in diameter) crater on Ceres. Duginavi's degraded rim barely stands out in this picture, which indicates this feature is very old. There are several factors that alter and eventually erase the shapes of geological features on bodies that do not have an atmosphere. These include gravity, which is responsible for landslides and scarps. The formation of newer craters, and the material that gets ejected in the process, has smoothed over craters such as Duginavi. Duginavi hosts the small Oxo Crater, recognizable by its bright rim and ejecta. Oxo is the first site at which ice was discovered on Ceres. Duginavi is named for an agriculture god of the Kogi people of northern Colombia. Oxo bears the name of the god of agriculture in Afro-Brazilian beliefs of Yoruba derivation. These features can be found on the global map of Ceres. Dawn took this image on October 8, 2015, from its high-altitude mapping orbit, at a distance of about 915 miles (1,470 kilometers) above the surface. It has a resolution of 450 feet (140 meters) per pixel. The center coordinates of this image are 39 degrees north latitude, 8 degrees east longitude. https://photojournal.jpl.nasa.gov/catalog/PIA21912
Looking into the future: An inward bias in aesthetic experience driven only by gaze cues.
Chen, Yi-Chia; Colombatto, Clara; Scholl, Brian J
2018-07-01
The inward bias is an especially powerful principle of aesthetic experience: In framed images (e.g. photographs), we prefer peripheral figures that face inward (vs. outward). Why does this bias exist? Since agents tend to act in the direction in which they are facing, one intriguing possibility is that the inward bias reflects a preference to view scenes from a perspective that will allow us to witness those predicted future actions. This account has been difficult to test with previous displays, in which facing direction is often confounded with either global shape profiles or the relative locations of salient features (since e.g. someone's face is generally more visually interesting than the back of their head). But here we demonstrate a robust inward bias in aesthetic judgment driven by a cue that is socially powerful but visually subtle: averted gaze. Subjects adjusted the positions of people in images to maximize the images' aesthetic appeal. People with direct gaze were not placed preferentially in particular regions, but people with averted gaze were reliably placed so that they appeared to be looking inward. This demonstrates that the inward bias can arise from visually subtle features, when those features signal how future events may unfold. Copyright © 2018. Published by Elsevier B.V.
Precision Topography of Pluvial Features in Nevada as Analogs for Possible Pluvial Landforms on Mars
NASA Astrophysics Data System (ADS)
Zimbelman, J. R.; Garry, W. B.; Irwin, R. P.
2009-12-01
Topographic measurements with better than 2 cm horizontal and 4 cm vertical precision were obtained for pluvial features in Nevada using a Trimble R8 Differential Global Positioning System (DGPS), making use of both real-time kinematic and post-processed kinematic techniques. We collected ten transects across shorelines in the southern end of Surprise Valley, near the California border in NW Nevada, on April 15-17, 2008, plus five transects of shorelines and eight transects of a wavecut scarp in Long Valley, near the Utah border in NE Nevada, on May 5-7, 2009. Each transect consists of topographic points keyed to field notes and photographs. In Surprise Valley, the highstand shoreline was noted at 1533.4 m elevation in 8 of the 10 transects, and several prominent intermediate shorelines could be correlated between two or more transects. In Long Valley, the well preserved highstand shoreline elevation of 1908.7 m correlated (within 0.6 m) to the base of the wavecut scarp along a horizontal distance of 1.2 km. These results demonstrate that adherence to a geopotential elevation level is one of the strongest indicators that a possible shoreline feature is the result of pluvial processes, and that elevation levels of features can be clearly detected and documented with precise topographic measurements. The High Resolution Imaging Science Experiment (HiRISE) is returning images of Mars that show potential shoreline features in remarkable detail (e.g., image PSP_009998_2165, 32 cm/pixel, showing a possible shoreline in NW Arabia). Our results from studying shorelines in Nevada will provide a basis for evaluating the plausibility of possible shoreline features on Mars, the implications of which are significant for the overall history of Mars.
SIFT Meets CNN: A Decade Survey of Instance Retrieval.
Zheng, Liang; Yang, Yi; Tian, Qi
2018-05-01
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
Automatic streak endpoint localization from the cornerness metric
NASA Astrophysics Data System (ADS)
Sease, Brad; Flewelling, Brien; Black, Jonathan
2017-05-01
Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients; these locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
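The endpoint response described in this abstract can be illustrated with a minimal Harris-style cornerness computation (a plain-NumPy sketch, not the authors' implementation; the window size and `k` are conventional Harris defaults): along the body of a streak the gradient is one-directional, so the structure tensor is near-singular and the response is non-positive, while at the endpoints both gradient directions are present and the response peaks.

```python
import numpy as np

def cornerness(img, k=0.04, win=5):
    """Harris-style cornerness: large only where gradients are bi-directional."""
    iy, ix = np.gradient(img.astype(float))
    def box(a):  # separable box filter over a win x win window
        kern = np.ones(win) / win
        a = np.apply_along_axis(np.convolve, 0, a, kern, mode="same")
        return np.apply_along_axis(np.convolve, 1, a, kern, mode="same")
    # Smoothed structure-tensor entries
    sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
    # det(S) - k * trace(S)^2
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# Synthetic streak: a horizontal line from (20, 10) to (20, 30)
img = np.zeros((40, 40))
img[20, 10:31] = 1.0
r = cornerness(img)
```

On this synthetic streak, the response is positive at both endpoints and negative along the streak body, so thresholding `r` directly yields endpoint candidates.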
Lane marking detection based on waveform analysis and CNN
NASA Astrophysics Data System (ADS)
Ye, Yang Yang; Chen, Hou Jin; Hao, Xiao Li
2017-06-01
Lane marking detection is an important part of advanced driver assistance systems (ADAS) for avoiding traffic accidents. In order to obtain accurate lane markings, this work proposes a novel and efficient algorithm that analyzes the waveform generated from the road image after inverse perspective mapping (IPM). The algorithm includes two main stages: the first stage applies image preprocessing, including a CNN, to suppress the background and enhance the lane markings; the second stage obtains the waveform of the road image and analyzes it to extract the lanes. The contribution of this work is the introduction of local and global features of the waveform to detect the lane markings. The results indicate that the proposed method is robust in detecting and fitting lane markings.
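The waveform stage can be sketched in a few lines (an illustrative NumPy fragment, not the paper's implementation; the minimum peak spacing `min_gap` is a hypothetical parameter): after IPM, lane markings are near-vertical stripes, so summing a binarized image along columns yields a waveform whose peaks mark candidate lane positions.

```python
import numpy as np

def lane_peaks(binary_ipm, min_gap=20):
    """Column-wise intensity waveform of a top-down (IPM) road image;
    lane markings appear as near-vertical stripes, i.e. waveform peaks."""
    wave = binary_ipm.sum(axis=0).astype(float)
    order = np.argsort(wave)[::-1]       # columns by descending energy
    peaks = []
    for c in order:
        if wave[c] == 0:                 # no marking pixels left
            break
        if all(abs(c - p) >= min_gap for p in peaks):
            peaks.append(int(c))         # keep well-separated peaks only
    return sorted(peaks)

# Synthetic IPM image: two vertical lane stripes near columns 50 and 150
img = np.zeros((100, 200))
img[:, 49:52] = 1
img[:, 149:152] = 1
peaks = lane_peaks(img)
```

A fitting step (e.g. a polynomial per peak neighborhood) would then recover the lane curves.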
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. False matches were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitude of the video became increasingly large.
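The Kalman filtering stage of such a pipeline can be illustrated with a minimal scalar filter (a sketch under simplifying assumptions; the paper's filter also models scaling and rotation, which are omitted here): the accumulated inter-frame motion is treated as a noisy measurement of the intended camera path, and the filter smooths it so that only the jitter is compensated.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Scalar constant-position Kalman filter over a 1-D motion trajectory.
    q: process noise (how fast the intended path may drift),
    r: measurement noise (jitter of the estimated global motion)."""
    x, p = z[0], 1.0
    out = []
    for zi in z:
        p += q                # predict: uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (zi - x)     # update toward the measurement
        p *= 1 - k
        out.append(x)
    return np.array(out)

# Jittery measurements of a steady camera position
rng = np.random.default_rng(0)
true_path = np.full(200, 10.0)
z = true_path + rng.normal(0, 0.5, 200)
smooth = kalman_smooth(z)
```

Subtracting `smooth` from `z` gives the per-frame jitter to compensate; in a full stabilizer the same filter would run on each motion parameter (translation, rotation, scale).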
Global Average Brightness Temperature for April 2003
2003-06-02
This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image. http://photojournal.jpl.nasa.gov/catalog/PIA00427
Characterization of ASTER GDEM Elevation Data over Vegetated Area Compared with Lidar Data
NASA Technical Reports Server (NTRS)
Ni, Wenjian; Sun, Guoqing; Ranson, Kenneth J.
2013-01-01
Recent research based on aerial or spaceborne stereo images with very high resolution (less than 1 meter) has demonstrated that it is possible to derive vegetation height from stereo images. The second version of the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is a state-of-the-art global elevation dataset developed from stereo images. However, the resolution of ASTER stereo images (15 meters) is much coarser than that of aerial stereo images, and the ASTER GDEM is a product compiled from stereo images acquired over 10 years. Forest disturbance, as well as forest growth, is inevitable over a 10-year time span. In this study, the features of ASTER GDEM over vegetated areas under both flat and mountainous conditions were investigated through comparisons with lidar data. The factors considered as possibly affecting the extraction of vegetation canopy height include (1) co-registration of DEMs; (2) spatial resolution of digital elevation models (DEMs); (3) spatial vegetation structure; and (4) terrain slope. The results show that accurate co-registration between the ASTER GDEM and the National Elevation Dataset (NED) is necessary over mountainous areas. The correlation between ASTER GDEM minus NED and vegetation canopy height improved from 0.328 to 0.43 when degrading the resolution from 1 arc-second to 5 arc-seconds, and further improved to 0.6 when only homogeneous vegetated areas were considered.
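The resolution-degradation effect reported in this abstract can be reproduced in miniature (a synthetic NumPy sketch, not the ASTER/NED data): block-averaging a height-difference map suppresses pixel-scale disagreement faster than it suppresses the smooth canopy signal, so the correlation with canopy height rises at coarser resolution.

```python
import numpy as np

def block_mean(a, f):
    """Degrade resolution by averaging f x f blocks (e.g. 1 -> f arc-seconds)."""
    ny, nx = a.shape
    return a[:ny // f * f, :nx // f * f].reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(1)
y, x = np.mgrid[0:100, 0:100]
canopy = np.sin(x / 15.0) + np.sin(y / 20.0)      # smooth canopy-height signal
diff = canopy + rng.normal(0, 2.0, canopy.shape)  # GDEM-minus-NED analogue: signal + pixel noise

c_fine = corr(diff, canopy)                       # full resolution
c_coarse = corr(block_mean(diff, 5), block_mean(canopy, 5))  # 5x coarser
```

Averaging 5 x 5 blocks cuts the independent pixel noise by a factor of 5 while barely touching the slowly varying signal, which is why the coarse-resolution correlation is markedly higher.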
NASA Technical Reports Server (NTRS)
Pearl, J. C.; Sinton, W. M.
1982-01-01
The size and temperature, morphology and distribution, variability, possible absorption features, and processes of hot spots on Io are discussed, and an estimate of the global heat flux is made. Size and temperature information is deconvolved to obtain the equivalent radius and temperature of hot spots, and simultaneously obtained Voyager thermal and imaging data are used to match hot sources with specific geologic features. In addition to their thermal output, it is possible that hot spots are also characterized by the production of various gases and particulate materials; the spectral signature of SO2 has been seen. Origins for relatively stable low-temperature sources, transient high-temperature sources, and relatively stable high-temperature sources are discussed.
Imaging three-dimensional innervation zone distribution in muscles from M-wave recordings
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Peng, Yun; Liu, Yang; Li, Sheng; Zhou, Ping; Zev Rymer, William; Zhang, Yingchun
2017-06-01
Objective. To localize neuromuscular junctions in skeletal muscles in vivo, which is of great importance in understanding, diagnosing, and managing neuromuscular disorders. Approach. A three-dimensional global innervation zone imaging technique was developed to characterize the global distribution of innervation zones, as an indication of the location and features of neuromuscular junctions, using electrically evoked high-density surface electromyogram recordings. Main results. The performance of the technique was evaluated in the biceps brachii of six intact human subjects. The geometric centers of the distributions of the reconstructed innervation zones were determined at a mean distance of 9.4 ± 1.4 cm from the reference plane, situated at the medial epicondyle of the humerus. A mean depth of 1.5 ± 0.3 cm was calculated from the geometric centers to the closest points on the skin. The results are consistent with those reported in previous histology studies. It was also found that the volumes and distributions of the reconstructed innervation zones changed as the stimulation intensity increased, until the supramaximal muscle response was achieved. Significance. The results demonstrate the high performance of the proposed imaging technique in noninvasively imaging global distributions of innervation zones in the three-dimensional muscle space in vivo, and the feasibility of its clinical applications, such as guiding botulinum toxin injections in spasticity management or early diagnosis of the neurodegenerative progression of amyotrophic lateral sclerosis.
NASA Technical Reports Server (NTRS)
2006-01-01
28 January 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a summer scene from the south polar region of Mars. The circular feature in the northeast (upper right) corner of the image is an old meteor impact crater that has been partially filled and buried. The cone-shaped hill that occurs within the crater on its east (right) side is a remnant of material that once covered and completely buried the crater. Perhaps beneath the surfaces in the rest of the image there are other craters that have been filled and buried such that we cannot know, from an image, that they ever existed. The theme of filled, buried, and exhumed craters is one that repeats itself -- over and over again -- all over Mars. Location near: 80.3°S, 286.1°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer
Mars Orbiter Camera Views the 'Face on Mars' - Comparison with Viking
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility 9:15 AM and the raw image immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8°N, 9.6°W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. In this comparison, the best Viking image has been enlarged to 3.3 times its original resolution, and the MOC image has been decreased by a similar 3.3 times, creating images of roughly the same size. In addition, the MOC images have been geometrically transformed to a more overhead projection (different from the mercator map projection of PIA01440 & 1441) for ease of comparison with the Viking image. The left image is a portion of Viking Orbiter 1 frame 070A13, the middle image is a portion of the MOC frame shown normally, and the right image is the same MOC frame but with the brightness inverted to simulate the approximate lighting conditions of the Viking image. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps: The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking. The contrast and brightness of the image were adjusted, and 'filters' were applied to enhance detail at several scales. The image was then geometrically warped to meet the computed position information for a mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason Greenland looks larger than Africa on a mercator map of the Earth.
A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately. See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
NASA Astrophysics Data System (ADS)
Zani, Hiran; Assine, Mario Luis; McGlue, Michael Matthew
2012-08-01
Traditional Shuttle Radar Topography Mission (SRTM) topographic datasets hold limited value in the geomorphic analysis of low-relief terrains. To address this shortcoming, this paper presents a series of techniques designed to enhance digital elevation models (DEMs) of environments dominated by low-amplitude landforms, such as a fluvial megafan system. These techniques were validated through the study of a wide depositional tract composed of several megafans located within the Brazilian Pantanal. The Taquari megafan is the most remarkable of these features, covering an area of approximately 49,000 km². To enhance the SRTM DEM, the megafan's global topography was calculated and found to be accurately represented by a second-order polynomial. Simple subtraction of this global topography from altitude produced a new DEM product, which greatly enhanced low-amplitude landforms within the Taquari megafan. A field campaign and optical satellite images were used to ground-truth features on the enhanced DEM, which consisted of both depositional (constructional) and erosional features. The results demonstrate that depositional lobes are the dominant landforms on the megafan. A model linking base-level change, avulsion, clastic sedimentation, and erosion is proposed to explain the microtopographic features on the Taquari megafan surface. The study confirms the promise of enhanced DEMs for geomorphological research in alluvial settings.
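The detrending step described in this abstract can be sketched in a few lines (an illustrative NumPy fragment; the grid and coefficients are synthetic, not the Taquari data): fit a second-order polynomial surface to the DEM by least squares and subtract it, leaving the low-amplitude residual relief.

```python
import numpy as np

def detrend_quadratic(dem):
    """Subtract a least-squares 2nd-order polynomial surface
    z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 from a DEM."""
    ny, nx = dem.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(dem.size), x.ravel(), y.ravel(),
                         (x * x).ravel(), (x * y).ravel(), (y * y).ravel()])
    coef, *_ = np.linalg.lstsq(A, dem.ravel(), rcond=None)
    return dem - (A @ coef).reshape(dem.shape)

# Synthetic fan-like surface with a 2 m depositional lobe at (10, 15)
yy, xx = np.mgrid[0:30, 0:30]
dem = 100.0 + 0.5 * xx - 0.3 * yy + 0.01 * xx ** 2
dem[10, 15] += 2.0
relief = detrend_quadratic(dem)
```

Because the regional trend is itself quadratic here, the residual map is essentially zero everywhere except the lobe, which stands out cleanly after subtraction.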
Neural-network classifiers for automatic real-world aerial image recognition
NASA Astrophysics Data System (ADS)
Greenberg, Shlomo; Guterman, Hugo
1996-08-01
We describe the application of the multilayer perceptron (MLP) network and a version of the adaptive resonance theory version 2-A (ART 2-A) network to the problem of automatic aerial image recognition (AAIR). The classification of aerial images, independent of their positions and orientations, is required for automatic tracking and target recognition. Invariance is achieved by the use of different invariant feature spaces in combination with supervised and unsupervised neural networks. The performance of neural-network-based classifiers in conjunction with several types of invariant AAIR global features, such as the Fourier-transform space, Zernike moments, central moments, and polar transforms, are examined. The advantages of this approach are discussed. The performance of the MLP network is compared with that of a classical correlator. The MLP neural-network correlator outperformed the binary phase-only filter (BPOF) correlator. It was found that the ART 2-A distinguished itself with its speed and its low number of required training vectors. However, only the MLP classifier was able to deal with a combination of shift and rotation geometric distortions.
Resourcesat-1: A global multi-observation mission for resources monitoring
NASA Astrophysics Data System (ADS)
Seshadri, K. S. V.; Rao, Mukund; Jayaraman, V.; Thyagarajan, K.; Sridhara Murthi, K. R.
2005-07-01
With an array of Indian Remote Sensing Satellites (IRS), a wide variety of national applications have been developed as an inter-agency effort over the past 20 years. Now the capacity of the programme has been extended into the global arena, and IRS is providing operational data services to the global user community. The recently launched IRS satellite, Resourcesat-1, was placed into a perfect orbit by India's PSLV and is providing valuable imaging services. Resourcesat-1 is effectively three satellites rolled into one, imaging a wide 710 km field at ~55 m resolution in multispectral bands from the AWiFS, at 23 m resolution in a systematic 142 km swath from four bands of the LISS-3, and at 5.8 m in multi-spectral images from its most advanced sensor, the LISS-4. Resourcesat-1 also marks a watershed in terms of the quantum jump in technological capability that India has achieved compared to past missions. The mission has many newer features: advanced imaging sensors, more precise attitude and orbit determination systems, an onboard satellite positioning system, mass storage devices, and many others. This mission has led IRS into a new technological era, and when combined with the technological capability of the forthcoming Cartosat missions, India will have developed the technologies for a new generation of EO satellites in the coming years. This paper provides a detailed description of the Resourcesat-1 mission. From the applications point of view, Resourcesat-1 will open up new avenues for environmental monitoring and resources management, especially for vegetation assessment and disaster management support. The monitoring capability of this mission is also extremely important for a number of applications. The mission has global imaging and servicing capabilities, and its data can be received through the Antrix-Space Imaging network, which markets Resourcesat-1 data worldwide.
This paper also describes the application potential and global capabilities of the mission. Resourcesat-1 will have continuity, and after that a new-generation system will provide enhanced imaging services. India has a 25-year strategy for EO, and a perspective on it is also described in this paper.
DTM analysis and displacement estimates of a major mercurian lobate scarp.
NASA Astrophysics Data System (ADS)
Ferrari, S.; Massironi, M.; Pozzobon, R.; Castelluccio, A.; Di Achille, G.; Cremonese, G.
2012-04-01
During its second and third flybys, the MErcury Surface Space ENvironment GEochemistry and Ranging (MESSENGER) mission imaged a large and well-preserved basin called the Rembrandt basin (Watters et al., 2009, Science) in Mercury's southern hemisphere. This basin is a 715-km-diameter impact feature which displays a distinct hummocky rim broken by several large impact craters. Its interior is partially filled by volcanic materials that extend up to the southern, eastern, and part of the western rims, and is crossed by the 1000-km-long homonymous lobate scarp. In an attempt to reveal the complex evolution of the basin and scarp, we used MESSENGER Mercury Dual Imaging System (MDIS) mosaics to map the basin's geological domains, inferring where possible their stratigraphic relationships, and to establish the tectonic patterns. In contrast to other well-imaged basins, Rembrandt displays evidence of global-scale in addition to basin-localized deformation, which in some cases may be controlled by rheological layering within the crust. Extensional features are essentially radial and confined to the inner part, recording one or more uplift episodes that followed the impact. The widespread wrinkle ridges form a polygonal pattern of radial and concentric features across the whole floor, probably due to one or more near-surface compressional stages. On the other hand, the Rembrandt scarp seems to be clearly unrelated to the basin formation stage and rather belongs to a global process such as cooling contraction and/or tidal despinning of the planet. The main compressional phase responsible for the overall scarp build-up was followed by minor compressional structures detected within younger craters that in turn cut the main scarp. This suggests a prolonged slowing-down phase of a global tectonic process. The whole feature displays an unusual transpressional nature for a common lobate scarp.
We then performed a structural and kinematic analysis, subdividing the main feature into three branches: the southern one, with clear evidence of right-lateral strike-slip movement acting together with inverse kinematics; the northern one, with a left-lateral component recorded on a prominent pop-up structure; and the central sector, without any evidence of strike-slip movement. The Digital Terrain Models of Preusker et al. (2011, PSS) helped us reconstruct the deformation, assessing the displacements along the three branches and considering different fault attitudes at depth.
Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu
2017-09-01
The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area the best suited for automated analysis. Here, we present a novel approach, using support vector machines (SVM) to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers are trained and applied to different subsets of the entire training images dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage in the classifier training. Specifically, color and bag-of-word representations of local dense scale invariant feature transformation features are descriptors for ruling out irrelevant regions, and color and wavelet-based features are descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for a smartphone-based image analysis.
NASA Astrophysics Data System (ADS)
Zou, Tianhao; Zuo, Zhengrong
2018-02-01
Target detection is an important and basic problem in computer vision and image processing. The case most often met in the real world is the detection of a small moving target from a moving platform. Commonly used methods, such as registration-based suppression, can hardly achieve the desired result. To address this problem, we introduce a global-local registration based suppression method. Unlike traditional approaches, the proposed global-local registration strategy considers both the global consistency and the local diversity of the background, obtaining better performance than standard background suppression methods. In this paper, we first discuss the characteristics of small moving-target detection on an unstable platform. We then introduce the new strategy and conduct an experiment to confirm its stability under noise. Finally, we confirm that the background suppression method based on the global-local registration strategy performs better for moving-target detection on a moving platform.
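The global registration step of such a pipeline can be illustrated with phase correlation (a minimal sketch only; the paper's actual registration method and its local refinement stage are not specified here): estimate the dominant inter-frame translation, warp the reference frame accordingly, and subtract it to suppress the background.

```python
import numpy as np

def global_shift(ref, cur):
    """Estimate the translation of `cur` relative to `ref` by phase correlation."""
    f = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    r = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Map wrap-around indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cur = np.roll(ref, (2, 3), axis=(0, 1))                # simulated platform motion
dy, dx = global_shift(ref, cur)
residual = cur - np.roll(ref, (dy, dx), axis=(0, 1))   # background suppressed
```

A global-local scheme would then re-estimate small residual shifts per block of `residual`, so that locally diverse background (parallax, independently moving clutter) is also suppressed before target detection.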
NASA Technical Reports Server (NTRS)
2004-01-01
2 August 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a circular mesa and layered materials that are partially exposed from beneath a thick, dark mantle in the Aureum Chaos region of Mars. The features are part of a much larger circular form (bigger than the image shown here) that marks the location of a crater that was filled with light-toned sedimentary rock, buried, and then later re-exposed when the upper crust of Mars broke apart in this region to form buttes and mesas of 'chaotic terrain.' The circular mesa in this image might also be the location of a formerly filled and buried crater. This image is located near 4.0°S, 26.9°W. It covers an area about 3 km (1.9 mi) across; sunlight illuminates the scene from the left/upper left.
NASA Astrophysics Data System (ADS)
Bramhe, V. S.; Ghosh, S. K.; Garg, P. K.
2018-04-01
With rapid globalization, the extent of built-up areas is continuously increasing. The extraction of more robust and abstract features for classifying built-up areas has been a leading research topic for many years. Various studies have utilized spatial information along with spectral features to enhance classification accuracy; still, these feature extraction techniques require a large number of user-specified parameters and are generally application specific. On the other hand, recently introduced Deep Learning (DL) techniques require fewer parameters to represent more abstract aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring of areas, Sentinel-2 imagery has been used in this study for built-up area extraction. In this work, pre-trained Convolutional Neural Networks (ConvNets), i.e. Inception-v3 and VGGNet, are employed for transfer learning. Because these networks are trained on generic images from the ImageNet dataset, whose characteristics differ greatly from those of satellite images, the network weights are fine-tuned using data derived from Sentinel-2 images. To compare accuracies with existing shallow networks, two state-of-the-art classifiers, a Gaussian Support Vector Machine (SVM) and a Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give 84.31 % and 82.86 % overall accuracy, respectively, while the fine-tuned VGGNet gives 89.43 % and the fine-tuned Inception-v3 gives 92.10 %. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
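Fine-tuning a pre-trained network amounts to reusing learned feature layers and adapting the remaining weights on target-domain data. The toy sketch below illustrates the simplest variant, a frozen feature extractor with a retrained logistic head; the random projection merely stands in for pre-trained convolutional layers, and no Sentinel-2 or ImageNet data are involved:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TransferClassifier:
    """Frozen 'pre-trained' feature extractor + trainable logistic head."""
    def __init__(self, in_dim, feat_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for pre-trained conv layers: fixed weights, never updated.
        self.W_frozen = rng.normal(size=(in_dim, feat_dim)) / np.sqrt(in_dim)
        self.w = np.zeros(feat_dim)
        self.b = 0.0

    def features(self, X):
        # Fixed nonlinear projection (ReLU), analogous to frozen feature maps.
        return np.maximum(X @ self.W_frozen, 0.0)

    def fit(self, X, y, lr=0.5, epochs=200):
        F = self.features(X)
        for _ in range(epochs):          # gradient descent on the head only
            p = sigmoid(F @ self.w + self.b)
            grad = F.T @ (p - y) / len(y)
            self.w -= lr * grad
            self.b -= lr * np.mean(p - y)
        return self

    def predict(self, X):
        return (sigmoid(self.features(X) @ self.w + self.b) > 0.5).astype(int)
```

In the paper's setting the extractor is Inception-v3 or VGGNet and the later layers are also unfrozen and updated with a small learning rate; the principle — keep generic features, adapt task-specific weights — is the same.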
The Europa Imaging System (EIS): Investigating Europa's geology, ice shell, and current activity
NASA Astrophysics Data System (ADS)
Turtle, Elizabeth; Thomas, Nicolas; Fletcher, Leigh; Hayes, Alexander; Ernst, Carolyn; Collins, Geoffrey; Hansen, Candice; Kirk, Randolph L.; Nimmo, Francis; McEwen, Alfred; Hurford, Terry; Barr Mlinar, Amy; Quick, Lynnae; Patterson, Wes; Soderblom, Jason
2016-07-01
NASA's Europa Mission, planned for launch in 2022, will perform more than 40 flybys of Europa with altitudes at closest approach as low as 25 km. The instrument payload includes the Europa Imaging System (EIS), a camera suite designed to transform our understanding of Europa through global decameter-scale coverage, topographic and color mapping, and unprecedented sub-meter-scale imaging. EIS combines narrow-angle and wide-angle cameras to address these science goals: • Constrain the formation processes of surface features by characterizing endogenic geologic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure and potential near-surface water. • Search for evidence of recent or current activity, including potential plumes. • Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice-penetrating radar. • Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. EIS Narrow-angle Camera (NAC): The NAC, with a 2.3° x 1.2° field of view (FOV) and a 10-μrad instantaneous FOV (IFOV), achieves 0.5-m pixel scale over a 2-km-wide swath from 50-km altitude. A 2-axis gimbal enables independent targeting, allowing very high-resolution stereo imaging to generate digital topographic models (DTMs) with 4-m spatial scale and 0.5-m vertical precision over the 2-km swath from 50-km altitude. The gimbal also makes near-global (>95%) mapping of Europa possible at ≤50-m pixel scale, as well as regional stereo imaging. The NAC will also perform high-phase-angle observations to search for potential plumes. EIS Wide-angle Camera (WAC): The WAC has a 48° x 24° FOV, with a 218-μrad IFOV, and is designed to acquire pushbroom stereo swaths along flyby ground-tracks.
From an altitude of 50 km, the WAC achieves 11-m pixel scale over a 44-km-wide swath, generating DTMs with 32-m spatial scale and 4-m vertical precision. These data also support characterization of surface clutter for interpretation of radar deep and shallow sounding modes. Detectors: The cameras have identical rapid-readout, radiation-hard 4k x 2k CMOS detectors and can image in both pushbroom and framing modes. Color observations are acquired by pushbroom imaging using six broadband filters (~300-1050 nm), allowing mapping of surface units for correlation with geologic structures, topography, and compositional units from other instruments.
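The quoted pixel scales and swath widths follow from simple viewing geometry: pixel scale ≈ IFOV × altitude (small-angle approximation) and swath ≈ 2 × altitude × tan(FOV/2) for a nadir-pointed camera. A quick check of the NAC and WAC numbers above (function names are ours, for illustration only):

```python
import math

def pixel_scale_m(ifov_urad, altitude_km):
    """Ground pixel scale in meters: IFOV (small-angle) times altitude."""
    return ifov_urad * 1e-6 * altitude_km * 1e3

def swath_km(fov_deg, altitude_km):
    """Cross-track swath width in km for a camera looking straight down."""
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg / 2.0))

# NAC from 50 km: 10-urad IFOV, 2.3-deg cross-track FOV
print(pixel_scale_m(10, 50))   # ~0.5 m
print(swath_km(2.3, 50))       # ~2 km
# WAC from 50 km: 218-urad IFOV, 48-deg FOV
print(pixel_scale_m(218, 50))  # ~10.9 m
print(swath_km(48, 50))        # ~44.5 km
```

Both cameras' figures in the text (0.5 m over 2 km for the NAC, 11 m over 44 km for the WAC) are consistent with this geometry.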
NASA Astrophysics Data System (ADS)
Rack, F.; Diamond, J.; Levy, R.; Berg, M.; Dahlman, L.; Jackson, J.
2006-12-01
IPY: Engaging Antarctica is an informal science education project designed to increase the general public's understanding of scientific research conducted in Antarctica. The project focuses specifically on the multi-national, NSF-funded Antarctic Drilling Project (ANDRILL). The ANDRILL project is the newest geological drilling program in an ongoing effort to recover stratigraphic records from Antarctica. ANDRILL's primary objectives are to investigate Antarctica's role in global environmental change over the past 65 million years and to better understand its future response to global changes. Additionally, through ANDRILL's Research Immersion for Science Educators program (ARISE), 12 science educators from four countries will work on science research teams in Antarctica and produce educational materials that feature Antarctic geoscience. The Engaging Antarctica project will produce both a NOVA television documentary and an innovative informal learning exhibit. The documentary, Antarctica's Icy Secrets, will provide a geological perspective on how Antarctica continues to play a major role in affecting global climate by altering ocean currents and sea levels. The learning exhibit, one that blends standards- and inquiry-based learning with the latest information technologies, is coined the Flexhibit. The Engaging Antarctica Flexhibit will provide a digital package of high resolution images for banners as well as learning activities and ideas for exhibit stations that can be implemented by youth groups. Flexhibit images will feature ANDRILL scientists at work, and audio files, available as podcasts, will tell scientists' stories in their own words, speaking directly to the public about the joys and challenges of Antarctic geological research.
ECG Identification System Using Neural Network with Global and Local Features
ERIC Educational Resources Information Center
Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles
2016-01-01
This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on global shape feature and local statistical feature is used to extract ECG signals. Global shape feature represents the outline information of ECG signals and local statistical feature extracts the…
The structure and rainfall features of Tropical Cyclone Rammasun (2002)
NASA Astrophysics Data System (ADS)
Ma, Leiming; Duan, Yihong; Zhu, Yongti
2004-12-01
Tropical Rainfall Measuring Mission (TRMM) data [TRMM Microwave Imager/Precipitation Radar/Visible and Infrared Scanner (TMI/PR/VIRS)] and a numerical model are used to investigate the structure and rainfall features of Tropical Cyclone (TC) Rammasun (2002). Based on the analysis of TRMM data, diagnosed together with NCEP/AVN [Aviation (global model)] analysis data, some typical features of the TC structure and rainfall are preliminarily identified. Given the limitations of TRMM data in temporal resolution and coverage, the snapshots observed by TRMM at several moments cannot be taken as representative of the whole TC lifecycle; the full picture should therefore be reproduced by a high-quality numerical model. To better understand the structure and rainfall features of TC Rammasun, a numerical simulation is carried out with the mesoscale model MM5, and validations are made against the TRMM data and NCEP/AVN analysis.
Finger vein recognition with personalized feature selection.
Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Meng, Xianjing
2013-08-22
Finger veins are a promising biometric pattern for personalized identification, given their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of complementary information such as gray level, texture, and shape, this paper proposes a simple but powerful feature called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray level, texture, and shape. To further improve recognition performance and reduce computational complexity, we select a personalized subset of features from PHGTOG for each subject using a sparse weight vector trained with LASSO; we call this PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG; experimental results on our databases show that PHGTOG outperforms other existing features, and PFS-PHGTOG further boosts performance in comparison with PHGTOG.
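The LASSO step that produces the sparse weight vector can be sketched with a small proximal-gradient (ISTA) solver: the L1 penalty drives most weights exactly to zero, and the surviving indices form the selected feature subset. This is a generic illustration on synthetic data, not the authors' PFS-PHGTOG pipeline:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, alpha=0.1, iters=500):
    """Minimize (1/2n)||y - Xw||^2 + alpha*||w||_1 by proximal gradient (ISTA)."""
    n, d = X.shape
    L = np.linalg.norm(X, ord=2) ** 2 / n      # Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, alpha / L)
    return w

def select_features(X, y, alpha=0.1):
    """Personalized feature subset: indices where the sparse weight is nonzero."""
    w = lasso_ista(X, y, alpha)
    return np.flatnonzero(np.abs(w) > 1e-8)
```

Per the paper, such a solve is run per subject, so each subject ends up with a personalized subset of the full PHGTOG feature vector.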
NASA Technical Reports Server (NTRS)
2004-01-01
30 October 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows shallow tributary valleys in the Ismenius Lacus fretted terrain region of northern Arabia Terra. These valleys exhibit a variety of typical fretted terrain valley wall and floor textures, including a lineated, pitted material somewhat reminiscent of the surface of a brain. Origins for these features are still being debated within the Mars science community; there are no clear analogs to these landforms on Earth. This image is located near 39.9°N, 332.1°W. The picture covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the lower left.
NASA Technical Reports Server (NTRS)
2005-01-01
1 September 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an impact crater cut by troughs that formed after the crater did. The crater and troughs have large windblown ripples on their floors. The ripples, troughs, craters, and other surfaces in this scene have all been mantled by dust. Dark streaks on slopes indicate areas where avalanches of dry dust have occurred. These features are located on Sacra Mensa, a large mesa in the Kasei Valles region. Location near: 25.4°N, 66.8°W Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Autumn
Accurate registration of temporal CT images for pulmonary nodule detection
NASA Astrophysics Data System (ADS)
Yan, Jichao; Jiang, Luan; Li, Qiang
2017-02-01
Interpretation of temporal CT images could help radiologists detect subtle interval changes in sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, affine transformation was applied to the segmented lung region to obtain globally coarse-registered images. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration. Thirdly, the Demons algorithm was performed to align the feature points extracted from the images registered in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% of cases achieved accurate registration based on subjective observation. The subtraction images between the reference images and the rigidly and non-rigidly registered images could effectively remove the normal structures (e.g., blood vessels) and retain the abnormalities (e.g., pulmonary nodules). This would be useful for the screening of lung cancer in our future study.
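The B-spline free-form deformation in the second step warps the image through displacements defined on a coarse control grid and interpolated everywhere else. A minimal 1-D illustration of that idea (linear interpolation standing in for the cubic B-spline basis, and a backward mapping for the resampling; not the authors' implementation):

```python
import numpy as np

def ffd_warp_1d(signal, ctrl_disp):
    """Warp a 1-D signal by a free-form deformation: displacements given on a
    coarse control grid are interpolated to every sample position, then the
    signal is resampled via backward mapping."""
    n = len(signal)
    ctrl_x = np.linspace(0, n - 1, len(ctrl_disp))        # control-point positions
    disp = np.interp(np.arange(n), ctrl_x, ctrl_disp)     # dense displacement field
    src = np.clip(np.arange(n) - disp, 0, n - 1)          # backward mapping
    return np.interp(src, np.arange(n), signal)           # resample the signal
```

Because only the few control-point displacements are optimized, FFD captures smooth local deformations (e.g., respiration-induced lung motion) with far fewer parameters than a per-voxel field; the Demons step then handles the remaining fine alignment.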
Compact optical processor for Hough and frequency domain features
NASA Astrophysics Data System (ADS)
Ott, Peter
1996-11-01
Shape recognition is necessary in a broad range of applications such as traffic sign or work piece recognition. It requires not only neighborhood processing of the input image pixels but global interconnection of them. The Hough transform (HT) performs such a global operation, and it is well suited to the preprocessing stage of a shape recognition system. Translation-invariant features can be easily calculated from the Hough domain. We have implemented on the computer a neural network shape recognition system which contains a HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time-consuming operation. Parallel, optical processing is therefore advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, on time multiplexing with acousto-optic processors, or on image rotation with incoherent and coherent astigmatic optical processors. We took up the last-mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing allows the implementation of filters in the frequency domain. Features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system which is derived from the mathematical formulation of the HT is long and contains a non-standard, astigmatic element. By methods of lens transformations for coherent applications we map the original design to a shorter lens with a smaller number of well-separated standard elements and with the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray-tracing shows diffraction-limited performance.
Image rotation can be done optically by a rotating prism. We realize it on a fast FLC-SLM in our lab as the input device. The filters can be implemented on the same type of SLM with 128 by 128 square pixels, resulting in a total length of the lens of less than 50 cm.
NASA Astrophysics Data System (ADS)
Gandomkar, Ziba; Brennan, Patrick C.; Mello-Thoms, Claudia
2017-03-01
Mitotic count is helpful in determining the aggressiveness of breast cancer. Previous studies have shown that agreement among pathologists when grading the mitotic index is fairly modest, as mitoses have a large variety of appearances and can be mistaken for other similar objects. In this study, we determined local and contextual features that differ significantly between easily identifiable mitoses and challenging ones. The images were obtained from the Mitosis-Atypia 2014 challenge. In total, the dataset contained 453 mitotic figures. Two pathologists annotated each mitotic figure; in case of disagreement, an opinion from a third pathologist was requested. The mitoses were grouped into three categories: those recognized as "a true mitosis" by both pathologists, those labelled as "a true mitosis" by only one of the first two readers and also the third pathologist, and those annotated as "probably a mitosis" by all readers or the majority of them. After color unmixing, the mitoses were segmented from the H channel. Shape-based features along with intensity-based and textural features were extracted from the H channel, the blue ratio channel, and five different color spaces. Holistic features describing each image were also considered. The Kruskal-Wallis H test was used to identify significantly different features. Multiple comparisons were done using the rank-based version of the Tukey-Kramer test. The results indicated that there are local and global features which differ significantly among the groups. In addition, variations between mitoses in different groups were captured better in features from the HSL and LCH color spaces than in the others.
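The Kruskal-Wallis H statistic used for these group comparisons is computed from joint ranks: all observations are ranked together, and H measures how far the per-group mean ranks deviate from what chance would give. A minimal implementation (ignoring tie correction) looks like this:

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): rank all observations
    jointly, then compare rank sums across groups.
    H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    data = np.concatenate(groups)
    order = np.argsort(data)
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)   # rank of each observation
    N = len(data)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]          # ranks belonging to this group
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (N * (N + 1)) * h - 3.0 * (N + 1)
```

Under the null hypothesis H is approximately chi-squared with (number of groups − 1) degrees of freedom, so for three groups a feature is flagged when H exceeds roughly 5.99 at the 5% level.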
Using Multispectral False Color Imaging to Characterize Tropical Cyclone Structure and Environment
NASA Astrophysics Data System (ADS)
Cossuth, J.; Bankert, R.; Richardson, K.; Surratt, M. L.
2016-12-01
The Naval Research Laboratory's (NRL) tropical cyclone (TC) web page (http://www.nrlmry.navy.mil/TC.html) has provided nearly two decades of near real-time access to TC-centric images and products for TC forecasters and enthusiasts around the world. In particular, the microwave imager and sounder information featured on this site provides crucial internal storm structure information by allowing users to perceive hydrometeor structure, providing key details beyond the cloud-top information provided by visible and infrared channels. Towards improving TC analysis techniques and advancing the utility of the NRL TC webpage resource, new research efforts are presented. This work demonstrates results as well as the methodology used to develop new automated, objective satellite-based TC structure and intensity guidance and enhanced data fusion imagery products that aim to bolster and streamline TC forecast operations. This presentation focuses on the creation and interpretation of false color RGB composite imagery that leverages the different emissive and scattering properties of atmospheric ice, liquid, and vapor water as well as ocean surface roughness as seen by microwave radiometers. Specifically, a combination of near-real-time data and a standardized digital database of global TCs in microwave imagery from 1987-2012 is employed as a climatology of TC structures. The broad range of TC structures, from pinhole eyes through multiple eyewall configurations, is characterized as resolved by passive microwave sensors. The extraction of these characteristic features from historical data also lends itself to statistical analysis. For example, histograms of brightness temperature distributions allow a rigorous examination of how structural features are conveyed in image products, enabling a better choice of colors and breakpoints as they relate to physical features.
Such climatological work also suggests steps to better inform the near-real time application of upcoming satellite datasets to TC analyses.
Pirat, Bahar; Khoury, Dirar S.; Hartley, Craig J.; Tiller, Les; Rao, Liyun; Schulz, Daryl G.; Nagueh, Sherif F.; Zoghbi, William A.
2012-01-01
Objectives The aim of this study was to validate a novel, angle-independent, feature-tracking method for the echocardiographic quantitation of regional function. Background A new echocardiographic method, Velocity Vector Imaging (VVI) (syngo Velocity Vector Imaging technology, Siemens Medical Solutions, Ultrasound Division, Mountain View, California), has been introduced, based on feature tracking—incorporating speckle and endocardial border tracking, that allows the quantitation of endocardial strain, strain rate (SR), and velocity. Methods Seven dogs were studied during baseline, and various interventions causing alterations in regional function: dobutamine, 5-min coronary occlusion with reperfusion up to 1 h, followed by dobutamine and esmolol infusions. Echocardiographic images were acquired from short- and long-axis views of the left ventricle. Segment-length sonomicrometry crystals were used as the reference method. Results Changes in systolic strain in ischemic segments were tracked well with VVI during the different states of regional function. There was a good correlation between circumferential and longitudinal systolic strain by VVI and sonomicrometry (r = 0.88 and r = 0.83, respectively, p < 0.001). Strain measurements in the nonischemic basal segments also demonstrated a significant correlation between the 2 methods (r = 0.65, p < 0.001). Similarly, a significant relation was observed for circumferential and longitudinal SR between the 2 methods (r = 0.94, p < 0.001 and r = 0.90, p < 0.001, respectively). The endocardial velocity relation to changes in strain by sonomicrometry was weaker owing to significant cardiac translation. Conclusions Velocity Vector Imaging, a new feature-tracking method, can accurately assess regional myocardial function at the endocardial level and is a promising clinical tool for the simultaneous quantification of regional and global myocardial function. PMID:18261685
Resolved spectrophotometric properties of the Ceres surface from Dawn Framing Camera images
NASA Astrophysics Data System (ADS)
Schröder, S. E.; Mottola, S.; Carsenty, U.; Ciarniello, M.; Jaumann, R.; Li, J.-Y.; Longobardo, A.; Palmer, E.; Pieters, C.; Preusker, F.; Raymond, C. A.; Russell, C. T.
2017-05-01
We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera (FC) images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected FC images acquired on approach to Ceres were assembled into global maps of albedo and color. Generally, albedo and color variations on Ceres are muted. The albedo map is dominated by a large, circular feature in Vendimia Planitia, known from HST images (Li et al., 2006), and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope over the visible wavelength range when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta (Schröder et al., 2013b). The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters may be physically smooth rather than rough. High albedo, blue color, and physical smoothness all appear to be indicators of youth. The blue color may result from the desiccation of ejected material that is similar to the phyllosilicates/water ice mixtures in the experiments of Poch et al. (2016). The physical smoothness of some blue terrains would be consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We find red terrain (positive spectral slope) near Ernutet crater, where De Sanctis et al. (2017) detected organic material. The spectrophotometric properties of the large Vendimia Planitia feature suggest it is a palimpsest, consistent with the Marchi et al. (2016) impact basin hypothesis. 
The central bright area in Occator crater, Cerealia Facula, is the brightest on Ceres with an average visual normal albedo of about 0.6 at a resolution of 1.3 km per pixel (six times Ceres average). The albedo of fresh, bright material seen inside this area in the highest resolution images (35 m per pixel) is probably around unity. Cerealia Facula has an unusually steep phase function, which may be due to unresolved topography, high surface roughness, or large average particle size. It has a strongly red spectrum whereas the neighboring, less-bright, Vinalia Faculae are neutral in color. We find no evidence for a diurnal ground fog-type haze in Occator as described by Nathues et al. (2015). We can neither reproduce their findings using the same images, nor confirm them using higher resolution images. FC images have not yet offered direct evidence for present sublimation in Occator.
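The exponential photometric model referred to above describes the phase curve as a decaying exponential, roughly A(α) = A0·exp(−ν·α), so its two parameters can be fitted per terrain by ordinary least squares in log space. The sketch below is a generic illustration of that fit, not the authors' exact model or parameter values:

```python
import numpy as np

def fit_exponential_phase_curve(alpha_deg, albedo):
    """Fit A(alpha) = A0 * exp(-nu * alpha) by least squares in log space.
    alpha_deg: phase angles in degrees; albedo: corresponding reflectances.
    Returns (A0, nu)."""
    alpha = np.asarray(alpha_deg, dtype=float)
    log_a = np.log(np.asarray(albedo, dtype=float))   # linearize the model
    slope, intercept = np.polyfit(alpha, log_a, 1)    # log A = log A0 - nu*alpha
    return np.exp(intercept), -slope
```

A terrain with an unusually steep phase function, like Cerealia Facula in the text, would show up as an anomalously large fitted ν relative to the surrounding surface.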
High-resolution Ceres LAMO atlas derived from Dawn FC images
NASA Astrophysics Data System (ADS)
Roatsch, T.; Kersten, E.; Matz, K. D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C.
2016-12-01
Introduction: NASA's Dawn spacecraft has been orbiting the dwarf planet Ceres since December 2015 in LAMO (Low Altitude Mapping Orbit), at an altitude of about 400 km, to characterize, for instance, the geology, topography, and shape of Ceres. One of the major goals of this mission phase is the global high-resolution mapping of Ceres. Data: The Dawn mission is equipped with a framing camera (FC). By the time of writing, the framing camera had taken about 27,500 clear filter images in LAMO, with a resolution of about 30 m/pixel and different viewing angles and illumination conditions. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information on the Dawn orbit and attitude data and on the topography of the target. A high-resolution shape model was provided by stereo processing of the HAMO dataset; orbit and attitude data are available as reconstructed SPICE data. Ceres' HAMO shape model is used for the calculation of the ray intersection points, while the map projection itself was done onto a reference sphere of Ceres. The final step is the controlled mosaicking of all nadir images into a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas will be produced at a scale of 1:250,000 and will consist of 62 tiles that conform to the quadrangle scheme for Venus at 1:5,000,000. A map scale of 1:250,000 is a compromise between the very high resolution in LAMO and a proper map sheet size for the single tiles. Nomenclature: The Dawn team proposed to the International Astronomical Union (IAU) to use the names of gods and goddesses of agriculture and vegetation from world mythology as names for the craters and to use names of agricultural festivals of the world for other geological features.
This proposal was accepted by the IAU, and the team proposed 92 names for geological features to the IAU based on the LAMO mosaic. These feature names will be applied to the map tiles.
Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning
Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka
2012-01-01
Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849
Bhalerao, Gaurav Vivek; Parlikar, Rujuta; Agrawal, Rimjhim; Shivakumar, Venkataram; Kalmady, Sunil V; Rao, Naren P; Agarwal, Sri Mahavir; Narayanaswamy, Janardhanan C; Reddy, Y C Janardhan; Venkatasubramanian, Ganesan
2018-06-01
Spatial normalization of brain MR images is highly dependent on the choice of the target brain template. Morphological differences caused by factors such as genetics and environmental exposure generate the need to construct population-specific brain templates. Brain image analysis performed using templates derived from a Caucasian population may not be appropriate for non-Caucasian populations. In this study, our objective was to construct an Indian brain template from a large population (N = 157 subjects) and compare the morphometric parameters of this template with those of the Chinese-56 and MNI-152 templates. In addition, using independent MRI data from 15 Indian subjects, we also evaluated potential differences in registration accuracy among these three templates. The Indian brain template was constructed using iterative routines as per established procedures. We compared our Indian template with the standard MNI-152 template and the Chinese template by measuring global brain features. We also examined registration accuracy by aligning 15 new Indian brains to the Indian, Chinese, and MNI templates. Furthermore, we supported our measurement protocol with inter-rater and intra-rater reliability analysis. Our results showed significant differences in the global brain features of the Indian template in comparison with the Chinese and MNI brain templates. The registration accuracy analysis revealed that fewer deformations are required when Indian brains are registered to the Indian template than to the Chinese and MNI templates. This study concludes that a population-specific Indian template is likely to be more appropriate for structural and functional image analysis of the Indian population.
Venus winds at cloud level from VIRTIS during the Venus Express mission
NASA Astrophysics Data System (ADS)
Hueso, Ricardo; Peralta, Javier; Sánchez-Lavega, Agustín.; Pérez-Hoyos, Santiago; Piccioni, Giuseppe; Drossart, Pierre
2010-05-01
The Venus Express (VEX) mission has been in orbit around Venus for almost four years now. The VIRTIS instrument onboard VEX observes Venus in two channels (visible and infrared), obtaining spectra and multi-wavelength images of the planet. Images in the ultraviolet range are used to study the upper cloud at 66 km, while images in the infrared (1.74 μm) map the opacity of the lower cloud deck at 48 km. Here we present our latest results on the analysis of the global atmospheric dynamics at these cloud levels using a large selection of images from the full VIRTIS dataset. We show the atmospheric zonal superrotation at these levels and the mean meridional motions. The zonal winds are very stable in the lower cloud from mid-latitudes to the tropics, while they show different signatures of variability in the upper cloud, where solar tide effects are manifest in the data. While the upper clouds present a net meridional motion consistent with the upper branch of a Hadley cell, the lower cloud presents almost null global meridional motion at all latitudes, but with particular features traveling both northward and southward in a turbulent manner depending on the cloud morphology in the observations. A particularly important atmospheric feature is the South Polar vortex, which might be influencing the structure of the zonal winds in the lower cloud at latitudes from the vortex location up to 55°S. Acknowledgements: This work has been funded by the Spanish MICINN grant AYA2009-10701 with FEDER support and by Grupos Gobierno Vasco IT-464-07.
NASA Technical Reports Server (NTRS)
Kuzmin, R. O.; Mitrofanov, I. G.; Litvak, M. L.; Boynton, M. V.; Saunders, R. S.
2003-01-01
The first results from global mapping of the neutron albedo of Mars by the HEND instrument have shown a noticeable deficit in both the epithermal (EN) and fast (FN) neutron count rates in the high-latitude regions of both hemispheres of the planet. The deficit is indicative of strong enrichment of the surface regolith with hydrogen, which may correspond to water in any phase or form. The objectives of our study are the spatial and temporal variations of the free water (ice) signature in the Martian surface layer on the basis of HEND/ODYSSEY data and their correlation with the spatial distribution of permafrost features mapped from MOC images. For the study we used the results of the global mapping (5 x 5 degree pixels) of EN and FN albedo obtained by HEND/ODYSSEY in the period from 17 February to 10 December 2002.
Global Dynamics of Dayside Auroral Precipitation in Conjunction with Solar Wind Pressure Pulses
NASA Technical Reports Server (NTRS)
Brittnacher, M.; Chua, D.; Fillingim, M.; Parks, G. K.; Spann, James F., Jr.; Germany, G. A.; Carlson, C. W.; Greenwald, R. A.
1999-01-01
Global observation of the dayside auroral region by the Ultraviolet Imager (UVI) during transient solar wind pressure pulse events on October 1, 1997 has revealed unusual features in the auroral precipitation. The auroral arc structure on the dayside, possibly connected with the LLBL, split into 2 arc structures; one moving poleward and fading over a 5 min period, and the other stationary or slightly shifted equatorward (by changes in the x component). The y component was large and positive, and the z component was small and negative. The splitting of the arc structure extended from 9 to 15 MLT and was concurrent with an enhancement of the convection in the cusp region identified by SuperDARN observations. The convection reversal on the morningside was adjacent to and poleward of the weak lower latitude band of precipitation. The sensitivity of the UVI instrument enabled observation of arc structures down to about 0.2 erg electron energy flux, as confirmed by comparison with particle measurements from the FAST satellite for other dayside events. Removal of the spacecraft wobble by PIXON image reconstruction restored the original resolution of the UVI of about 40 km from apogee. This event is being analyzed in connection with a larger study of global dynamics of dayside energy and momentum transfer related to changes in IMF conditions using UVI images in conjunction with observations from FAST and SuperDARN.
Mars' "White Rock" feature lacks evidence of an aqueous origin: Results from Mars Global Surveyor
Ruff, S.W.; Christensen, P.R.; Clark, R.N.; Kieffer, H.H.; Malin, M.C.; Bandfield, J.L.; Jakosky, B.M.; Lane, M.D.; Mellon, M.T.; Presley, M.A.
2001-01-01
The "White Rock" feature on Mars has long been viewed as a type example for a Martian playa largely because of its apparent high albedo along with its location in a topographic basin (a crater). Data from the Mars Global Surveyor Thermal Emission Spectrometer (TES) demonstrate that White Rock is not anomalously bright relative to other Martian bright regions, reducing the significance of its albedo and weakening the analogy to terrestrial playas. Its thermal inertia value indicates that it is not mantled by a layer of loose dust, nor is it bedrock. The thermal infrared spectrum of White Rock shows no obvious features of carbonates or sulfates and is, in fact, spectrally flat. Images from the Mars Orbiter Camera show that the White Rock massifs are consolidated enough to retain slopes and allow the passage of saltating grains over their surfaces. Material appears to be shed from the massifs and is concentrated at the crests of nearby bedforms. One explanation for these observations is that White Rock is an eroded accumulation of compacted or weakly cemented aeolian sediment. Copyright 2001 by the American Geophysical Union.
Extraction of linear features on SAR imagery
NASA Astrophysics Data System (ADS)
Liu, Junyi; Li, Deren; Mei, Xin
2006-10-01
Linear features are usually extracted from SAR imagery by a few edge detectors derived from the contrast ratio edge detector, which has a constant probability of false alarm. The Hough Transform (HT), on the other hand, is an elegant way of extracting global features such as curve segments from binary edge images. The Randomized Hough Transform (RHT) can reduce the computation time and memory usage of the HT drastically, but its random sampling leaves a large number of accumulator cells invalid. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost fully automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point to address the invalid-cell accumulation problem. The applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery were extracted automatically. The method saves storage space and computation time, demonstrating its effectiveness and applicability.
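As a rough illustration of the randomized-sampling idea described in this abstract (not the authors' algorithm, which additionally exploits edge direction to avoid invalid cells), a minimal Randomized Hough Transform for lines can be sketched as follows: random pairs of edge points each cast one vote for the (theta, rho) cell of the line through them, and only cells gathering enough votes are reported.

```python
import math
import random
from collections import defaultdict

def randomized_hough_lines(edge_points, n_samples=1000, rho_res=1.0,
                           theta_res=math.radians(1.0), min_votes=20, seed=0):
    """Randomized Hough Transform sketch: random point pairs each cast a
    single vote for the (theta, rho) cell of the line through them."""
    rng = random.Random(seed)
    pts = list(edge_points)
    acc = defaultdict(int)
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if (x1, y1) == (x2, y2):
            continue  # duplicate coordinates: no unique line
        # Normal angle of the line through the pair, folded into [0, pi)
        theta = math.atan2(y2 - y1, x2 - x1) + math.pi / 2
        if theta < 0:
            theta += math.pi
        elif theta >= math.pi:
            theta -= math.pi
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        acc[(round(theta / theta_res), round(rho / rho_res))] += 1
    # Report only cells that accumulated enough votes
    return [(t * theta_res, r * rho_res)
            for (t, r), v in acc.items() if v >= min_votes]
```

In the full HT every edge point votes for an entire sinusoid of cells; here each sample touches exactly one cell, which is the source of both the speed-up and the invalid-cell problem the paper addresses.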
Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2015-10-01
Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
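The luminance similarity term described above can take the standard SSIM-style form; the sparse-coefficient similarity below is a hedged stand-in for the paper's phase/amplitude formulation (the constant `c` and the activity weighting are illustrative assumptions, not the authors' exact index):

```python
import numpy as np

def luminance_similarity(img1, img2, c=1e-3):
    """SSIM-style luminance term: equals 1.0 when mean intensities match."""
    m1, m2 = float(np.mean(img1)), float(np.mean(img2))
    return (2 * m1 * m2 + c) / (m1 ** 2 + m2 ** 2 + c)

def sparse_feature_similarity(a, b, c=1e-3):
    """Elementwise amplitude similarity between two sparse coefficient
    vectors, weighted toward coefficients active in either vector."""
    a = np.abs(np.asarray(a, dtype=float))
    b = np.abs(np.asarray(b, dtype=float))
    sim = (2 * a * b + c) / (a ** 2 + b ** 2 + c)
    w = a + b  # inactive coefficients carry no weight
    return float(np.sum(sim * w) / (np.sum(w) + 1e-12))
```

Identical inputs score near 1, and coefficient vectors with disjoint supports score near 0, which is the qualitative behavior a full-reference index needs.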
Toe of Ganges Chasma Landslide (8.0°S, 44.4°W)
NASA Technical Reports Server (NTRS)
2001-01-01
This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows shear striations, dark dunes banked up against the toe of the slide and over-riding light-toned ripples, and boulders on the surface of the slide. These features can be used to determine quantitative aspects of surface processes. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; McEntee, Mark; Evanoff, Michael G.; Brennan, Patrick C.
2012-02-01
Aim: This study evaluates the assumption that the global impression is created based on low spatial frequency components of posterior-anterior chest radiographs. Background: Expert radiologists precisely and rapidly allocate visual attention to pulmonary nodules in chest radiographs. Moreover, the most frequent accurate decisions are produced in the shortest viewing time; thus, the first hundred milliseconds of image perception seem to be crucial for correct interpretation. The medical image perception model assumes that during holistic analysis experts extract information based on low spatial frequency (SF) components and create a mental map of suspicious locations for further inspection. The global impression results in flagged regions for detailed inspection with foveal vision. Method: Nine chest experts and nine non-chest radiologists viewed two sets of randomly ordered chest radiographs under two timing conditions: (1) 300 ms; (2) free search with unlimited time. The same radiographic cases of 25 normal and 25 abnormal digitized chest films constituted two image sets: low-pass filtered and unfiltered. Subjects were asked to detect nodules and rank their confidence level. MRMC ROC DBM analyses were conducted. Results: Experts had improved ROC AUC when high SF components were displayed (p=0.03) or when low SF components were viewed under unlimited time (p=0.02), compared with low SF 300 ms viewings. In contrast, non-chest radiologists showed no significant changes when high SF components were displayed under flash conditions compared with free search, or when low SF components were viewed under unlimited time compared with flash. Conclusion: The current medical image perception model accurately predicted performance for non-chest radiologists; however, chest experts appear to benefit from high SF features during the global impression.
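A low-pass filtered image set of the kind described can be produced with a simple circular mask in the frequency domain; the cutoff fraction below is an arbitrary illustrative choice, not the one used in the study:

```python
import numpy as np

def low_pass(img, cutoff_frac=0.1):
    """Keep only spatial frequencies within cutoff_frac of the half-band
    (circular mask on the centered 2D FFT), then invert the transform."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # radial frequency distance
    mask = r <= cutoff_frac * min(h, w) / 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

Applying this to a radiograph removes fine detail (edges, small nodDavid-scale structure) while preserving the coarse luminance layout that the holistic-analysis model attributes to the global impression.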
Deep Learning in Label-free Cell Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
Deep Learning in Label-free Cell Classification
Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; ...
2016-03-15
Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.
Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning
2017-01-01
Differentiation of glioblastoma multiformes (GBMs) and lymphomas using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is challenging because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from the 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build an original feature vector, and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was determined to be 96.84%, which indicates that our method is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
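Step 2 of the pipeline, screening candidate features with a t-test before classification, can be sketched with a numpy-only Welch t statistic; the threshold of 2.0 is an illustrative choice, not a value from the paper:

```python
import numpy as np

def t_test_feature_screen(X_a, X_b, t_thresh=2.0):
    """Welch two-sample t statistic per feature column; keep features
    whose |t| exceeds t_thresh as candidates for the SVM feature vector."""
    X_a = np.asarray(X_a, dtype=float)
    X_b = np.asarray(X_b, dtype=float)
    mean_diff = X_a.mean(0) - X_b.mean(0)
    se = np.sqrt(X_a.var(0, ddof=1) / len(X_a) + X_b.var(0, ddof=1) / len(X_b))
    t = mean_diff / np.maximum(se, 1e-12)  # guard against zero variance
    keep = np.flatnonzero(np.abs(t) > t_thresh)
    return keep, t
```

The surviving feature columns would then be stacked into the "original feature vector" and passed to an SVM under leave-one-out cross-validation, as the abstract describes.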
Imaging black holes: past, present and future
NASA Astrophysics Data System (ADS)
Falcke, Heino
2017-12-01
This paper briefly reviews past, current, and future efforts to image black holes. Black holes seem like mystical objects, but they are an integral part of current astrophysics and are at the center of attempts to unify quantum physics and general relativity. Yet, nobody has ever seen a black hole. What do they look like? Initially, this question seemed more of an academic nature. However, this has changed over the past two decades. Observations and theoretical considerations suggest that the supermassive black hole, Sgr A*, in the center of our Milky Way is surrounded by a compact, foggy emission region radiating at and above 230 GHz. It has been predicted that the event horizon of Sgr A* should cast its shadow onto that emission region, which could be detectable with a global VLBI array of radio telescopes. In contrast to earlier pictures of black holes, that dark feature is not supposed to be due to a hole in the accretion flow, but would represent a true negative image of the event horizon. Currently, the global Event Horizon Telescope consortium is attempting to make such an image. In the future those images could be improved by adding more telescopes to the array, in particular at high sites in Africa. Ultimately, a space array at THz frequencies, the Event Horizon Imager, could produce much more detailed images of black holes. In combination with numerical simulations and precise measurements of the orbits of stars - ideally also of pulsars - these images will allow us to study black holes with unprecedented precision.
Karur, Gauri R; Robison, Sean; Iwanochko, Robert M; Morel, Chantal F; Crean, Andrew M; Thavendiranathan, Paaladinesh; Nguyen, Elsie T; Mathur, Shobhit; Wasim, Syed; Hanneman, Kate
2018-04-24
Purpose To compare left ventricular (LV) and right ventricular (RV) 3.0-T cardiac magnetic resonance (MR) imaging T1 values in Anderson-Fabry disease (AFD) and hypertrophic cardiomyopathy (HCM) and evaluate the diagnostic value of native T1 values beyond age, sex, and conventional imaging features. Materials and Methods For this prospective study, 30 patients with gene-positive AFD (37% male; mean age ± standard deviation, 45.0 years ± 14.1) and 30 patients with HCM (57% male; mean age, 49.3 years ± 13.5) were prospectively recruited between June 2016 and September 2017 to undergo cardiac MR imaging T1 mapping with a modified Look-Locker inversion recovery (MOLLI) acquisition scheme at 3.0 T (repetition time msec/echo time msec, 280/1.12; section thickness, 8 mm). LV and RV T1 values were evaluated. Statistical analysis included independent samples t test, receiver operating characteristic curve analysis, multivariable logistic regression, and likelihood ratio test. Results Septal LV, global LV, and RV native T1 values were significantly lower in AFD than in HCM (1161 msec ± 47 vs 1296 msec ± 55 [P < .001]; 1192 msec ± 52 vs 1268 msec ± 55 [P < .001]; and 1221 msec ± 54 vs 1271 msec ± 37 [P = .001], respectively). A septal LV native T1 cutoff point of 1220 msec or lower distinguished AFD from HCM with sensitivity of 97%, specificity of 93%, and accuracy of 95%. Septal LV native T1 values differentiated AFD from HCM after adjustment for age, sex, and conventional imaging features (odds ratio, 0.94; 95% confidence interval: 0.91, 0.98; P < .001). In a nested logistic regression model with age, sex, and conventional imaging features, model fit was significantly improved by the addition of septal LV native T1 values (χ2[df = 1] = 33.4; P < .001).
Conclusion Cardiac MR imaging native T1 values at 3.0 T are significantly lower in patients with AFD compared with those with HCM and provide independent and incremental diagnostic value beyond age, sex, and conventional imaging features. © RSNA, 2018.
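The reported cutoff (septal LV native T1 of 1220 msec or lower suggests AFD) is a simple threshold rule; a sketch of how sensitivity and specificity for such a cutoff could be computed follows, with AFD as the positive class. The T1 values in the usage test are illustrative numbers, not patient data:

```python
def classify_afd(t1_septal_ms, cutoff_ms=1220.0):
    """Threshold rule from the abstract: T1 at or below the cutoff flags AFD."""
    return t1_septal_ms <= cutoff_ms

def sens_spec(values, is_afd, cutoff_ms=1220.0):
    """Sensitivity and specificity of the cutoff over labeled T1 values."""
    tp = sum(1 for v, y in zip(values, is_afd) if y and classify_afd(v, cutoff_ms))
    fn = sum(1 for v, y in zip(values, is_afd) if y and not classify_afd(v, cutoff_ms))
    tn = sum(1 for v, y in zip(values, is_afd) if not y and not classify_afd(v, cutoff_ms))
    fp = sum(1 for v, y in zip(values, is_afd) if not y and classify_afd(v, cutoff_ms))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `cutoff_ms` over the observed range and plotting sensitivity against 1 − specificity reproduces the receiver operating characteristic analysis the paper reports.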
a Preliminary Work on Layout Slam for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image-based indoor layout features to initialize the system permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the publicly available RAWSEEDS dataset. The results show that the proposed method performs robustly, producing very small position and orientation errors.
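The unary-plus-binary cost described above can be sketched roughly as follows; the equal weights and the raw absolute-angle measures are assumptions for illustration, not the authors' formulation:

```python
def corner_match_cost(orient1, orient2, neighbor_angles1, neighbor_angles2,
                      w_unary=1.0, w_binary=1.0):
    """Hypothetical layout-corner matching cost: a unary term (orientation
    difference of the matched corners, in radians) plus a binary term
    (angle differences between directly connected layout corners)."""
    unary = abs(orient1 - orient2)
    binary = sum(abs(a - b)
                 for a, b in zip(neighbor_angles1, neighbor_angles2))
    return w_unary * unary + w_binary * binary
```

A matcher would evaluate this cost over candidate corner pairings and keep the assignment with the lowest total, combining the local (unary) and contextual (binary) evidence the abstract describes.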
NASA/NOAA Electronic Theater: 90 Minutes of Spectacular Visualization
NASA Technical Reports Server (NTRS)
Hasler, A. F.
2004-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Asheville and the conference auditorium. Zoom through the cosmos to Salt Lake City (SLC), site of the 2002 Winter Olympics, using 1 m IKONOS 'Spy Satellite' data. Contrast the 1972 Apollo 17 'Blue Marble' image of the Earth with the latest US and international global satellite images that allow us to view our planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, of storms and fires like Hurricane Isabel and the LA/San Diego firestorms of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on daily, annual, and 30-year time scales. See daily thunderstorms, the annual blooming of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers and on national and international network TV. Spectacular new global visualizations of the observed and simulated atmosphere and oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients, and the ocean blooms in response to El Niño/La Niña climate changes. The E-theater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See the global city lights, and the great NE US blackout of August 2003 observed by the 'night-vision' DMSP satellite.
The NASA/NOAA Electronic Theater
NASA Technical Reports Server (NTRS)
Hasler, A. F.
2003-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Cambridge and Harvard University. Zoom through the cosmos to Salt Lake City (SLC), site of the 2002 Winter Olympics, using 1 m IKONOS "Spy Satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and international global satellite images that allow us to view our planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, of storms and fires like Hurricane Isabel and the LA/San Diego firestorms of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on daily, annual, and 30-year time scales. See daily thunderstorms, the annual blooming of the northern hemisphere landmasses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers and on national and international network TV. Spectacular new global visualizations of the observed and simulated atmosphere and oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales, and fishermen. See how the ocean blooms in response to El Niño/La Niña climate changes. The E-theater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See the global city lights, and the great NE US blackout of August 2003 observed by the "night-vision" DMSP satellite.
Investigation of Carbon Fiber Architecture in Braided Composites Using X-Ray CT Inspection
NASA Technical Reports Server (NTRS)
Rhoads, Daniel J.; Miller, Sandi G.; Roberts, Gary D.; Rauser, Richard W.; Golovaty, Dmitry; Wilber, J. Patrick; Espanol, Malena I.
2017-01-01
During the fabrication of braided carbon fiber composite materials, process variations occur which affect the fiber architecture. Quantitative measurements of local and global fiber architecture variations are needed to determine the potential effect of process variations on mechanical properties of the cured composite. Although non-destructive inspection via X-ray CT imaging is a promising approach, difficulties in quantitative analysis of the data arise due to the similar densities of the material constituents. In an effort to gain more quantitative information about features related to fiber architecture, methods have been explored to improve the details that can be captured by X-ray CT imaging. Metal-coated fibers and thin veils are used as inserts to extract detailed information about fiber orientations and inter-ply behavior from X-ray CT images.
IKONOS geometric characterization
Helder, Dennis; Coan, Michael; Patrick, Kevin; Gaska, Peter
2003-01-01
The IKONOS spacecraft acquired images of Brookings, SD, a small city in east-central South Dakota, on July 3, 17, and 25 and August 13, 2001, and of the rural area around the EROS Data Center on May 22, June 30, and July 30, 2000. South Dakota State University (SDSU) evaluated the Brookings scenes and the USGS EROS Data Center (EDC) evaluated the other scenes. The images evaluated by SDSU utilized various natural objects and man-made features as identifiable targets randomly distributed throughout the scenes, while the images evaluated by EDC utilized pre-marked artificial points (panel points) to provide the best possible targets distributed in a grid pattern. Space Imaging provided products at different processing levels to each institution. For each scene, the pixel (line, sample) locations of the various targets were compared to field-observed, survey-grade Global Positioning System locations. Patterns of error distribution for each product were plotted, and a variety of statistical statements of accuracy are made. The IKONOS sensor also acquired 12 pairs of stereo images of globally distributed scenes between April 2000 and April 2001. For each scene, analysts at the National Imagery and Mapping Agency (NIMA) compared derived photogrammetric coordinates to their corresponding NIMA field-surveyed ground control points (GCPs). NIMA analysts determined horizontal and vertical accuracies by averaging the differences between the derived photogrammetric points and the field-surveyed GCPs for all 12 stereo pairs. Patterns of error distribution for each scene are presented.
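Comparing image-derived target locations against surveyed GPS positions reduces to simple offset statistics; a minimal sketch follows (coordinate units are whatever the survey uses, and the RMSE/bias pair is one common way to summarize such comparisons, not necessarily the exact statistics the institutions reported):

```python
import math

def horizontal_accuracy(derived_pts, surveyed_pts):
    """RMSE and mean (bias) offset between image-derived target positions
    and their field-surveyed counterparts, given as (x, y) pairs."""
    dx = [p[0] - g[0] for p, g in zip(derived_pts, surveyed_pts)]
    dy = [p[1] - g[1] for p, g in zip(derived_pts, surveyed_pts)]
    rmse = math.sqrt(sum(x * x + y * y for x, y in zip(dx, dy)) / len(dx))
    bias = (sum(dx) / len(dx), sum(dy) / len(dy))
    return rmse, bias
```

Plotting the (dx, dy) pairs directly gives the "patterns of error distribution" the abstract mentions; a nonzero bias indicates a systematic shift, while RMSE captures the overall scatter.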
NASA Astrophysics Data System (ADS)
Guldner, Ian H.; Yang, Lin; Cowdrick, Kyle R.; Wang, Qingfei; Alvarez Barrios, Wendy V.; Zellmer, Victoria R.; Zhang, Yizhe; Host, Misha; Liu, Fang; Chen, Danny Z.; Zhang, Siyuan
2016-04-01
Metastatic microenvironments are spatially and compositionally heterogeneous. This seemingly stochastic heterogeneity provides researchers great challenges in elucidating factors that determine metastatic outgrowth. Herein, we develop and implement an integrative platform that will enable researchers to obtain novel insights from intricate metastatic landscapes. Our two-segment platform begins with whole tissue clearing, staining, and imaging to globally delineate metastatic landscape heterogeneity with spatial and molecular resolution. The second segment of our platform applies our custom-developed SMART 3D (Spatial filtering-based background removal and Multi-chAnnel forest classifiers-based 3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous metastatic landscape constituents, from subcellular features to multicellular structures, within our large three-dimensional (3D) image datasets. Coupling whole tissue imaging of brain metastasis animal models with SMART 3D, we demonstrate the capability of our integrative pipeline to reveal and quantify volumetric and spatial aspects of brain metastasis landscapes, including diverse tumor morphology, heterogeneous proliferative indices, metastasis-associated astrogliosis, and vasculature spatial distribution. Collectively, our study demonstrates the utility of our novel integrative platform to reveal and quantify the global spatial and volumetric characteristics of the 3D metastatic landscape with unparalleled accuracy, opening new opportunities for unbiased investigation of novel biological phenomena in situ.
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to correct position errors between the PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, has the features of fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which is suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper was shown to improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy.
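The local step builds on the classic demons force; a minimal, unregularized 2D sketch of a single Thirion demons update is shown below (the basic intensity-driven form, not the GMI-augmented variant the paper uses, and without the smoothing pass a real implementation would apply between iterations):

```python
import numpy as np

def demons_update(fixed, moving, eps=1e-9):
    """One Thirion demons step: per-pixel displacement (u_x, u_y) driven by
    the intensity difference and the fixed-image gradient.
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)."""
    gy, gx = np.gradient(fixed.astype(float))  # axis0 = y, axis1 = x
    diff = moving.astype(float) - fixed.astype(float)
    denom = gx ** 2 + gy ** 2 + diff ** 2
    scale = np.where(denom > eps, diff / np.maximum(denom, eps), 0.0)
    return scale * gx, scale * gy
```

Iterating this update (with Gaussian smoothing of the displacement field) warps the moving image toward the fixed one; running it coarse-to-fine over an image pyramid gives the speed and robustness the abstract attributes to the multiresolution scheme.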
Jin, Shuo; Li, Dengwang; Yin, Yong
2013-01-01
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to correct position errors between the PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, has the features of fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which is suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper was shown to improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy. PACS numbers: 87.57.nj, 87.57.Q-, 87.57.uk PMID:23318381
NASA Astrophysics Data System (ADS)
Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Fei, Xianhan M.; Tuohy, Rachel E.; Armato, Samuel G.
2013-02-01
To determine how 19 image texture features may be altered by three image registration methods, "normal" baseline and follow-up computed tomography (CT) scans from 27 patients were analyzed. Nineteen texture feature values were calculated in over 1,000 32x32-pixel regions of interest (ROIs) randomly placed in each baseline scan. All three methods used demons registration to map baseline scan ROIs to anatomically matched locations in the corresponding transformed follow-up scan. For the first method, the follow-up scan transformation was subsampled to achieve a voxel size identical to that of the baseline scan. For the second method, the follow-up scan was transformed through affine registration to achieve global alignment with the baseline scan. For the third method, the follow-up scan was directly deformed to the baseline scan using demons deformable registration. Feature values in matched ROIs were compared using Bland-Altman 95% limits of agreement. For each feature, the range spanned by the 95% limits was normalized to the mean feature value to obtain the normalized range of agreement, nRoA. Wilcoxon signed-rank tests were used to compare nRoA values across features for the three methods. Significance for individual tests was adjusted using the Bonferroni method. nRoA was significantly smaller for affine-registered scans than for the resampled scans (p=0.003), indicating lower feature value variability between baseline and follow-up scan ROIs using this method. For both of these methods, however, nRoA was significantly higher than when feature values were calculated directly on demons-deformed follow-up scans (p<0.001). Across features and methods, nRoA values remained below 26%.
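The nRoA metric described above reduces to a short computation: the width of the Bland-Altman 95% limits of agreement (mean difference ± 1.96 SD) divided by the mean feature value. A sketch, with the function name and exact normalization as assumptions:

```python
import numpy as np

def nroa_percent(baseline, followup):
    """Normalized range of agreement: width of the Bland-Altman 95%
    limits of agreement over the mean feature value, in percent."""
    baseline = np.asarray(baseline, float)
    followup = np.asarray(followup, float)
    d = followup - baseline
    width = 2 * 1.96 * d.std(ddof=1)        # upper LoA minus lower LoA
    mean_value = np.abs(np.concatenate([baseline, followup]).mean())
    return 100.0 * width / mean_value
```

A smaller nRoA means a feature's value varies less between matched baseline and follow-up ROIs, which is how the paper ranks the three registration methods.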
Unmasking Language Lateralization in Human Brain Intrinsic Activity
McAvoy, Mark; Mitra, Anish; Coalson, Rebecca S.; d'Avossa, Giovanni; Keidel, James L.; Petersen, Steven E.; Raichle, Marcus E.
2016-01-01
Lateralization of function is a fundamental feature of the human brain as exemplified by the left hemisphere dominance of language. Despite the prominence of lateralization in the lesion, split-brain and task-based fMRI literature, surprisingly little asymmetry has been revealed in the increasingly popular functional imaging studies of spontaneous fluctuations in the fMRI BOLD signal (so-called resting-state fMRI). Here, we show the global signal, an often discarded component of the BOLD signal in resting-state studies, reveals a leftward asymmetry that maps onto regions preferential for semantic processing in left frontal and temporal cortex and the right cerebellum and a rightward asymmetry that maps onto putative attention-related regions in right frontal, temporoparietal, and parietal cortex. Hemispheric asymmetries in the global signal resulted from amplitude modulation of the spontaneous fluctuations. To confirm these findings obtained from normal, healthy, right-handed subjects in the resting-state, we had them perform 2 semantic processing tasks: synonym and numerical magnitude judgment and sentence comprehension. In addition to establishing a new technique for studying lateralization through functional imaging of the resting-state, our findings shed new light on the physiology of the global brain signal. PMID:25636911
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
High resolution Ceres HAMO atlas derived from Dawn FC images
NASA Astrophysics Data System (ADS)
Roatsch, Thomas; Kersten, Elke; Matz, Klaus-Dieter; Preusker, Frank; Scholten, Frank; Jaumann, Ralf; Raymond, Carol A.; Russell, Chris T.
2016-04-01
Introduction: NASA's Dawn spacecraft entered orbit around the dwarf planet Ceres in March 2015 to characterize the geology, elemental and mineralogical composition, topography, shape, and internal structure of Ceres. One of the major goals of the mission is a global mapping of Ceres. Data: The Dawn mission mapped Ceres in HAMO (High Altitude Mapping Orbit, 1475 km altitude) between August and October 2015. The framing camera took about 2,600 clear-filter images with a resolution of about 140 m/pixel during these cycles. The images were taken with different viewing angles and different illumination conditions. We selected images from one cycle (cycle #1) for the mosaicking process to have similar viewing and illumination conditions. Very minor gaps in the coverage were filled with a few images from cycle #2. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information about the Dawn orbit and attitude data and about the topography of the target. Both improved orientation and a high-resolution shape model are provided by stereo processing (bundle block adjustment) of the HAMO stereo image dataset [3]. Ceres's HAMO shape model was used for the calculation of the ray intersection points, while the map projection itself was done onto the reference sphere of Ceres with a radius of 470 km. The final step is the controlled mosaicking of all images into a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas was produced at a scale of 1:750,000 and consists of 15 tiles that conform to the quadrangle scheme proposed by Greeley and Batson [4]. A map scale of 1:750,000 guarantees mapping at the highest available Dawn resolution in HAMO. The individual tiles were extracted from the global mosaic and reprojected. Nomenclature: The Dawn team proposed 81 names for geological features.
By international agreement, craters must be named after gods and goddesses of agriculture and vegetation from world mythology, whereas other geological features must be named after agricultural festivals of the world. The nomenclature proposed by the Dawn team was approved by the IAU [http://planetarynames.wr.usgs.gov/] and is shown in Fig. 1. The entire Ceres HAMO atlas will be available to the public through the Dawn GIS web page [http://dawngis.dlr.de/atlas]. References: [1] Russell, C.T. and Raymond, C.A., Space Sci. Rev., 163, DOI 10.1007/s11214-011-9836-2; [2] Sierks, et al., 2011, Space Sci. Rev., 163, DOI 10.1007/s11214-011-9745-4; [3] Preusker, F. et al., this session; [4] Greeley, R. and Batson, G., 1990, Planetary Mapping, Cambridge University Press.
1990-02-14
Range: 1.4 to 2 million miles. These are enhanced versions of four views of Venus taken by Galileo's Solid State Imaging System. The pictures in the top row were taken about 4 and 5 days after closest approach, and those in the bottom row 6 days after closest approach, 2 hours apart. These show the faint Venusian cloud features very clearly. A high-pass filter was applied to bring out broader global variations in tone. The bright polar hoods are a well-known feature of Venus. Of particular interest to planetary atmospheric scientists are the complex cloud patterns near the equator, in the vicinity of the bright subsolar point, where convection is most prevalent.
NASA Technical Reports Server (NTRS)
2004-01-01
5 May 2004. Most middle-latitude craters on Mars have strange landforms on their floors. Often, the floors have pitted and convoluted features that lack simple explanation. In this case, the central part of the crater floor shown in this 2004 Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image bears some resemblance to the folded nature of a brain. Or not. It depends upon the 'eye of the beholder,' perhaps. The light-toned 'ring' around the 'brain' feature is more easily explained: windblown ripples and dunes. The crater occurs near 33.1°S, 91.2°W, and is illuminated from the upper left. The picture covers an area about 3 km (1.9 mi) across.
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each a measure of the maximum distance between a pair of feature point tracks.
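The trajectory similarity factor mentioned in the last sentence can be sketched as the maximum pointwise distance between two feature point tracks sampled at the same frames. The function name and the exact distance definition are assumptions for illustration:

```python
import numpy as np

def trajectory_similarity(track_a, track_b):
    """Maximum Euclidean distance between two feature point tracks,
    each given as an array of (x, y) positions at the same frames.
    Small values suggest the tracks belong to one coherent motion region."""
    a = np.asarray(track_a, float)   # shape (n_frames, 2)
    b = np.asarray(track_b, float)
    return float(np.linalg.norm(a - b, axis=1).max())
```

Two tracks moving rigidly together keep this value near their fixed offset, while tracks on different objects drift apart and score high.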
Internet-based videoconferencing and data collaboration for the imaging community.
Poon, David P; Langkals, John W; Giesel, Frederik L; Knopp, Michael V; von Tengg-Kobligk, Hendrik
2011-01-01
Internet protocol-based digital data collaboration with videoconferencing is not yet well utilized in the imaging community. Videoconferencing, combined with proven low-cost solutions, can provide reliable functionality and speed, which will improve rapid, time-saving, and cost-effective communications, within large multifacility institutions or globally with the unlimited reach of the Internet. The aim of this project was to demonstrate the implementation of a low-cost hardware and software setup that facilitates global data collaboration using WebEx and GoToMeeting Internet protocol-based videoconferencing software. Both products' features were tested and evaluated for feasibility across 2 different Internet networks, including a video quality and recording assessment. Cross-compatibility with an Apple OS is also noted in the evaluations. Departmental experiences with WebEx pertaining to clinical trials are also described. Real-time remote presentation of dynamic data was generally consistent across platforms. A reliable and inexpensive hardware and software setup for complete Internet-based data collaboration/videoconferencing can be achieved.
Mars Global Coverage by Context Camera on MRO
2017-03-29
In early 2017, after more than a decade of observing Mars, the Context Camera (CTX) on NASA's Mars Reconnaissance Orbiter (MRO) surpassed 99 percent coverage of the entire planet. This mosaic shows that global coverage. No other camera has ever imaged so much of Mars in such high resolution. The mosaic offers a resolution that enables zooming in for more detail of any region of Mars. It is still far from the full resolution of individual CTX observations, which can reveal the shapes of features smaller than the size of a tennis court. As of March 2017, the Context Camera has taken about 90,000 images since the spacecraft began examining Mars from orbit in late 2006. In addition to covering 99.1 percent of the surface of Mars at least once, this camera has observed more than 60 percent of Mars more than once, checking for changes over time and providing stereo pairs for 3-D modeling of the surface. http://photojournal.jpl.nasa.gov/catalog/PIA21488
Seismic tomography; theory and practice
Iver, H.M.; Hirahara, Kazuro
1993-01-01
Although highly theoretical and computer-orientated, seismic tomography has created spectacular images of anomalies within the Earth with dimensions of thousands of kilometers down to a few tens of meters. These images have enabled Earth scientists working in diverse areas to attack fundamental problems relating to the deep dynamical processes within our planet. Additionally, this technique is being used extensively to study the Earth's hazardous regions, such as earthquake fault zones and volcanoes, as well as features beneficial to man such as oil- or mineral-bearing structures. This book has been written by world experts and describes the theories, experimental and analytical procedures, and results of applying seismic tomography from global to purely local scales. It represents the collective global perspective on the state of the art and focusses not only on the theoretical and practical aspects, but also on the uses for hydrocarbon, mineral and geothermal exploitation. Students and researchers in the Earth sciences, and research and exploration geophysicists, should find this a useful, practical reference book for all aspects of their work.
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, however, leading to discordance in spot counts when hybridization spots overlap. To address counting errors caused by overlapping spots, this study proposes a classification method based on a Gaussian mixture model (GMM). The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification. Using a random forest classifier, the results show that the proposed method detects closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods, and yields a significant improvement in spot counting accuracy.
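The model-selection idea above can be illustrated by fitting mixtures with increasing component counts to the pixel coordinates of a candidate spot region and letting BIC choose the number of underlying spots. This is a sketch with scikit-learn; the paper's full pipeline (AIC/BIC as features into a random forest) is not reproduced, and the function name is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def count_spots(points, max_spots=3):
    """Estimate the number of (possibly overlapping) spots in a region
    by fitting GMMs with k = 1..max_spots components to the 2-D pixel
    coordinates and picking the k that minimizes BIC."""
    bics = []
    for k in range(1, max_spots + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(points)
        bics.append(gm.bic(points))
    return int(np.argmin(bics)) + 1
```

On a region containing two partially separated Gaussian blobs, BIC favors two components even when a connected-component segmentation would merge them into one.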
Quantitative Machine Learning Analysis of Brain MRI Morphology throughout Aging.
Shamir, Lior; Long, Joe
2016-01-01
While cognition is clearly affected by aging, it is unclear whether the process of brain aging is driven solely by accumulation of environmental damage, or involves biological pathways. We applied quantitative image analysis to profile the alteration of brain tissues during aging. A dataset of 463 brain MRI images taken from a cohort of 416 subjects was analyzed using a large set of low-level numerical image content descriptors computed from the entire brain MRI images. The correlation between the numerical image content descriptors and age was computed, and the alterations of the brain tissues during aging were quantified and profiled using machine learning. The comprehensive set of global image content descriptors provides a high Pearson correlation of ~0.9822 with chronological age, indicating that the machine learning analysis of global features is sensitive to the age of the subjects. Profiling of the predicted age shows several periods of mild change, separated by shorter periods of more rapid alteration. The periods with the most rapid changes were around the ages of 55 and 65. The results show that the process of brain aging is not linear, and exhibits short periods of rapid aging separated by periods of milder change. These results are in agreement with patterns observed in cognitive decline, mental health status, and general human aging, suggesting that brain aging might not be driven solely by accumulation of environmental damage. Code and data used in the experiments are publicly available.
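The core measurement here, correlating a numerical image descriptor with chronological age, reduces to a Pearson coefficient per descriptor. A minimal sketch (the function and variable names are hypothetical, not the paper's code):

```python
import numpy as np

def descriptor_age_correlation(descriptor_values, ages):
    """Pearson correlation between one image content descriptor,
    evaluated across subjects, and the subjects' chronological ages."""
    return float(np.corrcoef(descriptor_values, ages)[0, 1])
```

Ranking descriptors by the absolute value of this coefficient is one simple way to find which global features track aging most strongly.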
Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline
2013-01-01
We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most registration algorithms proposed for DSA have been designed for peripheral and cerebral angiography images, in which we mainly deal with global rigid motions. These algorithms did not yield good results when applied to coronary angiography images because of the complex nonrigid motions that exist in this type of angiography image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but they carry a high computational cost which makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. The algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026
Retinal vessel segmentation on SLO image
Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S.
2010-01-01
A scanning laser ophthalmoscopy (SLO) image, taken from optical coherence tomography (OCT), usually has lower global/local contrast and more noise compared to the traditional retinal photograph, which makes vessel segmentation challenging. A hybrid algorithm is proposed to efficiently solve these problems by fusing several designed methods, taking the advantages of each method and reducing the error measurements. The algorithm has several steps consisting of image preprocessing, thresholding probe and weighted fusing. Four different methods are first designed to transform the SLO image into feature response images by taking different combinations of matched filter, contrast enhancement and mathematical morphology operators. A thresholding probe algorithm is then applied to those response images to obtain four vessel maps. Weighted majority opinion is used to fuse these vessel maps and generate a final vessel map. The experimental results showed that the proposed hybrid algorithm could successfully segment the blood vessels on SLO images, detecting the major and small vessels while suppressing noise. The algorithm showed substantial potential in various clinical applications. The use of this method can also be extended to medical image registration based on blood vessel location. PMID:19163149
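The final fusion step, combining several binary vessel maps by weighted majority opinion, can be sketched as follows; the weights and the half-of-total-weight decision threshold are illustrative assumptions:

```python
import numpy as np

def fuse_vessel_maps(maps, weights):
    """Weighted majority vote over binary vessel maps of equal shape:
    a pixel is labeled vessel when its weighted votes reach half of
    the total weight."""
    maps = np.asarray(maps, float)       # shape (n_maps, H, W)
    w = np.asarray(weights, float)       # one weight per map
    score = np.tensordot(w, maps, axes=1)
    return score >= 0.5 * w.sum()
```

With equal weights this reduces to a plain majority vote, which already suppresses noise that appears in only one of the four response maps.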
Visions of Our Planet's Atmosphere, Land & Oceans - ETheater Presentation
NASA Technical Reports Server (NTRS)
Hasler, F.
2000-01-01
The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat7, & new Terra which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on National & International Network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super-sized Universe Theater panoramic screen.
NASA Astrophysics Data System (ADS)
Coelho, L. P.; Colin, S.; Sunagawa, S.; Karsenti, E.; Bork, P.; Pepperkok, R.; de Vargas, C.
2016-02-01
Protists are responsible for much of the diversity in the eukaryotic kingdom and are crucial to several biogeochemical processes of global importance (e.g., the carbon cycle). Recent global investigations of these organisms have relied on sequence-based approaches. These methods do not, however, capture the complex functional morphology of these organisms, nor can they typically capture phenomena such as interactions (except indirectly through statistical means). Direct imaging of these organisms can therefore provide a valuable complement to sequencing and, when performed quantitatively, provide measures of structures and interaction patterns which can then be related back to sequence-based measurements. Towards this end, we developed a framework, environmental high-content fluorescence microscopy (e-HCFM), which can be applied to environmental samples composed of mixed communities. This strategy is based on general-purpose dyes that stain major structures in eukaryotes. Samples are imaged using scanning confocal microscopy, resulting in a three-dimensional image stack. High throughput can be achieved using automated microscopy and computational analysis. Standard bioimage informatics segmentation methods combined with feature computation and machine learning result in automatic taxonomic assignments for the imaged objects, in addition to several biochemically relevant measurements (such as biovolumes and fluorescence estimates) per organism. We provide results on 174 image acquisitions from Tara Oceans samples, which cover organisms from 5 to 180 microns (82 samples in the 5-20 micron fraction, 96 in the 20-180 micron fraction). We show a validation of the approach on technical grounds (demonstrating the high accuracy of automated classification) and provide results obtained from image analysis and from integration with other data, such as associated environmental parameters measured in situ, as well as perspectives on integration with sequence information.
A Presentation of Spectacular Visualizations
NASA Technical Reports Server (NTRS)
Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)
2000-01-01
The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat7, & new Terra which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on National & International Network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super-sized Universe Theater panoramic screen.
NASA/NOAA/AMS Earth Science Electronic Theatre
NASA Technical Reports Server (NTRS)
Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)
2001-01-01
The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat 7, & new Terra which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on National & International Network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super-sized Universe Theater panoramic screen.
NASA Technical Reports Server (NTRS)
Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)
2000-01-01
The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies, including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat7, & new Terra which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on National & International Network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortices and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super-sized Universe Theater panoramic screen.
NASA Astrophysics Data System (ADS)
Mayer, D. P.; Kite, E. S.
2016-12-01
Sandblasting, aeolian infilling, and wind deflation all obliterate impact craters on Mars, complicating the use of crater counts for chronology, particularly on sedimentary rock surfaces. However, crater counts on sedimentary rocks can be exploited to constrain wind erosion rates. Relatively small, shallow craters are preferentially obliterated as a landscape undergoes erosion, so the size-frequency distribution of impact craters in a landscape undergoing steady exhumation will develop a shallower power-law slope than a simple production function. Estimating erosion rates is important for several reasons: (1) Wind erosion is a source of mass for the global dust cycle, so the global dust reservoir will disproportionately sample fast-eroding regions; (2) The pace and pattern of recent wind erosion is a sorely-needed constraint on models of the sculpting of Mars' sedimentary-rock mounds; (3) Near-surface complex organic matter on Mars is destroyed by radiation in <10^8 years, so high rates of surface exhumation are required for preservation of near-surface organic matter. We use crater counts from 18 HiRISE images over sedimentary rock deposits as the basis for estimating erosion rates. Each image was counted by ≥3 analysts and only features agreed on by ≥2 analysts were included in the erosion rate estimation. Erosion rates range from 0.1-0.2 μm/yr across all images. These rates represent an upper limit on surface erosion by landscape lowering. At the conference we will discuss the within- and between-image variability of erosion rates and their implications for recent geological processes on Mars.
Geology and insolation-driven climatic history of Amazonian north polar materials on Mars
Tanaka, K.L.
2005-01-01
Mariner 9 and Viking spacecraft images revealed that the polar regions of Mars, like those of Earth, record the planet's climate history. However, fundamental uncertainties regarding the materials, features, ages and processes constituting the geologic record remained. Recently acquired Mars Orbiter Laser Altimeter data and Mars Orbiter Camera high-resolution images from the Mars Global Surveyor spacecraft and moderately high-resolution Thermal Emission Imaging System visible images from the Mars Odyssey spacecraft permit more comprehensive geologic and climatic analyses. Here I map and show the history of geologic materials and features in the north polar region that span the Amazonian period (~3.0 Gyr ago to present). Erosion and redeposition of putative circumpolar mud volcano deposits (formed by eruption of liquefied, fine-grained material) led to the formation of an Early Amazonian polar plateau consisting of dark layered materials. Crater ejecta superposed on pedestals indicate that a thin mantle was present during most of the Amazonian, suggesting generally higher obliquity and insolation conditions at the poles than at present. Brighter polar layered deposits rest unconformably on the dark layers and formed mainly during lower obliquity over the past 4-5 Myr (ref. 20). Finally, the uppermost layers post-date the latest downtrend in obliquity <20,000 years ago. © 2005 Nature Publishing Group.
Geology and insolation-driven climatic history of Amazonian north polar materials on Mars.
Tanaka, Kenneth L
2005-10-13
Mariner 9 and Viking spacecraft images revealed that the polar regions of Mars, like those of Earth, record the planet's climate history. However, fundamental uncertainties regarding the materials, features, ages and processes constituting the geologic record remained. Recently acquired Mars Orbiter Laser Altimeter data and Mars Orbiter Camera high-resolution images from the Mars Global Surveyor spacecraft and moderately high-resolution Thermal Emission Imaging System visible images from the Mars Odyssey spacecraft permit more comprehensive geologic and climatic analyses. Here I map and show the history of geologic materials and features in the north polar region that span the Amazonian period (approximately 3.0 Gyr ago to present). Erosion and redeposition of putative circumpolar mud volcano deposits (formed by eruption of liquefied, fine-grained material) led to the formation of an Early Amazonian polar plateau consisting of dark layered materials. Crater ejecta superposed on pedestals indicate that a thin mantle was present during most of the Amazonian, suggesting generally higher obliquity and insolation conditions at the poles than at present. Brighter polar layered deposits rest unconformably on the dark layers and formed mainly during lower obliquity over the past 4-5 Myr (ref. 20). Finally, the uppermost layers post-date the latest downtrend in obliquity <20,000 years ago.
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method outperforms state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.
Evidence for recent groundwater seepage and surface runoff on Mars.
Malin, M C; Edgett, K S
2000-06-30
Relatively young landforms on Mars, seen in high-resolution images acquired by the Mars Global Surveyor Mars Orbiter Camera since March 1999, suggest the presence of sources of liquid water at shallow depths beneath the martian surface. Found at middle and high martian latitudes (particularly in the southern hemisphere), gullies within the walls of a very small number of impact craters, south polar pits, and two of the larger martian valleys display geomorphic features that can be explained by processes associated with groundwater seepage and surface runoff. The relative youth of the landforms is indicated by the superposition of the gullies on otherwise geologically young surfaces and by the absence of superimposed landforms or cross-cutting features, including impact craters, small polygons, and eolian dunes. The limited size and geographic distribution of the features argue for constrained source reservoirs.
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, including variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both by goodness-of-fit measures on the original data set and by a cross-validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
Phelan-McDermid syndrome presenting with developmental delays and facial dysmorphisms.
Kim, Yoon-Myung; Choi, In-Hee; Kim, Jun Suk; Kim, Ja Hye; Cho, Ja Hyang; Lee, Beom Hee; Kim, Gu-Hwan; Choi, Jin-Ho; Seo, Eul-Ju; Yoo, Han-Wook
2016-11-01
Phelan-McDermid syndrome is a rare genetic disorder caused by a terminal or interstitial deletion of chromosome 22q13.3. Patients with this syndrome usually have global developmental delay, hypotonia, and speech delays. Several putative genes, such as SHANK3, RAB, RABL2B, and IB2, are responsible for the neurological features. This study describes the clinical features and outcomes of Korean patients with Phelan-McDermid syndrome. Two patients showing global developmental delay, hypotonia, and speech delay were diagnosed with Phelan-McDermid syndrome via chromosome analysis, fluorescence in situ hybridization, and multiplex ligation-dependent probe amplification analysis. Brain magnetic resonance imaging of Patients 1 and 2 showed delayed myelination and severe communicating hydrocephalus, respectively. Electroencephalography in Patient 2 showed high-amplitude spike discharges from the left frontotemporoparietal area, but neither patient developed seizures. Kidney ultrasonography of the two patients revealed multicystic kidney disease and pelviectasis, respectively. Patient 2 experienced recurrent respiratory infections, and chest computed tomography findings demonstrated laryngotracheomalacia and bronchial narrowing. He subsequently died of heart failure after a ventriculoperitoneal shunt operation at 5 months of age. Patient 1, who is currently 20 months old, has been undergoing rehabilitation therapy. However, global developmental delay was noted, as determined using the Korean Infant and Child Development test, the Denver developmental test, and the Bayley developmental test. This report describes the clinical features, outcomes, and molecular genetic characteristics of two Korean patients with Phelan-McDermid syndrome.
Global Precipitation at One-Degree Daily Resolution From Multi-Satellite Observations
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Morrissey, Mark M.; Curtis, Scott; Joyce, Robert; McGavock, Brad; Susskind, Joel
2000-01-01
The One-Degree Daily (1DD) technique is described for producing globally complete daily estimates of precipitation on a 1 deg x 1 deg lat/long grid from currently available observational data. Where possible (40 deg N-40 deg S), the Threshold-Matched Precipitation Index (TMPI) provides precipitation estimates in which the 3-hourly infrared brightness temperatures (IR Tb) are thresholded and all "cold" pixels are given a single precipitation rate. This approach is an adaptation of the Geostationary Operational Environmental Satellite (GOES) Precipitation Index (GPI), but for the TMPI the IR Tb threshold and conditional rain rate are set locally by month from Special Sensor Microwave/Imager (SSM/I)-based precipitation frequency and the Global Precipitation Climatology Project (GPCP) satellite-gauge (SG) combined monthly precipitation estimate, respectively. At higher latitudes the 1DD features a rescaled daily Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) precipitation estimate. The frequency of rain days in the TOVS is scaled down to match that in the TMPI at the data boundaries, and the resulting non-zero TOVS values are scaled locally to sum to the SG (which is a globally complete monthly product). The time series of the daily 1DD global images shows good continuity in time and across the data boundaries. Various examples are shown to illustrate uses. Validation for individual grid-box values shows a very high root-mean-square error, but it improves quickly when users perform time/space averaging according to their own requirements.
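The TMPI's threshold-matching idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the operational GPCP code; the function name, the inputs, and the use of an empirical quantile to realize the frequency matching are choices made here for clarity:

```python
import numpy as np

def tmpi_estimate(ir_tb, rain_frequency, conditional_rate):
    """Sketch of the Threshold-Matched Precipitation Index (TMPI) idea.

    ir_tb            : 1-D array of IR brightness temperatures (K) in one grid box
    rain_frequency   : target fraction of raining pixels (taken in the paper
                       from SSM/I-based precipitation frequency)
    conditional_rate : single rain rate assigned to every "cold" pixel (set in
                       the paper so monthly totals match the satellite-gauge product)
    """
    ir_tb = np.asarray(ir_tb, dtype=float)
    # Threshold matching: pick the Tb below which exactly `rain_frequency`
    # of the pixels fall, so the IR rain frequency matches the microwave one.
    threshold = np.quantile(ir_tb, rain_frequency)
    rain = np.where(ir_tb < threshold, conditional_rate, 0.0)
    return threshold, rain

# Hypothetical grid box: two cold (cloud-top) pixels out of eight.
tb = np.array([190.0, 200.0, 210.0, 250.0, 270.0, 280.0, 290.0, 295.0])
thr, rates = tmpi_estimate(tb, rain_frequency=0.25, conditional_rate=24.0)
```

With these toy numbers the threshold lands at 207.5 K, flagging exactly 2 of the 8 pixels (25%) as raining, each at the single conditional rate.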
ERIC Educational Resources Information Center
Helton, William S.; Hayrynen, Lauren; Schaeffer, David
2009-01-01
Vision researchers have investigated the differences between global and local feature perception. No one has, however, examined the role of global and local feature discrimination in sustained attention tasks. In this experiment participants performed a sustained attention task requiring either global or local letter target discriminations or…
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 24 May 2002) The Science This image is of a portion of Maunder Crater located at about 49 S and 358 W (2 E). There are a number of interesting features in this image. The lower left portion of the image shows a series of barchan dunes that are traveling from right to left. The sand does not always form dunes, as can be seen in the dark and diffuse areas surrounding the dune field. The other interesting features in this image are the gullies that can be seen streaming down from just beneath a number of sharp ridgelines in the upper portion of the image. These gullies were first seen by the MOC camera on the MGS spacecraft, and it is thought that they formed by groundwater leaking out of the rock layers on the walls of craters. The water runs down the slope and forms the fluvial features seen in the image. Other researchers think that these features could be formed by other fluids, such as CO2. These features are typically seen on south-facing slopes in the southern hemisphere, though this image has gullies on north-facing slopes as well. The Story Little black squigglies seem to worm their way down the left-hand side of this image. These land features are called barchan (crescent-shaped) dunes. Barchan dunes are found in sandy deserts on Earth, so it's no surprise the Martian wind makes them a common sight on the red planet too. They were first named by the Russian scientist Alexander von Middendorf, who studied the inland desert dunes of Turkistan. The barchan dunes in this image occur in the basin of Maunder crater on Mars, and are traveling from right to left. The sand does not always form dunes, though, as can be seen in the dark areas of scattered sand surrounding the dune field. Look for the streaming gullies that appear just beneath a number of sharp ridgelines in the upper portion of the image. These gullies were first discovered by the Mars Orbiter Camera on the Mars Global Surveyor spacecraft.
While most crater gullies are found on south-facing slopes in the southern hemisphere of Mars, you can see from this image that they occur on north-facing slopes as well. Comparing where gullies appear will help scientists understand more about the conditions under which they form. Some researchers are really excited about gullies on Mars, because they believe these surface tracings might be signs that groundwater has leaked out of the rock layers on the walls of craters. If that's true, the water runs down the slope and forms the flow-like features seen in the image. Scientists can get into some really hot debates, however. Other researchers think that these features could be formed by other fluids, such as carbon dioxide. No one knows for sure, so a lot of heads will be studiously bent over these images, continuing to study them closely. The neat thing about science is that the way you get closer to the truth is to hypothesize and then test, test, and test again. Debate for scientists is seen as an essential means of making sure that no wrong assumptions are made and that no important factor is left out. It's what keeps the field interesting and dynamic . . . and sometimes quite loud and entertaining!
Torres, Ulysses S; Portela-Oliveira, Eduardo; Braga, Fernanda Del Campo Braojos; Werner, Heron; Daltro, Pedro Augusto Nascimento; Souza, Antônio Soares
2015-12-01
Ventral body wall defects (VBWDs) are one of the main categories of human congenital malformations, representing a wide and heterogeneous group of defects sharing a common feature, that is, herniation of one or more viscera through a defect in the anterior body wall. Gastroschisis and omphalocele are the 2 most common congenital VBWDs. Other uncommon anomalies include ectopia cordis and pentalogy of Cantrell, limb-body wall complex, and bladder and cloacal exstrophy. Although VBWDs are associated with multiple abnormalities with distinct embryological origins and that may affect virtually any system organs, at least in relation to anterior body wall defects, they are thought (except for omphalocele) to share a common embryologic mechanism, that is, a failure involving the lateral body wall folds responsible for closing the thoracic, abdominal, and pelvic portions of the ventral body wall during the fourth week of development. Additionally, many of the principles of diagnosis and management are similar for these conditions. Fetal ultrasound (US) in prenatal care allows the diagnosis of most of such defects with subsequent opportunities for parental counseling and optimal perinatal management. Fetal magnetic resonance imaging may be an adjunct to US, providing global and detailed anatomical information, assessing the extent of defects, and also helping to confirm the diagnosis in equivocal cases. Prenatal imaging features of VBWDs may be complex and challenging, often requiring from the radiologist a high level of suspicion and familiarity with the imaging patterns. Because an appropriate management is dependent on an accurate diagnosis and assessment of defects, radiologists should be able to recognize and distinguish between the different VBWDs and their associated anomalies. In this article, we review the relevant embryology of VBWDs to facilitate understanding of the pathologic anatomy and diagnostic imaging approach. 
Features will be illustrated with prenatal US and magnetic resonance imaging and correlated with postnatal and clinical imaging. Copyright © 2015 Elsevier Inc. All rights reserved.
Exhuming Crater in Northeast Arabia
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-563, 3 December 2003
The upper crust of Mars is layered, and interbedded with these layers are old, filled and buried meteor impact craters. In a few places on Mars, such as Arabia Terra, erosion has re-exposed some of the filled and buried craters. This October 2003 Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an example. The larger circular feature was once a meteor crater. It was filled with sediment, then buried beneath younger rocks. The smaller circular feature is a younger impact crater that formed in the surface above the rocks that buried the large crater. Later, erosion removed all of the material that covered the larger, buried crater, except in the location of the small crater. This pair of martian landforms is located near 17.6°N, 312.8°W. The image covers an area 3 km (1.9 mi) wide and is illuminated from the lower left.
Use of MODIS Cloud Top Pressure to Improve Assimilation Yields of AIRS Radiances in GSI
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi
2014-01-01
Radiances from hyperspectral sounders such as the Atmospheric Infrared Sounder (AIRS) are routinely assimilated both globally and regionally in operational numerical weather prediction (NWP) systems using the Gridpoint Statistical Interpolation (GSI) data assimilation system. However, only thinned, cloud-free radiances from a 281-channel subset are used, so the overall percentage of these observations that are assimilated is on the order of 5%. Cloud checks are performed within GSI to determine which channels peak above cloud top; inaccuracies may lead to fewer assimilated radiances or the introduction of biases from cloud-contaminated radiances. The relatively large footprint of AIRS may not optimally represent small-scale cloud features that might be better resolved by higher-resolution imagers such as the Moderate Resolution Imaging Spectroradiometer (MODIS). The objective of this project is to "swap" the MODIS-derived cloud top pressure (CTP) for that designated by the AIRS-only quality control within GSI to test the hypothesis that better representation of cloud features will result in higher assimilated radiance yields and improved forecasts.
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2012-01-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
NASA Astrophysics Data System (ADS)
Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi
Large-area seafloor imaging will bring significant benefits to various fields such as academics, resource survey, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and variations in color and lighting from each image, and then performs ortho-rectification based on the camera pose and seafloor topography estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method (Pizarro et al., 2003). Using the two types of information yields an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, covering unique features of the field such as bacteria mats and tubeworm colonies.
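The abstract does not give the alignment math, but the core idea of balancing navigation constraints against visual ones can be illustrated with a minimal inverse-variance fusion of two offset estimates. The function and its parameters are hypothetical, not the authors' formulation:

```python
import numpy as np

def fuse_offsets(nav_offset, vis_offset, nav_var, vis_var):
    """Fuse a navigation-predicted and a visually estimated 2-D image offset.

    Each estimate is weighted by the inverse of its variance, so when visual
    matching is unreliable (few visual cues, high vis_var) the result stays
    close to the navigation prediction, and vice versa.
    """
    w_nav, w_vis = 1.0 / nav_var, 1.0 / vis_var
    nav = np.asarray(nav_offset, dtype=float)
    vis = np.asarray(vis_offset, dtype=float)
    return (w_nav * nav + w_vis * vis) / (w_nav + w_vis)

# Visual matching four times more certain than navigation:
# the fused offset leans toward the visual estimate.
fused = fuse_offsets((10.0, 0.0), (12.0, 2.0), nav_var=4.0, vis_var=1.0)
```

With these toy numbers the fused offset is (11.6, 1.6), i.e. 80% of the way from the navigation prediction to the visual estimate.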
Texture and color features for tile classification
NASA Astrophysics Data System (ADS)
Baldrich, Ramon; Vanrell, Maria; Villanueva, Juan J.
1999-09-01
In this paper we present the results of a preliminary computer vision system to classify the production of a ceramic tile industry. We focus on the classification of a specific type of tiles whose production can be affected by external factors, such as humidity, temperature, and the origin of clays and pigments. Variations in these uncontrolled factors provoke small differences in the color and the texture of the tiles that make it necessary to classify the entire production. A constant and non-subjective classification would avoid returns from customers and unnecessary stock fragmentation. The aim of this work is to simulate human behavior on this classification task by extracting a set of features from tile images. These features are induced by definitions from experts. To compute them we need to mix color and texture information and to define global and local measures. In this work, we do not seek a general texture-color representation; we only deal with textures formed by non-oriented colored blobs randomly distributed. New samples are classified using discriminant analysis functions derived from tile samples of known class. The last part of the paper is devoted to explaining the correction of acquired images to compensate for temporal and geometric illumination changes.
The spectrum of dermatoscopic patterns in blue nevi.
Di Cesare, Antonella; Sera, Francesco; Gulia, Andrea; Coletti, Gino; Micantonio, Tamara; Fargnoli, Maria Concetta; Peris, Ketty
2012-08-01
Blue nevi are congenital or acquired, dermal dendritic melanocytic proliferations that can simulate melanocytic and nonmelanocytic lesions including melanoma, cutaneous metastasis of melanoma, Spitz/Reed nevi, and basal cell carcinoma. We sought to investigate global and local dermatoscopic patterns of blue nevi compared with melanomas and basal cell carcinomas. We retrospectively analyzed global and local features in 95 dermatoscopic images of blue nevi and in 190 melanomas and basal cell carcinomas that were selected as control lesions on the basis of similar pigmentation. Lesion pigmentation was classified as monochromatic, dichromatic, or multichromatic. A global pattern characterized by homogeneous pigmentation was observed in all of 95 (100%) blue nevi. Eighty of 95 (84.2%) blue nevi presented a homogeneous pattern consisting of one color (blue, black, or brown) or two colors (blue-brown, blue-gray, or blue-black). Fifteen of 95 (15.8%) blue nevi had a multichromatic (blue, gray, black, brown, and/or red) pigmentation. In all, 47 of 95 (49.5%) blue nevi were characterized by pigmentation in the absence of pigment network or any other local dermatoscopic features. The remaining 48 of 95 (50.5%) blue nevi showed local dermatoscopic patterns including whitish scarlike depigmentation, dots/globules, vascular pattern, streaks, and networklike pattern. The study was retrospective and involved only Caucasian people of Italian origin. The characteristic feature of blue nevi is a homogeneous pigmentation that is blue, blue-gray, blue-brown, or blue-black. We showed that a wide spectrum of local dermatoscopic features (whitish scarlike depigmentation, dots/globules, peripheral streaks or vessels) may also be present. In such cases, clinical and dermatoscopic distinction from melanoma or nonmelanocytic lesions may be difficult or impossible, and surgical excision is necessary. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Curtis, Scott; Huffman, George; Bolvin, David; Nelkin, Eric; Einaudi, Franco (Technical Monitor)
2001-01-01
This paper gives an overview of the analysis of global precipitation over the last few decades and the impact of the new TRMM precipitation observations. The 20+ year, monthly, globally complete precipitation analysis of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP) is used to study global and regional variations and trends and is compared to the much shorter TRMM (Tropical Rainfall Measuring Mission) tropical data set. The GPCP data set shows no significant trend in global precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. The global trend analysis must be interpreted carefully, however, because the inhomogeneity of the data set makes detecting a small signal very difficult, especially over this relatively short period. The relation between global (and tropical) total precipitation and ENSO events is quantified, with no significant signal when land and ocean are combined. Identifying regional trends in precipitation may be more practical. From 1979 to 2000 the tropics show an ENSO-like pattern of regional rainfall trends with features of both El Nino and La Nina. This feature is related to a possible trend in the frequency of ENSO events (either El Nino or La Nina) over the past 20 years. Monthly anomalies of precipitation are related to ENSO variations with clear signals extending into middle and high latitudes of both hemispheres. The El Nino and La Nina mean anomalies are near mirror images of each other and when combined produce an ENSO signal with significant spatial continuity over large distances. A number of the features are shown to extend into high latitudes. Positive anomalies extend in the Southern Hemisphere (S.H.) from the Pacific southeastward across Chile and Argentina into the south Atlantic Ocean. In the Northern Hemisphere (N.H.) the counterpart feature extends across the southern U.S.
and Atlantic Ocean into Europe. Further to the west a negative anomaly extends southeastward again from the Maritime Continent across the South Pacific and through the Drake Passage. In the Southern Hemisphere an anomaly feature is shown to spiral into the Antarctica land mass. The extremes of ENSO-related anomalies are also examined and indicate that globally, during both El Nino and La Nina, more extremes of precipitation (both wet and dry) occur than during the "neutral" regime, with the El Nino regime showing larger magnitudes. The distribution is different for the globe as a whole and when the area is restricted to just land. The recent (1998-present) TRMM observations are compared with the similar period of GPCP analyses with very good agreement in terms of pattern and generally good agreement with regard to magnitude. However, there still are differences among the individual TRMM products using passive and active microwave techniques and these need to be resolved before longer-term products such as the GPCP analyses can be validated.
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purpose of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion / reactivation, the last of which effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.
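The GA/hill-climber structure described above can be sketched on a toy problem. The cost function here is a stand-in (a negative Fisher separability ratio of a 1-D linear recombination) rather than the paper's tissue-classification error, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(w, X, y):
    # Toy cost: negative Fisher ratio of the projection X @ w (lower is better).
    z = X @ w
    m0, m1 = z[y == 0].mean(), z[y == 1].mean()
    v = z[y == 0].var() + z[y == 1].var() + 1e-9
    return -((m0 - m1) ** 2) / v

def hill_climb(w, X, y, step=0.1, iters=100):
    # Local neighborhood exploration: keep random perturbations that help.
    best, best_c = w.copy(), cost(w, X, y)
    for _ in range(iters):
        cand = best + step * rng.standard_normal(best.shape)
        c = cost(cand, X, y)
        if c < best_c:
            best, best_c = cand, c
    return best

def ga_hybrid(X, y, pop=20, gens=30):
    dim = X.shape[1]
    P = list(rng.standard_normal((pop, dim)))
    for _ in range(gens):
        P.sort(key=lambda w: cost(w, X, y))        # global exploration: rank by cost
        elite = P[: pop // 2]
        children = []
        for _ in range(pop - len(elite)):
            a, b = (elite[i] for i in rng.integers(len(elite), size=2))
            mask = rng.random(dim) < 0.5            # uniform crossover
            child = np.where(mask, a, b) + 0.05 * rng.standard_normal(dim)  # mutation
            if rng.random() < 0.1:                  # deletion: zero out one feature weight
                child[rng.integers(dim)] = 0.0
            children.append(child)
        P = elite + children
    P.sort(key=lambda w: cost(w, X, y))
    return hill_climb(P[0], X, y)                   # polish the best individual locally

# Two separable classes in 3-D; only feature 0 carries class information.
X = np.vstack([rng.normal(0.0, 1.0, (40, 3)),
               rng.normal(0.0, 1.0, (40, 3)) + np.array([5.0, 0.0, 0.0])])
y = np.array([0] * 40 + [1] * 40)
w = ga_hybrid(X, y)
```

The hybrid should find a recombination close to the informative axis, beating any fixed arbitrary mixing of all three features.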
Painter, David R; Dux, Paul E; Mattingley, Jason B
2015-09-01
When visual attention is set for a particular target feature, such as color or shape, neural responses to that feature are enhanced across the visual field. This global feature-based enhancement is hypothesized to underlie the contingent attentional capture effect, in which task-irrelevant items with the target feature capture spatial attention. In humans, however, different cortical regions have been implicated in global feature-based enhancement and contingent capture. Here, we applied intermittent theta-burst stimulation (iTBS) to assess the causal roles of two regions of extrastriate cortex - right area MT and the right temporoparietal junction (TPJ) - in both global feature-based enhancement and contingent capture. We recorded cortical activity using EEG while participants monitored centrally for targets defined by color and ignored peripheral checkerboards that matched the distractor or target color. In central vision, targets were preceded by colored cues designed to capture attention. Stimuli flickered at unique frequencies, evoking distinct cortical oscillations. Analyses of these oscillations and behavioral performance revealed contingent capture in central vision and global feature-based enhancement in the periphery. Stimulation of right area MT selectively increased global feature-based enhancement, but did not influence contingent attentional capture. By contrast, stimulation of the right TPJ left both processes unaffected. Our results reveal a causal role for the right area MT in feature-based attention, and suggest that global feature-based enhancement does not underlie the contingent capture effect. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
2004-01-01
10 April 2004 Marte Valles is an outflow channel system that straddles 180°W longitude between the region south of Cerberus and far northwestern Amazonis. The floors of the Marte valleys have enigmatic platy flow features that some argue were formed by lava, while others suggest they are remnants of mud flows. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an island created in the middle of the main Marte Valles channel as fluid, whether lava or mud, flowed past two older meteor impact craters. The craters are located near 21.5°N, 175.3°W. The image covers an area about 3 km (1.9 mi) across. Sunlight illuminates the scene from the lower left.
NASA Technical Reports Server (NTRS)
2004-01-01
21 September 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows polygon patterned ground in the south polar region near 82.0°S, 90.8°W. Polygons are fairly common at high latitudes in both martian hemispheres, but they do not occur everywhere. On Earth, features such as these would be good indicators of the presence and freeze-thaw cycles of ground ice. On Mars, the same might (emphasis on might) also be true. This image covers an area approximately 3 km (1.9 mi) across and is illuminated by sunlight from the upper left. Seasonal frost enhances the contrast in the scene; the darkest areas have advanced the farthest in the springtime defrosting process.
Natural image classification driven by human brain activity
NASA Astrophysics Data System (ADS)
Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao
2016-03-01
Natural image classification has been a hot topic in computer vision and pattern recognition research. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, existing supervised feature selection methods are typically driven by class label information that is identical for all samples from the same class, ignoring within-class image variability and thereby degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals, collected with fMRI while human subjects viewed natural images of different categories. The fMRI signals associated with viewing different images encode human perception of natural images, and therefore may capture image variability both within and across categories. We select image features under the guidance of fMRI signals from brain regions that respond actively to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select the image features that best predict the fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments classifying images from four categories for two subjects demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
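The sparse-regression selection step described above can be illustrated with a minimal ISTA-style sketch. This is not the authors' implementation; the function name, parameters, and synthetic data are all hypothetical. L1-penalised regression of a response signal (here standing in for an fMRI signal) on image features drives the weights of uninformative features to exactly zero, and the surviving indices are the selected features.

```python
import numpy as np

def select_features_by_sparse_regression(X, y, alpha=0.1, n_iter=1000, lr=0.1):
    """Select features whose weights survive L1-penalised regression
    onto a response signal, using ISTA (gradient step + soft threshold).
    X: (n_samples, n_features) image features; y: (n_samples,) signal.
    Returns indices of features with non-zero weights."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n          # least-squares gradient
        w = w - lr * grad
        # soft-thresholding enforces exact zeros (sparsity)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)
    return np.flatnonzero(np.abs(w) > 1e-8)
```

With a noise-free synthetic signal built from two of twenty features, the procedure recovers those two and zeroes out the rest.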
NASA Technical Reports Server (NTRS)
Didkovsky, L.; Gurman, J. B.
2013-01-01
Solar activity during 2007 - 2009 was very low, causing anomalously low thermospheric density. A comparison of solar extreme ultraviolet (EUV) irradiance in the He II spectral band (26 to 34 nm) from the Solar Extreme ultraviolet Monitor (SEM), one of the instruments of the Charge, Element, and Isotope Analysis System (CELIAS) on board the Solar and Heliospheric Observatory (SOHO), for the two latest solar minima showed a decrease in absolute irradiance of about 15 +/- 6% during the solar minimum between Cycles 23 and 24 compared with the Cycle 22/23 minimum when a yearly running-mean filter was used. We found that some local, shorter-term minima, including those with the same absolute EUV flux in the SEM spectral band, show a higher concentration of spatial power in the global network structure in the 30.4 nm SOHO/Extreme ultraviolet Imaging Telescope (EIT) images for the local minimum of 1996 compared with the minima of 2008 - 2011. We interpret this higher concentration of spatial power in the transition region's global network structure as a larger number of larger-area features on the solar disk. These changes in the global network structure during solar minima may characterize, in part, the geo-effectiveness of the solar He II EUV irradiance, in addition to estimations based on its absolute levels.
NASA Astrophysics Data System (ADS)
Kastens, K. A.; Shipley, T. F.; Boone, A.
2012-12-01
When geoscience experts look at data visualizations, they can "see" structures, processes, and traces of Earth history. When students look at those same visualizations, they may see only blotches of color, dots, or squiggles. What are those experts doing, and how can students learn to do the same? We report on a study in which experts (>10 years of geoscience research experience) and novices (undergraduate psychology students) examined shaded-relief/color-coded images of topography/bathymetry while answering questions aloud and being eye-tracked. Images were a global map, two high-res images of continental terrain, and two of oceanic terrain, with the high-res localities chosen to display distinctive traces of important earth processes. The differences in what the two groups look at, as recorded by eye-tracking, are relatively subtle. On the global image, novices tend to focus on continents, whereas experts distribute their attention more evenly across continents and oceans. Experts universally access the available scale information (distance scale, lat/long axes), whereas most students do not. Novices do attend substantially and spontaneously to the salient geomorphological features in the high-res images: seamounts, a mid-ocean ridge/transform intersection, erosional river channels, and compressional ridges and valley systems. The more marked differences come in what respondents see, as captured in video recordings of their words and gestures in response to the experimenter's questions. When their attention is directed to a small and distinctive part of a high-res image and they are asked to "describe what you see," experts typically produce richly detailed descriptions that may include the regional depth/altitude, local relief, shape and spatial distribution of major features, symmetry or lack thereof, cross-cutting relationships, presence of lineations and their orientations, and similar geomorphological details.
Following or interwoven with these rich descriptions, some experts also offer interpretations of causal Earth processes. We identified four types of novice answers: (a) "flat" answers, in which the student describes the patches of color on the screen with no mention of shape or relief; (b) "thing" answers, in which the student mentions an inappropriate object, such as "the Great Wall of China"; (c) geomorphology answers, in which the student talks about depth/altitude, relief, or shapes of landforms; and (d) process answers, in which the student talks about earth processes, such as earthquakes, erosion, or plate tectonics. Novice "geomorphology" (c) answers resemble expert responses, but lack the rich descriptive detail. The "process" (d) category includes many interpretations that lack any grounding in the evidentiary base available in the viewed data. These findings suggest that instruction around earth data should include an emphasis on thoroughly and accurately describing the features that are present in the data, a skill that our experts display and our novices mostly lack. It is unclear, though, how best to sequence the teaching of descriptive and interpretive skills, since the experts' attention to empirical features in the data is steered by their knowledge of which features have causal significance.
Global albedos of Pluto and Charon from LORRI New Horizons observations
NASA Astrophysics Data System (ADS)
Buratti, B. J.; Hofgartner, J. D.; Hicks, M. D.; Weaver, H. A.; Stern, S. A.; Momary, T.; Mosher, J. A.; Beyer, R. A.; Verbiscer, A. J.; Zangari, A. M.; Young, L. A.; Lisse, C. M.; Singer, K.; Cheng, A.; Grundy, W.; Ennico, K.; Olkin, C. B.
2017-05-01
The exploration of the Pluto-Charon system by the New Horizons spacecraft represents the first opportunity to understand the distribution of albedo and other photometric properties of the surfaces of objects in the Solar System's "Third Zone" of distant ice-rich bodies. Images of the entire illuminated surface of Pluto and Charon obtained by the Long Range Reconnaissance Imager (LORRI) camera provide a global map of Pluto that reveals surface albedo variegations larger than those of any other Solar System world except Saturn's moon Iapetus. Normal reflectances on Pluto range from 0.08 to 1.0, and the low-albedo areas of Pluto are darker than any region of Charon. Charon exhibits a much blander surface, with normal reflectances ranging from 0.20 to 0.73. Pluto's albedo features are well correlated with geologic features, although some exogenous low-albedo dust may be responsible for features seen to the west of the area informally named Tombaugh Regio. The albedo patterns of both Pluto and Charon are latitudinally organized, with the exception of Tombaugh Regio, with darker regions concentrated at Pluto's equator and Charon's northern pole. The phase curve of Pluto is similar to that of Triton, the large moon of Neptune believed to be a captured Kuiper Belt Object (KBO), while Charon's is similar to that of the Moon. Preliminary Bond albedos are 0.25 ± 0.03 for Charon and 0.72 ± 0.07 for Pluto. Maps of an approximation to the Bond albedo for both Pluto and Charon are presented for the first time. Our work shows a connection between very high albedo (near unity) and planetary activity, a result that suggests the KBO Eris may be currently active.
Jarry, Josée L; Kossert, Amy L
2007-03-01
This study examined the effect of a self-esteem threat combined with exposure to thin images on body image (BI) satisfaction and investment. Female participants (N=94) received a self-esteem threat consisting of false failure feedback or received false success feedback on an intellectual task allegedly highly predictive of academic and professional success. They then viewed media images featuring thin models or products. After viewing thin models, women who had received failure feedback declared themselves more satisfied about their appearance and less invested in it than did women who had received success feedback. These results suggest that exposure to the thin ideal may inspire women experiencing self-esteem threats to use appearance as an alternative source of worth, thus maintaining their global esteem through BI compensatory self-enhancement. Potential long-term implications of this strategy, such as a paradoxical increase in BI investment and the development of eating pathology, are discussed.
NASA Astrophysics Data System (ADS)
Imai, M.; Kouyama, T.; Takahashi, Y.; Watanabe, S.; Yamazaki, A.; Yamada, M.; Nakamura, M.; Satoh, T.; Imamura, T.; Nakaoka, T.; Kawabata, M.; Yamanaka, M.; Kawabata, K. S.
2017-12-01
Venus has a global cloud layer, and its atmosphere rotates at speeds over 100 m/s. Scattering of solar radiation and an unknown absorber in the clouds cause the strong dark and bright contrast in the 365 nm absorption band. The Japanese Venus orbiter AKATSUKI and its onboard instrument UVI capture 100 km mesoscale cloud features over the entire visible dayside area. In contrast, planetary-scale features are observed when the orbiter is at a moderate distance from Venus and the Sun-Venus-orbiter phase angle is smaller than 45 deg. Cloud-top wind velocity has been measured with the mesoscale cloud tracking technique; however, the propagation velocity of the planetary-scale features and its variation have not been well observed because of the limitation of the observable area. The purpose of this study is to measure the effect of wind acceleration by planetary-scale waves. The motions of mesoscale and planetary-scale clouds can be interpreted as the wind velocity and the phase velocity of the planetary-scale waves, respectively. We conducted simultaneous observations of the zonal motion of both mesoscale and planetary-scale features using UVI/AKATSUKI and the ground-based Pirka and Kanata telescopes in Japan. Our previous ground-based observations revealed that the periodicity of the planetary-scale waves changes on a time scale of a couple of months. For the initial analysis of UVI images, we used the time-consecutive images taken in orbit #32. During this orbit (from Nov. 13 to 20, 2016), 7 images were obtained per day with a 2 hr time interval, with spatial resolutions ranging from 10-35 km. To investigate the typical mesoscale cloud motion, Gaussian filters with sigma = 3 deg. were used to smooth geometrically mapped images with 0.25 deg. resolution. The amount of zonal shift in each 5 deg. latitudinal band between pairs of time-consecutive images was then estimated by searching for the 2D cross-correlation maximum.
The final wind velocity (or rotation period) for mesoscale features was determined with a small error of about +/- 0.1 day in the equatorial region (Figure 2). The same method will be applied to planetary-scale features captured by UVI, with ground-based observations compensating for the discontinuities in the UVI data. At the presentation, the variability in winds and wave propagation velocity on a time scale of a couple of months will be shown.
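The shift-estimation step above can be illustrated with a toy normalised cross-correlation search over integer zonal shifts. This is a sketch, not the mission pipeline; the function name and the periodic-wrap simplification (via np.roll) are assumptions.

```python
import numpy as np

def estimate_zonal_shift(band_t0, band_t1, max_shift):
    """Estimate the zonal (columnwise) displacement of cloud features
    between two time-consecutive latitude-band images by locating the
    maximum of the normalised cross-correlation."""
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(band_t1, -s, axis=1)   # undo a candidate shift
        a = band_t0 - band_t0.mean()
        b = shifted - shifted.mean()
        corr = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if corr > best_corr:
            best_corr, best_shift = corr, s
    return best_shift
```

Dividing the recovered pixel shift by the time interval between images would then give the apparent zonal velocity of the tracked features.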
Application of Geostatistical Simulation to Enhance Satellite Image Products
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Dungan, Jennifer L.; Thirulanambi, Rajkumar; Roy, David
2004-01-01
With the deployment of Earth Observing System (EOS) satellites that provide daily, global imagery, there is increasing interest in defining the limitations of the data and derived products due to their coarse spatial resolution. Much of the detail, i.e. small fragments and notches in boundaries, is lost with coarse-resolution imagery such as the EOS MODerate-Resolution Imaging Spectroradiometer (MODIS) data. Higher spatial resolution data such as the EOS Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat, and airborne sensor imagery provide more detailed information but are less frequently available. There is, however, both theoretical and analytical evidence that burn scars and other fragmented types of land cover form self-similar or self-affine patterns, that is, patterns that look similar when viewed at widely differing spatial scales. Therefore the small features of the patterns should be predictable, at least in a statistical sense, from knowledge of the large features. Recent developments in fractal modeling for characterizing the spatial distribution of undiscovered petroleum deposits are thus applicable to generating simulations of finer-resolution satellite image products. We will present example EOS products, analysis to investigate self-similarity, and simulation results.
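The idea of statistically predicting fine detail from coarse structure in a self-similar pattern can be illustrated with a classic one-dimensional midpoint-displacement simulation. This is a generic fractal sketch, not the authors' geostatistical method; the Hurst-exponent parameterisation and function name are assumptions.

```python
import numpy as np

def midpoint_displacement(n_levels, H=0.7, seed=0):
    """Generate a 1-D self-similar profile by midpoint displacement.
    H is a Hurst-like exponent: the perturbation scale shrinks by
    2**(-H) at each refinement level, so detail added at fine scales
    is statistically consistent with the coarse structure."""
    rng = np.random.default_rng(seed)
    pts = np.array([0.0, 0.0])   # coarse endpoints
    scale = 1.0
    for _ in range(n_levels):
        # perturb midpoints of every current segment
        mids = (pts[:-1] + pts[1:]) / 2 + rng.normal(0, scale, len(pts) - 1)
        out = np.empty(2 * len(pts) - 1)
        out[0::2] = pts      # keep existing samples
        out[1::2] = mids     # interleave new, finer samples
        pts = out
        scale *= 2 ** (-H)
    return pts
```

Each refinement doubles the resolution while preserving the coarse samples, which is the statistical analogue of simulating sub-pixel detail from a coarse image.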
Bourfiss, Mimount; Vigneault, Davis M; Aliyari Ghasebeh, Mounes; Murray, Brittney; James, Cynthia A; Tichnell, Crystal; Mohamed Hoesein, Firdaus A; Zimmerman, Stefan L; Kamel, Ihab R; Calkins, Hugh; Tandri, Harikrishna; Velthuis, Birgitta K; Bluemke, David A; Te Riele, Anneline S J M
2017-09-01
Regional right ventricular (RV) dysfunction is the hallmark of Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy (ARVD/C), but is currently only qualitatively evaluated in the clinical setting. Feature Tracking Cardiovascular Magnetic Resonance (FT-CMR) is a novel quantitative method that uses cine CMR to calculate strain values. However, most prior FT-CMR studies in ARVD/C have focused on global RV strain using different software methods, complicating implementation of FT-CMR in clinical practice. We aimed to assess the clinical value of global and regional strain using FT-CMR in ARVD/C and to determine differences between commercially available FT-CMR software packages. We analyzed cine CMR images of 110 subjects (39 overt ARVD/C [mutation+/phenotype+], 40 preclinical ARVD/C [mutation+/phenotype-] and 31 control) for global and regional (subtricuspid, anterior, apical) RV strain in the horizontal longitudinal axis using four FT-CMR software methods (Multimodality Tissue Tracking, TomTec, Medis and Circle Cardiovascular Imaging). Intersoftware agreement was assessed using Bland-Altman plots. For global strain, all methods showed reduced strain in overt ARVD/C patients compared to control subjects (p < 0.041), whereas none distinguished preclinical from control subjects (p > 0.275). For regional strain, overt ARVD/C patients showed reduced strain compared to control subjects in all segments, which reached statistical significance in the subtricuspid region for all software methods (p < 0.037), in the anterior wall for two methods (p < 0.005) and in the apex for one method (p = 0.012). Preclinical subjects showed abnormal subtricuspid strain compared to control subjects using one of the software methods (p = 0.009). Agreement between software methods for absolute strain values was low (Intraclass Correlation Coefficient = 0.373).
Despite large intersoftware variability of FT-CMR derived strain values, all four software methods distinguished overt ARVD/C patients from control subjects by both global and subtricuspid strain values. In the subtricuspid region, one software package distinguished preclinical from control subjects, suggesting the potential to identify early ARVD/C prior to overt disease expression.
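Intersoftware agreement of the kind reported here is commonly summarised with Bland-Altman statistics: the mean difference (bias) between two methods and the 95% limits of agreement. A minimal sketch, for illustration only and not the study's analysis code:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two methods measuring the
    same quantity (e.g. strain from two FT-CMR software packages).
    Returns the mean bias and the 95% limits of agreement."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Wide limits of agreement relative to the clinically relevant strain range would indicate, as in this study, that absolute values from different packages are not interchangeable.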
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
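The study's final transformation is non-rigid; as a simplified stand-in, a least-squares affine transform fitted to corresponding points (CPs) illustrates the same CP-driven registration idea. The function names and the affine simplification are mine, not from the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src CPs to dst CPs.
    src, dst: (n, 2) arrays of corresponding image coordinates.
    Returns the (3, 2) parameter matrix M such that [x, y, 1] @ M = [x', y']."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Map points through a fitted affine transform."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

A non-rigid transform, as used in the paper, would replace the single global matrix with a locally varying mapping (e.g. piecewise or spline-based) so that residual local misalignment can also be absorbed.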
NASA Astrophysics Data System (ADS)
Persson, A.; Connolly, J.
2016-12-01
Peatlands or mires contain about one third of the global terrestrial carbon pool and cover between 3% and 6% of the global land area. In boreal and sub-arctic permafrost peatlands, the soil organic carbon (SOC) pools are stable and decomposition is suspended only as long as the soil is frozen. Climate warming is projected to be greater in the high latitudes; observed mean annual air temperatures in northern Sweden have increased by 2-3°C since the 1950s. Thawing permafrost leads to new hydrological regimes, potentially leading to increased production of methane. In this study, two sets of data were analysed: (i) a stereo pair of black-and-white aerial photographs acquired in August 1943 by the Swedish Air Force, with a spatial resolution of 50 cm, and (ii) a geo-rectified WorldView-2 (WV2) multispectral image acquired on 24 July 2013. The aerial photographs were digitized using a very high resolution camera, georeferenced, and incorporated into a geodatabase. The analysis of image areas was performed by heads-up visual interpretation, both on a computer monitor and through stereoscopes. The aim was to identify wet and dry areas in the palsa peatland. Feature Analyst (FA) object-oriented image analysis (OBIA) was used with the WV2 dataset to extract features related to the hydrological state of the mire. Feature Analyst is an extension to ArcGIS; the method uses a black-box algorithm that can be adjusted with several parameters to aid classification and feature extraction in an image. Previous studies that analysed aerial photographs from 1970 and 2000 showed an increase in the amount of wet areas on the Swedish palsa mire Stordalen. In this study we determine the change in wet areas over a seventy-year period. The central part of the palsa mire has been extensively studied, as it has been presumed to have collapsed due to warmer temperatures in recent decades.
However, our analysis shows that much of the internal hydrological pattern on this part of the palsa bog appears to be temporally stable, at least since 1943. Macro-scale changes not identified in previous studies are observed here: the extent of the palsa has retreated in areas contiguous to streams, possibly in response to contact with relatively warmer streamflow.
Global geomorphology: Report of Working Group Number 1
NASA Technical Reports Server (NTRS)
Douglas, I.
1985-01-01
Remote sensing was considered invaluable for seeing landforms in their regional context and in relationship to each other. Sequential images, such as those available from LANDSAT orbits, provide a means of detecting landform change and the operation of large-scale processes, such as major floods in semiarid regions. The use of remote sensing falls into two broad stages: (1) the characterization or accurate description of the features of the Earth's surface; and (2) the study of landform evolution. Recommendations for future research are made.
1995-01-20
Range: 1.4 to 2 million miles. This series of pictures shows four views of the planet Venus obtained by Galileo's Solid State Imaging System. The pictures in the top row were taken about 4 and 5 days after closest approach; those in the bottom row were taken about 6 days out, 2 hours apart. In these violet-light images, north is at the top and the evening terminator to the left. The cloud features high in the planet's atmosphere rotate from right to left, from the limb through the noon meridian toward the terminator, travelling all the way around the planet once every four days. The motion can be seen by comparing the last two pictures, taken two hours apart. The other views show entirely different faces of Venus. These photos are part of the 'Venus global circulation' sequence planned by the imaging team.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Starr, David (Technical Monitor)
2002-01-01
Spectacular Visualizations of our Blue Marble
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the 2002 Winter Olympic Stadium Site of the Olympic Opening and Closing Ceremonies in Salt Lake City. Fly in and through Olympic Alpine Venues using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies including hurricanes & "tornadoes". See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, Landsat 7, including new 1-min GOES rapid scan image sequences of the Nov 9th 2001 Midwest tornadic thunderstorms, and have them explained. See how High-Definition Television (HDTV) is revolutionizing the way we communicate science. (In cooperation with the American Museum of Natural History in NYC.) See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on the covers of Newsweek, TIME, National Geographic, Popular Science & on National & International Network TV. New computer software tools allow us to roam & zoom through massive global images, e.g. Landsat tours of the US and Africa, showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere & oceans are shown. See vortexes and currents in the global oceans that bring up the nutrients to feed tiny algae and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. See the city lights, fishing fleets, gas flares and biomass burning of the Earth at night observed by the "night-vision" DMSP military satellite.
Visions of our Planet's Atmosphere, Land and Oceans: NASA/NOAA Electronic Theater 2002
NASA Technical Reports Server (NTRS)
Hasler, Fritz; Starr, David (Technical Monitor)
2002-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the 2002 Winter Olympic Stadium Site of the Olympic Opening and Closing Ceremonies in Salt Lake City. Fly in and through Olympic Alpine Venues using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies including hurricanes and "tornadoes". See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, Landsat 7, including new 1-min GOES rapid scan image sequences of the Nov 9th 2001 Midwest tornadic thunderstorms, and have them explained. See how High-Definition Television (HDTV) is revolutionizing the way we communicate science. (In cooperation with the American Museum of Natural History in NYC.) See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on the covers of Newsweek, TIME, National Geographic, Popular Science and on National and International Network TV. New computer software tools allow us to roam and zoom through massive global images, e.g. Landsat tours of the US and Africa, showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere and oceans are shown. See vortexes and currents in the global oceans that bring up the nutrients to feed tiny algae and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. See the city lights, fishing fleets, gas flares and biomass burning of the Earth at night observed by the "night-vision" DMSP military satellite.
Age-related functional brain changes in young children.
Long, Xiangyu; Benischek, Alina; Dewey, Deborah; Lebel, Catherine
2017-07-15
Brain function and structure change significantly during the toddler and preschool years. However, most studies focus on older or younger children, so the specific nature of these changes is unclear. In the present study, we analyzed 77 functional magnetic resonance imaging datasets from 44 children aged 2-6 years. We extracted measures of both local (amplitude of low frequency fluctuation and regional homogeneity) and global (eigenvector centrality mapping) activity and connectivity, and examined their relationships with age using robust linear correlation analysis and strict control for head motion. Brain areas within the default mode network and the frontoparietal network, such as the middle frontal gyrus, the inferior parietal lobule and the posterior cingulate cortex, showed increases in local and global functional features with age. Several brain areas, such as the superior parietal lobule and superior temporal gyrus, presented opposite developmental trajectories of local and global functional features, suggesting a shifting connectivity framework in early childhood. This development of functional connectivity in early childhood likely underlies major advances in cognitive abilities, including language and development of theory of mind. These findings provide important insight into the developmental patterns of brain function during the preschool years, and lay the foundation for future studies of altered brain development in young children with brain disorders or injury. Copyright © 2017 Elsevier Inc. All rights reserved.
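Eigenvector centrality mapping (ECM), the global measure used here, assigns each region the corresponding entry of the dominant eigenvector of a non-negative connectivity matrix, so a region is central when it is strongly connected to other central regions. A power-iteration sketch, illustrative only and not the study's pipeline:

```python
import numpy as np

def eigenvector_centrality(conn, n_iter=200):
    """Eigenvector centrality of a non-negative, symmetric connectivity
    matrix via power iteration. Returns a unit-norm centrality vector."""
    c = np.ones(conn.shape[0])          # positive start keeps the sign fixed
    for _ in range(n_iter):
        c = conn @ c                    # propagate centrality over edges
        c = c / np.linalg.norm(c)       # renormalise each step
    return c
```

For a fully connected three-node graph the centrality is uniform, as every node has identical connections; in voxelwise fMRI applications conn would be a (rescaled, non-negative) correlation matrix.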
An Hour of Spectacular Visualization
NASA Technical Reports Server (NTRS)
Hasler, Arthur F.
2004-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to the Far East and down to Beijing and Bangkok. Zoom through the Cosmos to the site of the 2004 Summer Olympic games in Athens using 1 m IKONOS "Spy Satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and International global satellite images that allow us to view our Planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, & Landsat 7, of typhoons/hurricanes and fires in California and around the planet. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on a daily, annual and 30-year time scale. See daily thunderstorms, the annual greening of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, Popular Science covers & National & International Network TV. Spectacular new global visualizations of the observed and simulated atmosphere & oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to El Nino/La Nina climate changes. The E-theater will be presented using the latest High Definition TV (HDTV) and video projection technology on a large screen. See the global city lights, showing population concentrations in the US, Africa, and Asia observed by the "night-vision" DMSP satellite.
NASA/NOAA Electronic Theater: An Hour of Spectacular Visualization
NASA Technical Reports Server (NTRS)
Hasler, A. F.
2004-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations from space in a historical perspective. Fly in from outer space to Utah, Logan and the USU Agriculture Station. Compare zooms through the Cosmos to the sites of the 2004 Summer and 2002 Winter Olympic games using 1 m IKONOS "Spy Satellite" data. Contrast the 1972 Apollo 17 "Blue Marble" image of the Earth with the latest US and International global satellite images that allow us to view our Planet from any vantage point. See the latest spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, & Landsat 7, of storms & fires like Hurricanes Charley & Isabel and the LA/San Diego fire storms of 2003. See how High Definition Television (HDTV) is revolutionizing the way we do science communication. Take the pulse of the planet on a daily, annual and 30-year time scale. See daily thunderstorms, the annual greening of the northern hemisphere land masses and oceans, fires in Africa, dust storms in Iraq, and carbon monoxide exhaust from global burning. See visualizations featured on Newsweek, TIME, National Geographic, Popular Science covers & National & International Network TV. Spectacular new global visualizations of the observed and simulated atmosphere & oceans are shown. See the currents and vortexes in the oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to El Nino/La Nina climate changes. The E-theater will be presented using the latest High Definition TV and video projection technology on a large screen. See the global city lights, and the great NE US blackout of August 2003, observed by the "night-vision" DMSP satellite.
Unsupervised feature learning for autonomous rock image classification
NASA Astrophysics Data System (ADS)
Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond
2017-09-01
Autonomous rock image classification can enhance the capability of robots for geological detection and increase the scientific return, both for investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and hand-crafting features manually is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.
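The self-taught learning scheme described above can be sketched as follows: features are learned from unlabeled patches with a small autoencoder and then reused to encode any labeled target set. This is a minimal numpy sketch, not the authors' implementation; the sparsity penalty and convolutional architecture of the paper are omitted, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(patches, n_hidden=16, lr=0.1, epochs=200):
    """Learn a feature dictionary from unlabeled patches with a tiny
    tied-weight autoencoder (plain reconstruction loss; the paper's
    sparsity penalty is omitted for brevity)."""
    n, d = patches.shape
    W = rng.normal(0, 0.1, (d, n_hidden))
    b = np.zeros(n_hidden)
    for _ in range(epochs):
        h = np.tanh(patches @ W + b)       # encode
        recon = h @ W.T                    # decode with tied weights
        err = recon - patches              # reconstruction error
        dh = (err @ W) * (1 - h ** 2)      # backprop through tanh
        gW = patches.T @ dh + err.T @ h    # shared-weight gradient
        W -= lr * gW / n
        b -= lr * dh.mean(axis=0)
    return W, b

def encode(patches, W, b):
    return np.tanh(patches @ W + b)

# self-taught learning: features come from a large unlabeled set ...
unlabeled = rng.normal(size=(500, 64))     # e.g. 8x8 patches, flattened
W, b = train_autoencoder(unlabeled)
# ... and are reused to represent any labeled target subclass
features = encode(rng.normal(size=(10, 64)), W, b)
print(features.shape)   # (10, 16)
```

Because the encoder is trained without labels, the same `W, b` can represent patches from any rock subclass afterwards, which is the point of self-taught learning.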
NASA Astrophysics Data System (ADS)
Le Mouelic, S.; Robidel, R.; Rousseau, B.; Rodriguez, S.; Cornet, T.; Sotin, C.; Barnes, J. W.; Brown, R. H.; Buratti, B. J.; Baines, K. H.; Clark, R. N.; Nicholson, P. D.
2017-12-01
Cassini entered Saturn's orbit in July 2004. In thirteen years, 127 targeted flybys of Titan have been performed. We focus our study on the analysis of the complete Visual and Infrared Mapping Spectrometer data set, with a particular emphasis on the evolving features on both poles. We have computed individual global maps of the north and south poles for each of the 127 targeted flybys, using VIMS wavelengths sensitive both to clouds and surface features. First evidence of a vast ethane cloud covering the north pole was seen as early as the first and second targeted flybys in October 2004 and December 2005 [1]. The first detailed imaging of this north polar feature with VIMS was obtained in December 2006, thanks to a change in inclination of the spacecraft orbit [2]. At this time, the northern lakes and seas of Titan were totally masked to the optical instruments by the haze and clouds, whereas the southern pole was well illuminated and mostly clear of haze and vast clouds. The vast north polar feature progressively vanished around the equinox in 2009 [2,3,4], in agreement with the predictions of Global Circulation Models [5]. It progressively revealed the underlying lakes to the ISS and VIMS instruments, which show up very nicely in VIMS in a series of flybys between T90 and T100. First evidence of an atmospheric vortex growing over the south pole appeared in May 2012 (T82), with a high altitude feature being detected consistently at each flyby up to the last T126 targeted flyby, and also appearing in more distant observations up to the end of the Cassini mission. Cassini has covered almost half a Titanian year, corresponding to two seasons. The situation observed at the south pole in the last images may correspond to what was observed in the north as Cassini just arrived. [1] Griffith et al., Science, 2006. [2] Le Mouélic et al., PSS, 2012. [3] Rodriguez et al., Nature, 2009. [4] Rodriguez et al., Icarus, 2011. [4] Hirtzig et al., Icarus, 2013. [5] Rannou et al., Science, 2005.
Improving Large-Scale Image Retrieval Through Robust Aggregation of Local Descriptors.
Husain, Syed Sameed; Bober, Miroslaw
2017-09-01
Visual search and image retrieval underpin numerous applications; however, the task is still challenging, predominantly due to the variability of object appearance and the ever-increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets.
On the large-scale datasets, Holidays1M and Oxford1M, the SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while the CNN-based RVD-W achieves a mAP of 63.5 and 44.8 percent, all superior to the state of the art.
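The residual-aggregation idea underlying RVD-W can be illustrated with its classic ancestor, VLAD: assign each local descriptor to a cluster, sum the residuals to the cluster center, then normalize. This is a hedged sketch of plain VLAD only; RVD-W's rank-based multi-assignment, per-cluster whitening and post-PCA normalization are not reproduced here.

```python
import numpy as np

def vlad(descriptors, centers):
    """Aggregate local descriptors into one global vector (classic
    VLAD; RVD-W builds on this with rank assignment and whitening)."""
    k, d = centers.shape
    # hard-assign each descriptor to its nearest cluster center
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None], axis=2)
    assign = dists.argmin(axis=1)
    agg = np.zeros((k, d))
    for c in range(k):
        members = descriptors[assign == c]
        if len(members):
            resid = members - centers[c]      # residual vectors
            agg[c] = resid.sum(axis=0)
    # intra-normalize each cluster, then L2-normalize globally
    norms = np.linalg.norm(agg, axis=1, keepdims=True)
    agg = np.where(norms > 0, agg / norms, agg)
    v = agg.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(1)
v = vlad(rng.normal(size=(100, 8)), rng.normal(size=(4, 8)))
print(v.shape)   # (32,) = k clusters x d descriptor dims
```

The resulting fixed-length vector can be compared across images with a dot product, which is what makes this family of representations suitable for billion-scale retrieval.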
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
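The "consistent projection transformation" test described above amounts to checking how many 2D-3D correspondences agree with a candidate camera pose under reprojection. The sketch below, with made-up intrinsics and a synthetic pose, shows that core check; it is not the patented system, just the standard pinhole-projection consistency count.

```python
import numpy as np

def project(points3d, K, R, t):
    """Pinhole projection of 3D model points into the image plane."""
    cam = points3d @ R.T + t          # world -> camera frame
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

def count_inliers(pts2d, pts3d, K, R, t, thresh=2.0):
    """How many 2D-3D correspondences agree with a candidate pose:
    the 'consistent projection transformation' test in the abstract."""
    reproj = project(pts3d, K, R, t)
    err = np.linalg.norm(reproj - pts2d, axis=1)
    return int((err < thresh).sum())

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])  # toy intrinsics
R, t = np.eye(3), np.array([0., 0., 5.])
pts3d = np.random.default_rng(2).uniform(-1, 1, (20, 3))
pts2d = project(pts3d, K, R, t)      # synthetic exact correspondences
print(count_inliers(pts2d, pts3d, K, R, t))   # 20
```

In a real pipeline this count would be maximized over candidate poses and over the ambiguous model-point choices, typically inside a RANSAC loop.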
The Martian Prime Meridian -- Longitude "Zero"
2001-02-08
On Earth, the longitude of the Royal Observatory in Greenwich, England is defined as the "prime meridian," or the zero point of longitude. Locations on Earth are measured in degrees east or west from this position. The prime meridian was defined by international agreement in 1884 as the position of the large "transit circle," a telescope in the Observatory's Meridian Building. The transit circle was built by Sir George Biddell Airy, the 7th Astronomer Royal, in 1850. (While visual observations with transits were the basis of navigation until the space age, it is interesting to note that the current definition of the prime meridian is in reference to orbiting satellites and Very Long Baseline Interferometry (VLBI) measurements of distant radio sources such as quasars. This "International Reference Meridian" is now about 100 meters east of the Airy Transit at Greenwich.) For Mars, the prime meridian was first defined by the German astronomers W. Beer and J. H. Mädler in 1830-32. They used a small circular feature, which they designated "a," as a reference point to determine the rotation period of the planet. The Italian astronomer G. V. Schiaparelli, in his 1877 map of Mars, used this feature as the zero point of longitude. It was subsequently named Sinus Meridiani ("Middle Bay") by Camille Flammarion. When Mariner 9 mapped the planet at about 1 kilometer (0.62 mile) resolution in 1972, an extensive "control net" of locations was computed by Merton Davies of the RAND Corporation. Davies designated a 0.5-kilometer-wide crater (0.3 miles wide), subsequently named "Airy-0" (within the large crater Airy in Sinus Meridiani) as the longitude zero point. (Airy, of course, was named to commemorate the builder of the Greenwich transit.) 
This crater was imaged once by Mariner 9 (the 3rd picture taken on its 533rd orbit, 533B03) and once by the Viking 1 orbiter in 1978 (the 46th image on that spacecraft's 746th orbit, 746A46), and these two images were the basis of the martian longitude system for the rest of the 20th Century. The Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) has attempted to take a picture of Airy-0 on every close overflight since the beginning of the MGS mapping mission. It is a measure of the difficulty of hitting such a small target that nine attempts were required, since the spacecraft did not pass directly over Airy-0 until almost the end of the MGS primary mission, on orbit 8280 (January 13, 2001). In the left figure above, the outlines of the Mariner 9, Viking, and Mars Global Surveyor images are shown on a MOC wide angle context image, M23-00924. In the right figure, sections of each of the three images showing the crater Airy-0 are presented. A is a piece of the Mariner 9 image, B is from the Viking image, and C is from the MGS image. Airy-0 is the larger crater toward the top-center in each frame. The MOC observations of Airy-0 not only provide a detailed geological close-up of this historic reference feature, they will be used to improve our knowledge of the locations of all features on Mars, which will in turn enable more precise landings on the Red Planet by future spacecraft and explorers. http://photojournal.jpl.nasa.gov/catalog/PIA03207
NASA Technical Reports Server (NTRS)
Heldmann, J. L.; Toon, O. B.; Pollard, W. H.; Mellon, M. T.; Pitlick, J.; McKay, C. P.; Andersen, D. T.
2005-01-01
Images from the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft show geologically young small-scale features resembling terrestrial water-carved gullies. An improved understanding of these features has the potential to reveal important information about the hydrological system on Mars, which is of general interest to the planetary science community as well as the field of astrobiology and the search for life on Mars. The young geologic age of these gullies is often thought to be a paradox because liquid water is unstable at the Martian surface. Current temperatures and pressures are generally below the triple point of water (273 K, 6.1 mbar) so that liquid water will spontaneously boil and/or freeze. We therefore examine the flow of water on Mars to determine what conditions are consistent with the observed features of the gullies.
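The stability argument above can be stated as a toy check: liquid water requires both temperature and pressure above the triple point. The numbers in the example are illustrative, and a real phase diagram (boiling and freezing curves) is more involved than this single-point test.

```python
# toy check of the abstract's stability criterion: liquid water needs
# T and P above the triple point (273 K, 6.1 mbar); below either,
# it spontaneously freezes and/or boils
T_TRIPLE_K, P_TRIPLE_MBAR = 273.0, 6.1

def liquid_possible(T_k, P_mbar):
    return T_k > T_TRIPLE_K and P_mbar > P_TRIPLE_MBAR

print(liquid_possible(210.0, 6.0))   # typical Mars surface -> False
print(liquid_possible(280.0, 10.0))  # warm high-pressure site -> True
```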
Discriminative region extraction and feature selection based on the combination of SURF and saliency
NASA Astrophysics Data System (ADS)
Deng, Li; Wang, Chunhong; Rao, Changhui
2011-08-01
The objective of this paper is to provide a possible optimization of the salient region algorithm, which is extensively used in recognizing and learning object categories. The salient region algorithm has the advantages of intra-class tolerance, global scoring of features and automatic selection of the most prominent scale within a certain range. However, its major limitation lies in performance, and that is what we attempt to improve. The algorithm can be accelerated by reducing the number of pixels involved in the saliency calculation. We use interest points detected by fast-Hessian, the detector of SURF, as the candidate features for the saliency operation, rather than the whole pixel set of the image. This implementation is thereby called Saliency based Optimization over SURF (SOSU for short). Experiments show that bringing in such a fast detector significantly speeds up the algorithm, while robustness to intra-class diversity preserves object recognition accuracy.
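The speed-up idea is simply to evaluate saliency only at detected interest points instead of at every pixel. The sketch below uses a crude determinant-of-Hessian detector and a Kadir-Brady-style local-entropy saliency as stand-ins for the paper's fast-Hessian/SURF detector and saliency measure; all parameters are illustrative.

```python
import numpy as np

def hessian_keypoints(img, n_points=50):
    """Crude determinant-of-Hessian detector (stand-in for the
    fast-Hessian detector of SURF used in the paper)."""
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    det = gxx * gyy - gxy * gyx
    flat = np.argsort(det.ravel())[::-1][:n_points]
    return np.column_stack(np.unravel_index(flat, img.shape))

def patch_entropy(img, y, x, r=4):
    """Kadir-Brady-style saliency: entropy of the local intensity
    histogram, computed only at candidate points."""
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
img = rng.random((64, 64))
kps = hessian_keypoints(img)
saliency = [patch_entropy(img, y, x) for y, x in kps]
print(len(saliency))   # 50 saliency evaluations instead of 64*64
```

Restricting the expensive entropy computation to 50 points instead of 4096 pixels is exactly the kind of reduction the abstract credits for the speed-up.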
Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J
2015-10-01
Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-mental state examination score, and structural imaging (e.g. whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery, because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment.
Copyright © 2015 John Wiley & Sons, Ltd.
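Alternating decision trees, the best-performing method in the study above, are built from weighted threshold rules fitted by boosting. The sketch below implements the simpler ancestor, AdaBoost over one-feature decision stumps, on synthetic data; it is a stand-in for illustration only (real ADTrees arrange such weighted rules in a tree, and the study's features and data are not reproduced here).

```python
import numpy as np

def train_stumps(X, y, rounds=10):
    """AdaBoost with one-feature threshold stumps: a simplified
    stand-in for alternating decision trees."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):                   # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)       # reweight mistakes up
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in stumps)
    return np.where(score > 0, 1, -1)

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 3))                       # 80 subjects, 3 features
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # synthetic labels
model = train_stumps(X, y)
acc = (predict(model, X) == y).mean()
print(acc)
```

Each round of boosting adds one interpretable rule ("feature j above threshold t votes with weight alpha"), which is part of why such models are attractive for clinical prediction.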
Compiling Mercury relief map using several data sources
NASA Astrophysics Data System (ADS)
Zakharova, M.
2015-12-01
Several Mercury topography datasets exist, obtained by processing materials collected by two spacecraft, Mariner 10 and MESSENGER, during their Mercury flybys. The history of the visual mapping of Mercury is recent, as the first significant observations were made during the latter half of the 20th century, and today no dataset covers 100% of Mercury's surface except the global mosaic composed of images acquired by MESSENGER. The main objective of this work is to provide the first Mercury relief map using all the existing elevation data. The workflow included collecting, combining and processing the existing data and then merging them correctly to compile one single map. Preference was given to topography data, while the global mosaic was used to fill the gaps where topography was insufficient. The Mercury relief map has been created from four different types of data: - the global mosaic with 100% coverage of Mercury's surface created from MESSENGER orbital images (36% of the final map); - Digital Terrain Models obtained by processing stereo images acquired during Mariner 10's flybys (15% of the map) (Cook and Robinson, 2000); - Digital Terrain Models obtained from images acquired during the MESSENGER flybys (24% of the map) (F. Preusker et al., 2011); - the data sets produced by the MESSENGER Mercury Laser Altimeter (MLA) (25% of the map). The final map is created in the Lambert azimuthal equal-area projection at a scale of 1:18 000 000. It represents two hemispheres, western and eastern, separated by the zero meridian. It mainly shows the hypsometric features of the planet and craters more than 200 kilometers in diameter.
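The merging rule described above, prefer topography and fall back to the image mosaic only where topography is missing, can be sketched as a priority-ordered layer merge. The tiny 2x2 grids below are illustrative stand-ins for the real rasters.

```python
import numpy as np

def merge_layers(layers):
    """Merge elevation layers in priority order: each cell takes the
    first layer that has data (NaN = no coverage), mirroring the
    map's rule of preferring topography and using the image mosaic
    only to fill gaps."""
    out = np.full(layers[0].shape, np.nan)
    for layer in layers:
        out = np.where(np.isnan(out), layer, out)
    return out

mla = np.array([[1.0, np.nan], [np.nan, np.nan]])    # laser altimetry
stereo = np.array([[9.0, 2.0], [np.nan, np.nan]])    # stereo DTMs
mosaic = np.array([[9.0, 9.0], [3.0, 4.0]])          # imagery fallback
merged = merge_layers([mla, stereo, mosaic])
print(merged)
# [[1. 2.]
#  [3. 4.]]
```

Note how the stereo value 9.0 in the first cell is ignored because the higher-priority altimetry layer already covers it.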
A Presentation of Spectacular Visualizations
NASA Technical Reports Server (NTRS)
Hasler, Fritz; Einaudi, Franco (Technical Monitor)
2000-01-01
The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest international global satellite weather movies including killer hurricanes and tornadic thunderstorms. See the latest spectacular images from NASA and the National Oceanic and Atmospheric Administration (NOAA) remote sensing missions like the Geostationary Operational Environmental Satellites (GOES), NOAA, Tropical Rainfall Measuring Mission (TRMM), SeaWiFS, Landsat 7, and the new Terra, which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science, and on National and International Network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using one meter resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and a terabyte disk using two projectors across the super sized Universe Theater panoramic screen.
Image analysis and machine learning for detecting malaria.
Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George
2018-04-01
Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. One of the barriers toward a successful mortality reduction has been inadequate malaria diagnosis in particular. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.
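A recurring first step in the segmentation stage surveyed above is global thresholding of the stained smear, commonly with Otsu's method. The sketch below implements Otsu on a synthetic bimodal intensity sample; it is a generic illustration of the technique, not code from any surveyed system.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold that maximizes between-class
    variance, a standard first step for separating stained cells and
    parasites from the slide background."""
    hist, edges = np.histogram(img, bins=256, range=(0, 1))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for i in range(1, 256):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0   # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

rng = np.random.default_rng(5)
# synthetic bimodal sample: dark "parasite" pixels, bright background
img = np.clip(np.concatenate([rng.normal(0.2, 0.05, 500),
                              rng.normal(0.8, 0.05, 500)]), 0, 1)
t = otsu_threshold(img)
print(0.3 < t < 0.7)   # True: threshold falls between the two modes
```

After thresholding, connected-component analysis and per-component features (size, color, texture) feed the classification stages the survey describes.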
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions but at the expense of reduced per plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency = 0.5, 0.7, 1.0 × Nyquist frequency and a Ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than by the conventional method.
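The conventional baseline mentioned above, Gaussian-weighted interplane smoothing, replaces each plane with a weighted sum of itself and its axial neighbors. This is a minimal sketch of that baseline (not the elastic-mapping method); the 3-tap kernel and volume sizes are illustrative.

```python
import numpy as np

def axial_gaussian_smooth(vol, kernel=(0.25, 0.5, 0.25)):
    """Gaussian-weighted interplane smoothing: each plane becomes a
    weighted sum of itself and its axial neighbors (the baseline the
    elastic-mapping method improves on)."""
    k = np.asarray(kernel)
    out = np.zeros_like(vol, dtype=float)
    for off, w in zip(range(-(len(k) // 2), len(k) // 2 + 1), k):
        out += w * np.roll(vol, off, axis=0)
    # np.roll wraps the end planes around; a real implementation
    # would handle the boundary planes explicitly
    return out

rng = np.random.default_rng(6)
vol = 1.0 + 0.2 * rng.normal(size=(16, 8, 8))   # noisy stack of planes
sm = axial_gaussian_smooth(vol)
print(sm.std() < vol.std())   # True: axial averaging reduces noise
```

The paper's contribution is to elastically align adjacent planes before this averaging, so that anatomy is not blurred across misaligned planes.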
Delineation and geometric modeling of road networks
NASA Astrophysics Data System (ADS)
Poullis, Charalambos; You, Suya
In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters are applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them to their polygonal representations.
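The first stage above relies on an oriented Gabor filter bank to pick up locally linear structures such as roads. The sketch below builds a small bank and reports the dominant orientation index per pixel; it is a generic illustration of the cue (kernel parameters are illustrative), not the paper's full Gabor/tensor-voting pipeline.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(theta, size=15, sigma=3.0, lam=6.0):
    """Real part of an oriented Gabor kernel (isotropic envelope)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def dominant_orientation(img, n_orient=8, size=15):
    """Per-pixel dominant orientation index from a Gabor bank: the
    kind of local linear-structure cue fed to tensor voting."""
    windows = sliding_window_view(img, (size, size))
    stack = [np.abs(np.einsum('ijkl,kl->ij', windows,
                              gabor_kernel(np.pi * i / n_orient)))
             for i in range(n_orient)]
    return np.argmax(np.stack(stack), axis=0)

# synthetic image with a vertical bright stripe standing in for a road
img = np.zeros((40, 40))
img[:, 18:22] = 1.0
orient = dominant_orientation(img)
print(orient.shape)   # (26, 26): valid-convolution output grid
```

In the full system these per-pixel orientation estimates are encoded as tensors and propagated by tensor voting to enforce global continuity along the road network.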
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
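The albedo robustness claimed above follows directly from the construction of the ratio feature: dividing an expression frame by a neutral frame cancels the per-pixel albedo. The sketch below demonstrates that cancellation on a synthetic Lambertian-style model (albedo times shading); it is an illustration of the principle, not the paper's full feature pipeline.

```python
import numpy as np

def ratio_image(expression_img, neutral_img, eps=1e-6):
    """Image ratio feature: the expression frame divided by the
    neutral frame, which cancels per-pixel albedo."""
    return expression_img / (neutral_img + eps)

rng = np.random.default_rng(7)
shading_neutral = rng.uniform(0.5, 1.0, (8, 8))
shading_expr = shading_neutral * rng.uniform(0.8, 1.2, (8, 8))
albedo_a = rng.uniform(0.2, 1.0, (8, 8))   # two different skin albedos
albedo_b = rng.uniform(0.2, 1.0, (8, 8))
# observed intensity = albedo * shading; the ratio removes albedo
r_a = ratio_image(albedo_a * shading_expr, albedo_a * shading_neutral)
r_b = ratio_image(albedo_b * shading_expr, albedo_b * shading_neutral)
print(np.allclose(r_a, r_b, atol=1e-4))   # True: albedo cancels
```

Because the two ratio images agree despite completely different albedo maps, a classifier trained on ratio features sees only the shading change caused by skin deformation.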
Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.
2015-01-01
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069
Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang
2016-07-01
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art.
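Once patch features are learned (by the stacked autoencoder in the abstracts above), correspondence detection reduces to comparing feature vectors between the fixed and moving images. The sketch below shows that step as plain nearest-neighbor matching in feature space, with random vectors standing in for learned autoencoder features; it is an illustrative stand-in, not the proposed framework.

```python
import numpy as np

def match_patches(feat_fixed, feat_moving):
    """Correspondence detection in learned-feature space: each
    fixed-image patch is matched to the moving-image patch with the
    most similar feature vector (plain nearest neighbor here)."""
    d = np.linalg.norm(feat_fixed[:, None, :] - feat_moving[None], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(8)
feat_moving = rng.normal(size=(30, 32))    # 30 patches, 32-dim features
perm = rng.permutation(30)
# fixed-image features: a shuffled, slightly noisy copy
feat_fixed = feat_moving[perm] + 0.01 * rng.normal(size=(30, 32))
matches = match_patches(feat_fixed, feat_moving)
print(np.array_equal(matches, perm))   # True: noisy copies match up
```

The quality of these matches is exactly what discriminative feature selection improves, and the resulting correspondences then drive the deformation field estimation.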
The Science Behind the NASA/NOAA Electronic Theater 2002
NASA Technical Reports Server (NTRS)
Hasler, A. Fritz; Starr, David (Technical Monitor)
2002-01-01
Details of the science stories and scientific results behind the Etheater Earth Science Visualizations from the major remote sensing institutions around the country will be explained. The NASA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Temple Square and the University of Utah Campus. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US/Europe/Japan global weather data. See the latest images and image sequences from NASA & NOAA missions like Terra, GOES, NOAA, TRMM, SeaWiFS, Landsat 7 visualized with state-of-the-art tools. A similar retrospective of numerical weather models from the 1960s will be compared with the latest "year 2002" high-resolution models. See the inner workings of a powerful hurricane as it is sliced and dissected using the University of Wisconsin Vis-5D interactive visualization system. The largest supercomputers are now capable of realistic modeling of the global oceans. See ocean vortexes and currents that bring up the nutrients to feed phytoplankton and zooplankton as well as draw the krill, fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate regimes. The Internet and networks have appeared while computers and visualizations have vastly improved over the last 40 years. These advances make it possible to present the broad scope and detailed structure of the huge new observed and simulated datasets in a compelling and instructive manner. New visualization tools allow us to interactively roam & zoom through massive global images larger than 40,000 x 20,000 pixels. Powerful movie players allow us to interactively roam, zoom & loop through 4000 x 4000 pixel bigger-than-HDTV movies of up to 5000 frames. New 3D tools allow highly interactive manipulation of detailed perspective views of many changing model quantities.
See the 1m resolution before and after shots of lower Manhattan and the Pentagon after the September 11 disaster as well as shots of Afghanistan from the Space Imaging IKONOS as well as debris plume images from Terra MODIS and SPOT Image. Shown by the SGI-Octane Graphics-Supercomputer are visualizations of hurricanes Michelle 2001, Floyd, Mitch, Fran and Linda. Our visualizations of these storms have been featured on the covers of the National Geographic, Time, Newsweek and Popular Science. Highlights will be shown from the NASA's large collection of High Definition TV (HDTV) visualizations clips New visualizations of a Los Alamos global ocean model, and high-resolution results of a NASA/JPL Atlantic ocean basin model showing currents, and salinity features will be shown. El Nino/La Nina effects on sea surface temperature and sea surface height of the Pacific Ocean will also be shown. The SST simulations will be compared with GOES Gulf Stream animations and ocean productivity observations. Tours will be given of the entire Earth's land surface at 500 m resolution from recently composited Terra MODIS data, Visualizations will be shown from the Earth Science Etheater 2001 recently presented over the last years in New Zealand, Johannesburg, Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York City, Pasadena, UCAR/Boulder, and Penn State University. The presentation will use a 2-CPU SGI/CRAY Octane Super Graphics workstation with 4 GB RAM and terabyte disk array at 2048 x 768 resolution plus multimedia laptop with three high resolution projectors. Visualizations will also be featured from museum exhibits and presentations including: the Smithsonian Air & Space Museum in Washington, IMAX theater at the Maryland Science Center in Baltimore, the James Lovell Discovery World Science museum in Milwaukee, the American Museum of Natural History (NYC) Hayden Planetarium IMAX theater, etc. 
The Etheater is sponsored by NASA, NOAA, and the American Meteorological Society. This presentation is brought to you by the University of Utah College of Mines and Earth Sciences and the Utah Museum of Natural History.
New color-shifting security devices
NASA Astrophysics Data System (ADS)
Moia, Franco
2004-06-01
The unbroken global increase in forgery and counterfeiting of valuable documents and products steadily demands improved types of optical security devices. Hence, the "security world" is actively seeking new features that meet high security standards, look attractive, and allow easy recognition. One special smart security device created with ROLIC's technology combines a cholesteric device with a phase image. On tilting, such devices reveal strong color shifts that are clearly visible to the naked eye. The additional latent image is invisible under normal lighting conditions but can be revealed to the human eye by means of a simple, commercially available linear sheet polarizer. Building on our earlier work, first published in 1981, we have now developed phase-change guest-host devices combined with dye-doped cholesteric material for application in new security features. ROLIC has developed sophisticated material systems of cross-linkable cholesteric liquid crystals and suitable cross-linkable dyes that make it possible to create outstanding cholesteric color-shifting effects not only on light-absorbing dark backgrounds but also on bright or even white backgrounds, while preserving the circularly polarizing state. The new security devices unambiguously combine first- and second-level inspection features and show brilliant colors on black as well as on white substrates. On tilting, the security devices exhibit remarkable color shifts, while the integrated hidden images can be revealed with a sheet polarizer. Furthermore, owing to its very thin material layers, even demanding applications such as banknotes can be considered.
MR PROSTATE SEGMENTATION VIA DISTRIBUTED DISCRIMINATIVE DICTIONARY (DDD) LEARNING.
Guo, Yanrong; Zhan, Yiqiang; Gao, Yaozong; Jiang, Jianguo; Shen, Dinggang
2013-01-01
Segmenting the prostate from MR images is important yet challenging. Due to the non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) shows limited performance. Although the newly developed sparse dictionary learning method [1, 2] can model image appearance in a non-parametric fashion, the learned dictionaries still lack discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate a deformable model with a novel learning scheme, namely Distributed Discriminative Dictionary (DDD) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue-discriminative power of DDD. First, minimum Redundancy Maximum Relevance (mRMR) feature selection is performed to constrain the dictionary learning to a discriminative feature space. Second, linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third, instead of learning global dictionaries, we learn a set of local dictionaries for local regions (each with small appearance variations) along the prostate boundary, thus achieving better tissue differentiation locally. In the application stage, the DDDs provide appearance cues to robustly drive the deformable model onto the prostate boundary. Experiments on 50 MR prostate images show that our method yields a Dice ratio of 88% against manual segmentations, a 7% improvement over the conventional AAM.
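The discriminative use of per-tissue dictionaries can be illustrated with a minimal sketch (not the authors' implementation): a patch is assigned to the tissue whose dictionary reconstructs it with the smaller residual. Plain least-squares coding stands in for true sparse coding, and the dictionaries and patch here are synthetic stand-ins rather than atoms learned from MR data.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(D, x):
    """Reconstruction residual of patch x under dictionary D; plain
    least-squares coding stands in for true sparse coding."""
    a, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ a)

# Toy dictionaries whose columns play the role of learned atoms
# (synthetic stand-ins, not dictionaries learned from MR data).
D_prostate = rng.normal(size=(20, 5))
D_background = rng.normal(size=(20, 5))

def classify(x):
    # Assign the patch to the tissue whose dictionary reconstructs it better.
    if residual(D_prostate, x) < residual(D_background, x):
        return "prostate"
    return "background"

# A patch synthesized from the prostate dictionary is reconstructed
# almost exactly by it, so the residual comparison recovers the label.
patch = D_prostate @ rng.normal(size=5)
print(classify(patch))   # → prostate
```

In the paper's scheme, LDA then weighs such residuals from multiple local dictionaries; here the comparison is reduced to a single pair for clarity.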
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas (STS) by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time, respectively, in a spin-echo pulse sequence. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 +/- 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 +/- 0.01.
Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
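The bootstrap testing used to report AUC values like those above can be sketched as follows; the patient labels and model scores are synthetic stand-ins for the radiomic model outputs, and the pairwise Mann-Whitney form of the AUC is a standard estimator, not necessarily the authors' exact one.

```python
import numpy as np

def auc(y, s):
    """Pairwise (Mann-Whitney) AUC: fraction of positive/negative score
    pairs ranked correctly, counting ties as half."""
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(1)
# Hypothetical scores for 15 non-metastatic (0) and 15 metastatic (1)
# patients; the unit offset creates a partially separable signal.
y = np.array([0] * 15 + [1] * 15)
s = y + rng.normal(scale=0.8, size=30)

# Bootstrap testing: resample patients with replacement, recompute AUC.
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))
    yb, sb = y[idx], s[idx]
    if yb.min() == yb.max():        # need both classes present
        continue
    aucs.append(auc(yb, sb))
print(f"AUC = {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```

The mean and standard deviation over bootstrap replicates give the "AUC ± error" form quoted in the abstract.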
2018-01-25
See Jupiter's northern polar belt region in this view taken by NASA's Juno spacecraft. This color-enhanced image was taken on Dec. 16, 2017 at 9:47 a.m. PST (12:47 p.m. EST), as Juno performed its tenth close flyby of Jupiter. At the time the image was taken, the spacecraft was about 5,600 miles (8,787 kilometers) above the planet's cloud tops, at a latitude of 38.4 degrees north. Citizen scientist Björn Jónsson processed this image using data from the JunoCam imager. The image was processed from the raw JunoCam framelets by removing the effects of global illumination; Jónsson then increased the contrast and color and sharpened small-scale features. The image has also been cropped. While at first glance the view may appear to be of Jupiter's south, the raw source images were obtained when Juno was above the planet's northern hemisphere looking south, potentially causing a sense of disorientation for the viewer. The geometry of the scene can be explored using the time of the image and the Juno mission module of NASA's Eyes on the Solar System. https://photojournal.jpl.nasa.gov/catalog/PIA21976
Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan
2015-01-01
Dementia, and Alzheimer's disease (AD) in particular, is a global problem and a major threat to the aging population. An image-based computer-aided dementia diagnosis method is needed to assist doctors during medical image examination. Many machine-learning-based dementia classification methods using medical imaging have been proposed, and most of them achieve accurate results. However, most of these methods rely on supervised learning, which requires a fully labeled image dataset; this is usually not practical in a real clinical environment. Using large amounts of unlabeled images can improve dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM), and a supervised SVM were applied to classify AD patients and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves classification performance, and our method outperforms LapSVM on the same dataset.
Cassini Imaging of Iapetus and Solution of the Albedo Asymmetry Enigma
NASA Astrophysics Data System (ADS)
Denk, Tilmann; Spencer, John
2014-05-01
Cassini imaging of Iapetus during one close and several more distant flybys, mainly in the first years of the mission, revealed an alien and often unique landscape of this third-largest moon in the Saturnian system [1]. The data show numerous impact craters on the bright and dark terrain, equator-facing dark and pole-facing bright crater walls, huge impact basins, rather minor endogenic geologic features, a non-spherical, ellipsoidal shape, a giant ridge which spans across half of Iapetus' circumference exactly along the equator, a newly detected global 'color dichotomy' presumably formed by dust from retrograde irregular moons, and of course the famous extreme global albedo asymmetry which had been an enigma for more than three centuries. Revealing the cause of this 'albedo dichotomy' enigma of Iapetus, where the trailing side and poles are more than 10x brighter than the leading side, was one of the major tasks for the Cassini mission. It has now been solved successfully. In the mid-1970s, deposition of exogenic dark material on the leading side, originating from the outer retrograde moon Phoebe, was proposed as the cause. But this alone could not explain the global shape, sharpness, and complexity of the transition between Iapetus' bright and dark terrain. Mainly with Cassini spectrometer (CIRS) and imaging (ISS) data, all these characteristics and the asymmetry's large amplitude are now plausibly explained by runaway global thermal migration of water ice, triggered by the deposition of dark material on the leading hemisphere. This mechanism is unique to Iapetus among the Saturnian satellites for many reasons. Most important are Iapetus' slow rotation, which produces unusually high daytime temperatures and water-ice sublimation rates, and the size (gravity) of Iapetus, which is small enough for global migration of water ice but large enough that much of the ice is retained on the surface [2].
References: [1] Denk, T., Neukum, G., Roatsch, Th., Porco, C.C., Burns, J.A., Galuba, G.G., Schmedemann, N., Helfenstein, P., Thomas, P.C., Wagner, R.J., West, R.A. (2010): Iapetus: Unique Surface Properties and a Global Color Dichotomy from Cassini Imaging. Science 327, no. 5964, 435-439. [2] Spencer, J.R., Denk, T. (2010): Formation of Iapetus's Extreme Albedo Dichotomy by Exogenically-Triggered Thermal Migration of Water Ice. Science 327, no. 5964, 432-435.
A Year at the Moon on Chandrayaan-1: Moon Mineralogy Mapper Data in a Global Perspective
NASA Astrophysics Data System (ADS)
Boardman, J. W.; Pieters, C. M.; Clark, R. N.; Combe, J.; Green, R. O.; Isaacson, P.; Lundeen, S.; Malaret, E.; McCord, T. B.; Nettles, J. W.; Petro, N. E.; Staid, M.; Varanasi, P.
2009-12-01
The Moon Mineralogy Mapper, M3, a high-fidelity high-resolution imaging spectrometer on Chandrayaan-1 has completed two of its four scheduled optical periods during its maiden year in lunar orbit, collecting over 4.6 billion spectra covering most of the lunar surface. These imaging periods (November 2008-February 2009 and April 2009-August 2009) correspond to times of equatorial solar zenith angle less than sixty degrees, relative to the Chandrayaan-1 orbit. The vast majority of the data collected in these first two optical periods are in Global Mode (85 binned spectral bands from 460 to 2976 nanometers with a 2-by-2 binned angular pixel size of 1.4 milliradians). Full-resolution Target Mode data (259 spectral bands and 0.7 milliradian pixels) will be the focus of the remaining two collection periods. Chandrayaan-1 operated initially in a 100-kilometer polar orbit, yielding 70 meter Target pixels and 140 meter Global pixels. The orbit was raised on May 20, 2009, during Optical Period 2, to a nominal 200 kilometer altitude, effectively doubling the pixel spatial sizes. While the high spatial and spectral resolutions of the data allow detailed examination of specific local areas on the Moon, they can also reveal remarkable features when combined, processed and viewed in a global context. Using preliminary calibration and selenolocation, we have explored the spectral and spatial properties of the Moon as a whole as revealed by M3. The data display striking new diversity and information related to surface mineralogy, distribution of volatiles, thermal processes and photometry. Large volumes of complex imaging spectrometry data are, by their nature, simultaneously information-rich and challenging to process. For an initial assessment of the gross information content of the data set we performed a Principal Components analysis on the entire suite of Global Mode imagery. More than a dozen linearly independent spectral dimensions are present, even at the global scale. 
An animation of a Grand Tour Projection, sweeping a three-dimensional red/green/blue image visualization window through the M3 hyperdimensional spectral space, confirms both spatially and spectrally that the M3 data will revolutionize our understanding of our nearest celestial neighbor.
NASA Technical Reports Server (NTRS)
2006-01-01
26 May 2006. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a variety of textures observed on a dust-covered plain in the Marte Valles region of Mars. Textural variations across the scene include areas that are littered with small impact craters, a channel-like feature that is dominated by mounds of a variety of sizes, small ripples and/or ridges, and relatively smooth, unremarkable terrain. The contact between the cratered plain and the area dominated by mounds marks one of the banks along the edge of one of the shallow valleys of the Marte Valles system. Location near: 17.7°N, 175.0°W. Image width: 3 km (1.9 mi). Illumination from: lower left. Season: Northern Spring
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-344, 28 April 2003
This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image mosaic was constructed from data acquired by the MOC red wide angle camera. The large, circular feature in the upper left is Aram Chaos, an ancient impact crater filled with layered sedimentary rock that was later disrupted and eroded to form a blocky, 'chaotic' appearance. To the southeast of Aram Chaos, in the lower right of this picture, is Iani Chaos. The light-toned patches amid the large blocks of Iani Chaos are known from higher-resolution MOC images to be layered, sedimentary rock outcrops. The picture center is near 0.5°N, 20°W. Sunlight illuminates the scene from the left/upper left.
NASA Technical Reports Server (NTRS)
Williams, David R.; Gaddis, Lisa
1991-01-01
The tectonics of the Tellus Region highland on Venus is examined using the altimetry and gravity data collected by Pioneer Venus, which were incorporated into a thin elastic shell model to calculate both the global (long-wavelength) and the regional (short-wavelength) stresses for various assumed values of crust, lithosphere, and mantle thickness and modes of compensation. The resultant stress fields were compared to the surface morphology observed in the Venera 15/16 radar images and interpreted in terms of stress history of Tellus Regio. The best fitting parameters were found to be consistent with minor amounts of lithospheric flexure being necessary to produce the observed surface features of this region.
Zhang, Ling; Kong, Hui; Ting Chin, Chien; Liu, Shaoxiong; Fan, Xinmin; Wang, Tianfu; Chen, Siping
2014-03-01
Current automation-assisted technologies for screening cervical cancer mainly rely on automated liquid-based cytology slides with proprietary stains, which is not a cost-efficient approach for developing countries. In this article, we propose the first automation-assisted system to screen cervical cancer in manual liquid-based cytology (MLBC) slides with hematoxylin and eosin (H&E) stain, which is inexpensive and more applicable in developing countries. The system consists of three main modules: image acquisition, cell segmentation, and cell classification. First, an autofocusing scheme is proposed to find the global maximum of the focus curve by iteratively comparing image qualities at specific locations. On the autofocused images, multiway graph cut (GC) is performed globally on the a* channel-enhanced image to obtain cytoplasm segmentation. The nuclei, especially abnormal nuclei, are robustly segmented by using GC adaptively and locally, and two concave-based approaches are integrated to split touching nuclei. To classify the segmented cells, features are selected and preprocessed to improve sensitivity, and contextual and cytoplasm information is introduced to improve specificity. Experiments on 26 consecutive image stacks demonstrated a dynamic autofocusing accuracy of 2.06 μm. On 21 cervical cell images with non-ideal imaging conditions and pathology, our segmentation method achieved 93% accuracy for cytoplasm and an 87.3% F-measure for nuclei, both outperforming state-of-the-art methods. Additional clinical trials showed that both the sensitivity (88.1%) and the specificity (100%) of our system are satisfactorily high. These results prove the feasibility of automation-assisted cervical cancer screening in MLBC slides with H&E stain, which is highly desirable in community health centers and small hospitals. © 2013 International Society for Advancement of Cytometry.
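The autofocusing idea of finding the maximum of a focus curve by iteratively comparing image qualities can be sketched with a variance-of-Laplacian focus measure and a simple uphill search; both choices are common conventions, not necessarily the paper's exact scheme, and the focus "stage" here is simulated with repeated box blurs.

```python
import numpy as np

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))   # hypothetical in-focus field of view

def blur(img, k):
    """Crude defocus model: k passes of a 5-point box blur."""
    out = img.copy()
    for _ in range(k):
        out = (out
               + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def focus_measure(img):
    """Variance of the Laplacian: higher means sharper."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

# Simulated focus curve over stage positions z = 0..; best focus at z = 4.
def image_at(z):
    return blur(sharp, abs(z - 4))

# Iterative search: compare qualities at neighboring z, step uphill.
z = 0
while True:
    here = focus_measure(image_at(z))
    right = focus_measure(image_at(z + 1))
    if right <= here:
        break
    z += 1
print(z)   # → 4, the simulated best-focus position
```

Real autofocus curves are noisy and sampled on a physical stage, so practical schemes refine the step size near the peak rather than stepping by whole units.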
MOC View of Mars98 Landing Zone - 12/24/97
NASA Technical Reports Server (NTRS)
1998-01-01
On 12/24/1997, shortly after 08:17 UTC SCET, the Mars Global Surveyor Mars Orbiter Camera (MOC) took this high-resolution image of a small portion of the potential Mars Surveyor '98 landing zone. For the purposes of planning MOC observations, this zone was defined as 75 +/- 2 degrees S latitude, 215 +/- 15 degrees W longitude. The image ran along the western perimeter of the Mars98 landing zone (i.e., near 245°W longitude). At that longitude, the layered deposits are farther south than at the prime landing longitude, so the images were shifted in latitude to fall onto the layered deposits. The location of the image was selected to try to cover a range of possible surface morphologies, reliefs, and albedos.
This image is approximately 81.5 km long by 31 km wide. It covers an area of about 2640 sq. km. The center of the image is at 80.46°S, 243.12°W. The viewing conditions are: emission angle 56.30 degrees, incidence angle 58.88 degrees, phase angle 30.31 degrees, and 15.15 meters/pixel resolution. North is to the top of the image. The effects of ground fog, which obscure the surface features (left), have been minimized by filtering (right). Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
MOC View of Mars98 Landing Zone - 12/24/97
NASA Technical Reports Server (NTRS)
1998-01-01
On 12/24/1997, shortly after 08:17 UTC SCET, the Mars Global Surveyor Mars Orbiter Camera (MOC) took this high-resolution image of a small portion of the potential Mars Surveyor '98 landing zone. For the purposes of planning MOC observations, this zone was defined as 75 +/- 2 degrees S latitude, 215 +/- 15 degrees W longitude. The image ran along the western perimeter of the Mars98 landing zone (i.e., near 245°W longitude). At that longitude, the layered deposits are farther south than at the prime landing longitude, so the images were shifted in latitude to fall onto the layered deposits. The location of the image was selected to try to cover a range of possible surface morphologies, reliefs, and albedos.
This image is approximately 83.3 km long by 31.7 km wide. It covers an area of about 2750 sq. km. The center of the image is at 81.97°S, 246.74°W. The viewing conditions are: emission angle 58.23 degrees, incidence angle 60.23 degrees, phase angle 30.34 degrees, and 15.49 meters/pixel resolution. North is to the top of the image. The effects of ground fog, which obscure the surface features (left), have been minimized by filtering (right). Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide the weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distance (1° or 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For the 1° distance, thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For the 2° distance, thresholds were significantly lower than predicted by PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of the connection weights. Consistent with prior work, the contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, the inclusion of orientation-specific inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining the detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images.
The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
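The probability-summation benchmark against which such thresholds are compared is commonly computed with Quick (Minkowski) pooling; the sketch below uses that standard form with an assumed psychometric slope, not necessarily the authors' exact formulation, and the threshold value is hypothetical.

```python
# Quick (Minkowski) pooling form of probability summation: with N
# independent detector families and psychometric slope beta, the
# predicted detection threshold falls as c_N = c_1 * N**(-1/beta).
def ps_threshold(c_single, n_detectors, beta=3.5):
    # beta ~ 3-4 is typical for contrast detection (assumed value here)
    return c_single * n_detectors ** (-1.0 / beta)

c1 = 0.04                    # hypothetical single-patch contrast threshold
c4 = ps_threshold(c1, 4)     # a 2 x 2 plaid drives four detector families
print(f"PS prediction: {c4:.4f}")
# Measured thresholds above this prediction point to inhibition among
# detectors; thresholds below it point to facilitatory integration.
```

This is exactly the comparison logic in the abstract: the 1° plaids sat above the PS line (inhibition), while homogeneous 2° plaids sat below it (integration).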
NASA Astrophysics Data System (ADS)
Moores, John E.; Lemmon, Mark T.; Rafkin, Scot C. R.; Francis, Raymond; Pla-Garcia, Jorge; de la Torre Juárez, Manuel; Bean, Keri; Kass, David; Haberle, Robert; Newman, Claire; Mischna, Michael; Vasavada, Ashwin; Rennó, Nilton; Bell, Jim; Calef, Fred; Cantor, Bruce; Mcconnochie, Timothy H.; Harri, Ari-Matti; Genzer, Maria; Wong, Michael; Smith, Michael D.; Javier Martín-Torres, F.; Zorzano, María-Paz; Kemppinen, Osku; McCullough, Emily
2015-05-01
We report on the first 360 sols (Ls 150° to 5°), representing just over half a Martian year, of atmospheric monitoring movies acquired using the NavCam imager on the Mars Science Laboratory (MSL) rover Curiosity. Such movies reveal faint clouds that are difficult to discern in single images. The data set acquired was divided into two classifications depending upon the orientation and intent of the observation. Up to sol 360, 73 Zenith movies and 79 Supra-Horizon movies had been acquired, and time-variable features could be discerned in 25 of each. The MSL data set is compared to similar observations made by the Surface Stereo Imager (SSI) onboard the Phoenix lander and suggests a much drier environment at Gale Crater (4.6°S) during this season than was observed in Green Valley (68.2°N), as would be expected based on latitude and the global water cycle. The optical depth of the variable component of the clouds seen in images with features is up to 0.047 ± 0.009, with the observed features having an average angular granularity of 3.8°. MCS also observes clouds of comparable optical depth at 30 and 50 km during the same period, which would suggest a cloud spacing of 2.0 to 3.3 km. Multiple motions visible in atmospheric movies support the presence of two distinct layers of clouds. At Gale Crater, these clouds are likely caused by atmospheric waves, given the regular spacing of features observed in many Zenith movies and the decreased spacing towards the horizon in sunset movies, consistent with clouds forming at a constant elevation. Reanalysis of Phoenix data in light of the NavCam equatorial dataset suggests that clouds may have been more frequent in the earlier portion of the Phoenix mission than was previously thought.
Case-based fracture image retrieval.
Zhou, Xin; Stern, Richard; Müller, Henning
2012-05-01
Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated: one used dense sampling on a 40 × 40 pixel grid, and the second used the standard SIFT feature detector. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors over the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between the SIFT-based system and GIFT is performed.
A first comparison of sampling strategies using SIFT features shows that dense sampling on a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, the three unsupervised feature selection strategies were evaluated. A grid parameter search was applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) yields the best performance for all three strategies. Increasing the number of clusters in clustering can also improve retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
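The first variance-based selection strategy (variance of all descriptors over the global database, keeping the most variable half of the regions) can be sketched as follows; the descriptor array is a synthetic stand-in, with the "salient" half of the blocks given inflated variability so the effect of the selection is visible.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical descriptors: 100 cases x 1600 grid blocks x 8-dim vector
# (real SIFT descriptors are 128-dim; 8 keeps the sketch small).
n_cases, n_blocks, dim = 100, 1600, 8
desc = rng.normal(size=(n_cases, n_blocks, dim))
# Make the first 800 blocks "salient" by inflating their variability.
desc[:, :800, :] *= 3.0

# Strategy 1: variance of each block's descriptors over the whole
# database, then keep the most variable half of the regions.
block_var = desc.var(axis=0).sum(axis=1)        # one score per block
keep = np.argsort(block_var)[::-1][:n_blocks // 2]
print(np.mean(keep < 800))   # fraction of kept blocks that are salient
```

Discarding the low-variance half mirrors the abstract's finding that retaining about half of the regions (700 or 800) gave the best retrieval performance.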
Western Candor Chasma, Valles Marineris
NASA Technical Reports Server (NTRS)
1998-01-01
One of the most striking discoveries of the Mars Global Surveyor mission has been the identification of layering thousands of meters thick within the wall rock of the enormous martian canyon system, Valles Marineris.
Valles Marineris was first observed in 1972 by the Mariner 9 spacecraft, from which the troughs get their name: Valles (valleys) Marineris (of Mariner). Some hints of layering in both the canyon walls and within some deposits on the canyon floors were seen in Mariner 9 and Viking orbiter images from the 1970s. The Mars Orbiter Camera on board Mars Global Surveyor has been examining these layers at much higher resolution than was available previously. MOC images led to the realization that there are layers in the walls that go down to great depths. An example of the wall rock layers can be seen in MOC image 8403, shown above (C). MOC images also reveal amazing layered outcrops on the floors of some of the Valles Marineris canyons. Particularly noteworthy is MOC image 23304 (D, above), which shows extensive, horizontally bedded layers exposed in buttes and mesas on the floor of western Candor Chasma. These layered rocks might be the same material as is exposed in the chasm walls (as in 8403; C, above), or they might be rocks that formed by deposition (from water, wind, and/or volcanism) long after Candor Chasma opened up. In addition to layered materials in the walls and on the floors of the Valles Marineris system, MOC images are helping to refine our classification of geologic features that occur within the canyons. For example, MOC image 25205 (E, above) shows the southern tip of a massive, tongue-shaped massif (a mountainous ridge) that was previously identified as a layered deposit. However, this MOC image does not show layering. The material has been sculpted by wind and mass wasting (downslope movement of debris), but no obvious layers were exposed by these processes. Valles Marineris is a fascinating region on Mars that holds much potential to reveal information about the early history and evolution of the red planet.
The MOC Science Team is continuing to examine the wealth of new data and planning for new Valles Marineris targets once the Mapping Phase of the Mars Global Surveyor mission commences in March 1999. This image: layers in the northern wall of western Candor Chasma. MOC image 8403 subframe shown at full resolution of 4.6 meters (15 feet) per pixel. The image shows an area approximately 2.4 by 2.5 kilometers (1.5 x 1.6 miles). North is up; illumination is from the left. Image 8403 was obtained during Mars Global Surveyor's 84th orbit at 10:12 p.m. (PST) on January 6, 1998. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Image search engine with selective filtering and feature-element-based classification
NASA Astrophysics Data System (ADS)
Li, Qing; Zhang, Yujin; Dai, Shengyang
2001-12-01
With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and effective image search engines for the WWW need to be developed. In this paper we propose a selective filtering process and a novel feature-element-based approach to image classification, both used in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format; two parameters that can be obtained easily are used in this filtering step. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of an image as perceived subjectively by human beings. Unlike traditional image classification methods, our feature-element-based approach does not compute distances between vectors in a feature space; instead, it seeks associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.
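The abstract does not name the two easily obtained filter parameters; a plausible choice, assumed purely for illustration, is the image's pixel dimensions and aspect ratio, both readable directly from a GIF file header without decoding the image. A minimal sketch of such a crawler-side filter:

```python
import struct

def gif_dimensions(data: bytes):
    """Read width/height from a GIF logical screen descriptor
    (bytes 6-9 of the file, little-endian 16-bit integers)."""
    if data[:6] not in (b"GIF87a", b"GIF89a"):
        return None
    return struct.unpack("<HH", data[6:10])

def is_meaningful(data: bytes, min_side=64, max_aspect=4.0):
    """Reject spacer pixels, bullets, and banner strips using two cheap
    parameters (ASSUMED here: minimum side length and maximum aspect ratio;
    the paper does not specify which two parameters it uses)."""
    dims = gif_dimensions(data)
    if dims is None:
        return False
    w, h = dims
    if min(w, h) < min_side:
        return False
    return max(w, h) / min(w, h) <= max_aspect

# Synthetic headers: a 1x1 transparent spacer vs. a 320x240 image.
spacer = b"GIF89a" + struct.pack("<HH", 1, 1)
photo = b"GIF89a" + struct.pack("<HH", 320, 240)
```

Filters of this kind let the crawler discard decorative GIFs before any feature extraction is attempted, which is where most of the savings come from.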
Transient surface liquid in Titan's south polar region from Cassini
Hayes, A.G.; Aharonson, O.; Lunine, J.I.; Kirk, R.L.; Zebker, H.A.; Wye, L.C.; Lorenz, R.D.; Turtle, E.P.; Paillou, P.; Mitri, Giuseppe; Wall, S.D.; Stofan, E.R.; Mitchell, K.L.; Elachi, C.
2011-01-01
Cassini RADAR images of Titan's south polar region acquired during southern summer contain lake features that disappear between observations. These features show a tenfold increase in backscatter cross-section between images acquired one year apart, which is inconsistent with common scattering models unless temporal variability is invoked. The morphologic boundaries are also transient, further supporting changes in lake level. These observations are consistent with the exposure of diffusely scattering lakebeds that were previously hidden by an attenuating liquid medium. We use a two-layer model to explain the backscatter variations and estimate a drop in liquid depth of approximately 1 m per year. On larger scales, we observe shoreline recession between ISS and RADAR images of Ontario Lacus, the largest lake in Titan's south polar region. The recession, occurring between June 2005 and July 2009, is inversely proportional to slopes estimated from altimetric profiles and from the exponential decay of near-shore backscatter, consistent with a uniform reduction of 4 ± 1.3 m in lake depth. Of the potential explanations for the observed surface changes, we favor evaporation and infiltration. The disappearance of dark features and the recession of Ontario's shoreline represent volatile transport in an active methane-based hydrologic cycle. Observed loss rates are compared with, and shown to be consistent with, available global circulation models. To date, no unambiguous changes in lake level have been observed between repeat images in the north polar region, although further investigation is warranted. These observations constrain volatile flux rates in Titan's hydrologic system and demonstrate that the surface plays an active role in its evolution. Constraining these seasonal changes represents a first step toward understanding the longer climate cycles that may determine liquid distribution on Titan over orbital time periods.
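The logic of the two-layer argument can be sketched as a back-of-the-envelope calculation: if the lakebed's intrinsic backscatter is constant, the brightening when the liquid disappears equals the two-way attenuation the liquid previously imposed, so the vanished depth follows from an attenuation coefficient. The coefficient used below is an assumed illustrative value, not the paper's fitted one:

```python
import math

def implied_depth(backscatter_ratio, alpha_db_per_m):
    """Depth of an attenuating liquid layer whose removal explains an
    observed backscatter increase (two-layer model sketch).

    backscatter_ratio: sigma0(lakebed exposed) / sigma0(liquid-covered),
                       in linear units
    alpha_db_per_m:    one-way attenuation of the liquid, in dB per metre
                       (an ASSUMED illustrative value, not from the paper)
    """
    loss_db = 10.0 * math.log10(backscatter_ratio)  # total two-way loss, dB
    return loss_db / (2.0 * alpha_db_per_m)         # depth in metres

# A tenfold increase is 10 dB of two-way loss; with an assumed 5 dB/m
# one-way attenuation, the vanished liquid layer was 1 m deep.
depth = implied_depth(10.0, 5.0)
```

The point of the sketch is the scaling: the inferred depth change is proportional to the logarithm of the backscatter ratio and inversely proportional to the liquid's attenuation, which is why the depth estimate depends on the assumed liquid composition.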
NASA Astrophysics Data System (ADS)
García-Arteaga, Juan D.; Corredor, Germán; Wang, Xiangxue; Velcheti, Vamsidhar; Madabhushi, Anant; Romero, Eduardo
2017-11-01
Tumor infiltration by lymphocytes (TIL) occurs when various classes of white blood cells migrate from the bloodstream toward the tumor and infiltrate it. The presence of TIL is predictive of a patient's response to therapy. In this paper, we show how automatic detection of lymphocytes in digital H&E histopathology images, together with quantitative evaluation of the global lymphocyte configuration through global features extracted from non-parametric graphs constructed from the detected lymphocyte positions, can be correlated with patient outcome in early-stage non-small cell lung cancer (NSCLC). The method was assessed on a tissue microarray cohort of 63 NSCLC cases. Among the evaluated graphs, minimum spanning trees and K-nn graphs showed the highest predictive ability, yielding F1 scores of 0.75 and 0.72 and accuracies of 0.67 and 0.69, respectively. The predictive power of the proposed methodology indicates that graphs may be used to develop objective measures of a tumor's infiltration grade, which pathologists can in turn use to improve decision making and treatment planning.
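The paper does not publish its code, but the minimum-spanning-tree branch of the pipeline can be sketched from first principles: build the MST over detected lymphocyte centroids and summarise its edge lengths with global statistics (the particular statistics below, mean and standard deviation, are illustrative choices, not the paper's exact feature set):

```python
import math

def mst_edge_lengths(points):
    """Prim's algorithm over the complete Euclidean graph of detected
    lymphocyte centroids; returns the MST edge lengths."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n  # cheapest connection of each node to the tree
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if best[u] > 0:
            edges.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return edges

def global_graph_features(points):
    """Global statistics over MST edge lengths, of the kind that can
    summarise how densely lymphocytes cluster within a tissue spot."""
    e = mst_edge_lengths(points)
    mean = sum(e) / len(e)
    var = sum((x - mean) ** 2 for x in e) / len(e)
    return {"mst_mean_edge": mean, "mst_edge_std": math.sqrt(var)}

# Four centroids on a unit square: the MST is three edges of length 1.
feats = global_graph_features([(0, 0), (1, 0), (0, 1), (1, 1)])
```

Such per-image feature vectors would then feed a standard classifier against patient outcome labels; the complete-graph Prim implementation is O(n²), which is adequate for the hundreds of detections typical of a tissue microarray spot.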
Localized Chemical Remodeling for Live Cell Imaging of Protein-Specific Glycoform.
Hui, Jingjing; Bao, Lei; Li, Siqiao; Zhang, Yi; Feng, Yimei; Ding, Lin; Ju, Huangxian
2017-07-03
Live-cell imaging of protein-specific glycoforms is important for elucidating glycosylation mechanisms and identifying disease states. The currently used metabolic oligosaccharide engineering (MOE) technology routinely permits global chemical remodeling (GCM) of a carbohydrate site of interest, but it can exert unnecessary whole-cell-scale perturbation and suffers from unpredictable metabolic efficiency. Here, a localized chemical remodeling (LCM) strategy for efficient and reliable access to protein-specific glycoform information is reported. The proof-of-concept protocol, developed for MUC1-specific terminal galactose/N-acetylgalactosamine (Gal/GalNAc), combines affinity binding, off-on switchable catalytic activity, and proximity catalysis to create a reactive handle for bioorthogonal labeling and imaging. Noteworthy assay features of LCM compared with MOE include minimal perturbation of the target cell, a short reaction timeframe, effectiveness as a molecular ruler, and quantitative analysis capability. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.