Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, salient object detection based on deep networks used for feature extraction has made a great breakthrough compared with traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction. In deep learning networks, however, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while simultaneously mitigating the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient object detection. We refine the features at the pixel level with a multi-scale feature correction method to avoid the feature errors introduced when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates multi-scale and multi-level features but also acts as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on the original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
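The abstract above attributes the depth gains to residual connections. A minimal sketch of a residual block is given below (PyTorch, with hypothetical channel sizes; it illustrates the shortcut idea only, not the authors' exact architecture):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Identity-shortcut block: the layers only learn the residual F(x)."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.conv1(x))
            out = self.conv2(out)
            return self.relu(out + x)  # shortcut keeps gradients flowing in deep stacks

    features = ResidualBlock(64)(torch.randn(1, 64, 32, 32))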
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks of hierarchical visual recognition more effectively. A visual tree is then learned by assigning visually similar atomic object classes with similar learning complexities into the same group, which provides a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing visually similar atomic object classes effectively. Our HD-MTL algorithm integrates two discriminative regularization terms to control inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, and use this binary "mask" map to obtain length-limited hash codes that mainly focus on an image's objects while ignoring the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify the approximate locations of image objects; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects and ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross-entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance over state-of-the-art supervised and unsupervised hashing baselines.
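A minimal sketch of the mask-weighted average pooling step described above (PyTorch; the tensor shapes and the soft mask are assumptions, not the authors' exact layer): feature maps are averaged under the learned "mask" so background locations contribute little to the pooled representation.

    import torch

    def masked_average_pool(features, mask):
        # features: (N, C, H, W) convolutional feature maps
        # mask:     (N, 1, H, W) values in [0, 1], high on foreground objects
        weighted = features * mask
        return weighted.sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

    pooled = masked_average_pool(torch.randn(2, 256, 14, 14), torch.rand(2, 1, 14, 14))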
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image-fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, the multi-focal images within a stack are fused along three orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists, and it is effective in classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
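A minimal sketch of combining features extracted from different fusion directions with canonical correlation analysis (scikit-learn; the feature dimensions are assumptions, and only two of the three directions are shown since the library's CCA is pairwise):

    import numpy as np
    from sklearn.cross_decomposition import CCA

    X_dir1 = np.random.rand(40, 20)  # features from fusion along one direction
    X_dir2 = np.random.rand(40, 20)  # features from fusion along another direction

    cca = CCA(n_components=5).fit(X_dir1, X_dir2)
    U, V = cca.transform(X_dir1, X_dir2)
    fused = np.hstack([U, V])        # joint descriptor passed to the classifier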
Multi-Object Spectroscopy with MUSE
NASA Astrophysics Data System (ADS)
Kelz, A.; Kamann, S.; Urrutia, T.; Weilbacher, P.; Bacon, R.
2016-10-01
Since 2014, MUSE, the Multi-Unit Spectroscopic Explorer, has been in operation at the ESO-VLT. It combines superb spatial sampling with a large wavelength coverage. By design, MUSE is an integral-field instrument, but its field of view and large multiplex make it a powerful tool for multi-object spectroscopy too. Every data cube consists of 90,000 image-sliced spectra and 3700 monochromatic images. In autumn 2014, the observing programs with MUSE commenced, with targets ranging from distant galaxies in the Hubble Deep Field to local stellar populations, star formation regions and globular clusters. This paper provides a brief summary of the key features of the MUSE instrument and its complex data reduction software. Selected examples are given of how multi-object spectroscopy for hundreds of continuum and emission-line objects can be obtained in wide, deep and crowded fields with MUSE, without the classical need for target pre-selection.
NASA Astrophysics Data System (ADS)
Liu, Tao; Abd-Elrahman, Amr
2018-05-01
A deep convolutional neural network (DCNN) requires massive training datasets to unlock its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of Unmanned Aerial Systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNNs by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross-validation results show the mean overall classification accuracy increasing substantially from 65.32% when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of the support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the advantage of the DCNN over traditional classifiers in terms of accuracy is more obvious when these classifiers are applied within the proposed multi-view OBIA framework than within the traditional OBIA framework.
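A minimal sketch of the voting step described above (hypothetical label arrays; the full MODe pipeline also handles the mapping of each segmented object into every overlapping view):

    import numpy as np

    def vote(per_view_labels):
        # per_view_labels: class labels predicted for one object across all views
        values, counts = np.unique(per_view_labels, return_counts=True)
        return values[np.argmax(counts)]

    print(vote(np.array([2, 2, 1, 2, 0])))  # majority class -> 2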
NASA Astrophysics Data System (ADS)
Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan
2018-07-01
Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Region-based CNN (Faster R-CNN) framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net for improving the precision. We also propose an approach to reduce the test time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
HCP: A Flexible CNN Framework for Multi-label Image Classification.
Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng
2015-10-26
Convolutional Neural Networks (CNNs) have demonstrated promising performance in single-label image classification tasks. However, how a CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, a shared CNN is connected with each hypothesis, and finally the CNN output results from the different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it can naturally output multi-label prediction results. Experimental results on the Pascal VOC 2007 and VOC 2012 multi-label image datasets demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-art methods. In particular, the mAP reaches 90.5% with HCP alone and 93.2% after fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
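A minimal sketch of the cross-hypothesis max-pooling aggregation in HCP (random scores stand in for the shared-CNN outputs; the 0.5 decision threshold is an assumption):

    import numpy as np

    n_hypotheses, n_classes = 12, 20
    hypothesis_scores = np.random.rand(n_hypotheses, n_classes)  # shared-CNN score per hypothesis
    image_scores = hypothesis_scores.max(axis=0)                 # max pooling across hypotheses
    predicted_labels = np.where(image_scores > 0.5)[0]           # multi-label prediction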
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, Bau-Ching; Wang, Wei-Hao; Hsieh, Chih-Chiang
2012-12-15
We present ultra-deep J and K_S imaging observations covering a 30' × 30' area of the Extended Chandra Deep Field-South (ECDFS) carried out by our Taiwan ECDFS Near-Infrared Survey (TENIS). The median 5σ limiting magnitudes for all detected objects in the ECDFS reach 24.5 and 23.9 mag (AB) for J and K_S, respectively. In the inner 400 arcmin² region where the sensitivity is more uniform, objects as faint as 25.6 and 25.0 mag are detected at 5σ. Thus, these are by far the deepest J and K_S data sets available for the ECDFS. To combine TENIS with the Spitzer IRAC data for obtaining better spectral energy distributions of high-redshift objects, we developed a novel deconvolution technique (IRACLEAN) to accurately estimate the IRAC fluxes. IRACLEAN can minimize the effect of blending in the IRAC images caused by the large point-spread functions and reduce the confusion noise. We applied IRACLEAN to the images from the Spitzer IRAC/MUSYC Public Legacy in the ECDFS survey (SIMPLE) and generated a J+K_S-selected multi-wavelength catalog including the photometry of both the TENIS near-infrared and the SIMPLE IRAC data. We publicly release the data products derived from this work, including the J and K_S images and the J+K_S-selected multi-wavelength catalog.
AlexNet Feature Extraction and Multi-Kernel Learning for Object-Oriented Classification
NASA Astrophysics Data System (ADS)
Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.
2018-04-01
Given that deep convolutional neural networks have strong capabilities for feature learning and feature expression, we carry out exploratory research on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution over the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and a pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features and GLCM texture features were then combined through multi-kernel learning with an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference for earthquake disaster investigation and remote sensing disaster evaluation.
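A minimal sketch of fusing the three feature sets through a weighted kernel combination fed to an SVM (scikit-learn precomputed kernel; the feature dimensions and kernel weights are assumptions, whereas multi-kernel learning would estimate the weights from the data):

    import numpy as np
    from sklearn.svm import SVC

    def rbf_kernel(X, Y, gamma=0.5):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    X_spec, X_cnn, X_glcm = (np.random.rand(50, d) for d in (4, 4096, 8))
    y = np.random.randint(0, 5, 50)  # hypothetical land-cover labels

    weights = [0.3, 0.5, 0.2]        # assumed kernel weights
    K = sum(w * rbf_kernel(X, X) for w, X in zip(weights, (X_spec, X_cnn, X_glcm)))
    clf = SVC(kernel="precomputed").fit(K, y)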
Cloud Detection by Fusing Multi-Scale Convolutional Features
NASA Astrophysics Data System (ADS)
Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang
2018-04-01
Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow-covered areas and other areas covered by bright non-cloud objects. Moreover, MSCN produces more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.
Generating description with multi-feature fusion and saliency maps of image
NASA Astrophysics Data System (ADS)
Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo
2018-04-01
Generating a description for an image can be regarded as visual understanding; it spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural-sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features, but such a feature cannot adequately capture the content of images and may only focus on the object areas. We therefore add scene information to the image feature using a CNN trained on Places205. Experiments show that the model with multiple features extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights to the images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that it performs better than several state-of-the-art methods.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel is a tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict the initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
SPLASH-SXDF Multi-wavelength Photometric Catalog
NASA Astrophysics Data System (ADS)
Mehta, Vihang; Scarlata, Claudia; Capak, Peter; Davidzon, Iary; Faisst, Andreas; Hsieh, Bau Ching; Ilbert, Olivier; Jarvis, Matt; Laigle, Clotilde; Phillips, John; Silverman, John; Strauss, Michael A.; Tanaka, Masayuki; Bowler, Rebecca; Coupon, Jean; Foucaud, Sébastien; Hemmati, Shoubaneh; Masters, Daniel; McCracken, Henry Joy; Mobasher, Bahram; Ouchi, Masami; Shibuya, Takatoshi; Wang, Wei-Hao
2018-04-01
We present a multi-wavelength catalog in the Subaru/XMM-Newton Deep Field (SXDF) as part of the Spitzer Large Area Survey with Hyper-Suprime-Cam (SPLASH). We include the newly acquired optical data from the Hyper-Suprime-Cam Subaru Strategic Program, accompanied by IRAC coverage from the SPLASH survey. All available optical and near-infrared data is homogenized and resampled on a common astrometric reference frame. Source detection is done using a multi-wavelength detection image including the u-band to recover the bluest objects. We measure multi-wavelength photometry and compute photometric redshifts as well as physical properties for ∼1.17 million objects over ∼4.2 deg2, with ∼800,000 objects in the 2.4 deg2 HSC-Ultra-Deep coverage. Using the available spectroscopic redshifts from various surveys over the range of 0 < z < 6, we verify the performance of the photometric redshifts and we find a normalized median absolute deviation of 0.023 and outlier fraction of 3.2%. The SPLASH-SXDF catalog is a valuable, publicly available resource, perfectly suited for studying galaxies in the early universe and tracing their evolution through cosmic time.
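For reference, a minimal sketch of the two photo-z quality statistics quoted above (the 0.15 outlier cut is a common convention and is assumed here, not taken from the catalog paper):

    import numpy as np

    def photz_metrics(z_phot, z_spec, outlier_cut=0.15):
        dz = (z_phot - z_spec) / (1.0 + z_spec)
        nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))   # normalized median absolute deviation
        outlier_fraction = np.mean(np.abs(dz) > outlier_cut)
        return nmad, outlier_fraction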
A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.
Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous
2017-08-30
While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance, achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability, achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion, achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan
2017-11-01
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization, to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction, to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) is verified on two object detection applications: foreground object segmentation and object proposal detection.
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary code learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationships among samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized; 2) the binary codes distribute evenly on each bit; and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes in the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
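A minimal sketch of the three top-layer constraints listed above, written as penalty terms on a batch of real-valued codes (PyTorch; the shapes and equal weighting are assumptions, not the authors' exact objective):

    import torch

    def dh_regularizers(H):
        # H: (N, L) real-valued codes from the top layer of the network
        B = torch.sign(H)
        quantization = ((H - B) ** 2).mean()                        # 1) codes close to binary values
        bit_balance = (H.mean(dim=0) ** 2).sum()                    # 2) each bit evenly distributed
        corr = H.t() @ H / H.shape[0]
        independence = ((corr - torch.eye(H.shape[1])) ** 2).sum()  # 3) bits as independent as possible
        return quantization + bit_balance + independence

    loss = dh_regularizers(torch.tanh(torch.randn(32, 48)))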
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. First, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of hyperspectral images. We design a deep neural network with a multi-scale convolution layer that contains three different convolution kernel sizes. Second, to avoid overfitting of the deep neural network, we utilize dropout, which randomly deactivates neurons and slightly improves classification accuracy. In addition, techniques from deep learning such as the ReLU activation are utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
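A minimal sketch of a multi-scale convolution layer with three kernel sizes and dropout (PyTorch; the channel counts and the 103-band input are assumptions for illustration, not the authors' exact configuration):

    import torch
    import torch.nn as nn

    class MultiScaleConv(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # three parallel branches with different receptive fields
            self.branches = nn.ModuleList(
                [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)]
            )
            self.drop = nn.Dropout(0.5)  # randomly deactivates neurons to curb overfitting

        def forward(self, x):
            return self.drop(torch.cat([b(x) for b in self.branches], dim=1))

    out = MultiScaleConv(103, 32)(torch.randn(4, 103, 9, 9))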
Deep Learning for Low-Textured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most of the common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress the discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using the nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
BATMAN flies: a compact spectro-imager for space observation
NASA Astrophysics Data System (ADS)
Zamkotsian, Frederic; Ilbert, Olivier; Zoubian, Julien; Delsanti, Audrey; Boissier, Samuel; Lancon, Ariane
2014-08-01
BATMAN flies is a compact spectro-imager based on a MOEMS device for generating reconfigurable slit masks, feeding two arms in parallel. The field of view is 25 x 12 arcmin² for a 1 m telescope, in the infrared (0.85-1.7 μm) with a spectral resolution of 500-1000. Unique science cases for space observation are reachable with this deep spectroscopic multi-survey instrument: a deep survey of high-z galaxies down to H=25 over 5 deg² with continuum detection and all z>7 candidates at H=26.2 over 5 deg²; a deep survey of young stellar clusters in nearby galaxies; and a deep survey of the Kuiper Belt covering ALL known objects down to H=22. A pathfinder towards BATMAN in space is already running with ground-based demonstrators.
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) how to extract the high-level features of aircraft; 2) locating objects within such large images is difficult and time consuming; and 3) the common problem of multiple resolutions of satellite images remains. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed, which fuses a top-down visual mechanism (a deep CNN model) and a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to address the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
S-CNN: Subcategory-aware convolutional networks for object detection.
Chen, Tao; Lu, Shijian; Fan, Jiayuan
2017-09-26
The marriage between the deep convolutional neural network (CNN) and region proposals has made breakthroughs for object detection in recent years. While the discriminative object features are learned via a deep CNN for classification, the large intra-class variation and deformation still limit the performance of the CNN based object detection. We propose a subcategory-aware CNN (S-CNN) to solve the object intra-class variation problem. In the proposed technique, the training samples are first grouped into multiple subcategories automatically through a novel instance sharing maximum margin clustering process. A multi-component Aggregated Channel Feature (ACF) detector is then trained to produce more latent training samples, where each ACF component corresponds to one clustered subcategory. The produced latent samples together with their subcategory labels are further fed into a CNN classifier to filter out false proposals for object detection. An iterative learning algorithm is designed for the joint optimization of image subcategorization, multi-component ACF detector, and subcategory-aware CNN classifier. Experiments on INRIA Person dataset, Pascal VOC 2007 dataset and MS COCO dataset show that the proposed technique clearly outperforms the state-of-the-art methods for generic object detection.
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki
2017-03-01
We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolutional neural network (DCNN) and multi-spectral multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and partial-volume boundaries between air and tagging and those between soft tissue and tagging. Each DCNN acts as a voxel classifier, where an input image patch centered at the voxel is generated as input to the DCNNs. An image patch has three channels that are mapped from a region-of-interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the output of multiple DCNNs, each of which was trained with a different type of multi-spectral image patch. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.
NASA Astrophysics Data System (ADS)
Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min
2018-03-01
In the human brain, the corpus callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNNs) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because an atlas carries powerful information about the target structure, consisting of MR images and corresponding manual segmentations of that structure. We incorporated prior information, such as the location and intensity distribution of the target structure (i.e., the CC), derived from the multi-atlas images into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.
Fu, Min; Wu, Wenming; Hong, Xiafei; Liu, Qiuhua; Jiang, Jialin; Ou, Yaobin; Zhao, Yupei; Gong, Xinqi
2018-04-24
Efficient computational recognition and segmentation of target organs from medical images are foundational in diagnosis and treatment, especially for pancreatic cancer. In practice, the diversity in appearance of the pancreas and other abdominal organs makes detailed texture information of objects important in segmentation algorithms. According to our observations, however, the structures of previous networks, such as the Richer Feature Convolutional Network (RCF), are too coarse to segment the object (pancreas) accurately, especially at the edges. In this paper, we extend RCF, originally proposed for edge detection, to the challenging task of pancreas segmentation and put forward a novel pancreas segmentation network. By employing a multi-layer up-sampling structure in place of the simple up-sampling operation in all stages, the proposed network fully exploits the multi-scale detailed contextual information of the object (pancreas) to perform per-pixel segmentation. Additionally, we train our network on CT scans to obtain an effective pipeline. With our multi-layer up-sampling model, this pipeline achieves better performance than RCF in the task of single-object (pancreas) segmentation. Moreover, combined with multi-scale input, we achieve a 76.36% DSC (Dice Similarity Coefficient) on the testing data. Our experimental results show that the proposed model works better than previous networks on our dataset; in other words, it has a better ability to capture detailed contextual information. Therefore, our new single-object segmentation model has practical value for computational automatic diagnosis.
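For reference, a minimal sketch of the Dice Similarity Coefficient behind the 76.36% figure (binary masks assumed):

    import numpy as np

    def dice(pred, truth):
        # pred, truth: binary segmentation masks of the same shape
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)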
On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies
NASA Astrophysics Data System (ADS)
LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.
2017-12-01
The number of different remote-sensing modalities has been on the rise, resulting in large datasets with different complexity levels. Such complex datasets can provide valuable information separately, yet there is greater value in having a comprehensive view of them combined, since hidden information can be deduced by applying data mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders our ability to understand them deeply, because each dataset requires instrument-specific and dataset-specific knowledge for optimal and meaningful usage. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets correctly and effectively is needed. Although data-centric techniques exist, generic automated methodologies that can solve this problem completely do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities. Such a system will provide an analysis environment that gives the user useful feedback and can aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with the task of labeling input data, while still allowing for a detailed machine understanding of the data. Our goal is to be able to track objects, such as cloud systems or aerosols, across different image-like data modalities. The proposed system is designed to be flexible, scalable and robust in understanding complex likenesses within multi-modal data in a similar spatio-temporal range, and to co-register and fuse these images when needed.
Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning.
Dong, Pei; Guo, Yangrong; Gao, Yue; Liang, Peipeng; Shi, Yonghong; Wang, Qian; Shen, Dinggang; Wu, Guorong
2016-10-01
Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in various neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. Hence, the ambiguity of patch-wise similarity makes it difficult for the recently successful multi-atlas patch-based label fusion methods to perform as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Specifically, we achieve this goal in three ways. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches with the benefit of harnessing population priors from the multi-atlas framework. Second, besides using low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationships. Since the context features are calculated on a tentatively estimated label probability map, we eventually turn our hyper-graph learning based label propagation into a deep and self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on other voxels (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to the nearby difficult-to-label voxels. Such a hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate our proposed label fusion method in segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over the state-of-the-art label fusion methods.
Kandukuri, Jayanth; Yu, Shuai; Cheng, Bingbing; Bandi, Venugopal; D’Souza, Francis; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong
2017-01-01
Simultaneous imaging of multiple targets (SIMT) in opaque biological tissues is an important goal for molecular imaging in the future. Multi-color fluorescence imaging in deep tissues is a promising technology to reach this goal. In this work, we developed a dual-modality imaging system by combining our recently developed ultrasound-switchable fluorescence (USF) imaging technology with the conventional ultrasound (US) B-mode imaging. This dual-modality system can simultaneously image tissue acoustic structure information and multi-color fluorophores in centimeter-deep tissue with comparable spatial resolutions. To conduct USF imaging on the same plane (i.e., x-z plane) as US imaging, we adopted two 90°-crossed ultrasound transducers with an overlapped focal region, while the US transducer (the third one) was positioned at the center of these two USF transducers. Thus, the axial resolution of USF is close to the lateral resolution, which allows a point-by-point USF scanning on the same plane as the US imaging. Both multi-color USF and ultrasound imaging of a tissue phantom were demonstrated. PMID:28165390
NASA Astrophysics Data System (ADS)
Niculescu, S.; Ienco, D.; Hanganu, J.
2018-04-01
Land cover is a fundamental variable for regional planning, as well as for the study and understanding of the environment. This work proposes a multi-temporal approach relying on the fusion of multi-sensor radar data and information collected by the latest sensor (Sentinel-1), with a view to obtaining better results than traditional image processing techniques. The Danube Delta is the study site for this work. The spatial approach relies on new spatial analysis technologies and methodologies: deep learning of multi-temporal Sentinel-1 data. We propose a deep learning network for image classification which exploits the multi-temporal nature of Sentinel-1 data. The model we employ is a Gated Recurrent Unit (GRU) network, a recurrent neural network that explicitly takes the time dimension into account via a gated mechanism to perform the final prediction. The main quality of the GRU network is its ability to retain only the important part of the information coming from the temporal data, discarding irrelevant information via a forgetting mechanism. We use this network structure to classify a series of Sentinel-1 images (20 Sentinel-1 images acquired between 9.10.2014 and 01.04.2016). The results are compared with those of a Random Forest classification.
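A minimal sketch of a GRU classifier over a per-pixel Sentinel-1 time series (PyTorch; the 20 dates, two polarization bands, hidden size and class count are assumptions for illustration, not the authors' exact model):

    import torch
    import torch.nn as nn

    class GRULandCover(nn.Module):
        def __init__(self, n_bands=2, hidden=64, n_classes=8):
            super().__init__()
            self.gru = nn.GRU(n_bands, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):        # x: (batch, time steps, bands)
            _, h = self.gru(x)       # final hidden state summarizes the series
            return self.fc(h.squeeze(0))

    logits = GRULandCover()(torch.randn(16, 20, 2))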
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines a deep learning method and multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels in order to obtain preliminary segmentation results. The CNN can automatically learn deep features adapted to the data, unlike handcrafted features. Finally, we select several similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
Bae, Seung-Hwan; Yoon, Kuk-Jin
2018-03-01
Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to that moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects across consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose the multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide representations rich enough to distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
Data Mining Research with the LSST
NASA Astrophysics Data System (ADS)
Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.
2007-12-01
The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain ultra-deep multi-band survey database. Data Mining, Machine Learning, and Knowledge Discovery research opportunities with the LSST are now under study, with the potential for new collaborations to develop and contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; visual data mining algorithms for visual exploration of the data; indexing of multi-attribute multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.
Identification of autism spectrum disorder using deep learning and the ABIDE dataset.
Heinsfeld, Anibal Sólon; Franco, Alexandre Rosa; Craddock, R Cameron; Buchweitz, Augusto; Meneguzzi, Felipe
2018-01-01
The goal of the present study was to apply deep learning algorithms to identify autism spectrum disorder (ASD) patients from a large brain imaging dataset, based solely on the patients' brain activation patterns. We investigated ASD patients' brain imaging data from a world-wide multi-site database known as ABIDE (Autism Brain Imaging Data Exchange). ASD is a brain-based disorder characterized by social deficits and repetitive behaviors. According to recent Centers for Disease Control data, ASD affects one in 68 children in the United States. We investigated patterns of functional connectivity that objectively identify ASD participants from functional brain imaging data, and attempted to unveil the neural patterns that emerged from the classification. The results improved the state of the art by achieving 70% accuracy in the identification of ASD versus control patients in the dataset. The patterns that emerged from the classification show an anticorrelation of brain function between anterior and posterior areas of the brain; this anticorrelation corroborates current empirical evidence of anterior-posterior disruption in brain connectivity in ASD. We present the results and identify the areas of the brain that contributed most to differentiating ASD from typically developing controls as per our deep learning model.
The Great Observatories Origins Deep Survey (GOODS): Overview and Status
NASA Astrophysics Data System (ADS)
Hook, R. N.; GOODS Team
2002-12-01
GOODS is a very large project to gather deep imaging data and spectroscopic followup of two fields, the Hubble Deep Field North (HDF-N) and the Chandra Deep Field South (CDF-S), with both space and ground-based instruments to create an extensive multiwavelength public data set for community research on the distant Universe. GOODS includes a SIRTF Legacy Program (PI: Mark Dickinson) and a Hubble Treasury Program of ACS imaging (PI: Mauro Giavalisco). The ACS imaging was also optimized for the detection of high-z supernovae which are being followed up by a further target of opportunity Hubble GO Program (PI: Adam Riess). The bulk of the CDF-S ground-based data presently available comes from an ESO Large Programme (PI: Catherine Cesarsky) which includes both deep imaging and multi-object followup spectroscopy. This is currently complemented in the South by additional CTIO imaging. Currently available HDF-N ground-based data forming part of GOODS includes NOAO imaging. Although the SIRTF part of the survey will not begin until later in the year, the ACS imaging is well advanced and there is also a huge body of complementary ground-based imaging and some follow-up spectroscopy which is already publicly available. We summarize the current status of GOODS and give an overview of the data products currently available and present the timescales for the future. Many early science results from the survey are presented in other GOODS papers at this meeting. Support for the HST GOODS program presented here and in companion abstracts was provided by NASA through grant number GO-9425 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.
Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos
2015-01-01
Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement over traditional techniques can be inferred from the results.
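To make the idea above concrete, here is a minimal sketch of a multi-objective simulated annealing loop, assuming two stand-in objectives (a least-squares data misfit and a smoothness regularizer) in place of the EIT forward-model terms, and a random-scalarization acceptance rule; the paper's actual objectives, cooling schedule, and archive handling are not reproduced.

```python
import numpy as np

def pareto_dominates(a, b):
    """True if objective vector a dominates b (all <=, at least one <)."""
    return np.all(a <= b) and np.any(a < b)

def mo_simulated_annealing(f1, f2, x0, n_iter=5000, t0=1.0, cooling=0.999, step=0.05, rng=None):
    """Toy multi-objective simulated annealing: keeps an archive of
    non-dominated solutions instead of committing to one regularization weight."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = np.array([f1(x), f2(x)])
    archive = [(x.copy(), fx.copy())]
    t = t0
    for _ in range(n_iter):
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = np.array([f1(cand), f2(cand)])
        # Accept if the candidate dominates the current point, or probabilistically
        # under a randomly scalarized objective (keeps the search exploring the front).
        w = rng.random()
        delta = w * (fc[0] - fx[0]) + (1 - w) * (fc[1] - fx[1])
        if pareto_dominates(fc, fx) or rng.random() < np.exp(-max(delta, 0.0) / t):
            x, fx = cand, fc
            archive = [(xa, fa) for xa, fa in archive if not pareto_dominates(fx, fa)]
            if not any(pareto_dominates(fa, fx) for _, fa in archive):
                archive.append((x.copy(), fx.copy()))
        t *= cooling
    return archive

# Hypothetical objectives standing in for EIT data misfit and smoothness regularization.
A = np.random.default_rng(0).normal(size=(20, 10))
y = np.random.default_rng(1).normal(size=20)
f_misfit = lambda x: float(np.sum((A @ x - y) ** 2))
f_reg = lambda x: float(np.sum(np.diff(x) ** 2))
front = mo_simulated_annealing(f_misfit, f_reg, x0=np.zeros(10))
```

The point of keeping a non-dominated archive is that no single regularization weight has to be fixed in advance; the archive approximates the trade-off curve between data fit and image regularity.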
COMPACT NON-CONTACT TOTAL EMISSION DETECTION FOR IN-VIVO MULTI-PHOTON EXCITATION MICROSCOPY
Glancy, Brian; Karamzadeh, Nader S.; Gandjbakhche, Amir H.; Redford, Glen; Kilborn, Karl; Knutson, Jay R.; Balaban, Robert S.
2014-01-01
We describe a compact, non-contact design for a Total Emission Detection (c-TED) system for intra-vital multi-photon imaging. To conform to a standard upright two-photon microscope design, this system uses a parabolic mirror surrounding a standard microscope objective in concert with an optical path that does not interfere with normal microscope operation. The non-contact design of this device allows for maximal light collection without disrupting the physiology of the specimen being examined. Tests were conducted on exposed tissues in live animals to examine the emission collection enhancement of the c-TED device compared to heavily optimized objective-based emission collection. The best light collection enhancement was seen from murine fat (5×-2× gains as a function of depth), while murine skeletal muscle and rat kidney showed gains of over two and just under two-fold near the surface, respectively. Gains decreased with imaging depth (particularly in the kidney). Zebrafish imaging on a reflective substrate showed close to a two-fold gain throughout the entire volume of an intact embryo (approximately 150 μm deep). Direct measurement of bleaching rates confirmed that the lower laser powers (enabled by greater light collection efficiency) yielded reduced photobleaching in vivo. The potential benefits of increased light collection in terms of speed of imaging and reduced photo-damage, as well as the applicability of this device to other multi-photon imaging methods, are discussed. PMID:24251437
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each one involving a combination of several elementary or intermediate kernels and resulting in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gain, compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
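As an illustration of the recursive construction, the following sketch builds a two-layer deep kernel from two elementary RBF kernels, using an element-wise exponential as the nonlinear activation (element-wise exponentiation of a positive semi-definite kernel combined with non-negative weights preserves positive semi-definiteness). The fixed weights and the choice of activation here are assumptions for illustration, not the paper's learned parameters.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def deep_kernel(X, Y, weights_l1, weights_l2):
    """Two-layer deep multiple kernel: nonlinear (exponential) activations applied to
    non-negative combinations of elementary kernels, then combined again at layer 2."""
    base = [rbf_kernel(X, Y, gamma=0.1), rbf_kernel(X, Y, gamma=1.0)]
    # Layer 1: each unit is an activation of a weighted combination of base kernels.
    layer1 = [np.exp(sum(w * k for w, k in zip(ws, base))) for ws in weights_l1]
    # Layer 2: final kernel is an activation of a combination of layer-1 units.
    return np.exp(sum(w * k for w, k in zip(weights_l2, layer1)))

X = np.random.default_rng(0).normal(size=(30, 5))
K = deep_kernel(X, X, weights_l1=[[0.7, 0.3], [0.2, 0.8]], weights_l2=[0.5, 0.5])
# K can be plugged into an SVM with a precomputed kernel, e.g.
# sklearn.svm.SVC(kernel="precomputed").fit(K, labels).
```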
Zeng, Ling-Li; Wang, Huaning; Hu, Panpan; Yang, Bo; Pu, Weidan; Shen, Hui; Chen, Xingui; Liu, Zhening; Yin, Hong; Tan, Qingrong; Wang, Kai; Hu, Dewen
2018-04-01
A lack of a sufficiently large sample at single sites causes poor generalizability in automatic diagnosis classification of heterogeneous psychiatric disorders such as schizophrenia based on brain imaging scans. Advanced deep learning methods may be capable of learning subtle hidden patterns from high dimensional imaging data, overcome potential site-related variation, and achieve reproducible cross-site classification. However, deep learning-based cross-site transfer classification, despite less imaging site-specificity and more generalizability of diagnostic models, has not been investigated in schizophrenia. A large multi-site functional MRI sample (n = 734, including 357 schizophrenic patients from seven imaging resources) was collected, and a deep discriminant autoencoder network, aimed at learning imaging site-shared functional connectivity features, was developed to discriminate schizophrenic individuals from healthy controls. Accuracies of approximately 85.0% and 81.0% were obtained in multi-site pooling classification and leave-site-out transfer classification, respectively. The learned functional connectivity features revealed dysregulation of the cortical-striatal-cerebellar circuit in schizophrenia, and the most discriminating functional connections were primarily located within and across the default, salience, and control networks. The findings imply that dysfunctional integration of the cortical-striatal-cerebellar circuit across the default, salience, and control networks may play an important role in the "disconnectivity" model underlying the pathophysiology of schizophrenia. The proposed discriminant deep learning method may be capable of learning reliable connectome patterns and help in understanding the pathophysiology and achieving accurate prediction of schizophrenia across multiple independent imaging sites. Copyright © 2018 German Center for Neurodegenerative Diseases (DZNE). Published by Elsevier B.V. All rights reserved.
Accurate segmentation of lung fields on chest radiographs using deep convolutional networks
NASA Astrophysics Data System (ADS)
Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory
2017-02-01
Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape and texture of lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis in which the lung field segmentation is a significant primary step. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
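For reference, the intersection-over-union score used to evaluate such segmentations can be computed as below; this is the standard definition, with toy masks standing in for the predicted and manual lung fields.

```python
import numpy as np

def intersection_over_union(pred_mask, true_mask):
    """IOU between two binary lung-field masks (arrays of 0/1)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union if union > 0 else 1.0

# Toy example: two overlapping rectangular "lung" masks.
a = np.zeros((256, 256), dtype=np.uint8); a[40:200, 30:120] = 1
b = np.zeros((256, 256), dtype=np.uint8); b[50:210, 35:125] = 1
print(round(intersection_over_union(a, b), 3))
```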
Sensing Urban Land-Use Patterns by Integrating Google Tensorflow and Scene-Classification Models
NASA Astrophysics Data System (ADS)
Yao, Y.; Liang, H.; Li, X.; Zhang, J.; He, J.
2017-09-01
With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolutional neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.
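The pattern of pairing a pre-trained CNN feature extractor with a random-forest classifier can be sketched as follows; the feature vectors here are random placeholders standing in for CNN activations on image patches, and the class count and forest size are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder: in the described pipeline these vectors would come from an
# ImageNet-pretrained CNN applied to land-use image patches (e.g. 2048-D features).
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2048))
labels = rng.integers(0, 6, size=500)          # hypothetical land-use classes

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("OA:", accuracy_score(y_te, pred), "Kappa:", cohen_kappa_score(y_te, pred))
```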
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.
Zhong, Jiandan; Lei, Tao; Yao, Guangle
2017-11-24
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks
Zhong, Jiandan; Lei, Tao; Yao, Guangle
2017-01-01
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed. PMID:29186756
Dynamics, Chemical Abundances, and ages of Globular Clusters in the Virgo Cluster of Galaxies
NASA Astrophysics Data System (ADS)
Guhathakurta, Puragra; NGVS Collaboration
2018-01-01
We present a study of the dynamics, metallicities, and ages of globular clusters (GCs) in the Next Generation Virgo cluster Survey (NGVS), a deep, multi-band (u, g, r, i, z, and Ks), wide-field (104 deg2) imaging survey carried out using the 3.6-m Canada-France-Hawaii Telescope and MegaCam imager. GC candidates were selected from the NGVS survey using photometric and image morphology criteria and these were followed up with deep, medium-resolution, multi-object spectroscopy using the Keck II 10-m telescope and DEIMOS spectrograph. The primary spectroscopic targets were candidate GC satellites of dwarf elliptical (dE) and ultra-diffuse galaxies (UDGs) in the Virgo cluster. While many objects were confirmed as GC satellites of Virgo dEs and UDGs, many turned out to be non-satellites based on their radial velocity and/or positional mismatch with any identifiable Virgo cluster galaxy. We have used a combination of spectral characteristics (e.g., presence of absorption vs. emission lines), new Gaussian mixture modeling of radial velocity and sky position data, and a new extreme deconvolution analysis of ugrizKs photometry and image morphology, to classify all the objects in our sample into: (1) GC satellites of dE galaxies, (2) GC satellites of UDGs, (3) intra-cluster GCs (ICGCs) in the Virgo cluster, (4) GCs in the outer halo of the central cluster galaxy M87, (5) foreground Milky Way stars, and (6) distant background galaxies. We use these data to study the dynamics and dark matter content of dE and UDGs in the Virgo cluster, place important constraints on the nature of dE nuclei, and study the origin of ICGCs versus GCs in the remote M87 halo. We are grateful for financial support from the NSF and NASA/STScI.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Qu, Xiaobo; He, Yifan
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides multi-scale context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides multi-scale context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
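A minimal PyTorch sketch of the competitive multi-scale idea is given below: parallel convolutions with different kernel sizes feed an element-wise maximum, so the most responsive scale wins at each position. Layer counts, channel widths, and the residual output are illustrative assumptions and do not reproduce the authors' exact network.

```python
import torch
import torch.nn as nn

class MultiScaleCompetitiveConv(nn.Module):
    """Parallel 3x3/5x5/7x7 convolutions; the element-wise max over scales
    implements the 'competitive' selection among multi-scale filters."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        )
    def forward(self, x):
        outs = torch.stack([b(x) for b in self.branches], dim=0)
        return outs.max(dim=0).values

class ToySRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            MultiScaleCompetitiveConv(1, 32), nn.ReLU(inplace=True),
            MultiScaleCompetitiveConv(32, 32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, lr_image):
        # Residual prediction on a pre-upsampled low-resolution input.
        return lr_image + self.body(lr_image)

y = ToySRNet()(torch.randn(1, 1, 48, 48))
print(y.shape)  # torch.Size([1, 1, 48, 48])
```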
[Medical computer-aided detection method based on deep learning].
Tao, Pan; Fu, Zhongliang; Zhu, Kai; Wang, Lili
2018-03-01
This paper presents a comprehensive study of computer-aided detection for medical diagnosis with deep learning. Based on the region-based convolutional neural network and prior knowledge of the target, the algorithm uses a region proposal network and a region-of-interest pooling strategy, introduces a multi-task loss function (classification loss, bounding-box localization loss, and object rotation loss), and optimizes it end-to-end. For medical images, it locates the target automatically and provides the localization result for the next-stage task of segmentation. For the detection of the left ventricle in echocardiography, additional landmarks such as the mitral annulus, endocardial pad, and apical position were used to estimate the left ventricular posture effectively. In order to verify the robustness and effectiveness of the algorithm, experimental data from ultrasound and nuclear magnetic resonance images were selected. Experimental results show that the algorithm is fast, accurate and effective.
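A hedged sketch of the multi-task objective described above is shown below, combining a classification term, a bounding-box localization term, and an object-rotation term into a single end-to-end loss; the particular loss functions (cross-entropy and smooth L1) and the weighting factors are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def multi_task_detection_loss(cls_logits, cls_targets,
                              box_preds, box_targets,
                              rot_preds, rot_targets,
                              w_box=1.0, w_rot=1.0):
    """Classification + bounding-box localization + object-rotation loss,
    combined into one objective that can be optimized end-to-end."""
    loss_cls = F.cross_entropy(cls_logits, cls_targets)
    loss_box = F.smooth_l1_loss(box_preds, box_targets)
    loss_rot = F.smooth_l1_loss(rot_preds, rot_targets)   # rotation regressed, e.g. as an angle
    return loss_cls + w_box * loss_box + w_rot * loss_rot

# Toy batch of 4 region proposals.
loss = multi_task_detection_loss(
    torch.randn(4, 2), torch.tensor([0, 1, 1, 0]),
    torch.randn(4, 4), torch.randn(4, 4),
    torch.randn(4, 1), torch.randn(4, 1),
)
print(float(loss))
```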
Ship detection leveraging deep neural networks in WorldView-2 images
NASA Astrophysics Data System (ADS)
Yamamoto, T.; Kazama, Y.
2017-10-01
Interpretation of high-resolution satellite images is difficult enough that skilled interpreters have had to check the images manually, because of the following issues. One is the requirement for a high detection accuracy rate. The other is the variety of targets: taking ships as an example, there are many kinds, such as boats, cruise ships, cargo ships, and aircraft carriers. Furthermore, objects of similar appearance occur throughout the image; therefore, it is often difficult even for skilled interpreters to determine which object a given group of pixels really composes. In this paper, we explore the feasibility of object extraction leveraging deep learning with high-resolution satellite images, especially focusing on ship detection. We calculated the detection accuracy using the WorldView-2 images. First, we collected the training images labelled as "ship" and "not ship". After preparing the training data, we defined a deep neural network model to judge whether ships are present, and trained it with about 50,000 training images for each label. Subsequently, we scanned the evaluation image with different resolution windows and extracted the "ship" images. Experimental results show the effectiveness of the deep learning based object detection.
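The multi-resolution window scan described above can be sketched as follows; the patch classifier here is a trivial stand-in for the trained deep network, and the window sizes, stride, and threshold are illustrative assumptions.

```python
import numpy as np

def scan_for_ships(image, classify_patch, window_sizes=(32, 64, 96), stride_frac=0.5):
    """Slide windows of several sizes over the scene and keep windows that the
    classifier labels as 'ship'.  classify_patch stands in for the trained deep
    network and returns a ship probability for a patch."""
    h, w = image.shape[:2]
    detections = []
    for win in window_sizes:
        stride = max(1, int(win * stride_frac))
        for top in range(0, h - win + 1, stride):
            for left in range(0, w - win + 1, stride):
                patch = image[top:top + win, left:left + win]
                p_ship = classify_patch(patch)
                if p_ship > 0.5:
                    detections.append((top, left, win, p_ship))
    return detections

# Hypothetical brightness-based stand-in classifier, just to make the sketch runnable.
dummy_classifier = lambda patch: float(patch.mean() > 0.8)
scene = np.random.default_rng(0).random((512, 512))
print(len(scan_for_ships(scene, dummy_classifier)))
```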
Fully Convolutional Neural Networks Improve Abdominal Organ Segmentation.
Bobo, Meg F; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G; Hilmes, Melissa A; Landman, Bennett A
2018-03-01
Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model and illustrate the potential of deep learning approaches to transcend imaging modalities.
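The Dice similarity coefficient used to report these results is twice the overlap divided by the sum of the two mask volumes; a small sketch with toy volumes follows.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient (DSC) between binary organ masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    denom = pred.sum() + true.sum()
    return 2.0 * np.logical_and(pred, true).sum() / denom if denom > 0 else 1.0

# Toy 3D volumes standing in for a predicted and a manually labeled spleen.
a = np.zeros((64, 64, 32), dtype=np.uint8); a[10:40, 10:40, 5:20] = 1
b = np.zeros((64, 64, 32), dtype=np.uint8); b[12:42, 12:42, 6:22] = 1
print(round(dice_coefficient(a, b), 3))
```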
Fully convolutional neural networks improve abdominal organ segmentation
NASA Astrophysics Data System (ADS)
Bobo, Meg F.; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J.; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G.; Hilmes, Melissa A.; Landman, Bennett A.
2018-03-01
Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model and illustrate the potential of deep learning approaches to transcend imaging modalities.
A deep learning approach for pose estimation from volumetric OCT data.
Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander
2018-05-01
Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
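A minimal sketch of a 3D CNN performing multi-output pose regression is given below; the layer sizes are placeholders and the architecture is not the Inception3D network of the paper, only an illustration of regressing six pose values directly from a volume.

```python
import torch
import torch.nn as nn

class PoseRegressor3D(nn.Module):
    """Small 3D CNN regressing a 6-D pose (three translations and three rotation
    angles) directly from an OCT-like volume; sizes are illustrative placeholders."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 6)   # multi-output regression: position + orientation

    def forward(self, volume):
        f = self.features(volume).flatten(1)
        return self.head(f)

pose = PoseRegressor3D()(torch.randn(2, 1, 32, 32, 32))
print(pose.shape)  # torch.Size([2, 6])
```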
Detecting Water Bodies in LANDSAT8 Oli Image Using Deep Learning
NASA Astrophysics Data System (ADS)
Jiang, W.; He, G.; Long, T.; Ni, Y.
2018-04-01
Water body identification is critical to studies of climate change, water resources, ecosystem services and the hydrological cycle. The multi-layer perceptron (MLP) is a popular and classic method within the deep learning framework for target detection and image classification. Therefore, this study adopts this method to identify water bodies in Landsat 8 imagery. To compare classification performance, maximum likelihood classification and a water index are also employed for each study area. The classification results are evaluated using accuracy indices and local comparison. The evaluation shows that the multi-layer perceptron achieves better performance than the other two methods. Moreover, thin water bodies can also be clearly identified by the multi-layer perceptron. The proposed method has application potential for mapping global-scale surface water with multi-source medium-high resolution satellite data.
Autofocusing in digital holography using deep learning
NASA Astrophysics Data System (ADS)
Ren, Zhenbo; Xu, Zhimin; Lam, Edmund Y.
2018-02-01
In digital holography, it is critical to know the reconstruction distance in order to reconstruct the multi-sectional object. This autofocusing is traditionally solved by reconstructing a stack of in-focus and out-of-focus images and using some focus metric, such as entropy or variance, to calculate the sharpness of each reconstructed image. Then the distance corresponding to the sharpest image is determined as the focal position. This method is effective but computationally demanding and time-consuming. To get an accurate estimation, one has to reconstruct many images, and sometimes a refinement is needed after a coarse search. To overcome this problem, we propose to use deep learning, i.e., a convolutional neural network (CNN), for autofocusing. Autofocusing is viewed as a classification problem in which the true distance is treated as a label; estimating the distance is thus equivalent to labeling a hologram correctly. To train such an algorithm, a total of 1000 holograms were captured under the same conditions (exposure time, incident angle, object), varying only the distance. There are 5 labels corresponding to 5 distances. These data are randomly split into three datasets to train, validate and test a CNN network. Experimental results show that the trained network is capable of predicting the distance without reconstructing or knowing any physical parameters about the setup. The prediction time using this method is far less than that of traditional autofocusing methods.
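For contrast with the learning-based approach, the traditional metric-based autofocusing baseline described above can be sketched as follows; the reconstruction routine here is a stand-in that simply blurs the input more as the candidate distance moves away from a hypothetical true focus, and the normalized-variance metric is one common choice among several.

```python
import numpy as np

def normalized_variance(img):
    """A common sharpness metric: intensity variance normalized by the mean."""
    m = img.mean()
    return img.var() / m if m > 0 else 0.0

def autofocus_by_metric(hologram, reconstruct, distances):
    """Traditional autofocusing: reconstruct the hologram at each candidate
    distance, score sharpness, and return the distance of the sharpest image.
    'reconstruct' stands in for the numerical propagation routine."""
    scores = [normalized_variance(reconstruct(hologram, d)) for d in distances]
    return distances[int(np.argmax(scores))], scores

# Stand-in propagation: blur grows with distance from a hypothetical true focus at 30 mm.
def fake_reconstruct(holo, d, true_focus=30.0):
    k = int(1.0 + abs(d - true_focus))
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, holo)

holo = np.random.default_rng(0).random((64, 64))
best_d, _ = autofocus_by_metric(holo, fake_reconstruct, distances=[10, 20, 30, 40, 50])
print(best_d)
```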
Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...
2016-02-09
In this paper, we present the results of approximately three years of observations of Planck Sunyaev-Zeldovich (SZ) sources with telescopes at the Canary Islands observatories as part of the general optical follow-up programme undertaken by the Planck Collaboration. In total, 78 SZ sources are discussed. Deep-imaging observations were obtained for most of these sources; spectroscopic observations in either long-slit or multi-object modes were obtained for many. We effectively used 37.5 clear nights. We found optical counterparts for 73 of the 78 candidates. This sample includes 53 spectroscopic redshift determinations, 20 of them obtained with a multi-object spectroscopic mode. Finally, the sample contains new redshifts for 27 Planck clusters that were not included in the first Planck SZ source catalogue (PSZ1).
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
Choi, Joon Yul; Yoo, Tae Keun; Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek
2017-01-01
Deep learning is emerging as a powerful tool for analyzing medical images. Retinal disease detection using computer-aided diagnosis from fundus images has emerged as a new method. We applied a deep learning convolutional neural network, using MatConvNet, for automated detection of multiple retinal diseases from fundus photographs in the STructured Analysis of the REtina (STARE) database. The dataset was built by expanding data across 10 categories, including normal retina and nine retinal diseases. The optimal outcomes were acquired by using random forest transfer learning based on the VGG-19 architecture. The classification results depended greatly on the number of categories. As the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained results with an accuracy of 30.5%, relative classifier information (RCI) of 0.052, and Cohen's kappa of 0.224. When only three categories were considered (normal, background diabetic retinopathy, and dry age-related macular degeneration), the multi-categorical classifier showed an accuracy of 72.8%, 0.283 RCI, and 0.577 kappa. In addition, several ensemble classifiers enhanced the multi-categorical classification performance. The transfer learning incorporated with an ensemble classifier using a clustering and voting approach presented the best performance, with an accuracy of 36.7%, 0.053 RCI, and 0.225 kappa on the 10 retinal diseases classification problem. First, due to the small size of the datasets, the deep learning techniques in this study were not effective enough for application in clinics where numerous patients suffering from various types of retinal disorders visit for diagnosis and treatment. Second, we found that transfer learning incorporated with ensemble classifiers can improve classification performance for detecting multi-categorical retinal diseases. Further studies should confirm the effectiveness of the algorithms with large datasets obtained from hospitals.
NASA Astrophysics Data System (ADS)
Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell
2017-03-01
Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Detrick, R. S.; Clark, D.; Gaylord, A.; Goldsmith, R.; Helly, J.; Lemmond, P.; Lerner, S.; Maffei, A.; Miller, S. P.; Norton, C.; Walden, B.
2005-12-01
The Scripps Institution of Oceanography (SIO) and the Woods Hole Oceanographic Institution (WHOI) have joined forces with the San Diego Supercomputer Center to build a testbed for multi-institutional archiving of shipboard and deep submergence vehicle data. Support has been provided by the Digital Archiving and Preservation program funded by NSF/CISE and the Library of Congress. In addition to the more than 92,000 objects stored in the SIOExplorer Digital Library, the testbed will provide access to data, photographs, video images and documents from WHOI ships, Alvin submersible and Jason ROV dives, and deep-towed vehicle surveys. An interactive digital library interface will allow combinations of distributed collections to be browsed, metadata inspected, and objects displayed or selected for download. The digital library architecture, and the search and display tools of the SIOExplorer project, are being combined with WHOI tools, such as the Alvin Framegrabber and the Jason Virtual Control Van, that have been designed using WHOI's GeoBrowser to handle the vast volumes of digital video and camera data generated by Alvin, Jason and other deep submergence vehicles. Notions of scalability will be tested, as data volumes range from 3 CDs per cruise to 200 DVDs per cruise. Much of the scalability of this proposal comes from an ability to attach digital library data and metadata acquisition processes to diverse sensor systems. We are able to run an entire digital library from a laptop computer as well as from supercomputer-center-size resources. It can be used, in the field, laboratory or classroom, covering data from acquisition-to-archive using a single coherent methodology. The design is an open architecture, supporting applications through well-defined external interfaces maintained as an open-source effort for community inclusion and enhancement.
Deep learning-based artificial vision for grasp classification in myoelectric hands
NASA Astrophysics Data System (ADS)
Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush
2017-06-01
Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection can quantify temporal effects on urban areas for urban evolution studies or damage assessment in disaster cases. In this context, change analysis might involve using the available satellite images at different resolutions for quick responses. In this paper, to avoid the resampling outcomes and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. Therefore, the proposed methodology can deal with different pixel sizes when identifying new and demolished buildings in urban areas using geometric properties of the objects of interest. After rectifying the multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. Then, new and demolished buildings are identified from the obtained distances that are greater than the RMS value (i.e., no match at the same location).
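The Centroid-Coincident Matching step can be sketched as a nearest-centroid search with a distance threshold; the coordinates below are made-up examples and the threshold stands in for the registration RMS value.

```python
import numpy as np

def match_buildings(centroids_t0, centroids_t1, max_dist):
    """Centroid-coincident matching: a T0 building with no T1 centroid within
    max_dist is flagged 'demolished'; a T1 building with no T0 match is 'new'."""
    c0 = np.asarray(centroids_t0, dtype=float)
    c1 = np.asarray(centroids_t1, dtype=float)
    dists = np.linalg.norm(c0[:, None, :] - c1[None, :, :], axis=-1)
    demolished = [i for i in range(len(c0)) if dists[i].min() > max_dist]
    new = [j for j in range(len(c1)) if dists[:, j].min() > max_dist]
    return demolished, new

t0 = [(10.0, 12.0), (55.0, 40.0), (80.0, 21.0)]
t1 = [(10.5, 11.8), (120.0, 60.0)]              # one building persists, one is new
print(match_buildings(t0, t1, max_dist=2.0))    # (demolished indices, new indices)
```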
A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification
NASA Astrophysics Data System (ADS)
He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue
2014-11-01
In this paper, a deep model based on wavelet texture is proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by recent successful deep learning methods. Our model is intended to learn powerful and informative representations to improve generalization ability for complex scene classification tasks. Given the influence of speckle noise in Polarimetric SAR images, wavelet polarization decomposition is applied first to obtain basic and discriminative texture features, which are then embedded into a Deep Neural Network (DNN) in order to compose multi-layer, higher-level representations. We demonstrate that the model can produce a powerful representation which captures some information from Polarimetric SAR images that is otherwise difficult to trace, and it shows promising results in comparison with traditional SAR image classification methods on the SAR image dataset.
NASA Astrophysics Data System (ADS)
Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin
2016-09-01
Human fatigue is an important cause of traffic accidents. To improve transportation safety, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmarks and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities together and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy reaches 96.2%.
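A minimal PyTorch sketch of the bimodal fusion architecture is given below; the encoders are plain feed-forward networks rather than pretrained stacked autoencoders, and the feature dimensions are assumptions, so this only illustrates the landmark/texture fusion through a joint layer.

```python
import torch
import torch.nn as nn

class BimodalFatigueNet(nn.Module):
    """Two modality-specific encoders (landmark vector, eye-texture vector) whose
    codes are fused by a joint layer feeding the fatigue/non-fatigue classifier."""
    def __init__(self, landmark_dim=136, texture_dim=256, code_dim=64):
        super().__init__()
        self.enc_landmark = nn.Sequential(
            nn.Linear(landmark_dim, 128), nn.ReLU(), nn.Linear(128, code_dim), nn.ReLU())
        self.enc_texture = nn.Sequential(
            nn.Linear(texture_dim, 128), nn.ReLU(), nn.Linear(128, code_dim), nn.ReLU())
        self.joint = nn.Sequential(nn.Linear(2 * code_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(64, 2)   # fatigue vs. non-fatigue

    def forward(self, landmarks, texture):
        z = torch.cat([self.enc_landmark(landmarks), self.enc_texture(texture)], dim=1)
        return self.classifier(self.joint(z))

logits = BimodalFatigueNet()(torch.randn(8, 136), torch.randn(8, 256))
print(logits.shape)  # torch.Size([8, 2])
```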
Dual deep modeling: multi-level modeling with dual potencies and its formalization in F-Logic.
Neumayr, Bernd; Schuetz, Christoph G; Jeusfeld, Manfred A; Schrefl, Michael
2018-01-01
An enterprise database contains a global, integrated, and consistent representation of a company's data. Multi-level modeling facilitates the definition and maintenance of such an integrated conceptual data model in a dynamic environment of changing data requirements of diverse applications. Multi-level models transcend the traditional separation of class and object with clabjects as the central modeling primitive, which allows for a more flexible and natural representation of many real-world use cases. In deep instantiation, the number of instantiation levels of a clabject or property is indicated by a single potency. Dual deep modeling (DDM) differentiates between source potency and target potency of a property or association and supports the flexible instantiation and refinement of the property by statements connecting clabjects at different modeling levels. DDM comes with multiple generalization of clabjects, subsetting/specialization of properties, and multi-level cardinality constraints. Examples are presented using a UML-style notation for DDM together with UML class and object diagrams for the representation of two-level user views derived from the multi-level model. Syntax and semantics of DDM are formalized and implemented in F-Logic, supporting the modeler with integrity checks and rich query facilities.
Deep learning-based artificial vision for grasp classification in myoelectric hands.
Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush
2017-06-01
Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
VizieR Online Data Catalog: Improved multi-band photometry from SERVS (Nyland+, 2017)
NASA Astrophysics Data System (ADS)
Nyland, K.; Lacy, M.; Sajina, A.; Pforr, J.; Farrah, D.; Wilson, G.; Surace, J.; Haussler, B.; Vaccari, M.; Jarvis, M.
2017-07-01
The Spitzer Extragalactic Representative Volume Survey (SERVS) sky footprint includes five well-studied astronomical deep fields with abundant multi-wavelength data spanning an area of ~18deg2 and a co-moving volume of ~0.8Gpc3. The five deep fields included in SERVS are the XMM-LSS field, Lockman Hole (LH), ELAIS-N1 (EN1), ELAIS-S1 (ES1), and Chandra Deep Field South (CDFS). SERVS provides NIR, post-cryogenic imaging in the 3.6 and 4.5um Spitzer/IRAC bands to a depth of ~2uJy. IRAC dual-band source catalogs generated using traditional catalog extraction methods are described in Mauduit+ (2012PASP..124..714M). The Spitzer IRAC data are complemented by ground-based NIR observations from the VISTA Deep Extragalactic Observations (VIDEO; Jarvis+ 2013MNRAS.428.1281J) survey in the south in the Z, Y, J, H, and Ks bands and UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence+ 2007, see II/319) in the north in the J and K bands. SERVS also provides substantial overlap with infrared data from SWIRE (Lonsdale+ 2003PASP..115..897L) and the Herschel Multitiered Extragalactic Survey (HerMES; Oliver+ 2012, VIII/95). As shown in Figure 1, one square degree of the XMM-LSS field overlaps with ground-based optical data from the Canada-France-Hawaii Telescope Legacy Survey Deep field 1 (CFHTLS-D1). The CFHTLS-D1 region is centered at RAJ2000=02:25:59, DEJ2000=-04:29:40 and includes imaging through the filter set u', g', r', i', and z'. Thus, in combination with the NIR data from SERVS and VIDEO that overlap with the CFHTLS-D1 region, multi-band imaging over a total of 12 bands is available. (2 data files).
WFIRST: Science from Deep Field Surveys
NASA Astrophysics Data System (ADS)
Koekemoer, Anton M.; Foley, Ryan; WFIRST Deep Field Working Group
2018-06-01
WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.
WFIRST: Science from Deep Field Surveys
NASA Astrophysics Data System (ADS)
Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group
2018-01-01
WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.
Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization
NASA Astrophysics Data System (ADS)
Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li
2018-04-01
Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means (FCM) clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; the parameter λ adjusts the weight given to the local information of each pixel. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the cluster centers. Two different experiments show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.
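To make the λ-weighted local-information idea concrete, here is a minimal NumPy sketch of a fuzzy C-means variant in which the distance to each cluster center includes a λ-weighted term based on the local neighborhood mean. The neighborhood size, the update rule and all names are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of fuzzy C-means with a lambda-weighted local-information term, so
# that noisy pixels are pulled toward the label of their neighbourhood.
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_local(img, n_clusters=3, lam=0.5, m=2.0, n_iter=50, eps=1e-8):
    x = img.astype(float).ravel()                         # pixel intensities
    x_bar = uniform_filter(img.astype(float), 3).ravel()  # 3x3 local means
    v = np.linspace(x.min(), x.max(), n_clusters)         # initial centres
    for _ in range(n_iter):
        # distance = pixel term + lambda * local-information term
        d = (x[:, None] - v[None, :]) ** 2 + lam * (x_bar[:, None] - v[None, :]) ** 2
        d = np.maximum(d, eps)
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
        um = u ** m
        v = (um * (x[:, None] + lam * x_bar[:, None])).sum(0) / ((1 + lam) * um.sum(0))
    return u.argmax(axis=1).reshape(img.shape), v
```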
Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J
2017-08-01
Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
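As a rough illustration of the multi-observer ensemble idea and the reported percent-volume-difference metric, the sketch below averages the probability maps of several trained networks and computes TKV. The function names and the 0.5 threshold are hypothetical choices, not details taken from the paper.

```python
import numpy as np

def ensemble_tkv(prob_maps, voxel_volume_ml, threshold=0.5):
    # prob_maps: list of per-"observer" network probability volumes (same shape)
    mean_prob = np.mean(np.stack(prob_maps), axis=0)    # average the observers
    mask = mean_prob >= threshold                       # consensus segmentation
    return mask, mask.sum() * voxel_volume_ml           # TKV in millilitres

def percent_volume_difference(tkv_auto, tkv_reference):
    # signed percentage difference relative to the reference segmentation
    return 100.0 * (tkv_auto - tkv_reference) / tkv_reference
```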
Marginal Shape Deep Learning: Applications to Pediatric Lung Field Segmentation.
Mansoor, Awais; Cerrolaza, Juan J; Perez, Geovanny; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George
2017-02-11
Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects, especially deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often leads to a local minimum, the proposed framework is robust to local minima and illumination changes. Furthermore, since the direct application of a DL framework to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with the classical ASM (p-value=0.01) using the same configuration). To the best of our knowledge, this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects.
NASA Technical Reports Server (NTRS)
Wissler, Steven S.; Maldague, Pierre; Rocca, Jennifer; Seybold, Calina
2006-01-01
The Deep Impact mission was ambitious and challenging. JPL's well proven, easily adaptable multi-mission sequence planning tools combined with integrated spacecraft subsystem models enabled a small operations team to develop, validate, and execute extremely complex sequence-based activities within very short development times. This paper focuses on the core planning tool used in the mission, APGEN. It shows how the multi-mission design and adaptability of APGEN made it possible to model spacecraft subsystems as well as ground assets throughout the lifecycle of the Deep Impact project, starting with models of initial, high-level mission objectives, and culminating in detailed predictions of spacecraft behavior during mission-critical activities.
NASA Astrophysics Data System (ADS)
Mabu, Shingo; Kido, Shoji; Hashimoto, Noriaki; Hirano, Yasushi; Kuremoto, Takashi
2018-02-01
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in computed tomography (CT) images. Because CT images are grayscale, a DCNN usually uses one channel for inputting image data. In contrast, this research uses a multi-channel DCNN where each channel corresponds to the original raw image or to an image transformed by some preprocessing technique. The information obtained only from raw images is limited, and previous research has suggested that preprocessing of images contributes to improving the classification accuracy; thus, the combination of the original and preprocessed images is expected to yield higher accuracy. The proposed method realizes region of interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacities (classes). The DCNN structure is based on the VGG network, which secured the first and second places in ImageNet ILSVRC-2014. The experimental results show that the classification accuracy of the proposed method was better than that of the conventional single-channel method, with a statistically significant difference between them.
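A minimal sketch of how a grayscale CT ROI might be expanded into a multi-channel input is given below. The particular preprocessing channels (Gaussian smoothing and gradient magnitude) are illustrative assumptions, since the abstract does not specify which transformations were used.

```python
import numpy as np
from scipy import ndimage

def make_multichannel_roi(roi):
    """Stack a 32x32 CT ROI with preprocessed variants into a C x H x W input."""
    raw = roi.astype(np.float32)
    smoothed = ndimage.gaussian_filter(raw, sigma=1.0)            # denoised channel
    edges = ndimage.gaussian_gradient_magnitude(raw, sigma=1.0)   # edge channel
    chans = np.stack([raw, smoothed, edges])
    # normalise each channel independently before feeding a VGG-style DCNN
    chans = (chans - chans.mean(axis=(1, 2), keepdims=True)) / (
        chans.std(axis=(1, 2), keepdims=True) + 1e-6)
    return chans
```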
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network based on a very deep dense deconvolution network for dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via contextual information. Finally, we apply the dense deconvolution layer to smooth the segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.
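For readers unfamiliar with deconvolution-based decoders, the PyTorch block below sketches the basic ingredient described above: a transposed convolution that upsamples coarse features and fuses them with an encoder skip connection. It is a generic sketch with illustrative layer sizes, not the authors' network.

```python
import torch
import torch.nn as nn

class DeconvFuse(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, coarse, skip):
        x = self.up(coarse)                  # 2x spatial upsampling (deconvolution)
        x = torch.cat([x, skip], dim=1)      # aggregate encoder features (skip connection)
        return self.fuse(x)                  # smoothed higher-resolution features
```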
Wishart Deep Stacking Network for Fast POLSAR Image Classification.
Jiao, Licheng; Liu, Fang
2016-05-11
Inspired by the popular deep learning architecture, the Deep Stacking Network (DSN), a specific deep model for polarimetric synthetic aperture radar (POLSAR) image classification is proposed in this paper, named the Wishart Deep Stacking Network (W-DSN). First of all, a fast implementation of the Wishart distance is achieved by a special linear transformation, which speeds up the classification of POLSAR images and makes it possible to use this polarimetric information in the following neural network (NN). Then a single-hidden-layer neural network based on the fast Wishart distance is defined for POLSAR image classification, named the Wishart Network (WN), which improves the classification accuracy. Finally, a multi-layer neural network is formed by stacking WNs, which is in fact the proposed deep learning architecture W-DSN for POLSAR image classification and improves the classification accuracy further. In addition, the structure of the WN can be expanded in a straightforward way by adding hidden units if necessary, as can the structure of the W-DSN. As a preliminary exploration of formulating a specific deep learning architecture for POLSAR image classification, the proposed methods may establish a simple but clever connection between POLSAR image interpretation and deep learning. Experimental results on a real POLSAR image show that the fast implementation of the Wishart distance is very efficient (a POLSAR image with 768000 pixels can be classified in 0.53 s), and both the single-hidden-layer architecture WN and the deep learning architecture W-DSN for POLSAR image classification perform well and work efficiently.
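The "fast Wishart distance by a special linear transformation" can be understood from the fact that the Wishart classifier distance d_m(T) = ln|Σ_m| + Tr(Σ_m^{-1} T) is linear in the entries of the per-pixel coherency matrix T, so the distances to all classes reduce to a single matrix product. A hedged NumPy sketch under that reading, with our own variable names rather than the authors' code:

```python
import numpy as np

def wishart_distances(T_pixels, class_covs):
    """T_pixels: (N, 3, 3) complex coherency matrices, one per pixel.
    class_covs: (M, 3, 3) complex class mean coherency matrices Sigma_m."""
    inv_covs = np.linalg.inv(class_covs)                      # (M, 3, 3)
    offsets = np.log(np.linalg.det(class_covs).real)          # ln|Sigma_m|
    # Tr(Sigma_m^{-1} T) = sum_{ij} (Sigma_m^{-1})_{ij} T_{ji}
    # -> a linear map from the flattened T to one number per class.
    W = inv_covs.transpose(0, 2, 1).reshape(len(class_covs), -1)   # (M, 9)
    flatT = T_pixels.reshape(len(T_pixels), -1)                    # (N, 9)
    traces = (flatT @ W.conj().T).real                             # (N, M)
    return traces + offsets[None, :]    # smaller distance = more likely class

# labels = wishart_distances(T_pixels, class_covs).argmin(axis=1)
```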
Minimally invasive multimode optical fiber microendoscope for deep brain fluorescence imaging
Ohayon, Shay; Caravaca-Aguirre, Antonio; Piestun, Rafael; DiCarlo, James J.
2018-01-01
A major open challenge in neuroscience is the ability to measure and perturb neural activity in vivo from well defined neural sub-populations at cellular resolution anywhere in the brain. However, limitations posed by scattering and absorption prohibit non-invasive multi-photon approaches for deep (>2mm) structures, while gradient refractive index (GRIN) endoscopes are relatively thick and can cause significant damage upon insertion. Here, we present a novel micro-endoscope design to image neural activity at arbitrary depths via an ultra-thin multi-mode optical fiber (MMF) probe that has 5–10X thinner diameter than commercially available micro-endoscopes. We demonstrate micron-scale resolution, multi-spectral and volumetric imaging. In contrast to previous approaches, we show that this method has an improved acquisition speed that is sufficient to capture rapid neuronal dynamics in-vivo in rodents expressing a genetically encoded calcium indicator (GCaMP). Our results emphasize the potential of this technology in neuroscience applications and open up possibilities for cellular resolution imaging in previously unreachable brain regions. PMID:29675297
VizieR Online Data Catalog: Variability-selected AGN in Chandra DFS (Trevese+, 2008)
NASA Astrophysics Data System (ADS)
Trevese, D.; Boutsia, K.; Vagnetti, F.; Cappellaro, E.; Puccetti, S.
2008-11-01
Variability is a property shared by virtually all active galactic nuclei (AGNs), and was adopted as a criterion for their selection using data from multi-epoch surveys. Low Luminosity AGNs (LLAGNs) are contaminated by the light of their host galaxies, and cannot therefore be detected by the usual colour techniques. For this reason, their evolution in cosmic time is poorly known. Consistency with the evolution derived from X-ray detected samples has not been clearly established so far, also because the low luminosity population consists of a mixture of different object types. LLAGNs can be detected by the nuclear optical variability of extended objects. Several variability surveys have been, or are being, conducted for the detection of supernovae (SNe). We propose to re-analyse these SNe data using a variability criterion optimised for AGN detection, to select a new AGN sample and study its properties. We analysed images acquired with the wide field imager at the 2.2m ESO/MPI telescope, in the framework of the STRESS supernova survey. We selected the AXAF field centred on the Chandra Deep Field South where, besides the deep X-ray survey, various optical data exist, originating in the EIS and COMBO-17 photometric surveys and the spectroscopic database of GOODS. (1 data file).
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
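A minimal sketch of the first task is shown below: a trained CNN is used as the stopping rule of an iterative reconstruction loop instead of a fixed error threshold or iteration cap. Here `recon_step` and `quality_cnn` are hypothetical placeholders, and the patience-based rule is our own simplification.

```python
def reconstruct_with_cnn_stop(recon_step, sinogram, quality_cnn,
                              max_iters=200, patience=3):
    """Iterate a reconstruction until the learned quality score stops improving."""
    image, best, stall = None, -float("inf"), 0
    for _ in range(max_iters):
        image = recon_step(image, sinogram)   # one iteration of SART/EM/...
        score = quality_cnn(image)            # learned numerical observer
        if score > best:
            best, stall = score, 0
        else:
            stall += 1
            if stall >= patience:             # quality has plateaued: stop
                break
    return image
```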
VizieR Online Data Catalog: WINGS: Deep optical phot. of 77 nearby clusters (Varela+, 2009)
NASA Astrophysics Data System (ADS)
Varela, J.; D'Onofrio, M.; Marmo, C.; Fasano, G.; Bettoni, D.; Cava, A.; Couch, J. W.; Dressler, A.; Kjaergaard, P.; Moles, M.; Pignatelli, E.; Poggianti, M. B.; Valentinuzzi, T.
2009-05-01
This is the second paper of a series devoted to the WIde Field Nearby Galaxy-cluster Survey (WINGS). WINGS is a long-term project which is gathering wide-field, multi-band imaging and spectroscopy of galaxies in a complete sample of 77 X-ray selected, nearby clusters (0.04<z<0.07, |b|>20deg). The main goal of this project is to establish a local reference for evolutionary studies of galaxies and galaxy clusters. This paper presents the optical (B,V) photometric catalogs of the WINGS sample and describes the procedures followed to construct them. We have paid special care to correctly treat the large extended galaxies (which include the brightest cluster galaxies) and to reduce the influence of the bright halos of very bright stars. We have constructed photometric catalogs based on wide-field images in the B and V bands using SExtractor. Photometry has been performed on images in which large galaxies and halos of bright stars were removed after modeling them with elliptical isophotes. We publish deep optical photometric catalogs (90% complete at V~21.7, which translates to ~MV*+6 at the mean redshift), giving positions, geometrical parameters, and several total and aperture magnitudes for all the objects detected. For each field we have produced three catalogs containing galaxies, stars and objects of "unknown" classification (~16%). From simulations we found that the uncertainty of our photometry is quite dependent on the light profile of the objects, with stars having the most robust photometry and de Vaucouleurs profiles showing higher uncertainties as well as an additional bias of ~-0.2mag. The star/galaxy classification of the bright objects (V<20) was checked visually, making the fraction of misclassified objects negligible. For fainter objects, we found that simulations do not provide reliable estimates of the possible misclassification, and we have therefore compared our data with deep counts of galaxies and star counts from models of our Galaxy. Both sets turned out to be consistent with our data within ~5% (in the ratio galaxies/total) up to V~24. Finally, we remark that the application of our special procedure to remove large halos improves the photometry of the large galaxies in our sample with respect to the use of blind automatic procedures and increases (~16%) the detection rate of objects projected onto them. (4 data files).
Identifying Corresponding Patches in SAR and Optical Images With a Pseudo-Siamese CNN
NASA Astrophysics Data System (ADS)
Hughes, Lloyd H.; Schmitt, Michael; Mou, Lichao; Wang, Yuanyuan; Zhu, Xiao Xiang
2018-05-01
In this letter, we propose a pseudo-siamese convolutional neural network (CNN) architecture that makes it possible to identify corresponding patches in very-high-resolution (VHR) optical and synthetic aperture radar (SAR) remote sensing imagery. Using eight convolutional layers in each of two parallel network streams, a fully connected layer for the fusion of the features learned in each stream, and a loss function based on binary cross-entropy, we obtain a one-hot indication of whether two patches correspond or not. The network is trained and tested on an automatically generated dataset that is based on a deterministic alignment of SAR and optical imagery via previously reconstructed and subsequently co-registered 3D point clouds. The satellite images from which the patches comprising our dataset are extracted show a complex urban scene containing many elevated objects (i.e. buildings), thus providing one of the most difficult experimental environments. The achieved results show that the network is able to predict corresponding patches with high accuracy, thus indicating great potential for further development towards a generalized multi-sensor key-point matching procedure. Index Terms: synthetic aperture radar (SAR), optical imagery, data fusion, deep learning, convolutional neural networks (CNN), image matching, deep matching
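A hedged PyTorch sketch of the pseudo-siamese layout follows: two convolutional streams with unshared weights (one for SAR, one for optical patches), fused by fully connected layers and trained with binary cross-entropy. The layer counts and sizes here are illustrative and smaller than the eight-layer streams described in the abstract.

```python
import torch
import torch.nn as nn

def make_stream():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(4),
    )

class PseudoSiamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.sar_stream = make_stream()   # weights NOT shared between streams
        self.opt_stream = make_stream()
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 1),            # logit for binary cross-entropy
        )

    def forward(self, sar_patch, opt_patch):
        f = torch.cat([self.sar_stream(sar_patch),
                       self.opt_stream(opt_patch)], dim=1)
        return self.head(f)

# loss = nn.BCEWithLogitsLoss()(model(sar, opt).squeeze(1), labels.float())
```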
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms mainly depend on the spectral information of image objects and fail to effectively mine and fuse the complementary features of multi-temporal images. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then the color histogram and the linear gradient (edge) histogram of each object are calculated. The Earth Mover's Distance (EMD) statistical operator is used to measure the color distance and the edge-line feature distance between corresponding objects in the two epochs, and an adaptive weighting method combines the color feature distance and the edge-line distance into an object heterogeneity measure. Finally, the change detection results are obtained by analyzing the curvature histogram of the object heterogeneity. The experimental results show that the method can fully fuse the color and edge-line features, thus improving the accuracy of the change detection.
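A minimal sketch of the object-level heterogeneity measure is shown below, assuming 1D histograms compared with the Earth Mover's Distance and an adaptive weight between the color and edge terms. The specific weighting rule shown is our own guess, not the paper's formula.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def hist_emd(h1, h2, bin_centers):
    # EMD between two histograms defined on the same bin centers
    return wasserstein_distance(bin_centers, bin_centers, h1, h2)

def object_heterogeneity(color_t1, color_t2, edge_t1, edge_t2, bins):
    d_color = hist_emd(color_t1, color_t2, bins)
    d_edge = hist_emd(edge_t1, edge_t2, bins)
    # adaptive weight: emphasise the feature that changed more for this object
    w = d_color / (d_color + d_edge + 1e-12)
    return w * d_color + (1.0 - w) * d_edge   # large value => likely changed
```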
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei
2014-12-01
The rapid development of remote sensing technology has facilitated the acquisition of remote sensing images with higher and higher spatial resolution, but how to automatically understand the image contents is still a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek
2017-01-01
Deep learning is emerging as a powerful tool for analyzing medical images. Retinal disease detection using computer-aided diagnosis from fundus images has emerged as a new method. We applied a deep convolutional neural network, implemented with MatConvNet, for the automated detection of multiple retinal diseases in fundus photographs from the STructured Analysis of the REtina (STARE) database. The dataset was built by expanding data on 10 categories, including normal retina and nine retinal diseases. The optimal outcomes were acquired using random forest transfer learning based on the VGG-19 architecture. The classification results depended greatly on the number of categories. As the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained results with an accuracy of 30.5%, relative classifier information (RCI) of 0.052, and Cohen's kappa of 0.224. When three categories were considered (normal, background diabetic retinopathy, and dry age-related macular degeneration), the multi-categorical classifier showed an accuracy of 72.8%, 0.283 RCI, and 0.577 kappa. In addition, several ensemble classifiers enhanced the multi-categorical classification performance. Transfer learning incorporated with an ensemble classifier using a clustering and voting approach presented the best performance, with an accuracy of 36.7%, 0.053 RCI, and 0.225 kappa on the 10 retinal disease classification problem. First, due to the small size of the datasets, the deep learning techniques in this study were not effective enough to be applied in clinics where numerous patients suffering from various types of retinal disorders visit for diagnosis and treatment. Second, we found that transfer learning incorporated with ensemble classifiers can improve the classification performance for detecting multi-categorical retinal diseases. Further studies should confirm the effectiveness of the algorithms with large datasets obtained from hospitals. PMID:29095872
Computational ghost imaging using deep learning
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi
2018-04-01
Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
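For context, the following NumPy sketch shows the CGI forward model and the standard correlation reconstruction that the deep network is then trained to denoise; the object, pattern count and image sizes are toy values of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[8:24, 8:24] = 1.0        # toy transmissive object

n_patterns = 2000
patterns = rng.random((n_patterns, 32, 32))             # known random patterns
bucket = (patterns * obj).sum(axis=(1, 2))              # single-pixel measurements

# correlation reconstruction: <B * I> - <B><I>; noisy for finite n_patterns,
# which is exactly the kind of image a denoising network would be trained on
recon = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)
```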
Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.
2017-05-01
Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the polishing of network architectures has received a lot of scholarly attention, from the practical point of view the preparation of a large image dataset for the successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for the successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how could deep neural networks be used for the augmentation of existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for the augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.
2003-07-25
This is the first Deep Imaging Survey image taken by NASA Galaxy Evolution Explorer. On June 22 and 23, 2003, the spacecraft obtained this near ultraviolet image of the Groth region by adding multiple orbits for a total exposure time of 14,000 seconds. Tens of thousands of objects can be identified in this picture. http://photojournal.jpl.nasa.gov/catalog/PIA04627
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
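A minimal PyTorch sketch of a multi-scale convolutional filter bank of the kind described as the initial component of the pipeline: parallel convolutions with different kernel sizes applied to the band stack and concatenated into a joint spatio-spectral feature map. The channel counts are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiScaleBank(nn.Module):
    def __init__(self, n_bands, out_ch=64):
        super().__init__()
        self.b1 = nn.Conv2d(n_bands, out_ch, kernel_size=1)              # spectral only
        self.b3 = nn.Conv2d(n_bands, out_ch, kernel_size=3, padding=1)   # local context
        self.b5 = nn.Conv2d(n_bands, out_ch, kernel_size=5, padding=2)   # wider context

    def forward(self, x):          # x: (N, n_bands, H, W) hyperspectral cube
        # concatenated maps form the joint spatio-spectral feature map
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
```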
Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method
NASA Astrophysics Data System (ADS)
Xin, L.
2018-04-01
Utilizing high-resolution remote sensing images for Earth observation has become a common method of land-use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have many advantages for image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly decipher objects of interest such as buildings. In terms of both efficiency and accuracy, the deep learning method is superior. This paper presents research on the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.
Deep Keck u-Band Imaging of the Hubble Ultra Deep Field: A Catalog of z ~ 3 Lyman Break Galaxies
NASA Astrophysics Data System (ADS)
Rafelski, Marc; Wolfe, Arthur M.; Cooke, Jeff; Chen, Hsiao-Wen; Armandroff, Taft E.; Wirth, Gregory D.
2009-10-01
We present a sample of 407 z ~ 3 Lyman break galaxies (LBGs) to a limiting isophotal u-band magnitude of 27.6 mag in the Hubble Ultra Deep Field. The LBGs are selected using a combination of photometric redshifts and the u-band drop-out technique enabled by the introduction of an extremely deep u-band image obtained with the Keck I telescope and the blue channel of the Low Resolution Imaging Spectrometer. The Keck u-band image, totaling 9 hr of integration time, has a 1σ depth of 30.7 mag arcsec-2, making it one of the most sensitive u-band images ever obtained. The u-band image also substantially improves the accuracy of photometric redshift measurements of ~50% of the z ~ 3 LBGs, significantly reducing the traditional degeneracy of colors between z ~ 3 and z ~ 0.2 galaxies. This sample provides the most sensitive, high-resolution multi-filter imaging of reliably identified z ~ 3 LBGs for morphological studies of galaxy formation and evolution and the star formation efficiency of gas at high redshift.
NASA Astrophysics Data System (ADS)
Cepa, J.; Alfaro, E. J.; Castañeda, H. O.; Gallego, J.; González-Serrano, J. I.; González, J. J.; Jones, D. H.; Pérez-García, A. M.; Sánchez-Portal, M.
2007-06-01
OSIRIS is the Spanish Day One instrument for the GTC 10.4-m telescope. OSIRIS is a general purpose instrument for imaging, low-resolution long slit and multi-object spectroscopy (MOS). OSIRIS has a field of view of 8.6×8.6 arcminutes, which makes it ideal for deep surveys, and operates in the optical wavelength range from 365 through 1000nm. The main characteristic that makes OSIRIS unique amongst other instruments in 8-10m class telescopes is the use of Tunable Filters (Bland-Hawthorn & Jones 1998). These allow a continuous selection of both the central wavelength and the width, thus providing scanning narrow band imaging within the OSIRIS wavelength range. The combination of the large GTC aperture, large OSIRIS field of view and availability of the TFs makes OTELO a truly unique emission line survey.
VizieR Online Data Catalog: Double-peaked narrow lines in AGN. II. z<0.1 (Nevin+, 2016)
NASA Astrophysics Data System (ADS)
Nevin, R.; Comerford, J.; Muller-Sanchez, F.; Barrows, R.; Cooper, M.
2017-02-01
To determine the nature of 71 Type 2 AGNs with double-peaked [OIII] emission lines in SDSS that are at z<0.1 and further characterize their properties, we observe them using two complementary follow-up methods: optical long-slit spectroscopy and Jansky Very Large Array (VLA) radio observations. We use various spectrographs with similar pixel scales (Lick Kast Spectrograph; Palomar Double Spectrograph; MMT Blue Channel Spectrograph; APO Dual Imaging Spectrograph; and Keck DEep Imaging Multi-Object Spectrograph). We use a 1200 lines/mm grating for all spectrographs; see table 1. In future work, we will combine our long-slit observations with the VLA data for the full sample of 71 galaxies (O. Muller-Sanchez+ 2016, in preparation). (4 data files).
NASA Astrophysics Data System (ADS)
D'Isanto, A.; Polsterer, K. L.
2018-01-01
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. The prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
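A hedged PyTorch sketch of the mixture-density output: the network head predicts the weights, means and widths of a Gaussian mixture over redshift and is trained by minimising the mixture negative log-likelihood against the spectroscopic redshift. Tensor shapes and names are our own, not the authors' implementation.

```python
import math
import torch

def mdn_nll(logits, mu, log_sigma, z_true):
    """logits, mu, log_sigma: (N, K) mixture parameters; z_true: (N,) targets."""
    log_pi = torch.log_softmax(logits, dim=1)                 # mixture weights
    log_norm = (-0.5 * ((z_true[:, None] - mu) / log_sigma.exp()) ** 2
                - log_sigma - 0.5 * math.log(2 * math.pi))    # per-component log N(z)
    # log of the mixture density, summed over components, averaged over objects
    return -torch.logsumexp(log_pi + log_norm, dim=1).mean()
```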
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
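To illustrate the Pareto-optimality notion used here, the sketch below returns the non-dominated subset of candidate networks given their two objective values (MRE and MCE). The NSGA-II machinery around this (selection, crossover, mutation, crowding distance) is deliberately omitted, and the function is our own illustration rather than the paper's code.

```python
import numpy as np

def pareto_front(mre, mce):
    """Return indices of candidates not dominated in (MRE, MCE); both minimised."""
    pts = np.column_stack([mre, mce])
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some q is no worse in both objectives and better in one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```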
Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes
NASA Technical Reports Server (NTRS)
Gardner, Jonathan P.
2007-01-01
The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z>6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.
Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes
NASA Technical Reports Server (NTRS)
Gardner, Jonathan F.; Barbier, L. M.; Barthelmy, S. D.; Cummings, J. R.; Fenimore, E. E.; Gehrels, N.; Hullinger, D. D.; Markwardt, C. B.; Palmer, D. M.; Parsons, A. M.;
2006-01-01
The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts 2-6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 27 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.
Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes
NASA Technical Reports Server (NTRS)
Gardner, Jonathan P.
2007-01-01
The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z>6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibragimov, B; Pernus, F; Strojan, P
Purpose: Accurate and efficient delineation of the tumor target and organs-at-risk is essential for the success of radiotherapy. In reality, despite decades of intense research effort, auto-segmentation has not yet become clinical practice. In this study, we present, for the first time, a deep learning-based classification algorithm for autonomous segmentation in head and neck (HaN) treatment planning. Methods: Fifteen HaN datasets of CT, MR and PET images with manual annotation of organs-at-risk (OARs), including the spinal cord, brainstem, optic nerves, chiasm, eyes, mandible, tongue and parotid glands, were collected and saved in a library of plans. We also have ten super-resolution MR images of the tongue area, where the genioglossus and inferior longitudinalis tongue muscles are defined as organs of interest. We applied the concepts of random forest- and deep learning-based object classification for automated image annotation, with the aim of using machine learning to facilitate the head and neck radiotherapy planning process. In this new paradigm of segmentation, random forests were used for landmark-assisted segmentation of super-resolution MR images. As an alternative to auto-segmentation with random forest-based landmark detection, deep convolutional neural networks were developed for voxel-wise segmentation of OARs in single and multi-modal images. The network consisted of three pairs of convolution and pooling layers, one ReLU layer and a softmax layer. Results: We present a comprehensive study on using machine learning concepts for auto-segmentation of OARs and tongue muscles for HaN radiotherapy planning. An accuracy of 81.8% in terms of the Dice coefficient was achieved for segmentation of the genioglossus and inferior longitudinalis tongue muscles. Preliminary results of OAR segmentation also indicate that deep learning affords an unprecedented opportunity to improve the accuracy and robustness of radiotherapy planning. Conclusion: A novel machine learning framework has been developed for image annotation and structure segmentation. Our results indicate the great potential of deep learning in radiotherapy treatment planning.
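A minimal PyTorch sketch matching the network outline in the abstract (three convolution+pooling pairs, a ReLU layer and a softmax output) for patch/voxel-wise OAR classification. Channel counts and the patch size are illustrative assumptions, not values from the study.

```python
import torch.nn as nn

def oar_patch_classifier(n_channels, n_classes, patch=24):
    """Classify an image patch centred on a voxel into one of the OAR labels."""
    return nn.Sequential(
        nn.Conv2d(n_channels, 16, 3, padding=1), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.MaxPool2d(2),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * (patch // 8) ** 2, n_classes),
        nn.Softmax(dim=1),   # for training, prefer CrossEntropyLoss on raw logits
    )
```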
Robust hepatic vessel segmentation using multi deep convolution network
NASA Astrophysics Data System (ADS)
Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei
2017-03-01
Extraction of the blood vessels of an organ is a challenging task in the area of medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer. All three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.
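A hedged PyTorch sketch of the tri-planar idea described above: a first convolution shared across the three CT planes, plane-specific second-stage convolutions, and a joint classification head. All layer sizes are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TriPlanarVesselNet(nn.Module):
    def __init__(self):
        super().__init__()
        # first convolution layer shared by all three planes
        self.shared = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # plane-specific second-stage features (axial / coronal / sagittal)
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for _ in range(3)])
        self.head = nn.Linear(3 * 32, 2)          # vessel vs background

    def forward(self, planes):                    # planes: list of 3 (N,1,H,W) patches
        feats = [b(self.shared(p)).flatten(1) for b, p in zip(self.branches, planes)]
        return self.head(torch.cat(feats, dim=1))
```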
2017-01-01
Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
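Under the stated assumptions (Poisson noise and a prior probability mass function over a finite set of true intensities), the per-pixel MAP estimate can be sketched as below. This is a simplified illustration of the MAP lookup only, not the constrained least-squares step used to estimate the prior itself.

```python
import numpy as np
from scipy.stats import poisson

def map_denoise(counts, x_values, prior_pmf):
    """Per-pixel MAP estimate: argmax_x [ log Poisson(y; x) + log p(x) ]."""
    y = counts.ravel()[:, None]                  # observed low photon counts (P, 1)
    log_post = poisson.logpmf(y, x_values[None, :]) + np.log(prior_pmf + 1e-30)
    return x_values[log_post.argmax(axis=1)].reshape(counts.shape)
```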
High-throughput isotropic mapping of whole mouse brain using multi-view light-sheet microscopy
NASA Astrophysics Data System (ADS)
Nie, Jun; Li, Yusha; Zhao, Fang; Ping, Junyu; Liu, Sa; Yu, Tingting; Zhu, Dan; Fei, Peng
2018-02-01
Light-sheet fluorescence microscopy (LSFM) uses an additional laser sheet to illuminate selective planes of the sample, thereby enabling three-dimensional imaging at high spatial-temporal resolution. These advantages make LSFM a promising tool for high-quality brain visualization. However, even with LSFM, the spatial resolution remains insufficient to resolve neural structures across a mesoscale whole mouse brain in three dimensions. At the same time, thick-tissue scattering prevents clear observation deep inside the brain. Here we use a multi-view LSFM strategy to address this challenge, surpassing the resolution limit of a standard light-sheet microscope over a large field-of-view (FOV). As demonstrated by the imaging of an optically-cleared mouse brain labelled with thy1-GFP, we achieve a brain-wide, isotropic cellular resolution of 3μm. Besides the resolution enhancement, multi-view brain imaging can also recover complete signals despite deep-tissue scattering and attenuation. As a result, long-distance neural projections across encephalic regions can be identified and annotated.
Optimizing a neural network for detection of moving vehicles in video
NASA Astrophysics Data System (ADS)
Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri
2017-10-01
In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
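A minimal PyTorch sketch of the combination described: a feedforward CNN extracts per-frame features, an LSTM aggregates them across the clip, and a small head scores the presence of a moving vehicle. The layer sizes and the clip-level decision are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2), nn.Flatten(), nn.Linear(32 * 4, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # moving-vehicle logit

    def forward(self, clip):                          # clip: (N, T, 3, H, W)
        n, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(n, t, -1)   # per-frame features
        out, _ = self.lstm(feats)                      # temporal aggregation
        return self.head(out[:, -1])                   # decision after last frame
```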
NASA Astrophysics Data System (ADS)
Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan
2014-03-01
Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source- and one for the target image) to accommodate for large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis has many advantages over pixel-based methods, so it is one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features are worth more attention than others. To avoid the problem of over-segmentation and to highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weights, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the visual saliency constraint, the balance between local and macroscopic characteristics can be controlled for different objects during the segmentation process. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and it enables us to give priority to the salient objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.
NASA Astrophysics Data System (ADS)
Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei
2018-04-01
Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and how to accurately detect the objects in HSR remote sensing imagery is a critical problem. Due to the powerful feature extraction and representation capability of deep learning, the deep learning based region proposal generation and object detection integrated framework has greatly promoted the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, due to the translation caused by the convolution operation in the convolutional neural network (CNN), although the performance of the classification stage is seldom influenced, the localization accuracies of the predicted bounding boxes in the detection stage are easily influenced. The dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage has not been addressed for HSR remote sensing imagery, and causes position accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. In order to further improve the performance of the region proposal generation and object detection integrated framework for HSR remote sensing imagery object detection, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of the fully convolutional network (FCN), on the basis of a residual network, and adopts the PSB framework to solve the dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated with a publicly available 10-class object detection dataset.
Machine Learning in Medical Imaging.
Giger, Maryellen L
2018-03-01
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine. Copyright © 2018. Published by Elsevier Inc.
The MIND PALACE: A Multi-Spectral Imaging and Spectroscopy Database for Planetary Science
NASA Astrophysics Data System (ADS)
Eshelman, E.; Doloboff, I.; Hara, E. K.; Uckert, K.; Sapers, H. M.; Abbey, W.; Beegle, L. W.; Bhartia, R.
2017-12-01
The Multi-Instrument Database (MIND) is the web-based home to a well-characterized set of analytical data collected by a suite of deep-UV fluorescence/Raman instruments built at the Jet Propulsion Laboratory (JPL). Samples derive from a growing body of planetary surface analogs, mineral and microbial standards, meteorites, spacecraft materials, and other astrobiologically relevant materials. In addition to deep-UV spectroscopy, datasets stored in MIND are obtained from a variety of analytical techniques obtained over multiple spatial and spectral scales including electron microscopy, optical microscopy, infrared spectroscopy, X-ray fluorescence, and direct fluorescence imaging. Multivariate statistical analysis techniques, primarily Principal Component Analysis (PCA), are used to guide interpretation of these large multi-analytical spectral datasets. Spatial co-referencing of integrated spectral/visual maps is performed using QGIS (geographic information system software). Georeferencing techniques transform individual instrument data maps into a layered co-registered data cube for analysis across spectral and spatial scales. The body of data in MIND is intended to serve as a permanent, reliable, and expanding database of deep-UV spectroscopy datasets generated by this unique suite of JPL-based instruments on samples of broad planetary science interest.
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of multi-source image resolution in the visible, multi-spectral and hyperspectral satellite domains, high-resolution remote sensing images have been widely used in various fields, such as the military, surveying and mapping, geophysical prospecting, and environmental monitoring. In remote sensing images, the segmentation of ground targets, feature extraction and automatic recognition are both a hotspot and a difficulty in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical vehicle object classification, nonparametric density estimation, mean shift segmentation, multi-scale corner detection, and template-based local shape matching. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
Near Infrared Imaging of the Hubble Deep Field with Keck Telescope
NASA Technical Reports Server (NTRS)
Hogg, David W.; Neugebauer, G.; Armus, Lee; Matthews, K.; Pahre, Michael A.; Soifer, B. T.; Weinberger, A. J.
1997-01-01
Two deep K-band (2.2 micrometer) images, with point-source detection limits of K=25.2 mag (one sigma), taken with the Keck Telescope in subfields of the Hubble Deep Field, are presented and analyzed. A sample of objects to K=24 mag is constructed, and V(sub 606)-I(sub 814) and I(sub 814)-K colors are measured. By stacking visually selected objects, mean I(sub 814)-K colors can be measured to very faint levels; the mean I(sub 814)-K color is constant with apparent magnitude down to V(sub 606)=28 mag.
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep SSD (Single Shot MultiBox Detector) model is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
NASA Astrophysics Data System (ADS)
Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2015-03-01
The use of gradient information is well-known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of these two. We evaluated the algorithms to register highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up optimization in the initial stages. However, allowing sufficient computational resources, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization for multi-objective optimization-based deformable image registration can indeed be beneficial.
Active Galactic Nuclei, Quasars, BL Lac Objects and X-Ray Background
NASA Technical Reports Server (NTRS)
Mushotzky, Richard (Technical Monitor); Elvis, Martin
2005-01-01
The XMM COSMOS survey is producing the large surface density of X-ray sources anticipated. The first batch of approx. 200 sources is being studied in relation to the large scale structure derived from deep optical/near-IR imaging from Subaru and CFHT. The photometric redshifts from the opt/IR imaging program allow a first look at structure vs. redshift, identifying high z clusters. A consortium of SAO, U. Arizona and the Carnegie Institute of Washington (Pasadena) has started a large program using the 6.5meter Magellan telescopes in Chile with the prime objective of identifying the XMM X-ray sources in the COSMOS field. The first series of observing runs using the new IMACS multi-slit spectrograph on Magellan will take place in January and February of 2005. Some 300 spectra per field will be taken, including 70%-80% of the XMM sources in each field. The four first fields cover the center of the COSMOS field. A VLT consortium is set to obtain bulk redshifts of the field galaxies. The added accuracy of the spectroscopic redshifts over the photo-z's will allow much lower density structures to be seen, voids and filaments. The association of X-ray selected AGNs, and quasars with these filaments, is a major motivation for our studies. Comparison to the deep VLA radio data now becoming available is about to begin.
A survey on deep learning in medical image analysis.
Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I
2017-12-01
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-04-07
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high-performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging over a single-core CPU by 270 times and achieves real-time imaging in that the imaging rate exceeds the raw data generation rate.
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
NASA Astrophysics Data System (ADS)
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve the clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate the spatial information from neighboring regions to create super-pixels to avoid salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset generated using a structure-from-motion (SfM) algorithm together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions, and is also able to differentiate object features on the surface.
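A minimal sketch of the two-stage, object-oriented clustering idea described above (superpixels treated as new pixels, then clustered on combined RGB and DEM features); the libraries, the SLIC superpixel method, and the parameter values are assumptions for illustration, not necessarily those used by the authors.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import AgglomerativeClustering

def cluster_rgb_dem(rgb, dem, n_segments=500, n_clusters=2):
    """Two-stage clustering sketch for co-registered RGB (HxWx3) and DEM (HxW) data.

    1. Form superpixels from the RGB image to suppress salt-and-pepper noise.
    2. Describe each superpixel by its mean RGB and mean elevation, then
       cluster those descriptors and map the result back to the pixel grid.
    """
    labels = slic(rgb, n_segments=n_segments, compactness=10)
    ids = np.unique(labels)
    feats = np.array([np.r_[rgb[labels == i].mean(axis=0), dem[labels == i].mean()]
                      for i in ids])
    clusters = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(feats)
    out = np.zeros(labels.shape, dtype=int)
    for i, c in zip(ids, clusters):
        out[labels == i] = c        # superpixel-level label painted onto its pixels
    return out
```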
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
VizieR Online Data Catalog: Imaging and spectroscopy in Lynx W (Jorgensen+, 2014)
NASA Astrophysics Data System (ADS)
Jorgensen, I.; Chiboucas, K.; Toft, S.; Bergmann, M.; Zirm, A.; Schiavon, R. P.; Grutzbauch, R.
2017-01-01
Ground-based imaging of RX J0848.6+4453 was obtained primarily to show the performance gain provided by replacing the original E2V charge-coupled devices (E2V CCDs) in Gemini Multi-Object Spectrograph on Gemini North (GMOS-N) with E2V Deep Depletion CCDs (E2V DD CCDs). This replacement was done in 2011 October. Imaging of RX J0848.6+4453 was obtained with the original E2V CCDs in 2011 October (UT 2011 Oct 1 to 2011 Oct 2; Program ID: GN-2011B-DD-3) and repeated with the E2V DD CCDs in 2011 November. The imaging was done in the z' filter. For the observations with the original E2V CCDs the total exposure time was 60 minutes (obtained as 12 five-minute exposures) and the co-added image had an image quality of FWHM=0.52'' measured from point sources in the field. For the E2V DD CCDs a total exposure time of 55 minutes was obtained and the resulting image quality was FWHM=0.51''. Imaging of RX J0848.6+4453 was also obtained with Hubble Space Telescope /Advanced Camera for Surveys (HST/ACS using the filters F775W and F850LP) under the program ID 9919. The spectroscopic observations were obtained in multi-object spectroscopic (MOS) mode with GMOS-N (UT 2011 Nov 24 to 2012 Jan 4, Program ID: GN-2011B-DD-5; UT 2013 Mar 9 to 2013 May 18, Program ID: GN-2013A-Q-65). Table10 lists the photometric parameters for the spectroscopic sample as derived from the HST/ACS observations in F850LP and F775W. Tables 11 and 12 list the results from the template fitting and the derived line strengths, respectively. (3 data files).
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, and the result is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
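For reference, the Dice overlap ratio reported above can be computed for a binary segmentation mask against ground truth as in the short sketch below.

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap ratio between a binary segmentation and its ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())
```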
Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D'Souza, Francis; Nguyen, Kytai T; Hong, Yi; Yuan, Baohong
2016-01-01
For many years, investigators have sought after high-resolution fluorescence imaging in centimeter-deep tissue because many interesting in vivo phenomena-such as the presence of immune system cells, tumor angiogenesis, and metastasis-may be located deep in tissue. Previously, we developed a new imaging technique to achieve high spatial resolution in sub-centimeter deep tissue phantoms named continuous-wave ultrasound-switchable fluorescence (CW-USF). The principle is to use a focused ultrasound wave to externally and locally switch on and off the fluorophore emission from a small volume (close to ultrasound focal volume). By making improvements in three aspects of this technique: excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal processing algorithm, for the first time this study has achieved high spatial resolution (~ 900 μm) in 3-centimeter-deep tissue phantoms with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimic phantoms and porcine muscle tissues. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneously imaging of multiple targets and observing their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging.
A deep staring campaign in the σ Orionis cluster. Variability in substellar members
NASA Astrophysics Data System (ADS)
Elliott, P.; Scholz, A.; Jayawardhana, R.; Eislöffel, J.; Hébrard, E. M.
2017-12-01
Context. The young star cluster near σ Orionis is one of the primary environments to study the properties of young brown dwarfs down to masses comparable to those of giant planets. Aims: Deep optical imaging is used to study time-domain properties of young brown dwarfs over typical rotational timescales and to search for new substellar and planetary-mass cluster members. Methods: We used the Visible Multi Object Spectrograph (VIMOS) at the Very Large Telescope (VLT) to monitor a 24'× 16' field in the I-band. We stared at the same area over a total integration time of 21 h, spanning three observing nights. Using the individual images from this run we investigated the photometric time series of nine substellar cluster members with masses from 10 to 60 MJup. The deep stacked image shows cluster members down to ≈5 MJup. We searched for new planetary-mass objects by combining our deep I-band photometry with public J-band magnitudes and by examining the nearby environment of known very low mass members for possible companions. Results: We find two brown dwarfs, with significantly variable, aperiodic light curves, both with masses around 50 MJup, one of which was previously unknown to be variable. The physical mechanism responsible for the observed variability is likely to be different for the two objects. The variability of the first object, a single-lined spectroscopic binary, is most likely linked to its accretion disc; the second may be caused by variable extinction by large grains. We find five new candidate members from the colour-magnitude diagram and three from a search for companions within 2000 au. We rule all eight sources out as potential members based on non-stellar shape and/or infrared colours. The I-band photometry is made available as a public dataset. Conclusions: We present two variable brown dwarfs. One is consistent with ongoing accretion, the other exhibits apparent transient variability without the presence of an accretion disc. Our analysis confirms the existing census of substellar cluster members down to ≈7 MJup. The zero result from our companion search agrees with the low occurrence rate of wide companions to brown dwarfs found in other works. Based on observations made with ESO Telescopes at the Paranal Observatory under programme ID 078.C-0042.Full Table B.1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A66
VizieR Online Data Catalog: Merging galaxies with tidal tails in COSMOS to z=1 (Wen+, 2016)
NASA Astrophysics Data System (ADS)
Wen, Z. Z.; Zheng, X. Z.
2017-02-01
Our study utilizes the public data and catalogs from multi-band deep surveys of the COSMOS field. The UltraVISTA survey (McCracken+ 2012, J/A+A/544/A156) provides ultra-deep near-IR imaging observations of this field in the Y,J,H, and Ks-band, as well as a narrow band (NB118). The HST/ACS I-band imaging data are publicly available, allowing us to measure morphologies in the rest-frame optical for galaxies at z<=1. The HST/ACS I-band images reach a 5σ depth of 27.2 magnitude for point sources. (1 data file).
Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks
NASA Technical Reports Server (NTRS)
Maskey, Manil; Cecil, Dan; Ramachandran, Rahul; Miller, Jeffrey J.
2018-01-01
Estimating tropical cyclone intensity by just using satellite image is a challenging problem. With successful application of the Dvorak technique for more than 30 years along with some modifications and improvements, it is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived using the original Dvorak technique. However, these techniques suffer from subjective bias as evident from the most recent estimations on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: The Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison - Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post analysis when only satellite based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to tropical cyclone intensity. This study aims to utilize deep learning, the current state of the art in pattern recognition and image recognition, to address the need for an automated and objective tropical cyclone intensity estimation. Deep learning is a multi-layer neural network consisting of several layers of simple computational units. It learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on convolutional neural network (CNN), a deep learning algorithm, to develop an objective tropical cyclone intensity estimation. CNN is a supervised learning algorithm requiring a large number of training data. Since the archives of intensity data and tropical cyclone centric satellite images is openly available for use, the training data is easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.
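As a rough illustration of the approach described above (and emphatically not the authors' architecture), a small convolutional regressor mapping a storm-centered infrared image to an intensity estimate might look like the following; the layer sizes and the 128x128 single-channel input are placeholders.

```python
import torch
import torch.nn as nn

class IntensityCNN(nn.Module):
    """Minimal CNN regressor: single-channel IR image -> wind-speed estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling over the spatial map
        )
        self.head = nn.Linear(64, 1)           # predicted intensity (e.g. in knots)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One 128x128 IR patch centered on the storm -> one intensity estimate
model = IntensityCNN()
print(model(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1])
```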
Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes
NASA Technical Reports Server (NTRS)
Gardner, Jonathan P.
2009-01-01
The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z greater than 6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z greater than 10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (less than 50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems, and discuss recent progress in constructing the observatory.
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V
2015-08-24
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an imaged-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.
NASA Astrophysics Data System (ADS)
Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.
2013-09-01
Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based, or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offers a means to identify specific satellites, to note changes in orientation and operational mode, and to queue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field-of-view , <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field-of-view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
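The PCA-based rejection of the celestial background described above can be sketched roughly as follows for a stack of frames already registered to the star field; the frame/pixel layout and the number of retained components are illustrative assumptions, not the WASSS implementation.

```python
import numpy as np

def remove_celestial_background(frames, n_components=3):
    """Illustrative PCA filter for a stack of star-aligned frames.

    frames : array of shape (n_frames, n_pixels), registered to the celestial
             frame so that stars are static from frame to frame.

    The leading principal components capture the static stellar scene plus
    slow calibration and atmospheric drifts; subtracting their reconstruction
    leaves residuals dominated by objects that move relative to the stars
    (e.g. GEO satellites).
    """
    centered = frames - frames.mean(axis=0)
    # SVD of the frame stack: rows are frames, columns are pixels
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    background = u[:, :n_components] * s[:n_components] @ vt[:n_components]
    return centered - background          # moving-object residual, per frame
```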
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin
2018-06-22
Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, which is a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel Correlation Filter tracker based on compressed deep Convolutional Neural Network (CNN) features. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability of re-identification (ReID) once the tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
Variability-selected active galactic nuclei from supernova search in the Chandra deep field south
NASA Astrophysics Data System (ADS)
Trevese, D.; Boutsia, K.; Vagnetti, F.; Cappellaro, E.; Puccetti, S.
2008-09-01
Context: Variability is a property shared by virtually all active galactic nuclei (AGNs), and was adopted as a criterion for their selection using data from multi epoch surveys. Low Luminosity AGNs (LLAGNs) are contaminated by the light of their host galaxies, and cannot therefore be detected by the usual colour techniques. For this reason, their evolution in cosmic time is poorly known. Consistency with the evolution derived from X-ray detected samples has not been clearly established so far, also because the low luminosity population consists of a mixture of different object types. LLAGNs can be detected by the nuclear optical variability of extended objects. Aims: Several variability surveys have been, or are being, conducted for the detection of supernovae (SNe). We propose to re-analyse these SNe data using a variability criterion optimised for AGN detection, to select a new AGN sample and study its properties. Methods: We analysed images acquired with the wide field imager at the 2.2 m ESO/MPI telescope, in the framework of the STRESS supernova survey. We selected the AXAF field centred on the Chandra Deep Field South where, besides the deep X-ray survey, various optical data exist, originating in the EIS and COMBO-17 photometric surveys and the spectroscopic database of GOODS. Results: We obtained a catalogue of 132 variable AGN candidates. Several of the candidates are X-ray sources. We compare our results with an HST variability study of X-ray and IR detected AGNs, finding consistent results. The relatively high fraction of confirmed AGNs in our sample (60%) allowed us to extract a list of reliable AGN candidates for spectroscopic follow-up observations. Table [see full text] is only available in electronic form at http://www.aanda.org
Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.
2009-01-01
We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small-opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is imaging objects such as a mouse brain, which could find potential applications in molecular imaging.
UTE bi-component analysis of T2* relaxation in articular cartilage
Shao, H.; Chang, E.Y.; Pauli, C.; Zanganeh, S.; Bae, W.; Chung, C.B.; Tang, G.; Du, J.
2015-01-01
Objectives To determine T2* relaxation in articular cartilage using ultrashort echo time (UTE) imaging and bi-component analysis, with an emphasis on the deep radial and calcified cartilage. Methods Ten patellar samples were imaged using two-dimensional (2D) UTE and Carr-Purcell-Meiboom-Gill (CPMG) sequences. UTE images were fitted with a bi-component model to calculate T2* and relative fractions. CPMG images were fitted with a single-component model to calculate T2. The high signal line above the subchondral bone was regarded as the deep radial and calcified cartilage. Depth and orientation dependence of T2*, fraction and T2 were analyzed with histopathology and polarized light microscopy (PLM), confirming normal regions of articular cartilage. An interleaved multi-echo UTE acquisition scheme was proposed for in vivo applications (n = 5). Results The short T2* values remained relatively constant across the cartilage depth, while the long T2* values and long T2* fractions tended to increase from the subchondral bone to the superficial cartilage. Long T2*s and T2s showed a significant magic angle effect for all layers of cartilage from the medial to lateral facets, while the short T2* values and T2* fractions are insensitive to the magic angle effect. The deep radial and calcified cartilage showed a mean short T2* of 0.80 ± 0.05 ms and short T2* fraction of 39.93 ± 3.05% in vitro, and a mean short T2* of 0.93 ± 0.58 ms and short T2* fraction of 35.03 ± 4.09% in vivo. Conclusion UTE bi-component analysis can characterize the short and long T2* values and fractions across the cartilage depth, including the deep radial and calcified cartilage. The short T2* values and T2* fractions are magic angle insensitive.
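A minimal sketch of the bi-component T2* analysis described above, assuming a simple two-pool exponential decay fitted to UTE signal as a function of echo time; the echo times, amplitudes and noise level below are made-up illustration values, not the study's acquisition parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def bicomponent(te, a_short, t2s_short, a_long, t2s_long):
    """Two-component T2* decay model: short and long water pools."""
    return a_short * np.exp(-te / t2s_short) + a_long * np.exp(-te / t2s_long)

# Hypothetical UTE echo times (ms) and a noisy signal for one cartilage ROI
te = np.array([0.05, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6])
sig = bicomponent(te, 0.4, 0.8, 0.6, 22.0) + 0.01 * np.random.randn(te.size)

p0 = [0.5, 1.0, 0.5, 20.0]                      # initial guesses for the fit
params, _ = curve_fit(bicomponent, te, sig, p0=p0, bounds=(0, np.inf))
a_s, t2_short, a_l, t2_long = params
short_fraction = a_s / (a_s + a_l)              # compare with the ~40% reported in vitro
print(t2_short, t2_long, short_fraction)
```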
Kothapalli, Sri-Rajasekhar; Ma, Te-Jen; Vaithilingam, Srikant; Oralkan, Ömer
2014-01-01
In this paper, we demonstrate 3-D photoacoustic imaging (PAI) of light absorbing objects embedded as deep as 5 cm inside strong optically scattering phantoms using a miniaturized (4 mm × 4 mm × 500 µm), 2-D capacitive micromachined ultrasonic transducer (CMUT) array of 16 × 16 elements with a center frequency of 5.5 MHz. Two-dimensional tomographic images and 3-D volumetric images of the objects placed at different depths are presented. In addition, we studied the sensitivity of CMUT-based PAI to the concentration of indocyanine green dye at 5 cm depth inside the phantom. Under optimized experimental conditions, the objects at 5 cm depth can be imaged with SNR of about 35 dB and a spatial resolution of approximately 500 µm. Results demonstrate that CMUTs with integrated front-end amplifier circuits are an attractive choice for achieving relatively high depth sensitivity for PAI.
The Great Easter Egg Hunt: The Void's Incredible Richness
NASA Astrophysics Data System (ADS)
2006-04-01
An image made of about 300 million pixels is being released by ESO, based on more than 64 hours of observations with the Wide-Field Camera on the 2.2m telescope at La Silla (Chile). The image covers an 'empty' region of the sky five times the size of the full moon, opening an exceptionally clear view towards the most distant part of our universe. It reveals objects that are 100 million times fainter than what the unaided eye can see. Easter is in many countries a time of great excitement for children who are on the big hunt for chocolate eggs, hidden all about the place. Astronomers, however, do not need to wait for this special day to get such excitement: it is indeed daily that they look for faraway objects concealed in deep images of the sky. And as with chocolate eggs, deep sky objects, such as galaxies, quasars or gravitational lenses, come in the wildest variety of colours and shapes. [ESO PR Photo 14a/06: The Deep 3 'Empty' Field] The image presented here is one such very deep image of the sky. It is the combination of 714 frames for a total exposure time of 64.5 hours obtained through four different filters (B, V, R, and I)! It consists of four adjacent Wide-Field Camera pointings (each 33x34 arcmin), covering a total area larger than one square degree. Yet, if you were to look at this large portion of the firmament with the unaided eye, you would just see... nothing. The area, named Deep 3, was indeed chosen to be a random but empty, high galactic latitude field, positioned in such a way that it can be observed from the La Silla observatory all over the year. Together with two other regions, Deep 1 and Deep 2, Deep 3 is part of the Deep Public Survey (DPS), based on ideas submitted by the ESO community and covering a total sky area of 3 square degrees. Deep 1 and Deep 2 were selected because they overlapped with regions of other scientific interest. For instance, Deep 1 was chosen to complement the deep ATESP radio survey carried out with the Australia Telescope Compact Array (ATCA) covering the region surveyed by the ESO Slice Project, while Deep 2 included the CDF-S field. Each region is observed in the optical, with the WFI, and in the near-infrared, with SOFI on the 3.5-m New Technology Telescope also at La Silla. Deep 3 is located in the Crater ('The Cup'), a southern constellation with very little interest (the brightest star is of fourth magnitude, i.e. only a factor six brighter than what a keen observer can see with the unaided eye), in between the Virgo, Corvus and Hydra constellations. Such comparatively empty fields provide an unusually clear view towards the distant regions in the Universe and thus open a window towards the earliest cosmic times. The deep imaging data can for example be used to pre-select objects by colour for follow-up spectroscopy with ESO's Very Large Telescope instruments. [ESO PR Photo 14b/06: Galaxy ESO 570-19 and Variable Star UW Crateris] But being empty is only a relative notion. True, on the whole image, the SIMBAD Astronomical database references fewer than 50 objects, clearly a tiny number compared to the myriad of anonymous stars and galaxies that can be seen in the deep image obtained by the Survey! Among the objects catalogued is the galaxy visible in the top middle right (see also PR Photo 14b/06) and named ESO 570-19. Located 60 million light-years away, this spiral galaxy is the largest in the image. It is located not so far - on the image! - from the brightest star in the field, UW Crateris.
This red giant is a variable star that is about 8 times fainter than what the unaided eye can see. The second and third brightest stars in this image are visible in the lower far right and in the lower middle left. The first is a star slightly more massive than the Sun, HD 98081, while the other is another red giant, HD 98507. [ESO PR Photo 14c/06: The DPS Deep 3 Field (Detail)] In the image, a vast number of stars and galaxies are to be studied and compared. They come in a variety of colours and the stars form amazing asterisms (a group of stars forming a pattern), while the galaxies, which are to be counted by the tens of thousands, come in different shapes and some even interact or form part of a cluster. The image and the other associated data will certainly provide a plethora of new results in the years to come. In the meantime, why don't you explore the image with the zoom-in facility, and start your own journey into infinity? Just be careful not to get lost. And remember: don't eat too many of these chocolate eggs!
An Automatic Detection System of Lung Nodule Based on Multi-Group Patch-Based Deep Learning Network.
Jiang, Hongyang; Ma, He; Qian, Wei; Gao, Mengdi; Li, Yan
2017-07-14
High-efficiency lung nodule detection contributes dramatically to the risk assessment of lung cancer. It is a significant and challenging task to quickly locate the exact positions of lung nodules. Extensive work has been done by researchers around this domain for approximately two decades. However, previous computer aided detection (CADe) schemes are mostly intricate and time-consuming since they may require additional image processing modules, such as computed tomography (CT) image transformation, lung nodule segmentation and feature extraction, to construct a whole CADe system. It is difficult for those schemes to process and analyze enormous amounts of data as the volume of medical images continues to increase. In addition, some state-of-the-art deep learning schemes may place strict requirements on the database. This study proposes an effective lung nodule detection scheme based on multi-group patches cut out from the lung images, which are enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network (CNN) model is designed to learn the knowledge of radiologists for detecting nodules of four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multi-group patch-based learning system is efficient in improving the performance of lung nodule detection and greatly reduces the false positives under a huge amount of image data.
Multidirectional Cosmic Ray Ion Detector for Deep Space CubeSats
NASA Technical Reports Server (NTRS)
Wrbanek, John D.; Wrbanek, Susan Y.
2016-01-01
NASA Glenn Research Center has proposed a CubeSat-based instrument to study solar and cosmic ray ions in lunar orbit or deep space. The objective of Solar Proton Anisotropy and Galactic cosmic ray High Energy Transport Instrument (SPAGHETI) is to provide multi-directional ion data to further understand anisotropies in SEP and GCR flux.
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Cha, Kenny H.; Richter, Caleb D.
2017-12-01
Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the ‘knowledge’ learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p = 0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
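A rough sketch of how a shared backbone with task-specific heads and a weighted joint loss could be set up for this kind of multi-task transfer learning; the backbone, feature dimension, and auxiliary weight are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    """Shared feature trunk with one classifier head per modality/task."""
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                  # e.g. a pretrained CNN trunk
        self.head_sfm = nn.Linear(feat_dim, 2)    # malignant vs. benign on SFM (main task)
        self.head_dm = nn.Linear(feat_dim, 2)     # auxiliary task on DM

    def forward(self, x, task):
        feats = self.backbone(x)
        return self.head_sfm(feats) if task == "sfm" else self.head_dm(feats)

def multitask_loss(model, sfm_batch, dm_batch, aux_weight=0.5):
    """Joint loss: main-task loss plus a down-weighted auxiliary-task loss."""
    ce = nn.CrossEntropyLoss()
    images_sfm, labels_sfm = sfm_batch
    images_dm, labels_dm = dm_batch
    loss_main = ce(model(images_sfm, "sfm"), labels_sfm)
    loss_aux = ce(model(images_dm, "dm"), labels_dm)
    return loss_main + aux_weight * loss_aux
```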
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations, in a manner that takes full advantage of the complementarity of each one. Previous relevant researches in this field have been impeded by the difficulty in identifying an appropriate single segmentation fusion criterion, providing the best possible, i.e., the more informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision making task based on the so-called "technique for order performance by similarity to ideal solution". Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
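The decision-making step named above, a TOPSIS-style ranking of candidate fusion results against an ideal solution, can be sketched as follows; the criteria weights and example scores are illustrative only, not values from the paper.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank candidates by closeness to the ideal solution (TOPSIS).

    scores  : (n_candidates, n_criteria) matrix, e.g. one row per consensus
              segmentation with columns such as F-measure and consistency error.
    weights : importance of each criterion (sums to 1).
    benefit : True where larger is better, False where smaller is better.
    """
    norm = scores / np.linalg.norm(scores, axis=0)        # column-wise normalization
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness)                         # best candidate first

# Two criteria: F-measure (benefit) and global consistency error (cost)
ranking = topsis(np.array([[0.80, 0.10], [0.75, 0.05], [0.85, 0.20]]),
                 weights=np.array([0.5, 0.5]),
                 benefit=np.array([True, False]))
print(ranking)
```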
Overview of deep learning in medical imaging.
Suzuki, Kenji
2017-09-01
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
Nonparametric Representations for Integrated Inference, Control, and Sensing
2015-10-01
…to develop a new framework for autonomous operations that will extend the state of the art in distributed learning and modeling from data…
Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery
NASA Astrophysics Data System (ADS)
Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.
2018-05-01
In this paper, we have presented an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.
Deep features for efficient multi-biometric recognition with face and ear images
NASA Astrophysics Data System (ADS)
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the security field. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and a lower error rate. In this paper, we present a multimodal biometric system based on face and ear images, and propose how to exploit deep features extracted from Convolutional Neural Networks (CNNs) on the face and ear images to obtain more powerful discriminative features and more robust representations. First, the deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using both traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that DCA-based fusion is superior to traditional fusion.
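To make the simpler of the two fusion paths concrete, here is a hedged scikit-learn sketch of concatenation fusion followed by a multi-class SVM; the feature arrays are random placeholders standing in for VGG-M deep features, and the DCA step is not reproduced.

```python
# Sketch: concatenate deep face and ear features, then classify with a multi-class SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_per_subject = 10, 8
face_feats = rng.normal(size=(n_subjects * n_per_subject, 4096))  # placeholder features
ear_feats = rng.normal(size=(n_subjects * n_per_subject, 4096))   # placeholder features
labels = np.repeat(np.arange(n_subjects), n_per_subject)

fused = np.hstack([face_feats, ear_feats])          # serial (concatenation) fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)          # one-vs-one multi-class SVM
print("recognition rate:", clf.score(X_te, y_te))
```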
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
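The unified feature description step can be illustrated with one common normalized hypergraph Laplacian (in the form of Zhou et al.) followed by its eigen-decomposition; the toy incidence matrix and the choice of this particular normalization are assumptions, since the abstract does not give the exact construction used with the low-rank representation.

```python
# Sketch: normalized hypergraph Laplacian and its eigen-decomposition.
import numpy as np

H = np.array([[1, 0, 1],        # vertices x hyperedges incidence matrix (toy example)
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
w = np.ones(H.shape[1])                       # hyperedge weights
Dv = np.diag(H @ w)                           # vertex degrees
De = np.diag(H.sum(axis=0))                   # hyperedge degrees
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Dv)))
W = np.diag(w)

# One common form: L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
L = np.eye(H.shape[0]) - Dv_inv_sqrt @ H @ W @ np.linalg.inv(De) @ H.T @ Dv_inv_sqrt

eigvals, eigvecs = np.linalg.eigh(L)          # standard eigen-decomposition
embedding = eigvecs[:, :2]                    # smallest eigenvectors as a unified feature
print(embedding.shape)
```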
NASA Astrophysics Data System (ADS)
Armstrong, Roy A.; Singh, Hanumant
2006-09-01
Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the required information for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image based navigation, and multi-sensor fusion of acoustic and optical data.
NASA Astrophysics Data System (ADS)
Castander, F. J.
The Dark UNiverse Explorer (DUNE) is a wide-field imaging mission concept whose primary goal is the study of dark energy and dark matter with unprecedented precision. To this end, DUNE is optimised for weak gravitational lensing, and also uses complementary cosmological probes, such as baryonic oscillations, the integrated Sachs-Wolfe effect, and cluster counts. Besides its observational cosmology goals, the mission capabilities of DUNE allow the study of galaxy evolution, galactic structure and the demographics of Earth-mass planets. DUNE is a medium class mission consisting of a 1.2 m telescope designed to carry out an all-sky survey in one visible and three NIR bands. The final data of the DUNE mission will form a unique legacy for the astronomy community. DUNE has been selected jointly with SPACE for an ESA Assessment phase which has led to the Euclid merged mission concept, which combines wide-field deep imaging with low resolution multi-object spectroscopy.
NIRcam-NIRSpec GTO Observations of Galaxy Evolution
NASA Astrophysics Data System (ADS)
Rieke, Marcia J.; Ferruit, Pierre; Alberts, Stacey; Bunker, Andrew; Charlot, Stephane; Chevallard, Jacopo; Dressler, Alan; Egami, Eiichi; Eisenstein, Daniel; Endsley, Ryan; Franx, Marijn; Frye, Brenda L.; Hainline, Kevin; Jakobsen, Peter; Lake, Emma Curtis; Maiolino, Roberto; Rix, Hans-Walter; Robertson, Brant; Stark, Daniel; Williams, Christina; Willmer, Christopher; Willott, Chris J.
2017-06-01
The NIRSpec and NIRCam GTO Teams are planning a joint imaging and spectroscopic study of the high redshift universe. By virtue of planning a joint program which includes medium and deep near- and mid-infrared imaging surveys and multi-object spectroscopy (MOS) of sources in the same fields, we have learned much about planning observing programs for each of the instruments and using them in parallel mode to maximize photon collection time. The design and rationale for our joint program will be explored in this talk with an emphasis on why we have chosen particular suites of filters and spectroscopic resolutions, why we have chosen particular exposure patterns, and how we have designed the parallel observations. The actual observations that we intend to execute will serve as examples of how to lay out mosaics and MOS observations to maximize observing efficiency for surveys with JWST.
Optical cryptography with biometrics for multi-depth objects.
Yan, Aimin; Wei, Yang; Hu, Zhijuan; Zhang, Jingtao; Tsang, Peter Wai Ming; Poon, Ting-Chung
2017-10-11
We propose an optical cryptosystem for encrypting images of multi-depth objects based on the combination of the optical heterodyne technique and fingerprint keys. Optical heterodyning requires two optical beams to be mixed. For encryption, each optical beam is modulated by an optical mask containing the fingerprint of either the person sending or the person receiving the image. The pair of optical masks are taken as the encryption keys. Subsequently, the two beams are used to scan over a multi-depth 3-D object to obtain an encrypted hologram. During the decryption process, each sectional image of the 3-D object is recovered by convolving its encrypted hologram (through numerical computation) with the encrypted hologram of a pinhole image that is positioned at the same depth as the sectional image. Our proposed method has three major advantages. First, the lost-key situation can be avoided with the use of fingerprints as the encryption keys. Second, the method can be applied to encrypt 3-D images from which sectional images can subsequently be decrypted. Third, since optical heterodyne scanning is employed to encrypt a 3-D object, the optical system is incoherent, resulting in a negligible amount of speckle noise upon decryption. To the best of our knowledge, this is the first time optical cryptography of 3-D object images has been demonstrated in an incoherent optical system with biometric keys.
Seifi, Payam; Epel, Boris; Sundramoorthy, Subramanian V.; Mailer, Colin; Halpern, Howard J.
2011-01-01
Purpose: Electron spin-echo (ESE) oxygen imaging is a new and evolving electron paramagnetic resonance (EPR) imaging (EPRI) modality that is useful for physiological in vivo applications, such as EPR oxygen imaging (EPROI), with potential application to imaging of multicentimeter objects as large as human tumors. A present limitation on the size of the object to be imaged at a given resolution is the frequency bandwidth of the system, since the location is encoded as a frequency offset in ESE imaging. The authors’ aim in this study was to demonstrate the object size advantage of the multioffset bandwidth extension technique.Methods: The multiple-stepped Zeeman field offset (or simply multi-B) technique was used for imaging of an 8.5-cm-long phantom containing a narrow single line triaryl methyl compound (trityl) solution at the 250 MHz imaging frequency. The image is compared to a standard single-field ESE image of the same phantom.Results: For the phantom used in this study, transverse relaxation (T2e) electron spin-echo (ESE) images from multi-B acquisition are more uniform, contain less prominent artifacts, and have a better signal to noise ratio (SNR) compared to single-field T2e images.Conclusions: The multi-B method is suitable for imaging of samples whose physical size restricts the applicability of the conventional single-field ESE imaging technique. PMID:21815379
Miller, Sean J; Rothstein, Jeffrey D
2017-01-01
Pathological analyses and methodology have recently undergone a dramatic revolution. With the creation of tissue clearing methods such as CLARITY and CUBIC, groups can now achieve complete transparency in tissue samples in nano-porous hydrogels. Cleared tissue is then imaged in a semi-aqueous medium that matches the refractive index of the objective being used. However, one major challenge is the ability to control tissue movement during imaging and to relocate precise locations after sequential clearing and re-staining. Using 3D printers, we designed tissue molds that fit precisely around the specimen being imaged. First, images are taken of the specimen, followed by the import and design of a structural mold, which is then printed with affordable plastics by a 3D printer. With our novel design, we have created tissue molds called innovative molds (iMolds) that can be generated in any laboratory and are customized for any organ, tissue, or bone matter being imaged. Furthermore, the inexpensive and reusable tissue molds are made compatible with any microscope, such as single- and multi-photon confocal systems with varying stage dimensions. Excitingly, iMolds can also be generated to hold multiple organs in one mold, making reconstruction and imaging much easier. Taken together, with iMolds it is now possible to image cleared tissue in clearing medium while limiting movement and being able to relocate precise anatomical and cellular locations in sequential imaging events in any basic laboratory. This system provides great potential for screening widespread effects of therapeutics and disease across entire organ systems.
Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation
NASA Astrophysics Data System (ADS)
Sakamoto, M.; Honda, Y.; Kondo, A.
2016-06-01
Over the last decade, multi-scale image segmentation has attracted particular interest and is practically being used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of derived region shapes. First, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Second, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which was introduced to control shape diversity. Third, we develop a new shape criterion called aspect ratio. This criterion helps improve how well the shape of a derived object reproduces the actual object of interest. It constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or optimizing the F-measure evaluation index. Thus, it becomes possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
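As a rough illustration of the aspect-ratio idea, the sketch below computes the bounding-box aspect ratio of a segmented region; the exact evaluation formula and thresholds used by the authors are not reproduced.

```python
# Sketch: bounding-box aspect ratio of a binary region mask.
import numpy as np

def bbox_aspect_ratio(mask):
    """Aspect ratio (long side / short side) of the region's bounding box."""
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return max(height, width) / min(height, width)

region = np.zeros((20, 20), dtype=bool)
region[5:8, 2:18] = True                  # an elongated region
print(bbox_aspect_ratio(region))          # ~5.33, far from a compact shape
```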
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.
2015-01-01
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169
Application of deep learning to the classification of images from colposcopy.
Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige
2018-03-01
The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.
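Since the study names the Keras and TensorFlow libraries together with L2 regularization and dropout, a hedged sketch of a small classifier of this kind is given below; the input size, filter counts, and rates are illustrative assumptions, not the study's actual architecture.

```python
# Sketch: small Keras/TensorFlow CNN with L2 regularization and dropout
# for a three-class problem (severe dysplasia, CIS, invasive cancer).
import tensorflow as tf

reg = tf.keras.regularizers.l2(1e-4)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", kernel_regularizer=reg,
                           input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", kernel_regularizer=reg),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```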
Correlation between low level fluctuations in the x ray background and faint galaxies
NASA Technical Reports Server (NTRS)
Tolstoy, Eline; Griffiths, R. E.
1993-01-01
The goal is to identify a correlation between low-level x-ray fluctuations in the cosmic x-ray background flux and the large numbers of galaxies found in deep optical imaging, down to m_v ≤ 24-26. These faint galaxies were optically identified by their morphology and color in deep multi-color CCD images and plate material. Statistically significant correlations between these galaxies and low-level x-ray fluctuations at the same positions were then searched for in multiple deep Einstein HRI observations in PAVO and in a ROSAT PSPC field. Our aim is to test the hypothesis that faint 'star burst' galaxies might contribute significantly to the cosmic x-ray background (at approximately 1 keV).
Wang, Shaowei; Xi, Wang; Cai, Fuhong; Zhao, Xinyuan; Xu, Zhengping; Qian, Jun; He, Sailing
2015-01-01
Gold nanoparticles can be used as contrast agents for bio-imaging applications. Here we studied multi-photon luminescence (MPL) of gold nanorods (GNRs), under the excitation of femtosecond (fs) lasers. GNRs functionalized with polyethylene glycol (PEG) molecules have high chemical and optical stability, and can be used as multi-photon luminescent nanoprobes for deep in vivo imaging of live animals. We have found that the depth of in vivo imaging is dependent upon the transmission and focal capability of the excitation light interacting with the GNRs. Our study focused on the comparison of MPL from GNRs with two different aspect ratios, as well as their ex vivo and in vivo imaging effects under 760 nm and 1000 nm excitation, respectively. Both of these wavelengths were located at an optically transparent window of biological tissue (700-1000 nm). PEGylated GNRs, which were intravenously injected into mice via the tail vein and accumulated in major organs and tumor tissue, showed high image contrast due to distinct three-photon luminescence (3PL) signals upon irradiation of a 1000 nm fs laser. Concerning in vivo mouse brain imaging, the 3PL imaging depth of GNRs under 1000 nm fs excitation could reach 600 μm, which was approximately 170 μm deeper than the two-photon luminescence (2PL) imaging depth of GNRs with a fs excitation of 760 nm. PMID:25553113
A survey on object detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei
2016-07-01
Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and has received significant attention in recent years. While numerous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as building and road, we concentrate on more generic object categories including, but not limited to, road, building, tree, vehicle, ship, airport, and urban area. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will help researchers gain a better understanding of this research field.
Appleton, P L; Quyn, A J; Swift, S; Näthke, I
2009-05-01
Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. Regions of highest interest lie between 50 and 200 µm within this tissue. The quality and usefulness of three-dimensional image data of tissue at such depth is limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at a maximum distance of 10-15 µm into the sample, further limiting the ability to image at high resolution deep within tissue. We show that manipulating the refractive index of the mounting media and decreasing sample opacity greatly improves image quality such that the limiting factor for a standard, inverted multi-photon microscope is determined by the working distance of the objective as opposed to detectable fluorescence. This method negates the need for mechanical sectioning of tissue and enables the routine generation of high-quality, quantitative image data that can significantly advance our understanding of tissue architecture and physiology.
Liu, Yu; Kang, Ning; Lv, Jing; Zhou, Zijian; Zhao, Qingliang; Ma, Lingceng; Chen, Zhong; Ren, Lei; Nie, Liming
2016-08-01
A gadolinium-doped multi-shell upconversion nanoparticle under 800 nm excitation is synthesized with a 10-fold fluorescence-intensity enhancement over that under 980 nm. The nanoformulations exhibit excellent photoacoustic/luminescence/magnetic resonance tri-modal imaging capabilities, enabling visualization of tumor morphology and microvessel distribution at a new imaging depth. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Landmark-based deep multi-instance learning for brain disease diagnosis.
Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang
2018-01-01
In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.
Prasoon, Adhish; Petersen, Kersten; Igel, Christian; Lauze, François; Dam, Erik; Nielsen, Mads
2013-01-01
Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. The main insight of this study is that better results were obtained with a deep learning architecture that autonomously learns the features from the images.
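The triplanar input scheme can be sketched as the extraction of three orthogonal 2D patches around a voxel, one per CNN; the volume, patch size, and center below are placeholders.

```python
# Sketch: extract xy, yz and zx patches centered on a voxel of a 3D volume.
import numpy as np

def triplanar_patches(volume, center, half=14):
    """Return the xy, yz and zx patches of size (2*half+1)^2 around `center`."""
    x, y, z = center
    xy = volume[x - half:x + half + 1, y - half:y + half + 1, z]
    yz = volume[x, y - half:y + half + 1, z - half:z + half + 1]
    zx = volume[x - half:x + half + 1, y, z - half:z + half + 1]
    return xy, yz, zx

vol = np.random.rand(64, 64, 64)          # placeholder MRI volume
for patch in triplanar_patches(vol, center=(32, 32, 32)):
    print(patch.shape)                     # each is (29, 29)
```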
Large deep neural networks for MS lesion segmentation
NASA Astrophysics Data System (ADS)
Prieto, Juan C.; Cavallari, Michele; Palotai, Miklos; Morales Pinzon, Alfredo; Egorova, Svetlana; Styner, Martin; Guttmann, Charles R. G.
2017-02-01
Multiple sclerosis (MS) is a multi-factorial autoimmune disorder, characterized by spatial and temporal dissemination of brain lesions that are visible in T2-weighted and Proton Density (PD) MRI. Assessment of lesion burden is useful for monitoring the course of the disease and assessing correlates of clinical outcomes. Although there are established semi-automated methods to measure lesion volume, most of them require human interaction and editing, which are time consuming and limit the ability to analyze large sets of data with high accuracy. The primary objective of this work is to improve existing segmentation algorithms and accelerate the time consuming operation of identifying and validating MS lesions. In this paper, a Deep Neural Network for MS Lesion Segmentation is implemented. The MS lesion samples are extracted from the Partners Comprehensive Longitudinal Investigation of Multiple Sclerosis (CLIMB) study. A set of 900 subjects with T2, PD and manually corrected label map images was used to train a Deep Neural Network and identify MS lesions. Initial tests using this network achieved a 90% accuracy rate. A secondary goal was to enable this data repository for big data analysis by using this algorithm to segment the remaining cases available in the CLIMB repository.
Stochastic HKMDHE: A multi-objective contrast enhancement algorithm
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2018-02-01
This contribution proposes a novel extension of the existing 'Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The adequacy of the proposed methodology has been experimentally validated with respect to image quality metrics such as brightness preservation, peak signal-to-noise ratio (PSNR), the Structural Similarity Index (SSIM) and the universal image quality metric. A comparative performance analysis of the proposed Stochastic HKMDHE against existing histogram equalization methods, namely Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), is also presented.
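For reference, one of the quality metrics cited above (PSNR) can be computed as in the short numpy sketch below; the images are random placeholders.

```python
# Sketch: PSNR between a reference image and an enhanced image.
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    mse = np.mean((reference.astype(float) - enhanced.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

ref = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
enh = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, enh):.2f} dB")
```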
Compressed multi-block local binary pattern for object tracking
NASA Astrophysics Data System (ADS)
Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao
2018-04-01
Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirements of tracking. Compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector is extracted based on the multi-block local binary pattern feature and compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker ran in real time and outperformed existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
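The compressive-measurement step can be sketched as a random projection of a high-dimensional multi-block LBP feature vector to a short measurement vector; the dimensions are illustrative, and a dense Gaussian matrix is used here whereas the paper employs a sparse variant.

```python
# Sketch: compress a long feature vector with a random Gaussian measurement matrix.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_measurements = 10_000, 50

feature = rng.integers(0, 256, size=n_features).astype(float)   # stand-in for MB-LBP bins
R = rng.normal(0.0, 1.0, size=(n_measurements, n_features)) / np.sqrt(n_measurements)

compressed = R @ feature          # low-dimensional feature used by the tracker
print(compressed.shape)           # (50,)
```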
A novel biomedical image indexing and retrieval system via deep preference learning.
Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou
2018-05-01
The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer aided diagnosis (CAD) systems in healthcare. Our proposed system shows an outstanding indexing ability and high efficiency for biomedical image retrieval applications and it can be used to collect and annotate the high-resolution images in a biomedical database for further biomedical image research and applications. Copyright © 2018 Elsevier B.V. All rights reserved.
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2015-08-01
In this work various approaches are investigated for X-ray image retrieval and specifically chest pathology retrieval. Given a query image taken from a data set of 443 images, the objective is to rank images according to similarity. Different features, including binary features, texture features, and deep learning (CNN) features are examined. In addition, two approaches are investigated for the retrieval task. One approach is based on the distance of image descriptors using the above features (hereon termed the "descriptor"-based approach); the second approach ("classification"-based approach) is based on a probability descriptor, generated by a pair-wise classification of each two classes (pathologies) and their decision values using an SVM classifier. Best results are achieved using deep learning features in a classification scheme.
LED induced autofluorescence (LIAF) imager with eight multi-filters for oral cancer diagnosis
NASA Astrophysics Data System (ADS)
Huang, Ting-Wei; Cheng, Nai-Lun; Tsai, Ming-Hsui; Chiou, Jin-Chern; Mang, Ou-Yang
2016-03-01
Oral cancer is a serious and growing problem in many developing and developed countries. Simple visual oral screening by a clinician can reduce oral cancer deaths by 37,000 annually worldwide. However, the conventional oral examination, with visual inspection and palpation of oral lesions, is not an objective and reliable approach for oral cancer diagnosis; it may delay hospital treatment for oral cancer patients or allow the cancer to progress uncontrolled to a late stage. Therefore, a device for oral cancer detection is developed for early diagnosis and treatment. A portable LED Induced Autofluorescence (LIAF) imager is developed by our group. It contains multiple wavelengths of LED excitation light and a rotary filter ring with eight channels to capture ex vivo oral tissue autofluorescence images. The advantages of the LIAF imager compared to other devices for oral cancer diagnosis are that it has an L-shaped probe for fixing the object distance, blocking ambient light, and observing blind spots in the deep regions between the gums (gingiva) and the lining of the mouth. Besides, the multiple LED excitation wavelengths can induce multiple autofluorescence signals, and the LIAF imager with the eight-channel rotary filter ring can detect spectral images in multiple narrow bands. The prototype of the portable LIAF imager has been applied in clinical trials on several cases in Taiwan, and the clinical trial images under specific excitation show significant differences between normal and abnormal oral tissue in these cases.
The VIRMOS deep imaging survey. I. Overview, survey strategy, and CFH12K observations
NASA Astrophysics Data System (ADS)
Le Fèvre, O.; Mellier, Y.; McCracken, H. J.; Foucaud, S.; Gwyn, S.; Radovich, M.; Dantel-Fort, M.; Bertin, E.; Moreau, C.; Cuillandre, J.-C.; Pierre, M.; Le Brun, V.; Mazure, A.; Tresse, L.
2004-04-01
This paper describes the CFH12K-VIRMOS survey: a deep BVRI imaging survey in four fields totalling more than 17 deg², conducted with the 40×30 arcmin² field CFH-12K camera. The survey is intended to be a multi-purpose survey used for a variety of science goals, including surveys of very high redshift galaxies and weak lensing studies. Four high galactic latitude fields, each 2×2 deg², have been selected along the celestial equator: 0226-04, 1003+01, 1400+05, and 2217+00. The 16 deg² of the "wide" survey are covered with exposure times of 2 hr, 1.5 hr, 1 hr, 1 hr, respectively, while the 1.3×1 deg² area of the "deep" survey at the center of the 0226-04 field is covered with exposure times of 7 h, 4.5 h, 3 h, 3 h in BVRI, respectively. An additional area of ~2 deg² has been imaged in the 0226-04 field corresponding to the area surveyed by the XMM-LSS program (Pierre et al. 2003). The data is pipeline processed at the Terapix facility at the Institut d'Astrophysique de Paris to produce large mosaic images. The catalogs produced contain the positions, shapes, total and aperture magnitudes for 2.175 million objects measured in the four areas. The limiting magnitude, measured at 5σ in a 3 arcsec diameter aperture, is I_AB = 24.8 in the "Wide" areas and I_AB = 25.3 in the deep area. Careful quality control has been applied to the data to ensure internal consistency and assess the photometric and astrometric accuracy, as described in a joint paper (McCracken et al. 2003). These catalogs are used to select targets for the VIRMOS-VLT Deep Survey, a large spectroscopic survey of the distant universe (Le Fèvre et al. 2003). First results from the CFH12K-VIRMOS survey have been published on weak lensing (e.g. van Waerbeke & Mellier 2003). Catalogs and images are available through the VIRMOS database environment under Oracle (http://www.oamp.fr/cencos). They have been open for general use since July 1st, 2003. Appendix A is only available in electronic form at http://www.edpsciences.org
Design of magnetic and fluorescent nanoparticles for in vivo MR and NIRF cancer imaging
NASA Astrophysics Data System (ADS)
Key, Jaehong
One major challenge for cancer treatment is that detection of cancers in the early stages, before metastasis occurs, is error-prone. With current imaging modalities, the detection of small tumors with metastatic potential is still very difficult. Thus, the development of multi-component nanoparticles (NPs) for dual-modality cancer imaging is invaluable. Multi-component NPs can be an alternative to overcome the limitations of a single imaging modality. For example, multi-component NPs can visualize small tumors in both magnetic resonance imaging (MRI) and near infrared fluorescence (NIRF) imaging, which can help find the location of tumors deep inside the body using MRI and subsequently guide surgeons to delineate the margin of tumors using highly sensitive NIRF imaging during a surgical operation. In this dissertation, we demonstrated the potential of the MRI and NIRF dual-modality NPs for skin and bladder cancer imaging. The multi-component NPs consisted of glycol chitosan, superparamagnetic iron oxide, NIRF dye, and cancer targeting peptides. We characterized the NPs and evaluated them with tumor-bearing mice as well as various cancer cells. The findings of this research will contribute to the development of cancer diagnostic imaging, and they can also be extensively applied to drug delivery systems and fluorescence-guided surgical removal of cancer.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
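A minimal PyTorch sketch of atrous convolution arranged as an ASPP-style block is given below for illustration; the channel counts and dilation rates are placeholders rather than DeepLab's exact configuration.

```python
# Sketch: parallel atrous (dilated) convolutions concatenated, ASPP-style.
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # parallel atrous branches probe the same feature map at several rates
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

features = torch.randn(1, 256, 33, 33)     # a DCNN feature map
print(SimpleASPP()(features).shape)        # torch.Size([1, 256, 33, 33])
```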
NASA Astrophysics Data System (ADS)
Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.
2018-05-01
In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan
2018-04-01
A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, solely relying on the spatial information leads to a relatively poor performance of the autofocusing process. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made our training and testing data set (~12 GB) open-source for the broad research community.
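The transform-domain inputs described above can be sketched for a single tile as the log-magnitude Fourier spectrum and the autocorrelation obtained via the Wiener-Khinchin relation; the random image stands in for a captured whole-slide tile.

```python
# Sketch: build spatial + transform-domain channels as a multi-channel CNN input.
import numpy as np

img = np.random.rand(224, 224)                                 # placeholder tile

F = np.fft.fft2(img)
spectrum = np.log1p(np.abs(np.fft.fftshift(F)))                # log-magnitude spectrum
autocorr = np.fft.fftshift(np.fft.ifft2(np.abs(F) ** 2).real)  # autocorrelation

cnn_input = np.stack([img, spectrum, autocorr], axis=-1)       # stacked channels
print(cnn_input.shape)   # (224, 224, 3)
```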
The KMOS Cluster Survey - KCS: Timing the Formation of Passive Galaxies in Clusters at 1.4
NASA Astrophysics Data System (ADS)
Beifiori, Alessandra
2017-07-01
In this talk I will discuss recent progress studying the rest-frame optical properties of quiescent galaxies at this critical epoch using KMOS, the K-band Multi-Object Spectrograph on the ESO/VLT. I will highlight recent results from the KMOS Cluster Survey (KCS), whose aim is to provide a census of quiescent galaxy kinematics at 1.4 ≤ z ≤ 1.8 in known overdensities. The combination of kinematic measurements from KMOS and structural parameters measured from deep HST imaging allowed us to place constraints on the formation ages of passive galaxies at 1.4
Tracing the Evolution of Passive Galaxies in Clusters at 1.4
NASA Astrophysics Data System (ADS)
Beifiori, Alessandra
2017-08-01
In this talk I will discuss recent progress studying the rest-frame optical properties of quiescent galaxies at this critical epoch using KMOS, the K-band Multi-Object Spectrograph on the ESO/VLT. I will highlight recent results from the KMOS Cluster Survey (KCS), whose aim is to provide a census of quiescent galaxy kinematics at 1.4 ≤ z ≤ 1.8 in known overdensities. The combination of kinematic measurements from KMOS and structural parameters measured from deep HST imaging allowed us to place constraints on the formation ages of passive galaxies at 1.4
Ertosun, Mehmet Günhan; Rubin, Daniel L
2015-01-01
Brain glioma is the most common primary malignant brain tumor in adults, with different pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. Survival and treatment options are highly dependent on the glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic quality statistics, such as precision, sensitivity and specificity, of the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to developing less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the requirements of minimum accuracy levels that are demanded by the context of different decision points within a multi-class classification scheme. Convolutional Neural Networks are trained for each module and each sub-task with more than 90% classification accuracy on the validation data set, and achieved a classification accuracy of 96% for the task of GBM vs. LGG classification and 71% for further identifying LGG as Grade II or Grade III on an independent data set of new patients from the multi-institutional repository.
Endoscopic probe optics for spectrally encoded confocal microscopy.
Kang, Dongkyun; Carruth, Robert W; Kim, Minkyu; Schlachter, Simon C; Shishkov, Milen; Woods, Kevin; Tabatabaei, Nima; Wu, Tao; Tearney, Guillermo J
2013-01-01
Spectrally encoded confocal microscopy (SECM) is a form of reflectance confocal microscopy that can achieve high imaging speeds using relatively simple probe optics. Previously, the feasibility of conducting large-area SECM imaging of the esophagus in bench top setups has been demonstrated. Challenges remain, however, in translating SECM into a clinically-useable device; the tissue imaging performance should be improved, and the probe size needs to be significantly reduced so that it can fit into luminal organs of interest. In this paper, we report the development of new SECM endoscopic probe optics that addresses these challenges. A custom water-immersion aspheric singlet (NA = 0.5) was developed and used as the objective lens. The water-immersion condition was used to reduce the spherical aberrations and specular reflection from the tissue surface, which enables cellular imaging of the tissue deep below the surface. A custom collimation lens and a small-size grating were used along with the custom aspheric singlet to reduce the probe size. A dual-clad fiber was used to provide both the single- and multi- mode detection modes. The SECM probe optics was made to be 5.85 mm in diameter and 30 mm in length, which is small enough for safe and comfortable endoscopic imaging of the gastrointestinal tract. The lateral resolution was 1.8 and 2.3 µm for the single- and multi- mode detection modes, respectively, and the axial resolution 11 and 17 µm. SECM images of the swine esophageal tissue demonstrated the capability of this device to enable the visualization of characteristic cellular structural features, including basal cell nuclei and papillae, down to the imaging depth of 260 µm. These results suggest that the new SECM endoscopic probe optics will be useful for imaging large areas of the esophagus at the cellular scale in vivo.
3D printed optical phantoms and deep tissue imaging for in vivo applications including oral surgery
NASA Astrophysics Data System (ADS)
Bentz, Brian Z.; Costas, Alfonso; Gaind, Vaibhav; Garcia, Jose M.; Webb, Kevin J.
2017-03-01
Progress in developing optical imaging for biomedical applications requires customizable and often complex objects known as "phantoms" for testing, evaluation, and calibration. This work demonstrates that 3D printing is an ideal method for fabricating such objects, allowing intricate inhomogeneities to be placed at exact locations in complex or anatomically realistic geometries, a process that is difficult or impossible using molds. We show printed mouse phantoms we have fabricated for developing deep tissue fluorescence imaging methods, and measurements of both their optical and mechanical properties. Additionally, we present a printed phantom of the human mouth that we use to develop an artery localization method to assist in oral surgery.
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to multiple low-contrast cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and less robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieved greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the low contrast and heavy noise of microscope images, and to obtain better segmentation results for sperm images, this paper presents a multi-scale gradient operator combined with multi-structuring elements for the micro-spermatozoa image of the white rat, as the multi-scale gradient operator can smooth image noise while the multi-structuring elements retain more shape details of the sperm. Then, we use the Otsu method to segment the modified gradient image, whose processed gray scale is strong for sperm and weak for the background, converting it into a binary sperm image. Because the obtained binary image contains impurities whose shapes are not similar to sperm, we use a form factor to filter out objects whose form factor value is larger than a selected critical value and retain the rest, yielding the final binary image of the segmented sperm. The experiment shows this method's clear advantage in the segmentation of micro-spermatozoa images.
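The form-factor filtering step can be sketched with scikit-image region properties, keeping only objects whose circularity-style form factor (4πA/P²) stays below a chosen critical value; the toy mask and threshold are illustrative.

```python
# Sketch: filter binary objects by form factor, keeping elongated (sperm-like) ones.
import numpy as np
from skimage.measure import label, regionprops

binary = np.zeros((60, 60), dtype=bool)
binary[10:13, 5:45] = True      # elongated, sperm-like object (low form factor)
binary[35:43, 35:43] = True     # compact, blob-like impurity (high form factor)

kept = np.zeros_like(binary)
for region in regionprops(label(binary)):
    form_factor = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
    if form_factor < 0.6:       # critical value (assumed)
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        kept[rr, cc] = True
print("objects kept:", label(kept).max())
```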
3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor
Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo
2017-01-01
In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. Robustness of the multi-agent system against environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512 pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.
Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin
2016-05-01
Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention, and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: efficiency in scanning high-dimensional parametric spaces, and the need for representative image features, which otherwise require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state-of-the-art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.
Understanding Deep Representations Learned in Modeling Users Likes.
Guntuku, Sharath Chandra; Zhou, Joey Tianyi; Roy, Sujoy; Lin, Weisi; Tsang, Ivor W
2016-08-01
Automatically understanding and discriminating different users' liking for an image is a challenging problem. This is because the relationship between image features (even semantic ones extracted by existing tools, viz., faces, objects, and so on) and users' likes is non-linear, influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows for the transfer of semantic knowledge between the two modalities. Feature selection is applied before learning deep representation to identify the important features for a user to like an image. The proposed representation is shown to be effective in discriminating users based on images they like and also in recommending images that a given user likes, outperforming the state-of-the-art feature representations by ∼15%-20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.
Airplane detection in remote sensing images using convolutional neural networks
NASA Astrophysics Data System (ADS)
Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei
2018-03-01
Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show greater advantages than traditional methods, and we explain why this happens. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
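As a rough illustration of the objective such a search maximizes, the sketch below computes Otsu's between-class variance for a candidate set of thresholds and keeps the best of many random candidates; the random search is only a stand-in for the flower pollination algorithm itself, and the histogram is a synthetic placeholder.

```python
import numpy as np

def otsu_multilevel_objective(hist, thresholds):
    """Between-class variance for a set of thresholds (to be maximized).

    hist: 256-bin grayscale histogram; thresholds: ints strictly inside (0, 255).
    """
    p = hist / hist.sum()
    bins = np.arange(len(p))
    mu_total = (p * bins).sum()
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        mu = (p[lo:hi] * bins[lo:hi]).sum() / w
        variance += w * (mu - mu_total) ** 2   # weighted spread of class means
    return variance

# Random-search stand-in for the pollination step: keep the best of many candidates.
hist = np.random.randint(0, 500, size=256).astype(float)   # placeholder histogram
best = max((tuple(sorted(np.random.choice(np.arange(1, 255), 3, replace=False)))
            for _ in range(2000)), key=lambda t: otsu_multilevel_objective(hist, t))
print("thresholds:", best)
```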
Parham, Christopher A; Zhong, Zhong; Pisano, Etta; Connor, Jr., Dean M
2015-03-03
Systems and methods for detecting an image of an object using a multi-beam imaging system from an x-ray beam having a polychromatic energy distribution are disclosed. According to one aspect, a method can include generating a first X-ray beam having a polychromatic energy distribution. Further, the method can include positioning a plurality of monochromator crystals in a predetermined position to directly intercept the first X-ray beam such that a plurality of second X-ray beams having predetermined energy levels are produced. Further, an object can be positioned in the path of the second X-ray beams for transmission of the second X-ray beams through the object and emission from the object as transmitted X-ray beams. The transmitted X-ray beams can each be directed at an angle of incidence upon one or more crystal analyzers. Further, an image of the object can be detected from the beams diffracted from the analyzer crystals.
Coherent beam control through inhomogeneous media in multi-photon microscopy
NASA Astrophysics Data System (ADS)
Paudel, Hari Prasad
Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence generated by scattered light, due to random fluctuations in the refractive index inside the medium, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction, a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherently combining the scattered light to form an apparent focus even at depths where negligible ballistic light remains (i.e. the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach, a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending the imaging FOV in the presence of sample aberrations.
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of the image information. As spatial resolution improves, "pepper and salt" noise appears and classification results are affected when pixelwise classification algorithms, which ignore the spatial relationships among pixels, are applied to high-resolution satellite images. To classify multi-temporal high-resolution images under limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are each cut into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from the superpixels are assembled into vectors. Thirdly, a majority-voting manifold alignment method aimed at the high-resolution problem is proposed to map the vector data into an alignment space. Finally, all data in the alignment space are classified with the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF-1 and GF-2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
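A hedged sketch of the first and last steps only (SLIC superpixels, per-superpixel feature vectors, KNN classification); the manifold alignment step itself is omitted, and the synthetic image, labels and parameter values are placeholders rather than the paper's configuration.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def superpixel_features(image, n_segments=200):
    """Cut the image into SLIC superpixels and describe each by its mean colour."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = np.array([image[segments == s].mean(axis=0)
                      for s in np.unique(segments)])
    return segments, feats

# Synthetic stand-in for one temporal image (real inputs would be GF-1/GF-2 tiles).
image = np.random.rand(128, 128, 3)
segments, feats = superpixel_features(image)

# Pretend a handful of superpixels are labelled; classify the rest with KNN,
# which is the final step the abstract applies in the aligned feature space.
train_idx = np.arange(0, len(feats), 10)
train_labels = np.random.randint(0, 3, size=len(train_idx))
clf = KNeighborsClassifier(n_neighbors=3).fit(feats[train_idx], train_labels)
predicted = clf.predict(feats)
print(predicted[:10])
```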
Deep Marginalized Sparse Denoising Auto-Encoder for Image Denoising
NASA Astrophysics Data System (ADS)
Ma, Hongqiang; Ma, Shiping; Xu, Yuelei; Zhu, Mingming
2018-01-01
Stacked Sparse Denoising Auto-Encoder (SSDA) has been successfully applied to image denoising. As a deep network with powerful feature learning ability, SSDA is superior to traditional image denoising algorithms. However, the algorithm has high computational complexity and a slow convergence rate in training. To address this limitation, we present an image denoising method based on a Deep Marginalized Sparse Denoising Auto-Encoder (DMSDA). The loss function of the Sparse Denoising Auto-Encoder is marginalized so that it satisfies both sparseness and marginality. The experimental results show that the proposed algorithm not only outperforms SSDA in convergence speed and training time, but also achieves better denoising performance than current state-of-the-art denoising algorithms in both subjective and objective evaluations of image denoising.
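For context, the sketch below shows a single sparse denoising auto-encoder layer in PyTorch, the building block that SSDA stacks; the marginalization of the corruption described in the abstract is not reproduced, and the layer sizes, noise level and penalty weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseDAE(nn.Module):
    """One denoising auto-encoder layer with a sparsity penalty on hidden activations."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def loss_fn(x_clean, x_recon, h, rho=0.05, beta=1e-3):
    # Reconstruction error plus a KL-divergence sparsity term on mean activations.
    recon = ((x_recon - x_clean) ** 2).mean()
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return recon + beta * kl

model = SparseDAE(n_in=64, n_hidden=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 64)                      # clean 8x8 patches (synthetic here)
for _ in range(100):
    noisy = x + 0.1 * torch.randn_like(x)    # corrupt the input, learn to reconstruct
    recon, h = model(noisy)
    loss = loss_fn(x, recon, h)
    opt.zero_grad(); loss.backward(); opt.step()
```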
The Image of the Negro in Deep South Public School State History Texts.
ERIC Educational Resources Information Center
McLaurin, Melton
This report reviews the image portrayed of the Negro, in textbooks used in the deep South. Slavery is painted as a cordial, humane system under kindly masters and the Negro as docile and childlike. Although the treatment of the modern era is relatively more objective, the texts, on the whole, evade treatment of the Civil Rights struggle, violence,…
NASA Astrophysics Data System (ADS)
Ren, B.; Wen, Q.; Zhou, H.; Guan, F.; Li, L.; Yu, H.; Wang, Z.
2018-04-01
The purpose of this paper is to provide decision support for the adjustment and optimization of crop planting structure in Jingxian County. An object-oriented information extraction method is used to extract corn and cotton in Jingxian County of Hengshui City in Hebei Province, based on multi-period GF-1 16-meter images. The best time for data extraction was determined by analyzing the spectral characteristics of corn and cotton at different growth stages, based on the multi-period GF-1 16-meter images, phenological data, and field survey data. The results showed that the total classification accuracy of corn and cotton was up to 95.7 %, the producer accuracy was 96 % and 94 % respectively, and the user accuracy was 95.05 % and 95.9 % respectively, which satisfies the demands of crop monitoring applications. Therefore, combining multi-period high-resolution images with object-oriented classification can effectively extract the large-scale distribution of crops, providing a convenient and effective technical means for crop monitoring.
Instance annotation for multi-instance multi-label learning
F. Briggs; X.Z. Fern; R. Raich; Q. Lou
2013-01-01
Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen...
Leveraging Human Insights by Combining Multi-Objective Optimization with Interactive Evolution
2015-03-26
...application, a program that used human selections to guide the evolution of insect-like images. He was able to demonstrate that humans provide key insights... Thesis by Joshua R. Christman, Second Lieutenant, USAF, presented to the Faculty of the Department of Electrical and Computer Engineering.
Near-UV Sources in the Hubble Ultra Deep Field: The Catalog
NASA Technical Reports Server (NTRS)
Gardner, Jonathan P.; Voyrer, Elysse; de Mello, Duilia F.; Siana, Brian; Quirk, Cori; Teplitz, Harry I.
2009-01-01
The catalog from the first high resolution U-band image of the Hubble Ultra Deep Field, taken with Hubble's Wide Field Planetary Camera 2 through the F300W filter, is presented. We detect 96 U-band objects and compare and combine this catalog with a Great Observatories Origins Deep Survey (GOODS) B-selected catalog that provides B, V, i, and z photometry, spectral types, and photometric redshifts. We have also obtained Far-Ultraviolet (FUV, 1614 Angstroms) data with Hubble's Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) and with Galaxy Evolution Explorer (GALEX). We detected 31 sources with ACS/SBC, 28 with GALEX/FUV, and 45 with GALEX/NUV. The methods of observations, image processing, object identification, catalog preparation, and catalog matching are presented.
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants, and in multi-angle views.
Brain Tumor Segmentation Using Deep Belief Networks and Pathological Knowledge.
Zhan, Tianming; Chen, Yi; Hong, Xunning; Lu, Zhenyu; Chen, Yunjie
2017-01-01
In this paper, we propose an automatic brain tumor segmentation method based on Deep Belief Networks (DBNs) and pathological knowledge. The proposed method is targeted against gliomas (both low and high grade) obtained in multi-sequence magnetic resonance images (MRIs). Firstly, a novel deep architecture is proposed to combine multi-sequence intensity feature extraction with classification to get the classification probabilities of each voxel. Then, graph-cut-based optimization is executed on the classification probabilities to strengthen the spatial relationships of voxels. At last, pathological knowledge of gliomas is applied to remove some false positives. Our method was validated on the Brain Tumor Segmentation Challenge 2012 and 2013 databases (BRATS 2012, 2013). The performance of the segmentation results demonstrates that our proposal provides a solution competitive with state-of-the-art methods. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. Shallow filter-bank approaches, e.g., the principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from the filter scarcity problem: not all available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation solves the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors in terms of rank-1 identification rate (%).
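A minimal sketch of the underlying PCA filter-bank idea (as in PCANet) that ℳ-FFC diversifies: the filters are the leading principal directions of mean-removed image patches and are then applied by convolution. The patch size, filter count and synthetic images are assumptions, and the multi-fold cross-convolution of PCA and ICA filter sets is not shown.

```python
import numpy as np
from scipy.signal import convolve2d

def learn_pca_filters(images, patch=7, n_filters=8):
    """Learn PCANet-style convolution filters as top principal directions of patches."""
    patches = []
    for img in images:
        for i in range(0, img.shape[0] - patch, patch):
            for j in range(0, img.shape[1] - patch, patch):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())          # remove the patch mean
    X = np.array(patches)
    # Right singular vectors of the patch matrix give the principal directions.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

images = [np.random.rand(64, 64) for _ in range(10)]   # stand-in face images
filters = learn_pca_filters(images)
responses = [convolve2d(images[0], f, mode='same') for f in filters]
print(len(responses), responses[0].shape)
```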
Semiconductor Laser Multi-Spectral Sensing and Imaging
Le, Han Q.; Wang, Yang
2010-01-01
Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers. PMID:22315555
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. Block-based multi-focus image fusion methods, however, often suffer from blocking artifacts. To address this, an adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it combines characteristics of the human visual system (HVS) with structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is then used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blurred images. In addition, multi-focus image fusion experiments are carried out to verify the proposed fusion method by visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and that it effectively preserves the undistorted edge details in the in-focus regions of the source images.
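To make the block-based idea concrete, here is a simplified fusion sketch that selects, block by block, the source image with the larger Laplacian focus energy; the paper instead optimizes the block size with PSO under the LUE-SSIM objective, which is not reproduced here, and the block size and synthetic inputs are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def fuse_blocks(img_a, img_b, block=16):
    """Pick, block by block, the source image whose Laplacian energy (focus) is higher."""
    fused = np.empty_like(img_a)
    for i in range(0, img_a.shape[0], block):
        for j in range(0, img_a.shape[1], block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            fa, fb = laplace(a).var(), laplace(b).var()   # focus measure per block
            fused[i:i + block, j:j + block] = a if fa >= fb else b
    return fused

# Two synthetic source images standing in for differently focused shots of one scene.
img_a = np.random.rand(128, 128)
img_b = np.random.rand(128, 128)
print(fuse_blocks(img_a, img_b).shape)
```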
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny
2018-02-01
We propose a cross-domain, multi-task transfer learning framework to transfer knowledge learned from non-medical images by a deep convolutional neural network (DCNN) to a medical image recognition task while improving generalization by multi-task learning of auxiliary tasks. A first-stage cross-domain transfer learning was initiated from an ImageNet-trained DCNN to a mammography-trained DCNN. 19,632 regions-of-interest (ROI) from 2,454 mass lesions were collected from two imaging modalities: digitized-screen film mammography (SFM) and full-field digital mammography (DM), and split into training and test sets. In the multi-task transfer learning, the DCNN learned the mass classification task simultaneously from the training sets of SFM and DM. The best transfer network for mammography was selected from three transfer networks with different numbers of convolutional layers frozen. The performance of single-task and multi-task transfer learning on an independent SFM test set in terms of the area under the receiver operating characteristic curve (AUC) was 0.78+/-0.02 and 0.82+/-0.02, respectively. In the second-stage cross-domain transfer learning, a set of 12,680 ROIs from 317 mass lesions on DBT was split into validation and independent test sets. We first studied the data requirements for the first-stage mammography-trained DCNN by varying the mammography training data from 1% to 100% and evaluated its learning on the DBT validation set in inference mode. We found that the entire available mammography set provided the best generalization. The DBT validation set was then used to train only the last four fully connected layers, resulting in an AUC of 0.90+/-0.04 on the independent DBT test set.
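A hedged PyTorch sketch of transfer learning with frozen convolutional layers, the general mechanism behind the staged transfer described above; the backbone (resnet18), the freeze/train split and the class count are illustrative substitutes for the authors' network and data, not a reproduction of them.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (the stage-1 source domain in the abstract).
model = models.resnet18(pretrained=True)

# Freeze the convolutional layers; only the replaced classifier head will be trained,
# mirroring the idea of keeping early layers fixed when adapting to a new modality.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. benign vs. malignant mass

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
x = torch.randn(4, 3, 224, 224)                 # placeholder ROI batch
logits = model(x)
print(logits.shape)                             # torch.Size([4, 2])
```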
The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey
NASA Astrophysics Data System (ADS)
Squires, Gordon K.; Lubin, L. M.; Gal, R. R.
2007-05-01
We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and SuPRIME-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used to identify successfully the Cl 1604 supercluster at z = 0.9, a large scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.
Xia, Jun; Huang, Chao; Maslov, Konstantin; Anastasio, Mark A; Wang, Lihong V
2013-08-15
Photoacoustic computed tomography (PACT) is a hybrid technique that combines optical excitation and ultrasonic detection to provide high-resolution images in deep tissues. In the image reconstruction, a constant speed of sound (SOS) is normally assumed. This assumption, however, is often not strictly satisfied in deep tissue imaging, due to acoustic heterogeneities within the object and between the object and the coupling medium. If these heterogeneities are not accounted for, they will cause distortions and artifacts in the reconstructed images. In this Letter, we incorporated ultrasonic computed tomography (USCT), which measures the SOS distribution within the object, into our full-ring array PACT system. Without the need for ultrasonic transmitting electronics, USCT was performed using the same laser beam as for PACT measurement. By scanning the laser beam on the array surface, we can sequentially fire different elements. As a first demonstration of the system, we studied the effect of acoustic heterogeneities on photoacoustic vascular imaging. We verified that constant SOS is a reasonable approximation when the SOS variation is small. When the variation is large, distortion will be observed in the periphery of the object, especially in the tangential direction.
Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A
2017-03-01
Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images as training data, or directly used as a black box to extract deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in adopting traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long times to wait for training a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that the panoramas reflect the scene luminance more faithfully. This compensates for the limitation of stitching approaches that make images look realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
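For reference, a software analogue of the HOG-plus-SVM detection stage that the hardware L1-norm block supports: HOG descriptors with L1 block normalization feed a linear SVM. The window size, cell size and synthetic training windows are assumptions for illustration, not the prototype's configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(images, cell=(8, 8)):
    """HOG descriptor per image; the cell size plays the role of the configurable
    image-cell size normalized by the L1-norm block in the hardware pipeline."""
    return np.array([hog(img, pixels_per_cell=cell, cells_per_block=(2, 2),
                         block_norm='L1') for img in images])

# Synthetic stand-ins for positive (object) and negative (background) windows.
pos = [np.random.rand(64, 64) for _ in range(20)]
neg = [np.random.rand(64, 64) for _ in range(20)]
X = hog_features(pos + neg)
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC().fit(X, y)        # the SVM classifier stage of the detector
print(clf.predict(hog_features([np.random.rand(64, 64)])))
```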
NASA Astrophysics Data System (ADS)
Jeong, Seungwon; Lee, Ye-Ryoung; Choi, Wonjun; Kang, Sungsam; Hong, Jin Hee; Park, Jin-Sung; Lim, Yong-Sik; Park, Hong-Gyu; Choi, Wonshik
2018-05-01
The efficient delivery of light energy is a prerequisite for the non-invasive imaging and stimulating of target objects embedded deep within a scattering medium. However, the injected waves experience random diffusion by multiple light scattering, and only a small fraction reaches the target object. Here, we present a method to counteract wave diffusion and to focus multiple-scattered waves at the deeply embedded target. To realize this, we experimentally inject light into the reflection eigenchannels of a specific flight time to preferably enhance the intensity of those multiple-scattered waves that have interacted with the target object. For targets that are too deep to be visible by optical imaging, we demonstrate a more than tenfold enhancement in light energy delivery in comparison with ordinary wave diffusion cases. This work will lay a foundation to enhance the working depth of imaging, sensing and light stimulation.
NASA Astrophysics Data System (ADS)
Joseph, R.; Courbin, F.; Starck, J.-L.
2016-05-01
We introduce a new algorithm for colour separation and deblending of multi-band astronomical images called MuSCADeT which is based on Morpho-spectral Component Analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model independent and automated approach to separate objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong lensing galaxy cluster MACS J1149+2223 and we show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method is fit to be used for any purpose related to deblending of objects in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html
Objected-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. For this purpose, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University in the United States and the intelligent image analysis software eCognition as the experiment platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City in JiangSu as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related special indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) to obtain elevation information; finally, the image feature knowledge, special indices and elevation information are combined to build the geographic ontology semantic network model that implements the urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, performing especially well on building classification. The method not only takes advantage of multi-source spatial data, such as remote sensing imagery and Lidar data, but also integrates knowledge from multi-source spatial data and applies it to remote sensing image classification, providing an effective way for object-oriented remote sensing image classification in the future.
Habitable Exoplanet Imaging Mission (HabEx): Architecture of the 4m Mission Concept
NASA Astrophysics Data System (ADS)
Kuan, Gary M.; Warfield, Keith R.; Mennesson, Bertrand; Kiessling, Alina; Stahl, H. Philip; Martin, Stefan; Shaklan, Stuart B.; amini, rashied
2018-01-01
The Habitable Exoplanet Imaging Mission (HabEx) study is tasked by NASA to develop a scientifically compelling and technologically feasible exoplanet direct imaging mission concept, with extensive general astrophysics capabilities, for the 2020 Decadal Survey in Astrophysics. The baseline architecture of this space-based observatory concept encompasses an unobscured 4m diameter aperture telescope flying in formation with a 72-meter diameter starshade occulter. This large aperture, ultra-stable observatory concept extends and enhances upon the legacy of the Hubble Space Telescope by allowing us to probe even fainter objects and peer deeper into the Universe in the same ultraviolet, visible, and near infrared wavelengths, and gives us the capability, for the first time, to image and characterize potentially habitable, Earth-sized exoplanets orbiting nearby stars. Revolutionary direct imaging of exoplanets will be undertaken using a high-contrast coronagraph and a starshade imager. General astrophysics science will be undertaken with two world-class instruments – a wide-field workhorse camera for imaging and multi-object grism spectroscopy, and a multi-object, multi-resolution ultraviolet spectrograph. This poster outlines the baseline architecture of the HabEx flagship mission concept.
Fabric defect detection based on visual saliency using deep feature and low-rank recovery
NASA Astrophysics Data System (ADS)
Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan
2018-04-01
Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency, using deep features and low-rank recovery, is proposed. First, the initial network parameters are obtained by unsupervised training on the large MNIST dataset; supervised fine-tuning on a fabric image library, based on Convolutional Neural Networks (CNNs), then yields a more accurate deep network model. Second, the fabric images are uniformly divided into image blocks of the same size, their multi-layer deep features are extracted with the trained network, and all extracted features are assembled into a feature matrix. Third, low-rank matrix recovery is adopted to divide the feature matrix into a low-rank matrix, which indicates the background, and a sparse matrix, which indicates the salient defects. In the end, an iterative optimal threshold segmentation algorithm is used to segment the saliency maps generated from the sparse matrix and locate the fabric defect areas. Experimental results demonstrate that the features extracted by the CNN are more suitable for characterizing the fabric texture than traditional hand-crafted features such as LBP and HOG, and that the proposed method can accurately detect the defect regions of various fabric defects, even for images with complex texture.
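A simplified stand-in for the low-rank recovery step: the block-feature matrix is split into a truncated-SVD background and a residual whose per-block energy serves as a defect saliency score. The rank, feature dimensions and thresholding rule are assumptions, and this is not the robust PCA formulation used in the paper.

```python
import numpy as np

def lowrank_saliency(feature_matrix, rank=3):
    """Split a (n_blocks x n_features) matrix into a low-rank background part and a
    residual part; the residual energy per block is used as a defect saliency score.

    This truncated-SVD split is a simplified stand-in for the low-rank matrix
    recovery used in the paper.
    """
    u, s, vt = np.linalg.svd(feature_matrix, full_matrices=False)
    background = (u[:, :rank] * s[:rank]) @ vt[:rank]       # low-rank component
    residual = feature_matrix - background                  # "sparse" component
    return np.linalg.norm(residual, axis=1)                 # saliency per block

# 256 image blocks, each described by a 64-dimensional (e.g. CNN) feature vector.
features = np.random.rand(256, 64)
saliency = lowrank_saliency(features)
defect_blocks = np.where(saliency > saliency.mean() + 2 * saliency.std())[0]
print(defect_blocks)
```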
Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light
Wang, Ying Min; Judkewitz, Benjamin; DiMarzio, Charles A.; Yang, Changhuei
2012-01-01
Fluorescence imaging is one of the most important research tools in biomedical sciences. However, scattering of light severely impedes imaging of thick biological samples beyond the ballistic regime. Here we directly show focusing and high-resolution fluorescence imaging deep inside biological tissues by digitally time-reversing ultrasound-tagged light with high optical gain (~5×10^5). We confirm the presence of a time-reversed optical focus along with a diffuse background—a corollary of partial phase conjugation—and develop an approach for dynamic background cancellation. To illustrate the potential of our method, we image complex fluorescent objects and tumour microtissues at an unprecedented depth of 2.5 mm in biological tissues at a lateral resolution of 36 μm×52 μm and an axial resolution of 657 μm. Our results set the stage for a range of deep-tissue imaging applications in biomedical research and medical diagnostics. PMID:22735456
Robust multi-atlas label propagation by deep sparse representation
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong
2016-01-01
Recently, multi-atlas patch-based label fusion has achieved many successes in medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns presented in the target image patch. Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods. PMID:27942077
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
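A hedged sketch of the hybrid-feature idea: a multi-level LBP histogram is concatenated with a deep feature vector and fed to an SVM. The LBP radii, the random "CNN" embeddings and the kernel choice are placeholders, not the authors' exact MLBP or network configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def mlbp_histogram(gray, radii=(1, 2, 3)):
    """Multi-level LBP: concatenate uniform-LBP histograms computed at several radii."""
    hists = []
    for r in radii:
        p = 8 * r
        lbp = local_binary_pattern(gray, P=p, R=r, method='uniform')
        h, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)

def hybrid_feature(gray, cnn_feature):
    """Concatenate the handcrafted MLBP descriptor with a deep feature vector."""
    return np.concatenate([mlbp_histogram(gray), cnn_feature])

# Synthetic stand-ins: 40 face crops with 128-D "CNN" embeddings each.
faces = [np.random.randint(0, 256, (96, 96), dtype=np.uint8) for _ in range(40)]
cnn_feats = [np.random.rand(128) for _ in range(40)]
X = np.array([hybrid_feature(f, c) for f, c in zip(faces, cnn_feats)])
y = np.array([1] * 20 + [0] * 20)           # 1 = real face, 0 = presentation attack

clf = SVC(kernel='rbf').fit(X, y)           # final SVM classification stage
print(clf.predict(X[:5]))
```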
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth-of-field extension in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focal plane across the scene. Simply applying an image fusion method to the elemental images, which hold rich parallax information, does not work effectively because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images with a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
Infrared Faint Radio Sources in the Extended Chandra Deep Field South
NASA Astrophysics Data System (ADS)
Huynh, Minh T.
2009-01-01
Infrared-Faint Radio Sources (IFRSs) are a class of radio objects found in the Australia Telescope Large Area Survey (ATLAS) which have no observable counterpart in the Spitzer Wide-area Infrared Extragalactic Survey (SWIRE). The extended Chandra Deep Field South now has even deeper Spitzer imaging (3.6 to 70 micron) from a number of Legacy surveys. We report the detections of two IFRS sources in IRAC images. The non-detection of two other IFRSs allows us to constrain the source type. Detailed modeling of the SED of these objects shows that they are consistent with high redshift AGN (z > 2).
NASA Astrophysics Data System (ADS)
Nyland, Kristina; Lacy, Mark; Sajina, Anna; Pforr, Janine; Farrah, Duncan; Wilson, Gillian; Surace, Jason; Häußler, Boris; Vaccari, Mattia; Jarvis, Matt
2017-05-01
We apply The Tractor image modeling code to improve upon existing multi-band photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS). SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 μm over five well-studied deep fields spanning 18 deg2. In concert with data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to provide a census of the properties of massive galaxies out to z ≈ 5. To accomplish this, we are using The Tractor to perform “forced photometry.” This technique employs prior measurements of source positions and surface brightness profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic Observations survey to model and fit the fluxes at lower-resolution bands. We discuss our implementation of The Tractor over a square-degree test region within the XMM Large Scale Structure field with deep imaging in 12 NIR/optical bands. Our new multi-band source catalogs offer a number of advantages over traditional position-matched catalogs, including (1) consistent source cross-identification between bands, (2) de-blending of sources that are clearly resolved in the fiducial band but blended in the lower resolution SERVS data, (3) a higher source detection fraction in each band, (4) a larger number of candidate galaxies in the redshift range 5 < z < 6, and (5) a statistically significant improvement in the photometric redshift accuracy as evidenced by the significant decrease in the fraction of outliers compared to spectroscopic redshifts. Thus, forced photometry using The Tractor offers a means of improving the accuracy of multi-band extragalactic surveys designed for galaxy evolution studies. We will extend our application of this technique to the full SERVS footprint in the future.
The light up and early evolution of high redshift Supermassive Black Holes
NASA Astrophysics Data System (ADS)
Comastri, Andrea; Brusa, Marcella; Aird, James; Lanzuisi, Giorgio
2016-07-01
The known AGN population at z > 6 is made up of luminous optical QSOs hosting supermassive black holes (M > 10^9 solar masses), likely representing the tip of the iceberg of the luminosity and mass functions. According to theoretical models of structure formation, massive black holes (M_BH ~ 10^4-7 solar masses) are predicted to be abundant in the early Universe (z > 6). The majority of these lower-luminosity objects are expected to be obscured and severely underrepresented in current optical and near-infrared surveys. The detection of such a population would provide unique constraints on the formation mechanism of massive black holes and their subsequent growth, and is within the capabilities of deep and large-area ATHENA surveys. After a summary of the state of the art of present deep XMM and Chandra surveys at z > 3-6, also mentioning the expectations for the forthcoming eROSITA all-sky survey, I will present the observational strategy of future multi-cone ATHENA Wide Field Imager (WFI) surveys and the expected breakthroughs in the determination of the luminosity function and its evolution at high (> 4) and very high (> 6) redshifts.
VizieR Online Data Catalog: Palomar Transient Factory SNe IIn photometry (Ofek+, 2014)
NASA Astrophysics Data System (ADS)
Ofek, E. O.; Arcavi, I.; Tal, D.; Sullivan, M.; Gal-Yam, A.; Kulkarni, S. R.; Nugent, P. E.; Ben-Ami, S.; Bersier, D.; Cao, Y.; Cenko, S. B.; De Cia, A.; Filippenko, A. V.; Fransson, C.; Kasliwal, M. M.; Laher, R.; Surace, J.; Quimby, R.; Yaron, O.
2017-07-01
The Palomar Transient Factory (PTF; Law et al. 2009PASP..121.1395L; Rau et al. 2009PASP..121.1334R) and its extension the intermediate PTF (iPTF) found over 2200 spectroscopically confirmed SNe. We selected 19 SNe IIn for which PTF/iPTF has good coverage of the light-curve rise and peak; they are listed in Table 1. Optical spectra were obtained with a variety of telescopes and instruments, including the Double Spectrograph (Oke & Gunn 1982PASP...94..586O) at the Palomar 5 m Hale telescope, the Kast spectrograph (Miller & Stone 1993, Lick Observatory Technical Report 66 (Santa Cruz, CA: Lick Observatory)) at the Lick 3 m Shane telescope, the Low Resolution Imaging Spectrometer (Oke et al. 1995PASP..107..375O) on the Keck-1 10 m telescope, and the Deep Extragalactic Imaging Multi-Object Spectrograph (Faber et al. 2003SPIE.4841.1657F) on the Keck-2 10 m telescope. (2 data files).
Multi-Objective Optimization of Spacecraft Trajectories for Small-Body Coverage Missions
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2017-01-01
Visual coverage of surface elements of a small-body object requires multiple images to be taken that meet many requirements on their viewing angles, illumination angles, times of day, and combinations thereof. Designing trajectories capable of maximizing total possible coverage may not be useful since the image target sequence and the feasibility of said sequence given the rotation-rate limitations of the spacecraft are not taken into account. This work presents a means of optimizing, in a multi-objective manner, surface target sequences that account for such limitations.
Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.
Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui
2017-01-01
Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal methods of measuring IOPs, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been demonstrated in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is used as an aided input to help our model make better decisions.
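The abstract does not specify the DA network itself, so the following is only a generic sketch of the idea of feeding imaging depth as an auxiliary input alongside the RGB image; the layer sizes, input shapes, and the three-value IOP output are all assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

img_in = layers.Input(shape=(128, 128, 3), name="rgb_image")
depth_in = layers.Input(shape=(1,), name="imaging_depth")   # imaging depth as an aided scalar input

# Small convolutional branch for the underwater image (placeholder architecture).
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Merge image features with the depth input before the regression head.
merged = layers.Concatenate()([x, depth_in])
merged = layers.Dense(64, activation="relu")(merged)
iops_out = layers.Dense(3, name="iops")(merged)              # e.g. absorption, scattering, attenuation (assumed)

model = Model([img_in, depth_in], iops_out)
model.compile(optimizer="adam", loss="mse")
```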
NASA Astrophysics Data System (ADS)
Swain, Pradyumna; Mark, David
2004-09-01
The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.
Intervertebral disc detection in X-ray images using faster R-CNN.
Ruhan Sa; Owens, William; Wiegand, Raymond; Studin, Mark; Capoferri, Donald; Barooha, Kenneth; Greaux, Alexander; Rattray, Robert; Hutton, Adam; Cintineo, John; Chaudhary, Vipin
2017-07-01
Automatic identification of specific osseous landmarks on spinal radiographs can be used to automate calculations for correcting ligament instability and injury, which affect 75% of patients injured in motor vehicle accidents. In this work, we propose to use a deep learning based object detection method as the first step towards identifying landmark points in lateral lumbar X-ray images. The significant breakthrough of deep learning technology has made it a prevailing choice for perception-based applications; however, the lack of large annotated training datasets has brought challenges to utilizing the technology in the medical image processing field. In this work, we propose to fine-tune a deep network, Faster R-CNN, a state-of-the-art detection network in the natural image domain, using small annotated clinical datasets. In the experiments we show that, by using only 81 lateral lumbar X-ray training images, one can achieve much better performance than a traditional sliding-window detection method based on hand-crafted features. Furthermore, we fine-tuned the network using 974 training images and tested it on 108 images, achieving an average precision of 0.905 with an average computation time of 3 seconds per image, which greatly outperforms traditional methods in terms of accuracy and efficiency.
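A minimal fine-tuning sketch using torchvision's off-the-shelf Faster R-CNN, not the authors' exact configuration: the box-predictor head is replaced for a two-class problem (background plus one landmark/disc class) and trained on the small annotated set; the optimizer settings are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on natural images, as described above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# Fine-tuning loop sketch (data_loader yields images and target dicts with "boxes", "labels"):
# for images, targets in data_loader:
#     losses = model(images, targets)
#     loss = sum(losses.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```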
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, S.; Yan, F.; Li, J.
2011-01-01
Photoluminescence (PL) imaging is used to detect areas in multi-crystalline silicon that appear dark in band-to-band imaging due to high recombination. Steady-state PL intensity can be correlated to effective minority-carrier lifetime, and its temperature dependence can provide additional lifetime-limiting defect information. An area of high defect density has been laser cut from a multi-crystalline silicon solar cell. Both band-to-band and defect-band PL imaging have been collected as a function of temperature from ~85 to 350 K. Band-to-band luminescence is collected by an InGaAs camera using a 1200-nm short-pass filter, while defect-band luminescence is collected using a 1350-nm long-pass filter. The defect-band luminescence is characterized by cathodoluminescence. Small pieces from adjacent areas within the same wafer are measured by deep-level transient spectroscopy (DLTS). DLTS detects a minority-carrier electron trap level with an activation energy of 0.45 eV on the sample that contained defects as seen by imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, S.; Yan, F.; Li, J.
2011-07-01
Photoluminescence (PL) imaging is used to detect areas in multi-crystalline silicon that appear dark in band-to-band imaging due to high recombination. Steady-state PL intensity can be correlated to effective minority-carrier lifetime, and its temperature dependence can provide additional lifetime-limiting defect information. An area of high defect density has been laser cut from a multi-crystalline silicon solar cell. Both band-to-band and defect-band PL imaging have been collected as a function of temperature from ~85 to 350 K. Band-to-band luminescence is collected by an InGaAs camera using a 1200-nm short-pass filter, while defect-band luminescence is collected using a 1350-nm long-pass filter. The defect-band luminescence is characterized by cathodoluminescence. Small pieces from adjacent areas within the same wafer are measured by deep-level transient spectroscopy (DLTS). DLTS detects a minority-carrier electron trap level with an activation energy of 0.45 eV on the sample that contained defects as seen by imaging.
Exploring a Massive Starburst in the Epoch of Reionization
NASA Astrophysics Data System (ADS)
Marrone, Daniel; Aravena, M.; Chapman, S.; De Breuck, C.; Gonzalez, A.; Hezavehe, S.; Litke, K.; Ma, J.; Malkan, M.; Spilker, J.; Stalder, B.; Stark, D.; Strandet, M.; Tang, M.; Vieira, J.; Weiss, A.; Welikala, N.
2016-08-01
We request deep multi-band imaging of a unique dusty galaxy in the Epoch of Reionization (EoR), selected via its millimeter-wavelength dust emission in the 2500-square-degree South Pole Telescope survey. Spectroscopically confirmed to lie at z=6.900, this galaxy has a large dust mass and is likely one of the most rapidly star-forming objects in the EoR. Using Gemini-S, we have identified z-band emission from this object that could be UV continuum emission at z=6.9 or from a foreground lens. Interpretation of this object, and a complete understanding of its meaning for the census of star formation in the EoR, requires that we establish the presence or absence of gravitational lensing. The dust mass observed in this source is also unexpectedly large for its era, and measurements of the assembled stellar population, through the UV-continuum slope and restframe optical color, will help characterize the stellar mass and dust properties in this very early galaxy, the most spectacular galaxy yet discovered by the SPT.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
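A condensed sketch of the PCANet ingredients named above, reduced to a single PCA stage (the full PCANet cascades two); the patch size and number of filters are arbitrary here. PCA filters are learned from mean-removed patches, filter responses are binarized into integer hash codes, and block-wise histograms of those codes would then form the final feature vector.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal import convolve2d

def pca_filters(images, k=7, n_filters=8):
    """Learn a PCA filter bank from k x k patches of the training images."""
    patches = []
    for img in images:
        p = sliding_window_view(img, (k, k)).reshape(-1, k * k)
        patches.append(p - p.mean(axis=1, keepdims=True))    # remove each patch's mean
    X = np.vstack(patches)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, k, k)            # leading principal components as filters

def binary_hash_map(img, filters):
    """Convolve with the filter bank and hash the response signs into integer codes."""
    responses = [convolve2d(img, f, mode="same") for f in filters]
    bits = [(r > 0).astype(np.int64) << i for i, r in enumerate(responses)]
    return sum(bits)

# Block-wise histograms of the hash codes (not shown) complete the PCANet feature.
```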
Treder, Maximilian; Lauermann, Jost Lennart; Eter, Nicole
2018-02-01
Our purpose was to use deep learning for the automated detection of age-related macular degeneration (AMD) in spectral-domain optical coherence tomography (SD-OCT). A total of 1112 cross-section SD-OCT images of patients with exudative AMD and a healthy control group were used for this study. In the first step, an open-source multi-layer deep convolutional neural network (DCNN), which was pretrained with 1.2 million images from ImageNet, was trained and validated with 1012 cross-section SD-OCT scans (AMD: 701; healthy: 311). During this procedure, training accuracy, validation accuracy and cross-entropy were computed. The open-source deep learning framework TensorFlow™ (Google Inc., Mountain View, CA, USA) was used to accelerate the deep learning process. In the last step, the resulting DCNN classifier, which uses the information from the above-mentioned deep learning process, was tested on 100 untrained cross-section SD-OCT images (AMD: 50; healthy: 50). For this purpose, an AMD testing score was computed: a score of 0.98 or higher was presumed to indicate AMD. After 500 training steps, the training and validation accuracies were 100%, and the cross-entropy was 0.005. The average AMD scores were 0.997 ± 0.003 in the AMD testing group and 0.9203 ± 0.085 in the healthy comparison group. The difference between the two groups was highly significant (p < 0.001). With a deep learning-based approach using TensorFlow™, it is possible to detect AMD in SD-OCT with high sensitivity and specificity. With more image data, an expansion of this classifier to other macular diseases or further details of AMD is possible, suggesting an application of this model as a support in clinical decisions. Another possible future application would involve the individual prediction of the progress and success of therapy for different diseases by automatically detecting hidden image information.
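The abstract does not name the pretrained network, so the following is only a generic transfer-learning sketch in Keras; the backbone choice, input size, and optimizer are assumptions. An ImageNet-pretrained backbone is frozen and a new sigmoid head produces an AMD score, in the spirit of the procedure described above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# ImageNet-pretrained backbone (choice of MobileNetV2 is an assumption, not the paper's network).
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3))
base.trainable = False                       # retrain only the new classification head

x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(1, activation="sigmoid", name="amd_score")(x)   # score near 1 suggests AMD

model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)   # trained/validated on the SD-OCT scans
```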
Landcover Classification Using Deep Fully Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Wang, J.; Li, X.; Zhou, S.; Tang, J.
2017-12-01
Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification whether it is based on pixel-based or object-based methods. Unlike other machine learning methods, deep learning models not only extract useful information from multiple bands/attributes, but also learn spatial characteristics. In recent years, deep learning methods have developed rapidly and have been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land cover. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning compared with several machine learning methods were explored. Our research indicates that: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results with a better match of spatial patterns; (3) FCN has an excellent learning ability and can attain higher accuracy and better spatial patterns compared with several machine learning methods.
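As a small illustration of how an overall-accuracy figure such as the 75% above can be computed, assuming the FCN output and the NLCD reference are co-registered integer class rasters (the no-data value of 0 is an assumption):

```python
import numpy as np

def overall_accuracy(pred, ref, ignore=0):
    """pred, ref: integer land-cover class rasters of identical shape; `ignore` marks no-data pixels."""
    mask = ref != ignore
    return float(np.mean(pred[mask] == ref[mask]))

# Example with hypothetical rasters:
# acc = overall_accuracy(fcn_output, nlcd_labels)   # e.g. 0.75 for 75% overall accuracy
```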
Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su
2017-09-01
Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, for preventing irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which will be used in the second network, when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, for providing an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs by the use of Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ by using the maps obtained on the first deep architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results show superior performance on 30 CT scans, comparing with other state-of-the-art methods.
Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W
2016-11-01
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.
Van Valen, David A.; Kudo, Takamasa; Lane, Keara M.; ...
2016-11-04
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Valen, David A.; Kudo, Takamasa; Lane, Keara M.
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.
Van Valen, David A.; Lane, Keara M.; Quach, Nicolas T.; Maayan, Inbal
2016-01-01
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems. PMID:27814364
NASA Astrophysics Data System (ADS)
Lecun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-28
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
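A toy illustration of the mechanism described above: a hand-written two-layer network whose parameters are updated from gradients obtained by backpropagation. The data, layer sizes, and learning rate are arbitrary; this is only a sketch of the idea, not any particular published model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                          # 64 toy samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy binary labels

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
lr = 0.5

for _ in range(200):
    # forward pass: each layer computes its representation from the previous one
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # backward pass: chain rule carries the cross-entropy gradient to every parameter
    dL_dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dL_dz2, dL_dz2.sum(axis=0)
    dL_dh = dL_dz2 @ W2.T * (1 - h ** 2)
    dW1, db1 = X.T @ dL_dh, dL_dh.sum(axis=0)
    # parameter update in the direction indicated by the gradients
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
```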
A TYPE Ia SUPERNOVA AT REDSHIFT 1.55 IN HUBBLE SPACE TELESCOPE INFRARED OBSERVATIONS FROM CANDELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodney, Steven A.; Riess, Adam G.; Jones, David O.
2012-02-10
We report the discovery of a Type Ia supernova (SN Ia) at redshift z = 1.55 with the infrared detector of the Wide Field Camera 3 (WFC3-IR) on the Hubble Space Telescope (HST). This object was discovered in CANDELS imaging data of the Hubble Ultra Deep Field and followed as part of the CANDELS+CLASH Supernova project, comprising the SN search components from those two HST multi-cycle treasury programs. This is the highest redshift SN Ia with direct spectroscopic evidence for classification. It is also the first SN Ia at z > 1 found and followed in the infrared, providing a full light curve in rest-frame optical bands. The classification and redshift are securely defined from a combination of multi-band and multi-epoch photometry of the SN, ground-based spectroscopy of the host galaxy, and WFC3-IR grism spectroscopy of both the SN and host. This object is the first of a projected sample at z > 1.5 that will be discovered by the CANDELS and CLASH programs. The full CANDELS+CLASH SN Ia sample will enable unique tests for evolutionary effects that could arise due to differences in SN Ia progenitor systems as a function of redshift. This high-z sample will also allow measurement of the SN Ia rate out to z ≈ 2, providing a complementary constraint on SN Ia progenitor models.
The DEIMOS 10K Spectroscopic Survey Catalog of the COSMOS Field
NASA Astrophysics Data System (ADS)
Hasinger, G.; Capak, P.; Salvato, M.; Barger, A. J.; Cowie, L. L.; Faisst, A.; Hemmati, S.; Kakazu, Y.; Kartaltepe, J.; Masters, D.; Mobasher, B.; Nayyeri, H.; Sanders, D.; Scoville, N. Z.; Suh, H.; Steinhardt, C.; Yang, Fengwei
2018-05-01
We present a catalog of 10,718 objects in the COSMOS field, observed through multi-slit spectroscopy with the Deep Imaging Multi-Object Spectrograph (DEIMOS) on the Keck II telescope in the wavelength range ∼5500–9800 Å. The catalog contains 6617 objects with high-quality spectra (two or more spectral features), and 1798 objects with a single spectroscopic feature confirmed by the photometric redshift. For 2024 typically faint objects, we could not obtain reliable redshifts. The objects have been selected from a variety of input catalogs based on multi-wavelength observations in the field, and thus have a diverse selection function, which enables the study of the diversity in the galaxy population. The magnitude distribution of our objects is peaked at I_AB ∼ 23 and K_AB ∼ 21, with a secondary peak at K_AB ∼ 24. We sample a broad redshift distribution in the range 0 < z < 6, with one peak at z ∼ 1, and another one around z ∼ 4. We have identified 13 redshift spikes at z > 0.65 with chance probabilities < 4 × 10^-4, some of which are clearly related to protocluster structures of sizes >10 Mpc. An object-to-object comparison with a multitude of other spectroscopic samples in the same field shows that our DEIMOS sample is among the best in terms of fraction of spectroscopic failures and relative redshift accuracy. We have determined the fraction of spectroscopic blends to about 0.8% in our sample. This is likely a lower limit and at any rate well below the most pessimistic expectations. Interestingly, we find evidence for strong lensing of Lyα background emitters within the slits of 12 of our target galaxies, increasing their apparent density by about a factor of 4. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
Skin condition measurement by using multispectral imaging system (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jung, Geunho; Kim, Sungchul; Kim, Jae Gwan
2017-02-01
There are a number of commercially available low level light therapy (LLLT) devices on the market, and face whitening or wrinkle reduction is one of the targets of LLLT. Facial improvement can be assessed simply by visual observation of the face, but this provides neither quantitative data nor the ability to recognize subtle changes. Clinical diagnostic instruments such as the mexameter can provide quantitative data, but they are too costly for home users. Therefore, we designed a low-cost multi-spectral imaging device by adding additional LEDs (470 nm, 640 nm, white LED, 905 nm) to a commercial USB microscope which has two LEDs (395 nm, 940 nm) as light sources. Among the various LLLT skin treatments, we focused on obtaining melanin and wrinkle information. For melanin index measurements, multi-spectral images of nevi were acquired, and melanin index values from a color image (conventional method) and from multi-spectral images were compared. The results showed that multi-spectral analysis of the melanin index can visualize nevi with different depths and concentrations. A cross section of a wrinkle on skin resembles a wedge, which is a source of high-frequency components when the skin image is Fourier transformed into a spatial frequency domain map. In that case, the entropy value of the spatial frequency map can represent the frequency distribution, which is related to the amount and thickness of wrinkles. Entropy values from multi-spectral images can potentially separate the percentage of thin, shallow wrinkles from thick, deep wrinkles. From the results, we found that this low-cost multi-spectral imaging system could be beneficial for home users of LLLT by quantifying the treatment efficacy.
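A small sketch of the wrinkle measure described above, assuming a grayscale skin patch as a 2-D array: the Shannon entropy of the normalized magnitude spectrum of its 2-D Fourier transform, where more or thicker wrinkles would contribute more high-frequency energy.

```python
import numpy as np

def spectral_entropy(gray_patch):
    """Shannon entropy (bits) of the normalized spatial-frequency magnitude spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_patch)))
    p = spectrum / spectrum.sum()          # normalize the spectrum to a probability map
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```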
WINGS: A WIde-field Nearby Galaxy-cluster Survey. II. Deep optical photometry of 77 nearby clusters
NASA Astrophysics Data System (ADS)
Varela, J.; D'Onofrio, M.; Marmo, C.; Fasano, G.; Bettoni, D.; Cava, A.; Couch, W. J.; Dressler, A.; Kjærgaard, P.; Moles, M.; Pignatelli, E.; Poggianti, B. M.; Valentinuzzi, T.
2009-04-01
Context: This is the second paper of a series devoted to the WIde Field Nearby Galaxy-cluster Survey (WINGS). WINGS is a long term project which is gathering wide-field, multi-band imaging and spectroscopy of galaxies in a complete sample of 77 X-ray selected, nearby clusters (0.04 < z < 0.07) located far from the galactic plane (|b|≥ 20°). The main goal of this project is to establish a local reference for evolutionary studies of galaxies and galaxy clusters. Aims: This paper presents the optical (B,V) photometric catalogs of the WINGS sample and describes the procedures followed to construct them. We have taken special care to correctly treat the large extended galaxies (which include the brightest cluster galaxies) and to reduce the influence of the bright halos of very bright stars. Methods: We have constructed photometric catalogs based on wide-field images in B and V bands using SExtractor. Photometry has been performed on images in which large galaxies and halos of bright stars were removed after modeling them with elliptical isophotes. Results: We publish deep optical photometric catalogs (90% complete at V ~ 21.7, which translates to ~M^*_V+6 at mean redshift), giving positions, geometrical parameters, and several total and aperture magnitudes for all the objects detected. For each field we have produced three catalogs containing galaxies, stars and objects of “unknown” classification (~6%). From simulations we found that the uncertainty of our photometry is quite dependent on the light profile of the objects, with stars having the most robust photometry and de Vaucouleurs profiles showing higher uncertainties and also an additional bias of ~-0.2^m. The star/galaxy classification of the bright objects (V < 20) was checked visually, making the fraction of misclassified objects negligible. For fainter objects, we found that simulations do not provide reliable estimates of the possible misclassification and therefore we have compared our data with that from deep counts of galaxies and star counts from models of our Galaxy. Both sets turned out to be consistent with our data within ~5% (in the ratio galaxies/total) up to V ~ 24. Finally, we remark that the application of our special procedure to remove large halos improves the photometry of the large galaxies in our sample with respect to the use of blind automatic procedures and increases (~16%) the detection rate of objects projected onto them. Based on observations taken at the Isaac Newton Telescope (2.5 m-INT) sited at Roque de los Muchachos (La Palma, Spain), and the MPG/ESO-2.2 m Telescope sited at La Silla (Chile). Appendices are only available in electronic form at http://www.aanda.org Catalog is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/497/667
TuMore: generation of synthetic brain tumor MRI data for deep learning based segmentation approaches
NASA Astrophysics Data System (ADS)
Lindner, Lydia; Pfarrkirchner, Birgit; Gsaxner, Christina; Schmalstieg, Dieter; Egger, Jan
2018-03-01
Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time consuming and neither accurate nor reliable, there exists a need for objective, robust and fast automated segmentation methods that provide competitive performance. Therefore, deep learning based approaches are gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. Due to this reason, we propose a method that allows to create a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors - glioblastomas more specifically - and the corresponding ground truth, that can be subsequently used to train deep neural networks.
NASA Astrophysics Data System (ADS)
Griffiths, D.; Boehm, J.
2018-05-01
With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3m2 segments of railway track. These include two models based on the Faster RCNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds of 0.5 and 0.1. The last assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3 %, 83.1 % and 75.6 % of track in the 500 test images acquired from the UAS survey for Retinanet, Resnet and Inception-Resnet, respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
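For reference, the IoU metric quoted above for axis-aligned boxes in (x1, y1, x2, y2) form; a detection proposal counts as a hit when IoU exceeds the chosen threshold (0.5 or 0.1 in the paper).

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```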
Constrained Deep Weak Supervision for Histopathology Image Segmentation.
Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan
2017-11-01
In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This paper is under a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and enjoy a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates the state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
Brain tumor classification of microscopy images using deep residual learning
NASA Astrophysics Data System (ADS)
Ishikawa, Yota; Washiya, Kiyotada; Aoki, Kota; Nagahashi, Hiroshi
2016-12-01
The incidence of brain tumors is about 1.4 per 10,000. In general, cytotechnologists take charge of cytologic diagnosis. However, the number of cytotechnologists who can diagnose brain tumors is not sufficient because of the highly specialized skill required. Computer-aided diagnosis by computational image analysis may alleviate the shortage of experts and support objective pathological examinations. Our purpose is to support diagnosis from a microscopy image of brain cortex and to identify brain tumors by medical image processing. In this study, we analyze astrocytes, a type of glial cell of the central nervous system. It is not easy for an expert to discriminate brain tumors correctly, since the difference between astrocytes and low-grade astrocytoma (tumors formed from astrocytes) is very slight. In this study, we present a novel method to segment cell regions robustly using BING objectness estimation and to classify brain tumors using deep convolutional neural networks (CNNs) constructed by deep residual learning. BING is a fast object detection method, and we use a pretrained BING model to detect brain cells. After that, we apply a sequence of post-processing steps such as the Voronoi diagram, binarization, and the watershed transform to obtain a fine segmentation. For classification using CNNs, standard data augmentation is applied to the brain cell database. Experimental results showed 98.5% accuracy of classification and 98.2% accuracy of segmentation.
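A hedged sketch of the binarization-plus-watershed part of the post-processing chain mentioned above, using scikit-image; the BING detection stage and the residual CNN classifier are not reproduced, and Otsu thresholding and the peak spacing are assumptions rather than the authors' settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_cells(gray):
    """Binarize a grayscale cell image and split touching cells with a marker-based watershed."""
    binary = gray > threshold_otsu(gray)
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary, min_distance=5)   # one marker per cell candidate
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)                 # labeled cell regions
```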
NASA Astrophysics Data System (ADS)
Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.
2017-02-01
Statistical Atlases have played an important role towards automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as final segmentation. In this paper, we improve its delineation algorithm and extend the model to be a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres, and the cerebellum, without the brainstem, and evaluate it on MRT1-images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show efficiency gain with statistically significant higher accuracy, using the mean Average Symmetric Surface Distance, with respect to the original approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodney, Steven A.; Riess, Adam G.; Jones, David O.
2015-11-15
We present two supernovae (SNe) discovered with the Hubble Space Telescope (HST) in the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey, an HST multi-cycle treasury program. We classify both objects as SNe Ia and find redshifts of z = 1.80 ± 0.02 and 2.26^{+0.02}_{−0.10}, the latter of which is the highest redshift SN Ia yet seen. Using light curve fitting we determine luminosity distances and find that both objects are consistent with a standard ΛCDM cosmological model. These SNe were observed using the HST Wide Field Camera 3 infrared detector, with imaging in both wide- and medium-band filters. We demonstrate that the classification and redshift estimates are significantly improved by the inclusion of single-epoch medium-band observations. This medium-band imaging approximates a very low resolution spectrum (λ/Δλ ≲ 100) which can isolate broad spectral absorption features that differentiate SNe Ia from their most common core collapse cousins. This medium-band method is also insensitive to dust extinction and (unlike grism spectroscopy) it is not affected by contamination from the SN host galaxy or other nearby sources. As such, it can provide a more efficient—though less precise—alternative to IR spectroscopy for high-z SNe.
ACS Imaging of beta Pic: Searching for the origin of rings and asymmetry in planetesimal disks
NASA Astrophysics Data System (ADS)
Kalas, Paul
2003-07-01
The emerging picture for planetesimal disks around main sequence stars is that their radial and azimuthal symmetries are significantly deformed by the dynamical effects of either planets interior to the disk, or stellar objects exterior to the disk. The cause of these structures, such as the 50 AU cutoff of our Kuiper Belt, remains mysterious. Structure in the beta Pic planetesimal disk could be due to dynamics controlled by an extrasolar planet, or to the tidal influence of a more massive object exterior to the disk. The hypothesis of an extrasolar planet causing the vertical deformation in the disk predicts a blue color to the disk perpendicular to the disk midplane. The hypothesis that a stellar perturber deforms the disk predicts a globally uniform color and the existence of ring-like structure beyond 800 AU radius. We propose to obtain deep, multi-color images of the beta Pic disk ansae in the region 15"-220" (200-4000 AU) radius with the ACS WFC. The unparalleled stability of the HST PSF means that these data are uniquely capable of delivering the color sensitivity that can distinguish between the two theories of beta Pic's disk structure. Ascertaining the cause of such structure provides a meaningful context for understanding the dynamical history of our early solar system, as well as other planetesimal systems imaged around main sequence stars.
Buried object remote detection technology for law enforcement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Grande, N.K.; Clark, G.A.; Durbin, P.F.
1991-03-01
We have developed a precise airborne temperature-sensing technology to detect buried objects for use by law enforcement. Demonstrations have imaged the sites of buried foundations, walls and trenches; mapped underground waterways and aquifers; and been used to locate underground military objects. Our patented methodology is incorporated in a commercially available, high signal-to-noise, dual-band infrared scanner with real-time, 12-bit digital image processing software and display. Our method creates color-coded images based on surface temperature variations of 0.2 °C. Unlike other less-sensitive methods, it maps true (corrected) temperatures by removing the (decoupled) surface emissivity mask equivalent to 1 °C or 2 °C; this mask hinders interpretation of apparent (blackbody) temperatures. Once the mask is removed, we are able to identify surface temperature patterns from small diffusivity changes at buried object sites, which heat and cool differently from their surroundings. Objects made of different materials and buried at different depths are identified by their unique spectral, spatial, thermal, temporal, emissivity and diffusivity signatures. We have successfully located the sites of buried (inert) simulated land mines 0.1 to 0.2 m deep; sod-covered rock pathways alongside dry ditches, deeper than 0.2 m; pavement-covered burial trenches and cemetery structures as deep as 0.8 m; and aquifers more than 6 m and less than 60 m deep. Our technology could be adapted for drug interdiction and pollution control. 16 refs., 14 figs.
High-resolution multi-band imaging for validation and characterization of small Kepler planets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, Mark E.; Silva, David R.; Barclay, Thomas
2015-02-01
High-resolution ground-based optical speckle and near-infrared adaptive optics images are taken to search for stars in close angular proximity to host stars of candidate planets identified by the NASA Kepler Mission. Neighboring stars are a potential source of false positive signals. These stars also blend into Kepler light curves, affecting estimated planet properties, and are important for an understanding of planets in multiple star systems. Deep images with high angular resolution help to validate candidate planets by excluding potential background eclipsing binaries as the source of the transit signals. A study of 18 Kepler Object of Interest stars hosting a total of 28 candidate and validated planets is presented. Validation levels are determined for 18 planets against the likelihood of a false positive from a background eclipsing binary. Most of these are validated at the 99% level or higher, including five newly validated planets in two systems: Kepler-430 and Kepler-431. The stellar properties of the candidate host stars are determined by supplementing existing literature values with new spectroscopic characterizations. Close neighbors of seven of these stars are examined using multi-wavelength photometry to determine their nature and influence on the candidate planet properties. Most of the close neighbors appear to be gravitationally bound secondaries, while a few are best explained as closely co-aligned field stars. Revised planet properties are derived for each candidate and validated planet, including cases where the close neighbors are the potential host stars.
First-light instrument for the 3.6-m Devasthal Optical Telescope: 4Kx4K CCD Imager
NASA Astrophysics Data System (ADS)
Pandey, Shashi Bhushan; Yadav, Rama Kant Singh; Nanjappa, Nandish; Yadav, Shobhit; Reddy, Bheemireddy Krishna; Sahu, Sanjit; Srinivasan, Ramaiyengar
2018-04-01
As a part of in-house instrument development activity at ARIES, the 4Kx4K CCD Imager was designed and developed as a first-light instrument for the axial port of the 3.6-m Devasthal Optical Telescope (DOT). The f/9 beam of the telescope, having a plate scale of 6.4"/mm, is utilized to conduct deeper photometry within the central 10' field of view. The pixel size of the blue-enhanced liquid nitrogen cooled STA4150 4Kx4K CCD chip is 15 μm, with options to select gain and speed values to utilize the dynamic range. Using the Imager, it is planned to image the central 6.5'x6.5' field of view of the telescope for various science goals by getting deeper images in several broad-band filters for point sources and objects with low surface brightness. The fully assembled Imager, along with automated filter wheels having Bessel UBVRI and SDSS ugriz filters, was tested in late 2015 at the axial port of the 3.6-m DOT. This instrument was finally mounted at the axial port of the 3.6-m DOT on 30 March 2016, when the telescope was technically activated jointly by the Prime Ministers of India and Belgium. It is expected to serve as a general-purpose multi-band deep imaging instrument for a variety of science goals including studies of cosmic transients, active galaxies, star clusters and optical monitoring of X-ray sources discovered by the newly launched Indian space mission ASTROSAT, and follow-up of radio-bright objects discovered by the Giant Metrewave Radio Telescope.
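A quick consistency check of the numbers quoted above, using only values stated in the abstract (plate scale, pixel size, chip format): the implied pixel scale and imaged field agree with the quoted 6.5'x6.5' central field.

```python
plate_scale = 6.4            # arcsec per mm at the f/9 focus
pixel_size_mm = 15e-3        # 15 micron pixels
pixels = 4096                # 4Kx4K chip

arcsec_per_pixel = plate_scale * pixel_size_mm        # ~0.096 arcsec per pixel
field_arcmin = pixels * arcsec_per_pixel / 60.0       # ~6.6 arcmin on a side
print(arcsec_per_pixel, field_arcmin)
```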
Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang; Thomas, Maikael A.
We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA’s IRAP (Integrated Review and Analysis Program).
VizieR Online Data Catalog: Redshift survey of ALMA-identified SMGs in ECDFS (Danielson+, 2017)
NASA Astrophysics Data System (ADS)
Danielson, A. L. R.; Swinbank, A. M.; Smail, I.; Simpson, J. M.; Casey, C. M.; Chapman, S. C.; da Cunha, E.; Hodge, J. A.; Walter, F.; Wardlow, J. L.; Alexander, D. M.; Brandt, W. N.; De Breuck, C.; Coppin, K. E. K.; Dannerbauer, H.; Dickinson, M.; Edge, A. C.; Gawiser, E.; Ivison, R. J.; Karim, A.; Kovacs, A.; Lutz, D.; Menten, K.; Schinnerer, E.; Weiss, A.; van der Werf, P.
2017-11-01
The 870um LESS survey (Weiss+ 2009, J/ApJ/707/1201) was undertaken using the LABOCA camera on APEX, covering an area of 0.5°x0.5° centered on the ECDFS. Follow-up observations of the LESS sources were carried out with ALMA (Hodge+ 2013, J/ApJ/768/91). In summary, observations for each source were taken between 2011 October and November in the Cycle 0 Project #2011.1.00294.S. To search for spectroscopic redshifts, we initiated an observing campaign using the FOcal Reducer and low dispersion Spectrograph (FORS2) and VIsible MultiObject Spectrograph (VIMOS) on VLT (program 183.A-0666), but to supplement these observations, we also obtained observations with XSHOOTER on VLT (program 090.A-0927(A) from 2012 December 7-10), the Gemini Near-Infrared Spectrograph (GNIRS; program GN-2012B-Q-90) and the Multi-Object Spectrometer for Infra-Red Exploration (MOSFIRE) on the Keck I telescope (2012B_H251M, 2013BU039M, and 2013BN114M), all of which cover the near-infrared. As part of a spectroscopic campaign targeting Herschel-selected galaxies in the ECDFS, ALESS submillimeter galaxies (SMGs) were included on DEep Imaging Multi-Object Spectrograph (DEIMOS) slit masks on Keck II (program 2012B_H251). In total, we observed 109 out of the 131 ALESS SMGs in the combined main and supp samples. Spectroscopic redshifts for two of our SMGs, ALESS61.1 and ALESS65.1, were determined from serendipitous detections of the [CII]λ158um line in the ALMA band. See section 2.7. (2 data files).
Pirpinia, Kleopatra; Bosman, Peter A N; Loo, Claudette E; Winter-Warnars, Gonneke; Janssen, Natasja N Y; Scholten, Astrid N; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2017-06-23
Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
NASA Astrophysics Data System (ADS)
Pirpinia, Kleopatra; Bosman, Peter A. N.; Loo, Claudette E.; Winter-Warnars, Gonneke; Janssen, Natasja N. Y.; Scholten, Astrid N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2017-07-01
Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
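As an illustration of the Pareto-front idea above, here is a minimal sketch assuming a hypothetical toy registration in which one similarity weight is traded off against a deformation-magnitude weight; the objective values are synthetic stand-ins, not outputs of the authors' registration code:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_registration(w_similarity):
    # Hypothetical stand-in for one registration run with weights
    # (w_similarity, 1 - w_similarity); returns (similarity error,
    # deformation magnitude), both to be minimized.
    sim_err = 1.0 / (w_similarity + 0.1) + 0.02 * rng.standard_normal()
    def_mag = 1.0 / (1.1 - w_similarity) + 0.02 * rng.standard_normal()
    return np.array([sim_err, def_mag])

def pareto_front(points):
    # Indices of points not dominated by any other point.
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

weights = rng.uniform(0.0, 1.0, size=200)   # random linear weight combinations
objectives = np.array([toy_registration(w) for w in weights])
front = pareto_front(objectives)
print(f"{len(front)} of {len(weights)} sampled weight settings are Pareto-optimal")
```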
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
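A rough numerical sketch of the peak separation and width (PSW) idea on simulated pixel data (the gain and noise values below are assumptions, not the sensor's measured parameters): single-photon peaks in the photon counting histogram are separated by the conversion gain, their width reflects the read noise, and the width-to-separation ratio gives an estimate of read noise in electrons.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated DSERN pixel output: photon count * conversion gain + Gaussian read noise (in DN).
gain_dn_per_e = 4.0      # assumed conversion gain
read_noise_e = 0.25      # assumed deep sub-electron read noise
photons = rng.poisson(1.5, size=200_000)
signal = photons * gain_dn_per_e + rng.normal(0.0, read_noise_e * gain_dn_per_e, photons.size)

# Photon counting histogram (PCH).
counts, edges = np.histogram(signal, bins=400)
centers = 0.5 * (edges[:-1] + edges[1:])

# Peak separation from the 0- and 1-photon peaks.
m0 = centers < 0.5 * gain_dn_per_e
m1 = (centers > 0.5 * gain_dn_per_e) & (centers < 1.5 * gain_dn_per_e)
peak0 = centers[m0][np.argmax(counts[m0])]
peak1 = centers[m1][np.argmax(counts[m1])]
separation = peak1 - peak0

# Peak width from the spread of samples around the 0-photon peak.
width = signal[np.abs(signal - peak0) < 0.5 * separation].std()

print(f"estimated read noise ~ {width / separation:.2f} e- (simulated truth {read_noise_e} e-)")
```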
Towards deep learning with segregated dendrites
Guerguiev, Jordan; Lillicrap, Timothy P
2017-01-01
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons. PMID:29205151
Towards deep learning with segregated dendrites.
Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A
2017-12-05
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
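A minimal sketch of the parallelization pattern described above, using a placeholder fitness function rather than the actual DSN scheduling objectives; the point is simply that population members can be evaluated concurrently across cores:

```python
import numpy as np
from multiprocessing import Pool

def fitness(candidate):
    # Placeholder for an expensive schedule evaluation that returns
    # multiple objective values (e.g. conflicts, unmet tracking requests).
    x = np.asarray(candidate)
    return float(np.sum(x ** 2)), float(np.sum(np.abs(x - 1.0)))

def evaluate_population(population, workers=4):
    # Fitness evaluation usually dominates the runtime of an evolutionary
    # scheduler, so it is the natural place to exploit multiple cores.
    with Pool(processes=workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = [rng.uniform(-2.0, 2.0, size=8) for _ in range(64)]
    print(evaluate_population(population)[:3])
```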
Multi-sensor millimeter-wave system for hidden objects detection by non-collaborative screening
NASA Astrophysics Data System (ADS)
Zouaoui, Rhalem; Czarny, Romain; Diaz, Frédéric; Khy, Antoine; Lamarque, Thierry
2011-05-01
In this work, we present the development of a multi-sensor system for the detection of objects concealed under clothes using passive and active millimeter-wave (mmW) technologies. This study concerns both the optimization of a commercial passive mmW imager at 94 GHz using a phase mask and the development of an active mmW detector at 77 GHz based on synthetic aperture radar (SAR). A first wide-field inspection is done by the passive imager while the person is walking. If a suspicious area is detected, the active imager is switched-on and focused on this area in order to obtain more accurate data (shape of the object, nature of the material ...).
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.
Hwang, Wonjun; Lim, Soo-Chul
2017-10-26
In this paper, we present an interaction force estimation method that uses visual information rather than that of a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object, where the shape of the object is changed by an external force. The force applied to the target can be estimated by means of the visual shape changes. However, the shape differences in the images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
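A minimal PyTorch sketch of the general idea: a small per-frame encoder feeding a recurrent layer that regresses a force value per time step. The layer sizes and the choice of an LSTM are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class VisualForceEstimator(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        # Per-frame encoder turning each image into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Recurrent layer modelling temporal dynamics of the deforming object.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # force value per time step

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out).squeeze(-1)      # (B, T) estimated forces

model = VisualForceEstimator()
forces = model(torch.randn(2, 10, 3, 64, 64))
print(forces.shape)  # torch.Size([2, 10])
```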
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essential in many fields, including microsample observation, defect inspection, material characterization and so on. However, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.
MOIRCS Deep Survey. I: DRG Number Counts
NASA Astrophysics Data System (ADS)
Kajisawa, Masaru; Konishi, Masahiro; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Yoshikawa, Tomohiro; Akiyama, Masayuki; Ichikawa, Takashi; Ouchi, Masami; Omata, Koji; Tanaka, Ichi; Nishimura, Tetsuo; Yamada, Toru
2006-12-01
We use very deep near-infrared imaging data taken with Multi-Object InfraRed Camera and Spectrograph (MOIRCS) on the Subaru Telescope to investigate the number counts of Distant Red Galaxies (DRGs). We have observed a 4x7 arcmin^2 field in the Great Observatories Origins Deep Survey North (GOODS-N), and our data reach J=24.6 and K=23.2 (5sigma, Vega magnitude). The surface density of DRGs selected by J-K>2.3 is 2.35+-0.31 arcmin^-2 at K<22 and 3.54+-0.38 arcmin^-2 at K<23, respectively. These values are consistent with those in the GOODS-South and FIRES. Our deep and wide data suggest that the number counts of DRGs turn over at K~22, and the surface density of the faint DRGs with K>22 is smaller than that expected from the number counts at the brighter magnitude. The result indicates that while there are many bright galaxies at 2
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep learnt Convolutional Neural Networks. This is used in conjunction with hashing forests which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity preserving feature descriptor called Parts Histogram which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization like image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases spanning other imaging modalities and anatomies.
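The hashing-forest construction itself is not reproduced here; the sketch below substitutes plain random-hyperplane hashing of hypothetical deep feature vectors to illustrate the shared pattern of compact binary codes plus Hamming-distance retrieval:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_codes(features, n_bits=32, planes=None):
    # Random-hyperplane hashing: one bit per hyperplane sign.
    if planes is None:
        planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8), planes

def retrieve(query_code, db_codes, k=5):
    # Rank database items by Hamming distance to the query code.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    order = np.argsort(dists)
    return order[:k], dists[order[:k]]

# Hypothetical deep features for a database and one query image.
db_features = rng.standard_normal((1000, 256))
query_features = rng.standard_normal((1, 256))

db_codes, planes = binary_codes(db_features)
q_code, _ = binary_codes(query_features, planes=planes)
idx, dists = retrieve(q_code[0], db_codes)
print(idx, dists)
```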
Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz
2014-03-01
The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that the fusion rules cannot be self-adaptively adjusted by the available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as the weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
•This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
•This text puts forward the model operator and the observed operator as the fusion scheme of RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and background in the longwave (LW) Infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
NASA Astrophysics Data System (ADS)
Camerlenghi, Angelo; Aloisi, Vanni; Lofi, Johanna; Hübscher, Christian; de Lange, Gert; Flecker, Rachel; Garcia-Castellanos, Daniel; Gorini, Christian; Gvirtzman, Zohar; Krijgsman, Wout; Lugli, Stefano; Makowsky, Yizhaq; Manzi, Vinicio; McGenity, Terry; Panieri, Giuliana; Rabineau, Marina; Roveri, Marco; Sierro, Francisco Javier; Waldmann, Nicolas
2014-05-01
In May 2013, the DREAM MagellanPlus Workshop was held in Brisighella (Italy). The initiative builds on recent activities by various research groups to identify potential sites to perform deep-sea scientific drilling in the Mediterranean Sea across the deep Messinian Salinity Crisis (MSC) sedimentary record. In this workshop three generations of scientists were gathered: those who participated in formulation of the deep desiccated model, through DSDP Leg 13 drilling in 1973; those who are actively involved in present-day MSC research; and the next generation (PhD students and young post-docs). The purpose of the workshop was to identify locations for multiple-site drilling (including riser-drilling) in the Mediterranean Sea that would contribute to solving the several open questions still existing about the causes, processes, timing and consequences at local and planetary scale of an outstanding case of natural environmental change in recent Earth history: the Messinian Salinity Crisis in the Mediterranean Sea. The product of the workshop is the identification of the structure of an experimental design of site characterization, riser-less and riser drilling, sampling, measurements, and down-hole analyses that will be the core for at least one compelling and feasible multiple-phase drilling proposal. Particular focus has been given to reviewing seismic site survey data available from different research groups at pan-Mediterranean basin scale, to the assessment of additional site survey activity including 3D seismics, and to ways of establishing firm links with the oil and gas industry. The scientific community behind the DREAM initiative is willing to proceed with the submission to IODP of a Multi-phase Drilling Project including several drilling proposals addressing specific drilling objectives, all linked to the driving objectives of drilling and understanding the MSC. A series of critical drilling targets were identified to address the still open questions related to the MSC event. Several proposal ideas also emerged to support the Multi-phase drilling project concept: Salt tectonics and fluids, Deep stratigraphic and crustal drilling in the Gulf of Lion (deriving from the GOLD drilling project), Deep stratigraphic and crustal drilling in the Ionian Sea, Deep Biosphere, Sapropels, and the Red Sea. A second MagellanPlus workshop, held in January 2014 in Paris (France), proceeded a step further towards drafting the Multi-phase Drilling Project and a set of pre-proposals for submission to IODP.
The frequency and properties of young tidal dwarf galaxies in nearby gas-rich groups
NASA Astrophysics Data System (ADS)
Lee-Waddell, K.; Spekkens, K.; Chandra, P.; Patra, N.; Cuillandre, J.-C.; Wang, J.; Haynes, M. P.; Cannon, J.; Stierwalt, S.; Sick, J.; Giovanelli, R.
2016-08-01
We present high-resolution Giant Metrewave Radio Telescope (GMRT) H I observations and deep Canada-France-Hawaii Telescope (CFHT) optical imaging of two galaxy groups: NGC 4725/47 and NGC 3166/9. These data are part of a multi-wavelength unbiased survey of the gas-rich dwarf galaxy populations in three nearby interacting galaxy groups. The NGC 4725/47 group hosts two tidal knots and one dwarf irregular galaxy (dIrr). Both tidal knots are located within a prominent H I tidal tail, appear to have sufficient mass (Mgas ≈ 108 M⊙) to evolve into long-lived tidal dwarf galaxies (TDGs) and are fairly young in age. The NGC 3166/9 group contains a TDG candidate, AGC 208457, at least three dIrrs and four H I knots. Deep CFHT imaging confirms that the optical component of AGC 208457 is bluer - with a 0.28 mag g - r colour - and a few Gyr younger than its purported parent galaxies. Combining the results for these groups with those from the NGC 871/6/7 group reported earlier, we find that the H I properties, estimated stellar ages and baryonic content of the gas-rich dwarfs clearly distinguish tidal features from their classical counterparts. We optimistically identify four potentially long-lived tidal objects associated with three separate pairs of interacting galaxies, implying that TDGs are not readily produced during interaction events as suggested by some recent simulations. The tidal objects examined in this survey also appear to have a wider variety of properties than TDGs of similar mass formed in current simulations of interacting galaxies, which could be the result of pre- or post-formation environmental influences.
Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging
Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.
2016-01-01
Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635
Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging.
Cua, Michelle; Wahl, Daniel J; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J; Jian, Yifan; Sarunic, Marinko V
2016-09-07
Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yuxin; Wen, Wenhui; Wang, Kai
2016-01-11
The 1700-nm window has been demonstrated to be a promising excitation window for deep-tissue multiphoton microscopy (MPM). Long working-distance water immersion objective lenses are typically used for deep-tissue imaging. However, absorption due to immersion water at 1700 nm is still high and leads to a dramatic decrease in signals. In this paper, we demonstrate measurement of the absorption spectrum of deuterium oxide (D2O) from 1200 nm to 2600 nm, covering the three low water-absorption windows potentially applicable for deep-tissue imaging (1300 nm, 1700 nm, and 2200 nm). We apply this measured result to signal enhancement in MPM at the 1700-nm window. Compared with water immersion, D2O immersion enhances signal levels in second-harmonic generation imaging, 3-photon fluorescence imaging, and third-harmonic generation imaging by 8.1, 24.8, and 24.7 times with 1662-nm excitation, in good agreement with theoretical calculation based on our absorption measurement. This suggests that D2O is a promising immersion medium for deep-tissue imaging.
NASA Astrophysics Data System (ADS)
Bhatia, Parmeet S.; Reda, Fitsum; Harder, Martin; Zhan, Yiqiang; Zhou, Xiang Sean
2017-02-01
Automatically detecting anatomy orientation is an important task in medical image analysis. Specifically, the ability to automatically detect coarse orientation of structures is useful to minimize the effort of fine/accurate orientation detection algorithms, to initialize non-rigid deformable registration algorithms or to align models to target structures in model-based segmentation algorithms. In this work, we present a deep convolution neural network (DCNN)-based method for fast and robust detection of the coarse structure orientation, i.e., the hemi-sphere where the principal axis of a structure lies. That is, our algorithm predicts whether the principal orientation of a structure is in the northern hemisphere or southern hemisphere, which we will refer to as UP and DOWN, respectively, in the remainder of this manuscript. The only assumption of our method is that the entire structure is located within the scan's field-of-view (FOV). To efficiently solve the problem in 3D space, we formulated it as a multi-planar 2D deep learning problem. In the training stage, a large number of coronal-sagittal slice pairs are constructed as 2-channel images to train a DCNN to classify whether a scan is UP or DOWN. During testing, we randomly sample a small number of coronal-sagittal 2-channel images and pass them through our trained network. Finally, coarse structure orientation is determined using majority voting. We tested our method on 114 Elbow MR Scans. Experimental results suggest that only five 2-channel images are sufficient to achieve a high success rate of 97.39%. Our method is also extremely fast and takes approximately 50 milliseconds per 3D MR scan. Our method is insensitive to the location of the structure in the FOV.
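A hedged sketch of the testing-time procedure (the tiny 2-channel classifier and the slice sampling below are stand-ins for the trained DCNN): a handful of randomly paired coronal-sagittal slices are classified and the UP/DOWN decision is taken by majority vote.

```python
import torch
import torch.nn as nn

class UpDownNet(nn.Module):
    # Small 2-channel CNN standing in for the trained DCNN classifier.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                 # logits for UP (0) and DOWN (1)
        )

    def forward(self, x):
        return self.net(x)

def predict_orientation(volume, model, n_samples=5):
    # volume: (X, Y, Z) scan. Pair random coronal and sagittal slices into
    # 2-channel images, classify each pair, and majority-vote the result.
    x_dim, y_dim, _ = volume.shape
    votes = []
    for _ in range(n_samples):
        cor = volume[:, torch.randint(y_dim, (1,)).item(), :]   # coronal slice (X, Z)
        sag = volume[torch.randint(x_dim, (1,)).item(), :, :]   # sagittal slice (Y, Z)
        h = min(cor.shape[0], sag.shape[0])
        w = min(cor.shape[1], sag.shape[1])
        pair = torch.stack([cor[:h, :w], sag[:h, :w]]).unsqueeze(0)
        votes.append(int(model(pair).argmax(dim=1)))
    return "UP" if votes.count(0) > len(votes) / 2 else "DOWN"

model = UpDownNet().eval()
scan = torch.randn(96, 96, 48)
print(predict_orientation(scan, model))
```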
VizieR Online Data Catalog: Bgri light curves of PTF11kmb and PTF12bho (Lunnan+, 2017)
NASA Astrophysics Data System (ADS)
Lunnan, R.; Kasliwal, M. M.; Cao, Y.; Hangard, L.; Yaron, O.; Parrent, J. T.; McCully, C.; Gal-Yam, A.; Mulchaey, J. S.; Ben-Ami, S.; Filippenko, A. V.; Fremling, C.; Fruchter, A. S.; Howell, D. A.; Koda, J.; Kupfer, T.; Kulkarni, S. R.; Laher, R.; Masci, F.; Nugent, P. E.; Ofek, E. O.; Yagi, M.; Yan, L.
2017-09-01
The objects PTF11kmb and PTF12bho were found as part of the Palomar Transient Factory (PTF). PTF11kmb was discovered in data taken with the 48 inch Samuel Oschin Telescope at Palomar Observatory (P48) on 2011 August 16.25 at a magnitude r=19.8mag. A spectrum was taken with the Low Resolution Imaging Spectrometer (LRIS) on the 10m Keck I telescope on 2011 August 28, showing SN features consistent with a SN Ib at a redshift z=0.017. The source PTF12bho was discovered in P48 data on 2012 February 25.25 at a magnitude of r=20.52mag. A spectrum taken with LRIS on 2012 March 15 yields z=0.023 based on the SN features. We obtained R- and g-band photometry of PTF11kmb and PTF12bho with the P48 CFH12K camera. Additional follow-up photometry was conducted with the automated 60-inch telescope at Palomar (P60) in the Bgri bands, and with the Las Cumbres Observatory (LCO) Faulkes Telescope North in gri. PTF12bho was also observed with the Swift Ultra-Violet/Optical Telescope (UVOT) and the Swift X-ray telescope (XRT) on 2012 March 17.8 for 3ks. We obtained a sequence of spectra for both PTF11kmb and PTF12bho using LRIS on Keck I, the DEep Imaging Multi-Object Spectrograph (DEIMOS) on the 10m Keck II telescope, and the Double Spectrograph (DBSP) on the 200-inch Hale telescope at Palomar Observatory (P200) spanning 2011 Aug 28.5 to 2014 Jul 2.5. We obtained deep imaging of the fields of PTF11kmb using WFC3/UVIS on the Hubble Space Telescope (HST) through program GO-13864 (PI Kasliwal) on 2015 Jul 12. This program also covered the field of SN 2005E (2014 Dec 10). (1 data file).
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1982-01-01
This catalog contains 1273 proven or probable Large Magellanic Cloud (LMC) members, as found on deep objective-prism plates taken with the Curtis Schmidt telescope at Cerro Tololo Inter-American Observatory in Chile. The stars are generally brighter than about photographic magnitude 14. Approximate spectral types were determined by examination of the 580 A/mm objective-prism spectra; approximate 1975 positions were obtained by measuring relative to the 1975 coordinate grids on the Uppsala-Mount Stromlo Atlas of the LMC (Gascoigne and Westerlund 1961), and approximate photographic magnitudes were determined by averaging image density measures from the plates and image-diameter measures on the 'B' charts. The machine-readable version of the LMC survey catalog is described to enable users to read and process the tape file without problems or guesswork.
Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.
Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng
2018-03-04
With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
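The abstract does not spell the correlation loss out, so the PyTorch sketch below shows one plausible form, a cosine-similarity consistency term between Siamese (shared-weight) features of the same tracked object in neighboring frames:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                     # shared weights = the two Siamese branches
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def correlation_loss(frame_t, frame_tp1):
    # Encourage features of the same object region in neighboring frames
    # to stay correlated (cosine similarity close to 1).
    f_t = backbone(frame_t)
    f_tp1 = backbone(frame_tp1)
    cos = F.cosine_similarity(f_t.flatten(1), f_tp1.flatten(1), dim=1)
    return (1.0 - cos).mean()

crop_t = torch.randn(4, 3, 64, 64)     # object crops at time t
crop_tp1 = torch.randn(4, 3, 64, 64)   # same objects at time t+1 (matched via track IDs)
loss = correlation_loss(crop_t, crop_tp1)
print(loss.item())
```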
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed and an efficient salient feature extraction method is presented in this paper. Feature extraction is the primary focus of the present work. Based on salient feature extraction, the guided filter is first used to acquire smoothed images that retain the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities and the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a reprocessed fusion map. Lastly, the final fusion map is determined from the reprocessed fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves the fusion performance compared to previous fusion methods and can be competitive with or even outperform state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
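A simplified numpy/scipy sketch of the recipe above, with local variance plus gradient energy as the mixed focus measure; a box filter stands in for the guided filter and the morphological step, which is an assumption made to keep the example short:

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def focus_measure(img, size=7):
    # Mixed focus measure: local intensity variance + local gradient energy.
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2
    grad_energy = uniform_filter(sobel(img, 0) ** 2 + sobel(img, 1) ** 2, size)
    return var + grad_energy

def fuse(img_a, img_b, size=7):
    # Initial fusion map: pick the sharper source pixel-wise, then smooth
    # the map (box filter here; the paper uses a guided filter instead).
    fm_a, fm_b = focus_measure(img_a, size), focus_measure(img_b, size)
    fusion_map = (fm_a >= fm_b).astype(float)
    fusion_map = uniform_filter(fusion_map, size)
    return fusion_map * img_a + (1.0 - fusion_map) * img_b

rng = np.random.default_rng(0)
a = rng.random((128, 128))   # stand-ins for two partially focused images
b = rng.random((128, 128))
print(fuse(a, b).shape)
```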
New approaches in renal microscopy: volumetric imaging and superresolution microscopy.
Kim, Alfred H J; Suleiman, Hani; Shaw, Andrey S
2016-05-01
Histologic and electron microscopic analysis of the kidney has provided tremendous insight into structures such as the glomerulus and nephron. Recent advances in imaging, such as deep volumetric approaches and superresolution microscopy, have the capacity to dramatically enhance our current understanding of the structure and function of the kidney. Volumetric imaging can generate images millimeters below the surface of the intact kidney. Superresolution microscopy breaks the diffraction barrier inherent in traditional light microscopy, enabling the visualization of fine structures. Here, we describe new approaches to deep volumetric and superresolution microscopy of the kidney. Rapid advances in lasers, microscope objectives, and tissue preparation have transformed our ability to perform deep volumetric imaging of the kidney. Innovations in sample preparation have allowed for superresolution imaging with electron microscopy correlation, providing unprecedented insight into the structures within the glomerulus. Technological advances in imaging have revolutionized our capacity to image both large volumes of tissue and the finest structural details of a cell. These new advances have the potential to provide further profound insights into the normal and pathologic functions of the kidney.
Study the effect of elevated dies temperature on aluminium and steel round deep drawing
NASA Astrophysics Data System (ADS)
Lean, Yeong Wei; Azuddin, M.
2016-02-01
Round deep drawing operation can only be realized by expensive multi-step production processes. To reduce the cost of these processes while still expecting an acceptable result, round deep drawing can be done at elevated temperature. Fracture, wrinkling and earing are three common problems in deep drawing a round cup. The main objective is to investigate the effect of die temperature on aluminium and steel round deep drawing, with a sub-objective of eliminating fracture and reducing the wrinkling effect. An experimental method is conducted with three different techniques for heating the dies: heating both upper and lower dies, heating only the upper die, and heating only the lower die. Four different temperatures were chosen throughout the experiment. The experimental results are then compared with finite element analysis software. There is a positive result for the steel material when heating both upper and lower dies, where the simulation results are comparable to the experimental results. Heating both upper and lower dies is the best among the three heating techniques.
Wirth, W; Eckstein, F; Boeth, H; Diederichs, G; Hudelmaier, M; Duda, G N
2014-10-01
Cartilage spin-spin magnetic resonance imaging (MRI) relaxation time (T2) represents a promising imaging biomarker of "early" osteoarthritis (OA) known to be associated with cartilage composition (collagen integrity, orientation, and hydration). However, no longitudinal imaging studies have been conducted to examine cartilage maturation in healthy subjects thus far. Therefore, we explore T2 change in the deep and superficial cartilage layers at the end of adolescence. Twenty adolescent and 20 mature volleyball athletes were studied (each 10 men and 10 women). Multi-echo spin-echo (MESE) images were acquired at baseline and 2-year follow-up. After segmentation, cartilage T2 was calculated in the deep and superficial cartilage layers of the medial tibial (MT) and the central, weight-bearing part of the medial femoral condyle (cMF), using five echoes (TE 19.4-58.2 ms). 16 adolescent (6 men, 10 women, baseline age 15.8 ± 0.5 years) and 17 mature (nine men, eight women, age 46.5 ± 5.2 years) athletes had complete baseline and follow-up images of sufficient quality to compute T2. In adolescents, a longitudinal decrease in T2 was observed in the deep layers of MT (-2.0 ms; 95% confidence interval (CI): [-3.4, -0.6] ms; P < 0.01) and cMF (-1.3 ms; [-2.4, -0.3] ms; P < 0.05), without obvious differences between males and females. No significant change was observed in the superficial layers, or in the deep or superficial layers of the mature athletes. In this first pilot study on quantitative imaging of cartilage maturation in healthy, athletic subjects, we find evidence of cartilage compositional change in deep cartilage layers of the medial femorotibial compartment in adolescents, most likely related to organizational changes in the collagen matrix. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
The Herschel Lensing Survey (HLS): HST Frontier Field Coverage
NASA Astrophysics Data System (ADS)
Egami, Eiichi
2015-08-01
The Herschel Lensing Survey (HLS; PI: Egami) is a large Far-IR/Submm imaging survey of massive galaxy clusters using the Herschel Space Observatory. Its main goal is to detect and study IR/Submm galaxies that are below the nominal confusion limit of Herschel by taking advantage of the strong gravitational lensing power of massive galaxy clusters. HLS has obtained deep PACS (100/160 um) and SPIRE (250/350/500 um) images for 54 cluster fields (HLS-deep) as well as shallower but nearly confusion-limited SPIRE-only images for 527 cluster fields (HLS-snapshot) with a total observing time of ~420 hours. Extensive multi-wavelength follow-up studies are currently on-going with a variety of observing facilities including ALMA.Here, I will focus on the analysis of the deep Herschel PACS/SPIRE images obtained for the 6 HST Frontier Fields (5 observed by HLS-deep; 1 observed by the Herschel GT programs). The Herschel/SPIRE maps are wide enough to cover the Frontier-Field parallel pointings, and we have detected a total of ~180 sources, some of which are strongly lensed. I will present the sample and discuss the properties of these Herschel-detected dusty star-forming galaxies (DSFGs) identified in the Frontier Fields. Although the majority of these Herschel sources are at moderate redshift (z<3), a small number of extremely high-redshift (z>6) candidates can be identified as "Herschel dropouts" when combined with longer-wavelength data. We have also identified ~40 sources as likely cluster members, which will allow us to study the properties of DSFGs in the dense cluster environment.A great legacy of our HLS project will be the extensive multi-wavelength database that incorporates most of the currently available data/information for the fields of the Frontier-Field, CLASH, and other HLS clusters (e.g., HST/Spitzer/Herschel images, spectroscopic/photometric redshifts, lensing models, best-fit SED models etc.). Provided with a user-friendly GUI and a flexible search engine, this database should serve as a powerful tool for a variety of projects including those with ALMA and JWST in the future. I will conclude by introducing this HLS database system.
Classify epithelium-stroma in histopathological images based on deep transferable network.
Yu, X; Zheng, H; Liu, C; Huang, Y; Ding, X
2018-04-20
Recently, deep learning methods have received more attention in histopathological image analysis. However, traditional deep learning methods assume that training data and test data have the same distributions, which causes certain limitations in real-world histopathological applications. In particular, it is costly to recollect a large amount of labeled histology data to train a new neural network for each specified image acquisition procedure, even for similar tasks. In this paper, an unsupervised domain adaptation is introduced into a typical deep convolutional neural network (CNN) model to mitigate the need for repeated labelling. The unsupervised domain adaptation is implemented by adding two regularisation terms, namely the feature-based adaptation and entropy minimisation, to the objective function of a widely used CNN model called the AlexNet. Three independent public epithelium-stroma datasets were used to verify the proposed method. The experimental results have demonstrated that in the epithelium-stroma classification, the proposed method can achieve better performance than the commonly used deep learning methods and some existing deep domain adaptation methods. Therefore, the proposed method can be considered as a better option for the real-world applications of histopathological image analysis because there is no requirement for recollection of large-scale labeled data for every specified domain. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
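A minimal PyTorch sketch of the entropy-minimisation regulariser added to a standard classification loss; the feature-based adaptation term and the AlexNet backbone are omitted, and the classifier and weighting factor below are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 2))  # stand-in classifier

def entropy_minimisation(logits):
    # Push target-domain predictions toward confident (low-entropy) outputs.
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

def total_loss(source_x, source_y, target_x, lam=0.1):
    ce = F.cross_entropy(model(source_x), source_y)    # labelled source domain
    ent = entropy_minimisation(model(target_x))        # unlabelled target domain
    return ce + lam * ent

src_x, src_y = torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,))
tgt_x = torch.randn(8, 3, 32, 32)
print(total_loss(src_x, src_y, tgt_x).item())
```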
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.
Multi-focus beam shaping of high power multimode lasers
NASA Astrophysics Data System (ADS)
Laskin, Alexander; Volpp, Joerg; Laskin, Vadim; Ostrun, Aleksei
2017-08-01
Beam shaping of powerful multimode fiber lasers, fiber-coupled solid-state and diode lasers is of great importance for improvements of industrial laser applications. Welding and cladding with millimetre-scale working spots benefit from "inverse-Gauss" intensity profiles; the performance of thick metal sheet cutting and deep penetration welding can be enhanced by distributing the laser energy along the optical axis, as more efficient usage of laser energy, higher edge quality and reduction of the heat-affected zone can be achieved. Building beam shaping optics for multimode lasers encounters physical limitations due to the low beam spatial coherence of multimode fiber-coupled lasers, resulting in large Beam Parameter Products (BPP) or M² values. The laser radiation emerging from a multimode fiber presents a mixture of wavefronts. The fiber end can be considered as a light source whose optical properties are intermediate between a Lambertian source and a single mode laser beam. Imaging of the fiber end, using a collimator and a focusing objective, is a robust and widely used beam delivery approach. Beam shaping solutions are suggested in the form of optics combining fiber end imaging and geometrical separation of focused spots either perpendicular to or along the optical axis. Thus, energy of high power lasers is distributed among multiple foci. In order to provide reliable operation with multi-kW lasers and avoid damage, the optics are designed as refractive elements with smooth optical surfaces. The paper presents descriptions of multi-focus optics as well as examples of intensity profile measurements of beam caustics and application results.
Monitoring controlled graves representing common burial scenarios with ground penetrating radar
NASA Astrophysics Data System (ADS)
Schultz, John J.; Martin, Michael M.
2012-08-01
Implementing controlled geophysical research is imperative to understand the variables affecting detection of clandestine graves during real-life forensic searches. This study focused on monitoring two empty control graves (shallow and deep) and six burials containing a small pig carcass (Sus scrofa) representing different burial forensic scenarios: a shallow buried naked carcass, a deep buried naked carcass, a deep buried carcass covered by a layer of rocks, a deep buried carcass covered by a layer of lime, a deep buried carcass wrapped in an impermeable tarpaulin and a deep buried carcass wrapped in a cotton blanket. Multi-frequency, ground penetrating radar (GPR) data were collected monthly over a 12-month monitoring period. The research site was a cleared field within a wooded area in a humid subtropical environment, and the soil consisted of a Spodosol, a common soil type in Florida. This study compared 2D GPR reflection profiles and horizontal time slices obtained with both 250 and 500 MHz dominant frequency antennae to determine the utility of both antennae for grave detection in this environment over time. Overall, a combination of both antennae frequencies provided optimal detection of the targets. Better images were noted for deep graves, compared to shallow graves. The 250 MHz antenna provided better images for detecting deep graves, as less non-target anomalies were produced with lower radar frequencies. The 250 MHz antenna also provided better images detecting the disturbed ground. Conversely, the 500 MHz antenna provided better images when detecting the shallow pig grave. The graves that contained a pig carcass with associated grave items provided the best results, particularly the carcass covered with rocks and the carcass wrapped in a tarpaulin. Finally, during periods of increased soil moisture levels, there was increased detection of graves that was most likely related to conductive decompositional fluid from the carcasses.
End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging.
Cai, Chuangjian; Deng, Kexin; Ma, Cheng; Luo, Jianwen
2018-06-15
An end-to-end deep neural network, ResU-net, is developed for quantitative photoacoustic imaging. A residual learning framework is used to facilitate optimization and to gain better accuracy from considerably increased network depth. The contracting and expanding paths enable ResU-net to extract comprehensive context information from multispectral initial pressure images and, subsequently, to infer a quantitative image of chromophore concentration or oxygen saturation (sO2). According to our numerical experiments, the estimations of sO2 and indocyanine green concentration are accurate and robust against variations in both optical property and object geometry. An extremely short reconstruction time of 22 ms is achieved.
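A small PyTorch sketch of the two ingredients named above, a residual block and a contracting/expanding layout; the channel counts, depth, and input size are placeholders rather than the authors' ResU-net:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual (identity skip) connection

class TinyResUNet(nn.Module):
    # Contracting path -> bottleneck -> expanding path with a skip connection.
    def __init__(self, in_ch=6, out_ch=1):    # e.g. 6 multispectral pressure maps -> one sO2 map
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), ResBlock(32))
        self.down = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.mid = ResBlock(64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(ResBlock(32), nn.Conv2d(32, out_ch, 1))
    def forward(self, x):
        e = self.enc(x)
        m = self.mid(torch.relu(self.down(e)))
        return self.dec(self.up(m) + e)        # expanding path reuses encoder context

net = TinyResUNet()
print(net(torch.randn(1, 6, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```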
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. The present GGO recognition methods employ traditional low-level features and system performance improves slowly. Considering the high-performance of CNN model in computer vision field, we proposed an automatic recognition method of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling is performed on multi-views and multi-receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. Multi-CNN models fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach to apply deep learning method to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
Active illuminated space object imaging and tracking simulation
NASA Astrophysics Data System (ADS)
Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu
2016-10-01
Optical imaging simulation of a space target in orbit as seen from the earth, and its extraction under laser illumination conditions, are discussed. Based on the orbit and corresponding attitude of a satellite, its 3D imaging rendering was built. A general simulation platform was researched, which is adaptive to variable 3D satellite models and to the relative position relationships between the satellite and the earth detector system. A unified parallel projection technology is proposed in this paper. Furthermore, we note that the random optical distribution under laser active illumination is a challenge for object discrimination, the strong randomness of the active-illumination laser speckles being the primary factor. The combined effects of multi-frame accumulation and several tracking methods, such as Meanshift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results illustrates that the union of multi-frame accumulation and contour poid is recommendable for laser active illuminated images, offering high tracking precision and stability for multiple object attitudes.
SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Iyengar, P
2016-06-15
Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) for early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A Support Vector Machine (SVM) is used as the predictive model, while a non-dominated sorting-based multi-objective evolutionary algorithm II (NSGA-II) is used to solve the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance is obtained by combining all features.
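A highly simplified stand-in for the sensitivity/specificity trade-off described above (random feature subsets and a plain Pareto filter instead of the paper's NSGA-II, with toy data replacing the radiomics features):

# Simplified stand-in for the paper's NSGA-II search (not the authors' code):
# randomly sample feature subsets, score each SVM by sensitivity and
# specificity on held-out data, and keep the non-dominated (Pareto) set.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                     # toy radiomics-like feature matrix
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=120) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=0, stratify=y)

def sens_spec(mask):
    clf = SVC(kernel="rbf").fit(Xtr[:, mask], ytr)
    tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte[:, mask])).ravel()
    return tp / (tp + fn), tn / (tn + fp)          # (sensitivity, specificity)

candidates = []
for _ in range(60):                                # random subsets stand in for evolutionary search
    mask = rng.random(X.shape[1]) < 0.4
    if mask.any():
        candidates.append((sens_spec(mask), mask))

def dominated(a, b):                               # b dominates a if >= on both objectives, > on one
    return all(bi >= ai for ai, bi in zip(a, b)) and any(bi > ai for ai, bi in zip(a, b))

pareto = [(s, m) for s, m in candidates
          if not any(dominated(s, s2) for s2, _ in candidates)]
for (se, sp), m in pareto:
    print(f"sensitivity={se:.2f} specificity={sp:.2f} n_features={int(m.sum())}")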
Liu, Fei; Zhang, Xi; Jia, Yan
2015-01-01
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
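The abstract does not define the top (k1,k2) outlier precisely, so the following Monte-Carlo sketch only assumes one plausible reading: an object is scored by how often, across sampled possible worlds, it falls among the k2 objects with the largest distance to their k1-th nearest neighbour.

# Hedged Monte-Carlo sketch of outlier scoring in uncertain data; the exact
# top-(k1,k2) definition used in the paper may differ from this reading.
import numpy as np

rng = np.random.default_rng(1)
n_objects, n_instances, dim = 50, 4, 3
instances = rng.normal(size=(n_objects, n_instances, dim))      # possible instances per object
probs = rng.dirichlet(np.ones(n_instances), size=n_objects)     # instance probabilities
instances[-1] += 6.0                                            # make the last object an obvious outlier

def knn_dist(points, k1):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k1]                            # distance to the k1-th nearest neighbour

k1, k2, n_samples = 3, 5, 200
counts = np.zeros(n_objects)
for _ in range(n_samples):                                      # sample one possible world
    choice = [rng.choice(n_instances, p=p) for p in probs]
    world = instances[np.arange(n_objects), choice]
    top = np.argsort(knn_dist(world, k1))[-k2:]                 # k2 largest kNN distances
    counts[top] += 1

print("most probable outliers:", np.argsort(counts)[-k2:][::-1])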
Multi-views Fusion CNN for Left Ventricular Volumes Estimation on Cardiac MR Images.
Luo, Gongning; Dong, Suyu; Wang, Kuanquan; Zuo, Wangmeng; Cao, Shaodong; Zhang, Henggui
2017-10-13
Left ventricular (LV) volumes estimation is a critical procedure for cardiac disease diagnosis. The objective of this paper is to address the direct LV volumes prediction task. In this paper, we propose a direct volumes prediction method based on end-to-end deep convolutional neural networks (CNN). We study the end-to-end LV volumes prediction method in terms of data preprocessing, network structure, and multi-view fusion strategy. The main contributions of this paper are as follows. First, we propose a new data preprocessing method for cardiac magnetic resonance (CMR). Second, we propose a new network structure for end-to-end LV volumes estimation. Third, we explore the representational capacity of different slices and propose a fusion strategy to improve the prediction accuracy. The evaluation results show that the proposed method outperforms other state-of-the-art LV volumes estimation methods on openly accessible benchmark datasets. The clinical indexes derived from the predicted volumes agree well with the ground truth (EDV: R=0.974, RMSE=9.6ml; ESV: R=0.976, RMSE=7.1ml; EF: R=0.828, RMSE=4.71%). Experimental results prove that the proposed method has high accuracy and efficiency on the LV volumes prediction task. The proposed method not only has application potential for cardiac disease screening on large-scale CMR data, but can also be extended to other medical image research fields.
Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.
Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua
2017-06-01
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted for tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the purely unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone in the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced to a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, SCDA achieves retrieval results comparable with state-of-the-art general image retrieval approaches.
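A minimal numpy sketch of SCDA-style descriptor selection on one convolutional feature map (the shapes, thresholding rule and final aggregation below are illustrative; consult the paper for the exact procedure):

# Minimal sketch of SCDA-style descriptor selection and aggregation.
import numpy as np

def scda_like(feat):
    """feat: (C, H, W) activations from the last conv layer of a pre-trained CNN."""
    act_map = feat.sum(axis=0)                       # aggregate channel activations
    mask = act_map > act_map.mean()                  # threshold: keep likely object positions
    selected = feat[:, mask]                         # (C, n_selected) deep descriptors
    if selected.size == 0:                           # degenerate case: keep everything
        selected = feat.reshape(feat.shape[0], -1)
    desc = np.concatenate([selected.mean(axis=1),    # average-pooled part
                           selected.max(axis=1)])    # max-pooled part
    return desc / (np.linalg.norm(desc) + 1e-12)     # L2-normalised retrieval feature

feat = np.random.rand(512, 14, 14).astype(np.float32)
print(scda_like(feat).shape)                         # (1024,)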
Multi-axial interferometry: demonstration of deep nulling
NASA Astrophysics Data System (ADS)
Buisset, Christophe; Rejeaunier, Xavier; Rabbia, Yves; Ruilier, Cyril; Barillot, Marc; Lierstuen, Lars; Perdigués Armengol, Josep Maria
2017-11-01
The ESA-Darwin mission is devoted to direct detection and spectroscopic characterization of Earth-like exoplanets. Starlight rejection is achieved by nulling interferometry from space so as to make the faintly emitting planet in the star's neighborhood detectable. In that context, Alcatel Alenia Space has developed a nulling breadboard for ESA in order to demonstrate, in laboratory conditions, the rejection of an on-axis source. This device, the Multi Aperture Imaging Interferometer (MAII), demonstrated high rejection capability at a level relevant for exoplanets, in single-polarized and monochromatic conditions. In this paper we report on the new multi-axial configuration of MAII and summarize our latest nulling results.
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment, where the segmented layered semantic objects include basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors
Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.
2016-01-01
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
NASA Astrophysics Data System (ADS)
Chiarelli, Antonio Maria; Croce, Pierpaolo; Merla, Arcangelo; Zappasodi, Filippo
2018-06-01
Objective. Brain–computer interface (BCI) refers to procedures that link the central nervous system to a device. BCI was historically performed using electroencephalography (EEG). In recent years, encouraging results were obtained by combining EEG with other neuroimaging technologies, such as functional near infrared spectroscopy (fNIRS). A crucial step of BCI is brain state classification from recorded signal features. Deep artificial neural networks (DNNs) recently reached unprecedented complex classification outcomes. These performances were achieved through increased computational power, efficient learning algorithms, valuable activation functions, and restricted or back-fed neuron connections. Expecting improved overall BCI performance, we investigated the capabilities of combining EEG and fNIRS recordings with state-of-the-art deep learning procedures. Approach. We performed a guided left and right hand motor imagery task on 15 subjects with a fixed classification response time of 1 s and an overall experiment length of 10 min. Left versus right classification accuracy of a DNN in the multi-modal recording modality was estimated and compared to standalone EEG and fNIRS and to other classifiers. Main results. At the group level we obtained a significant increase in performance when considering multi-modal recordings and the DNN classifier, with a synergistic effect. Significance. BCI performance can be significantly improved by employing multi-modal recordings that provide electrical and hemodynamic brain activity information, in combination with advanced non-linear deep learning classification procedures.
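The paper's network is not specified in the abstract; as a hedged sketch, multi-modal classification can be as simple as concatenating EEG and fNIRS feature vectors and feeding them to a small fully connected DNN (the feature dimensions and layer sizes below are assumptions):

# Hedged sketch of multi-modal left/right motor-imagery classification.
import torch
import torch.nn as nn

n_eeg_feat, n_fnirs_feat = 64, 16

model = nn.Sequential(
    nn.Linear(n_eeg_feat + n_fnirs_feat, 128), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(128, 32), nn.ReLU(),
    nn.Linear(32, 2),                                # left vs. right hand imagery
)

eeg = torch.randn(8, n_eeg_feat)                     # batch of EEG features (e.g. band power)
fnirs = torch.randn(8, n_fnirs_feat)                 # batch of fNIRS features (e.g. HbO/HbR slopes)
logits = model(torch.cat([eeg, fnirs], dim=1))
print(logits.shape)                                  # torch.Size([8, 2])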
Understanding Seismic Anisotropy in Hunt Well of Fort McMurray, Canada
NASA Astrophysics Data System (ADS)
Malehmir, R.; Schmitt, D. R.; Chan, J.
2014-12-01
Seismic imaging plays a vital role in developing geothermal systems as a sustainable energy resource. In this paper, we acquired and processed zero-offset and walk-away VSP and logging data, as well as surface seismic data, in the Athabasca oil sands area, Alberta. The seismic data were carefully processed to better image the geothermal system. Through data processing, the properties of natural fractures, such as orientation and width, were studied, and highly probable permeable zones were mapped along the well, drilled to a depth of 2363 m into crystalline basement rocks. In addition to the logging data, the seismic data were processed to build a reliable image of the subsurface. High-resolution velocity analysis of the multi-component walk-away VSP informed us about the elastic anisotropy in place. Study of the natural and induced fractures, as well as the elastic anisotropy in the seismic data, led us to better map the stress regime around the borehole. The seismic image and the map of fractures help optimize enhanced geothermal stages through hydraulic stimulation. Keywords: geothermal, anisotropy, VSP, logging, Hunt well, seismic
Deep convolutional neural networks for classifying GPR B-scans
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2015-05-01
Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
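An illustrative (not the authors') PyTorch CNN for 2-D B-scan patches, showing how the over-training heuristics mentioned above map onto code: dropout in the classifier and weight regularisation via the optimiser's weight_decay. The input size and layer widths are assumptions.

# Illustrative CNN for classifying 2-D GPR B-scan patches (threat vs. clutter).
import torch
import torch.nn as nn

class BScanCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                          # "dropout" heuristic from the abstract
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

net = BScanCNN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)  # weight regularisation
out = net(torch.randn(4, 1, 64, 64))                 # 4 single-channel 64x64 B-scan patches
print(out.shape)                                     # torch.Size([4, 2])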
Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters
NASA Technical Reports Server (NTRS)
Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)
2018-01-01
A dual objective endoscope for insertion into a cavity of a body for providing a stereoscopic image of a region of interest inside of the body, including an imaging device at the distal end for obtaining optical images of the region of interest (ROI) and processing the optical images to form video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA. The processing circuits convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full color image is collected after three or six sequential illuminations with red, green and blue light.
Volz, Steffen; Hattingen, Elke; Preibisch, Christine; Gasser, Thomas; Deichmann, Ralf
2009-05-01
T2-weighted gradient echo (GE) images yield good contrast of iron-rich structures like the subthalamic nuclei due to microscopic susceptibility induced field gradients, providing landmarks for the exact placement of deep brain stimulation electrodes in Parkinson's disease treatment. An additional advantage is the low radio frequency (RF) exposure of GE sequences. However, T2-weighted images are also sensitive to macroscopic field inhomogeneities, resulting in signal losses, in particular in orbitofrontal and temporal brain areas, limiting anatomical information from these areas. In this work, an image correction method for multi-echo GE data based on evaluation of phase information for field gradient mapping is presented and tested in vivo on a 3 Tesla whole body MR scanner. In a first step, theoretical signal losses are calculated from the gradient maps and a pixelwise image intensity correction is performed. In a second step, intensity corrected images acquired at different echo times TE are combined using optimized weighting factors: in areas not affected by macroscopic field inhomogeneities, data acquired at long TE are weighted more strongly to achieve the contrast required. For large field gradients, data acquired at short TE are favored to avoid signal losses. When compared to the original data sets acquired at different TE and the respective intensity corrected data sets, the resulting combined data sets feature reduced signal losses in areas with major field gradients, while intensity profiles and a contrast-to-noise (CNR) analysis between subthalamic nucleus, red nucleus and the surrounding white matter demonstrate good contrast in deep brain areas.
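A toy numpy sketch of the two-step idea only (not the authors' exact correction): estimate per-pixel signal loss from a field-gradient-derived frequency spread, undo it, then combine echoes with weights that favour long TE where little signal is lost. All physical parameters below are made up for illustration.

# Toy two-step multi-echo combination: intensity correction, then weighted sum.
import numpy as np

TEs = np.array([10e-3, 25e-3, 40e-3])                 # echo times in seconds (assumed)
shape = (64, 64)
rng = np.random.default_rng(0)
freq_spread = np.abs(rng.normal(scale=15.0, size=shape))   # intra-voxel frequency spread [Hz]
true_image = rng.random(shape)

# simulated echoes: T2* decay plus sinc-shaped dephasing loss
images = np.stack([true_image * np.exp(-te / 50e-3) * np.abs(np.sinc(freq_spread * te))
                   for te in TEs])

loss = np.abs(np.sinc(freq_spread[None] * TEs[:, None, None]))   # theoretical loss per echo
corrected = images / np.clip(loss, 0.1, None)                    # step 1: pixelwise intensity correction

weights = loss**2 * TEs[:, None, None]                            # step 2: favour long TE only where
combined = (weights * corrected).sum(axis=0) / (weights.sum(axis=0) + 1e-12)  # little signal is lost
print(combined.shape)                                              # (64, 64)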
The Chandra Deepest Fields in the Infrared: Making the Connection between Normal Galaxies and AGN
NASA Astrophysics Data System (ADS)
Grogin, N. A.; Ferguson, H. C.; Dickinson, M. E.; Giavalisco, M.; Mobasher, B.; Padovani, P.; Williams, R. E.; Chary, R.; Gilli, R.; Heckman, T. M.; Stern, D.; Winge, C.
2001-12-01
Within each of the two Chandra Deepest Fields (CDFs), there are ~10'x15' regions targeted for non-proprietary, deep SIRTF 3.6-24 μm imaging as part of the Great Observatories Origins Deep Survey (GOODS) Legacy program. In advance of the SIRTF observations, the GOODS team has recently begun obtaining non-proprietary, deep ground-based optical and near-IR imaging and spectroscopy over these regions, which contain virtually all of the current ≈1 Msec CXO coverage in the CDF North and much of the ≈1 Msec coverage in the CDF South. In particular, the planned depth of the near-IR imaging (J_AB ~ 25.3; H_AB ~ 24.8; K_AB ~ 24.4) combined with the deep Chandra data can allow us to trace the evolutionary connection between normal galaxies, starbursts, and AGN out to z ~ 1 and beyond. We describe our CDF Archival program, which is integrating these GOODS-supporting observations together with the CDF archival data and other publicly-available datasets in these regions to create a multi-wavelength deep imaging and spectroscopic database available to the entire community. We highlight progress toward near-term science goals of this program, including: (a) pushing constraints on the redshift distribution and spectral-energy distributions of the faintest X-ray sources to the deepest possible levels via photometric redshifts; and (b) better characterizing the heavily-obscured and the high-redshift populations via both a near-IR search for optically-undetected CDF X-ray sources and also X-ray stacking analyses on the CXO-undetected EROs in these fields.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2004-09-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27'x 27') UB/VRI optimized mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4'x 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 x 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench beam combiner with visible and near-infrared imagers utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC/NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
Photoacoustic tomography of foreign bodies in soft biological tissue.
Cai, Xin; Kim, Chulhong; Pramanik, Manojit; Wang, Lihong V
2011-04-01
In detecting small foreign bodies in soft biological tissue, ultrasound imaging suffers from poor sensitivity (52.6%) and specificity (47.2%). Hence, alternative imaging methods are needed. Photoacoustic (PA) imaging takes advantage of strong optical absorption contrast and high ultrasonic resolution. A PA imaging system is employed to detect foreign bodies in biological tissues. To achieve deep penetration, we use near-infrared light ranging from 750 to 800 nm and a 5-MHz spherically focused ultrasonic transducer. PA images were obtained from various targets including glass, wood, cloth, plastic, and metal embedded more than 1 cm deep in chicken tissue. The locations and sizes of the targets from the PA images agreed well with those of the actual samples. Spectroscopic PA imaging was also performed on the objects. These results suggest that PA imaging can potentially be a useful intraoperative imaging tool to identify foreign bodies.
Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik
2016-11-11
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at short range (0-30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
The Palomar Transient Factory: Introduction and Data Release
NASA Astrophysics Data System (ADS)
Surace, Jason Anthony
2015-08-01
The Palomar Transient Factory (PTF) is a synoptic sky survey in operation since 2009. PTF utilizes a 7.1 square degree camera on the Palomar 48-inch Schmidt telescope to survey the sky primarily at a single wavelength (R-band) at a rate of 1000-3000 square degrees a night, to a depth of roughly 20.5. The data are used to detect and study transient and moving objects such as gamma ray bursts, supernovae and asteroids, as well as variable phenomena such as quasars and Galactic stars. The data processing system handles real-time processing and detection of transients, solar system object processing, high photometric precision processing and light curve generation, and long-term archiving and curation. Although a significant scientific installation in and of itself, PTF also serves as the prototype for our next generation project, the Zwicky Transient Facility (ZTF). Beginning operations in 2017, ZTF will feature a 50 square degree camera which will enable scanning of the entire northern visible sky every night. ZTF in turn will serve as a stepping stone to the Large Synoptic Survey Telescope (LSST). We announce the availability of the second PTF public data release, which includes epochal images and catalogs, as well as deep (coadded) reference images and associated catalogs, for the majority of the northern sky. The epochal data span the time period from 2009 through 2012, with various cadences and coverages, typically in the tens or hundreds for most points on the sky. The data are available through both a GUI and software API portal at the Infrared Processing and Analysis Center at Caltech. The PTF and current iPTF projects are multi-partner multi-national collaborations.
Deep Imaging of Extremely Metal-Poor Galaxies
NASA Astrophysics Data System (ADS)
Corbin, Michael
2006-07-01
Conflicting evidence exists regarding whether the most metal-poor and actively star-forming galaxies in the local universe such as I Zw 18 contain evolved stars. We propose to help settle this issue by obtaining deep ACS/HRC U, narrow-V, I, and H-alpha images of nine nearby {z < 0.01} extremely metal-poor {12 + O/H < 7.65} galaxies selected from the Sloan Digital Sky Survey. These objects are only marginally resolved from the ground and appear uniformly blue, strongly motivating HST imaging. The continuum images will establish: (1) If underlying populations of evolved stars are present, by revealing the objects' colors on scales 10 pc, and (2) The presence of any faint tidal features, dust lanes, and globular or super star clusters, all of which constrain the objects' evolutionary states. The H-alpha images, in combination with ground-based echelle spectroscopy, will reveal (1) Whether the objects are producing "superwinds" that are depleting them of their metals; ground-based images of some of them indeed show large halos of ionized gas, and (2) The correspondence of their nebular and stellar emission on scales of a few parsecs, which is important for understanding the "feedback" process by which supernovae and stellar winds regulate star formation. One of the sample objects, CGCG 269-049, lies only 2 Mpc away, allowing the detection of individual red giant stars in it if any are present. We have recently obtained Spitzer images and spectra of this galaxy to determine its dust content and star formation history, which will complement the proposed HST observations. [NOTE: THIS PROPOSAL WAS REDUCED TO FIVE ORBITS, AND ONLY ONE OF THE ORIGINAL TARGETS, CGCG 269-049, AFTER THE PHASE I REVIEW.]
The Top 10 List of Gravitational Lens Candidates from the HUBBLE SPACE TELESCOPE Medium Deep Survey
NASA Astrophysics Data System (ADS)
Ratnatunga, Kavan U.; Griffiths, Richard E.; Ostrander, Eric J.
1999-05-01
A total of 10 good candidates for gravitational lensing have been discovered in the WFPC2 images from the Hubble Space Telescope (HST) Medium Deep Survey (MDS) and archival primary observations. These candidate lenses are unique HST discoveries, i.e., they are faint systems with subarcsecond separations between the lensing objects and the lensed source images. Most of them are difficult objects for ground-based spectroscopic confirmation or for measurement of the lens and source redshifts. Seven are ``strong lens'' candidates that appear to have multiple images of the source. Three are cases in which the single image of the source galaxy has been significantly distorted into an arc. The first two quadruply lensed candidates were reported by Ratnatunga et al. We report on the subsequent eight candidates and describe them with simple models based on the assumption of singular isothermal potentials. Residuals from the simple models for some of the candidates indicate that a more complex model for the potential will probably be required to explain the full structural detail of the observations once they are confirmed to be lenses. We also discuss the effective survey area that was searched for these candidate lens objects.
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters to learn features of images captured by different sensors and assess their quality. These methods may not be able to learn the semantic features that are intimately related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU units in the CNN allow non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors and of varying sizes.
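A minimal PyTorch sketch of the spatial pyramid pooling step: feature maps of arbitrary spatial size are pooled over a fixed set of grids so the fully connected layer always receives a fixed-length vector (the pyramid levels and the helper name spp are assumptions):

# Minimal spatial pyramid pooling (SPP) over conv feature maps.
import torch
import torch.nn.functional as F

def spp(feat, levels=(1, 2, 4)):
    """feat: (N, C, H, W) conv features; returns (N, C * sum(l*l for l in levels))."""
    pooled = [F.adaptive_max_pool2d(feat, output_size=l).flatten(start_dim=1)
              for l in levels]
    return torch.cat(pooled, dim=1)

for h, w in [(32, 48), (21, 21)]:                     # different input sizes, same output length
    feat = torch.randn(2, 64, h, w)
    print(spp(feat).shape)                            # torch.Size([2, 1344]) in both cases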
Single-exposure color digital holography
NASA Astrophysics Data System (ADS)
Feng, Shaotong; Wang, Yanhui; Zhu, Zhuqing; Nie, Shouping
2010-11-01
In this paper, we report a method for color image reconstruction by recording only a single multi-wavelength hologram. In the recording process, three lasers of different wavelengths emitting in the red, green and blue regions illuminate the object, and the object diffraction fields arrive at the hologram plane simultaneously. Three reference beams with different spatial angles interfere with the corresponding object diffraction fields on the hologram plane, respectively. Finally, a series of sub-holograms is incoherently overlapped on the CCD and recorded as a single multi-wavelength hologram. Angular division multiplexing is applied to the reference beams so that the spatial spectra of the multiple recordings are separated in the Fourier plane. In the reconstruction process, the multi-wavelength hologram is Fourier transformed into its Fourier plane, where the spatial spectra of the different wavelengths are separated and can be easily extracted by frequency filtering. The extracted spectra are used to reconstruct the corresponding monochromatic complex amplitudes, which are then synthesized to reconstruct the color image. The single-exposure recording technique is convenient for real-time image processing applications. However, the quality of the reconstructed images is affected by speckle noise; how to improve image quality requires further research.
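A simplified numpy sketch of the extraction step only: Fourier-transform the single multi-wavelength hologram, window the spectral island belonging to each reference-beam carrier, and inverse-transform to obtain that colour's complex field. The carrier positions and filter radius are assumptions, and numerical propagation back to the object plane is omitted.

# Frequency filtering of one angularly multiplexed hologram, per colour channel.
import numpy as np

def extract_channel(hologram, carrier_uv, radius):
    """hologram: 2-D real array; carrier_uv: (u, v) spectral offset of one channel."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    v, u = np.indices(hologram.shape)
    cy, cx = np.array(hologram.shape) // 2
    mask = (u - cx - carrier_uv[0])**2 + (v - cy - carrier_uv[1])**2 < radius**2
    return np.fft.ifft2(np.fft.ifftshift(H * mask))    # complex amplitude of that channel

hologram = np.random.rand(256, 256)                     # stands in for the recorded hologram
channels = {"R": (60, 0), "G": (0, 60), "B": (-60, 0)}  # assumed carrier offsets per wavelength
fields = {c: extract_channel(hologram, uv, radius=25) for c, uv in channels.items()}
color = np.stack([np.abs(fields[c]) for c in "RGB"], axis=-1)   # synthesised colour image
print(color.shape)                                       # (256, 256, 3)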
Ball-scale based hierarchical multi-object recognition in 3D medical images
NASA Astrophysics Data System (ADS)
Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian
2010-03-01
This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.
2015-10-01
malignant PNs treated with stereotactic ablative radiotherapy (SABR) with those of the lung. Methods: We analyzed breath-hold images of 30...patients with malignant PNs who underwent SABR in our department. A parametric nonrigid transformation model based on multi-level B-spline guided by Sum of...and 50 of 4D CT and deep inhale and natural exhale of breath-hold CT images of 30 MPN treated with stereotactic ablative radiotherapy (SABR). The
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyland, Kristina; Lacy, Mark; Sajina, Anna
We apply The Tractor image modeling code to improve upon existing multi-band photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS). SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 μm over five well-studied deep fields spanning 18 deg². In concert with data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to provide a census of the properties of massive galaxies out to z ≈ 5. To accomplish this, we are using The Tractor to perform "forced photometry." This technique employs prior measurements of source positions and surface brightness profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic Observations survey to model and fit the fluxes at lower-resolution bands. We discuss our implementation of The Tractor over a square-degree test region within the XMM Large Scale Structure field with deep imaging in 12 NIR/optical bands. Our new multi-band source catalogs offer a number of advantages over traditional position-matched catalogs, including (1) consistent source cross-identification between bands, (2) de-blending of sources that are clearly resolved in the fiducial band but blended in the lower resolution SERVS data, (3) a higher source detection fraction in each band, (4) a larger number of candidate galaxies in the redshift range 5 < z < 6, and (5) a statistically significant improvement in the photometric redshift accuracy as evidenced by the significant decrease in the fraction of outliers compared to spectroscopic redshifts. Thus, forced photometry using The Tractor offers a means of improving the accuracy of multi-band extragalactic surveys designed for galaxy evolution studies. We will extend our application of this technique to the full SERVS footprint in the future.
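A conceptual sketch of forced photometry (not The Tractor itself): positions and unit-flux profiles fixed from a fiducial band define templates in the lower-resolution band, and only the per-source fluxes are fit, here by linear least squares on synthetic data.

# Forced photometry toy example: fluxes fit with positions/profiles held fixed.
import numpy as np

rng = np.random.default_rng(0)
size, sigma_psf = 48, 2.5
yy, xx = np.indices((size, size))

def template(x0, y0):                                       # unit-flux PSF at a fixed position
    g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma_psf**2))
    return g / g.sum()

positions = [(14.3, 20.1), (17.8, 22.4), (33.0, 10.5)]      # from the fiducial band (blended pair!)
true_fluxes = np.array([120.0, 80.0, 200.0])

A = np.stack([template(x, y).ravel() for x, y in positions], axis=1)
image = (A @ true_fluxes).reshape(size, size) + rng.normal(scale=0.2, size=(size, size))

fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)  # fit fluxes only, positions held fixed
print(np.round(fluxes, 1))                                   # approximately [120.  80. 200.]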
1996-01-29
In this false color image of Neptune, objects that are deep in the atmosphere are blue, while those at higher altitudes are white. The image was taken by Voyager 2 wide-angle camera through an orange filter and two different methane filters. http://photojournal.jpl.nasa.gov/catalog/PIA00051
VizieR Online Data Catalog: R-band light curves of type II supernovae (Rubin+, 2016)
NASA Astrophysics Data System (ADS)
Rubin, A.; Gal-Yam, A.; De Cia, A.; Horesh, A.; Khazov, D.; Ofek, E. O.; Kulkarni, S. R.; Arcavi, I.; Manulis, I.; Yaron, O.; Vreeswijk, P.; Kasliwal, M. M.; Ben-Ami, S.; Perley, D. A.; Cao, Y.; Cenko, S. B.; Rebbapragada, U. D.; Wozniak, P. R.; Filippenko, A. V.; Clubb, K. I.; Nugent, P. E.; Pan, Y.-C.; Badenes, C.; Howell, D. A.; Valenti, S.; Sand, D.; Sollerman, J.; Johansson, J.; Leonard, D. C.; Horst, J. C.; Armen, S. F.; Fedrow, J. M.; Quimby, R. M.; Mazzali, P.; Pian, E.; Sternberg, A.; Matheson, T.; Sullivan, M.; Maguire, K.; Lazarevic, S.
2016-05-01
Our sample consists of 57 SNe from the PTF (Law et al. 2009PASP..121.1395L; Rau et al. 2009PASP..121.1334R) and the intermediate Palomar Transient Factory (iPTF; Kulkarni 2013ATel.4807....1K) surveys. Data were routinely collected by the Palomar 48-inch survey telescope in the Mould R-band. Follow-up observations were conducted mainly with the robotic 60-inch telescope using an SDSS r-band filter, with additional telescopes providing supplementary photometry and spectroscopy (see Gal-Yam et al. 2011, J/ApJ/736/159). The full list of SNe, their coordinates, and classification spectra are presented in Table 1. Most of the spectra were obtained with the Double Spectrograph on the 5m Hale telescope at Palomar Observatory, the Kast spectrograph on the Shane 3m telescope at Lick Observatory, the Low Resolution Imaging Spectrometer (LRIS) on the Keck I 10m telescope, and the DEep Imaging Multi-Object Spectrograph (DEIMOS) on the Keck II 10m telescope. (2 data files).
NASA Astrophysics Data System (ADS)
Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.
2017-02-01
Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. The multi-objective algorithms currently used as the internal optimization engine are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, which allows the impact of such updates on the objectives at hand to be exploited faster through partial evaluations. We performed experiments on three registration problems. In particular, an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position were considered. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
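A hedged sketch of the partial-evaluation idea: when only a few parameters of a solution change, only the objective terms that depend on them are recomputed instead of re-evaluating the full registration cost. The decomposable toy cost below is an assumption, not the paper's registration objective.

# Partial evaluations: update a small parameter subset, patch the cached terms.
import numpy as np

rng = np.random.default_rng(0)
n_points = 200
target = rng.normal(size=n_points)

def term(params, i):                       # cost contribution of grid point i
    return (params[i] - target[i])**2

params = rng.normal(size=n_points)
terms = np.array([term(params, i) for i in range(n_points)])
total = terms.sum()                        # one full evaluation up front

for _ in range(1000):                      # GOMEA-style small partial updates
    subset = rng.choice(n_points, size=3, replace=False)
    old = params[subset].copy()
    params[subset] += rng.normal(scale=0.1, size=3)
    new_terms = np.array([term(params, i) for i in subset])
    new_total = total - terms[subset].sum() + new_terms.sum()   # partial evaluation
    if new_total <= total:                 # keep improvements, revert otherwise
        total, terms[subset] = new_total, new_terms
    else:
        params[subset] = old

print("final cost:", round(float(total), 4))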
A spherical aberration-free microscopy system for live brain imaging.
Ue, Yoshihiro; Monai, Hiromu; Higuchi, Kaori; Nishiwaki, Daisuke; Tajima, Tetsuya; Okazaki, Kenya; Hama, Hiroshi; Hirase, Hajime; Miyawaki, Atsushi
2018-06-02
The high-resolution in vivo imaging of mouse brain for quantitative analysis of fine structures, such as dendritic spines, requires objectives with high numerical apertures (NAs) and long working distances (WDs). However, this imaging approach is often hampered by spherical aberration (SA) that results from the mismatch of refractive indices in the optical path and becomes more severe with increasing depth of target from the brain surface. Whereas a revolving objective correction collar has been designed to compensate SA, its adjustment requires manual operation and is inevitably accompanied by considerable focal shift, making it difficult to acquire the best image of a given fluorescent object. To solve the problems, we have created an objective-attached device and formulated a fast iterative algorithm for the realization of an automatic SA compensation system. The device coordinates the collar rotation and the Z-position of an objective, enabling correction collar adjustment while stably focusing on a target. The algorithm provides the best adjustment on the basis of the calculated contrast of acquired images. Together, they enable the system to compensate SA at a given depth. As proof of concept, we applied the SA compensation system to in vivo two-photon imaging with a 25 × water-immersion objective (NA, 1.05; WD, 2 mm). It effectively reduced SA regardless of location, allowing quantitative and reproducible analysis of fine structures of YFP-labeled neurons in the mouse cerebral cortical layers. Interestingly, although the cortical structure was optically heterogeneous along the z-axis, the refractive index of each layer could be assessed on the basis of the compensation degree. It was also possible to make fully corrected three-dimensional reconstructions of YFP-labeled neurons in live brain samples. Our SA compensation system, called Deep-C, is expected to bring out the best in all correction-collar-equipped objectives for imaging deep regions of heterogeneous tissues. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Automatic 3D power line reconstruction of multi-angular imaging power line inspection system
NASA Astrophysics Data System (ADS)
Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei
2007-06-01
We develop a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between the high-voltage power line and surrounding objects, and to alert when the warning threshold is exceeded. Our multi-angular imaging power line inspection system generates a DSM of the power line passage, which comprises the ground surface and ground objects such as trees and houses. For the purpose of revealing dangerous regions, where ground objects are too close to the power line, 3D power line information should be extracted at the same time. In order to improve the automation level of the extraction and reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented. It is achieved by using the epipolar constraint and prior knowledge of the pole tower's height. After that, the proper 3D power line information can be obtained by space intersection using the found homologous projections. The flight experiment results show that the proposed method can successfully reconstruct the 3D power line, and that the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.
High-speed railway real-time localization auxiliary method based on deep neural network
NASA Astrophysics Data System (ADS)
Chen, Dongjie; Zhang, Wensheng; Yang, Yang
2017-11-01
The high-speed railway intelligent monitoring and management system is composed of schedule integration, geographic information, location services, and data mining technology for the integration of time and space data. Auxiliary localization is a significant submodule of the intelligent monitoring system. In practical applications, the general approach is to capture image sequences of the components with a high-definition camera and to apply digital image processing, target detection, tracking, and even behavior analysis methods. In this paper, we present an end-to-end character recognition method based on a deep CNN called YOLO-toc for high-speed railway pillar plate numbers. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art performance in real-time detection, with nearly 50 fps achieved on a GPU (GTX960). Finally, we realize a real-time, high-accuracy pillar plate number recognition system and integrate natural scene OCR into a dedicated classification YOLO-toc model.
VizieR Online Data Catalog: MACT survey. I. Opt. spectroscopy in Subaru Deep Field (Ly+, 2016)
NASA Astrophysics Data System (ADS)
Ly, C.; Malhotra, S.; Malkan, M. A.; Rigby, J. R.; Kashikawa, N.; de Los Reyes, M. A.; Rhoads, J. E.
2016-10-01
The primary results of this paper are based on optical spectroscopy conducted with Keck's Deep Imaging Multi-Object Spectrograph (DEIMOS) and MMT's Hectospec. In total, we obtain 3243 optical spectra for 1911 narrowband/intermediate-band excess emitters (roughly 20% of our narrowband/intermediate-band excess samples), and successfully detect emission lines to determine redshift for 1493 galaxies or 78% of the targeted sample. The MMT observations were conducted on 2008 March 13, 2008 April 10-11, 2008 April 14, 2014 February 27-28, 2014 March 25, and 2014 March 28-31, and correspond to the equivalent of three full nights. The Keck observations were conducted on 2004 April 23-24, 2008 May 01-02, 2009 April 25-28, 2014 May 02, and 2015 March 17/19/26. The majority of the observations were obtained in 2014-2015. The 2004 spectroscopic observations have been discussed in Kashikawa et al. (2006, J/ApJ/648/7) and Ly07 (J/ApJ/657/738), and the 2008-2009 data have been discussed in Kashikawa et al. (2011ApJ...734..119K). See section 2.2 for further details. The Subaru Deep Field (SDF) has been imaged with: (1) GALEX in both the FUV and NUV bands; (2) KPNO's Mayall telescope using MOSAIC in U; (3) the Subaru telescope with Suprime-Cam in 14 bands (BVRci'z'zbzr, and five narrowband and two intermediate-band filters); (4) KPNO's Mayall telescope using NEWFIRM in H; (5) UKIRT using WFCAM in J and K; and (6) Spitzer in the four IRAC bands (3.6, 4.5, 5.8, and 8.0 μm). Most of these imaging data have been discussed in Ly et al. (2011ApJ...735...91L), except for the WFCAM J-band data and most of the NEWFIRM H-band data. The more recent NEWFIRM imaging data were acquired on 2012 March 06-07 and 2013 March 27-30. The WFCAM data were obtained on 2005 April 14-15, 2010 March 15-20, and 2010 April 22-23. See section 4.4 for further details. (11 data files).
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a-priori grid alignment with image structures/areas that are expected to deform more. This allows (far) less grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.
Choi, Hongyoon; Ha, Seunggyun; Im, Hyung Jun; Paek, Sun Ha; Lee, Dong Soo
2017-01-01
Dopaminergic degeneration is a pathologic hallmark of Parkinson's disease (PD), which can be assessed by dopamine transporter imaging such as FP-CIT SPECT. Until now, imaging has been routinely interpreted by humans, which can show interobserver variability and result in inconsistent diagnoses. In this study, we developed a deep learning-based FP-CIT SPECT interpretation system to refine the imaging diagnosis of Parkinson's disease. This system, trained on SPECT images of PD patients and normal controls, shows high classification accuracy comparable with experts' evaluations that refer to quantification results. Its high accuracy was validated in an independent cohort composed of patients with PD and nonparkinsonian tremor. In addition, we showed that some patients clinically diagnosed as PD who have scans without evidence of dopaminergic deficit (SWEDD), an atypical subgroup of PD, could be reclassified by our automated system. Our results suggest that the deep learning-based model can accurately interpret FP-CIT SPECT and overcome the variability of human evaluation. It could help the imaging diagnosis of patients with uncertain Parkinsonism and provide objective patient group classification, particularly for SWEDD, in further clinical studies.
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on the fidelity and authenticity of the simulated IR images. For the evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large numbers of IR images. Then, we present the evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which is an effective data augmentation method for real IR images.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-01-01
Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
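An illustrative sketch of the decode-and-match step with synthetic data: ridge decoders map fMRI patterns to DNN-layer features, and a decoded feature vector is identified by correlating it against category-average features (in the study the decoders are trained on stimulus-induced activity and applied to sleep data; all dimensions here are made up):

# Decode DNN features from "brain" patterns, then identify the category by correlation.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_feat, n_categories = 300, 500, 100, 10

W = rng.normal(size=(n_voxels, n_feat))                      # synthetic voxel-to-feature link
fmri_train = rng.normal(size=(n_train, n_voxels))
feat_train = fmri_train @ W + 0.1 * rng.normal(size=(n_train, n_feat))

decoder = Ridge(alpha=10.0).fit(fmri_train, feat_train)      # multi-output feature decoder

category_means = rng.normal(size=(n_categories, n_feat))     # category-average DNN features
true_cat = 3
dream_fmri = category_means[true_cat] @ np.linalg.pinv(W) + 0.1 * rng.normal(size=n_voxels)
decoded = decoder.predict(dream_fmri[None])[0]

corr = [np.corrcoef(decoded, m)[0, 1] for m in category_means]
print("identified category:", int(np.argmax(corr)), "(true:", true_cat, ")")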
NASA Astrophysics Data System (ADS)
White, Glenn; Kohno, Kotaro; Matsuhara, Hideo; Matsuura, Shuji; Hanami, Hitoshi; Lee, Hyung Mok; Pearson, Chris; Takagi, Toshi; Serjeant, Stephen; Jeong, Woongseob; Oyabu, Shinki; Shirahata, Mai; Nakanishi, Kouichiro; Figueredo, Elysandra; Etxaluze, Mireya
2007-04-01
We propose deep 20 cm observations supporting the AKARI (3-160 micron)/ASTE/AzTEC (1.1 mm) SEP ultra deep ('Oyabu Field') survey of an extremely low cirrus region at the South Ecliptic Pole. Our combined IR/mm/Radio survey addresses the questions: How do protogalaxies and protospheroids form and evolve? How do AGN link with ULIRGs in their birth and evolution? What is the nature of the mm/submm extragalactic source population? We will address these by sampling the star formation history in the early universe to at least z~2. Compared to other Deep Surveys, a) AKARI multi-band IR measurements allow precision photo-z estimates of optically obscured objects, b) our multi-waveband contiguous area will mitigate effects of cosmic variance, c) the low cirrus noise at the SEP (< 0.08 MJy/sr) rivals that of the Lockman Hole "Astronomy's other ultra-deep 'cosmological window'", and d) our coverage of four FIR bands will characterise the far-IR dust emission hump of our starburst galaxies better than SPITZER's two MIPS bands allow. The ATCA data are crucial to galaxy identification, and determining the star formation rates and intrinsic luminosities through this unique Southern cosmological window.
Computational optical tomography using 3-D deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Nguyen, Thanh; Bui, Vy; Nehmetallah, George
2018-04-01
Deep convolutional neural networks (DCNNs) offer a promising performance for many image processing areas, such as super-resolution, deconvolution, image classification, denoising, and segmentation, with outstanding results. Here, we develop for the first time, to our knowledge, a method to perform 3-D computational optical tomography using 3-D DCNN. A simulated 3-D phantom dataset was first constructed and converted to a dataset of phase objects imaged on a spatial light modulator. For each phase image in the dataset, the corresponding diffracted intensity image was experimentally recorded on a CCD. We then experimentally demonstrate the ability of the developed 3-D DCNN algorithm to solve the inverse problem by reconstructing the 3-D index of refraction distributions of test phantoms from the dataset from their corresponding diffraction patterns.
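For orientation, a minimal sketch (not the authors' architecture) of a 3-D convolutional network that regresses a refractive-index volume from a volumetric stack of diffraction-intensity measurements, assuming PyTorch and inputs of shape (batch, 1, D, H, W):

```python
import torch
import torch.nn as nn

class Tomo3DCNN(nn.Module):
    """Toy 3-D DCNN mapping an intensity volume to a 3-D index-of-refraction volume."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),   # predicted refractive-index volume
        )

    def forward(self, x):
        return self.net(x)

# Training would regress predictions against the known 3-D phantoms, e.g.
# loss = torch.nn.functional.mse_loss(model(intensity_stack), refractive_index_volume)
```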
Using transfer learning to detect galaxy mergers
NASA Astrophysics Data System (ADS)
Ackermann, Sandro; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.; Turp, M. Dennis
2018-05-01
We investigate the use of deep convolutional neural networks (deep CNNs) for automatic visual detection of galaxy mergers. Moreover, we investigate the use of transfer learning in conjunction with CNNs, by retraining networks first trained on pictures of everyday objects. We test the hypothesis that transfer learning is useful for improving classification performance for small training sets. This would make transfer learning useful for finding rare objects in astronomical imaging datasets. We find that these deep learning methods perform significantly better than current state-of-the-art merger detection methods based on nonparametric systems like CAS and GM20. Our method is end-to-end and robust to image noise and distortions; it can be applied directly without image preprocessing. We also find that transfer learning can act as a regulariser in some cases, leading to better overall classification accuracy (p = 0.02). Transfer learning on our full training set leads to a lowered error rate from 0.0381 down to 0.0321, a relative improvement of 15%. Finally, we perform a basic sanity-check by creating a merger sample with our method, and comparing with an already existing, manually created merger catalogue in terms of colour-mass distribution and stellar mass function.
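A hedged sketch of the transfer-learning recipe described above: start from a network pre-trained on pictures of everyday objects (ImageNet) and retrain it for binary merger / non-merger classification. The backbone choice, frozen layers, and hyperparameters are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)          # weights learned on everyday objects
model.fc = nn.Linear(model.fc.in_features, 2)     # new head: merger vs. non-merger

# Optionally freeze early layers so only the head (and the last block) adapt to the
# small astronomical training set.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()
```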
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Sarah H.; Sullivan, Mark; Bundy, Kevin
2011-11-10
We present new measures of the evolving scaling relations between stellar mass, luminosity and rotational velocity for a morphologically inclusive sample of 129 disk-like galaxies with z_AB < 22.5 in the redshift range 0.2
2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.
Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J
2010-03-29
A method of tomographic phase retrieval is developed for multi-material objects whose components each has a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index for each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 Synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise, compared to conventional absorption based tomography.
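For context, the sketch below shows the simpler single-distance, single-material Paganin-type phase retrieval built on the Transport-of-Intensity equation; the paper extends this idea to multi-material objects, so this is an illustrative assumption-laden baseline, not the authors' algorithm. It assumes one material with known delta and beta, propagation distance z, wavelength lam, and pixel size px.

```python
import numpy as np

def paganin_thickness(intensity, flat, delta, beta, z, lam, px):
    """Recover projected thickness (same units as px) from one defocused image."""
    mu = 4.0 * np.pi * beta / lam                      # linear attenuation coefficient
    contact = intensity / flat                         # flat-field normalised image
    ky = 2 * np.pi * np.fft.fftfreq(contact.shape[0], d=px)
    kx = 2 * np.pi * np.fft.fftfreq(contact.shape[1], d=px)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filt = 1.0 + (z * delta / mu) * k2                 # low-pass TIE filter
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(contact) / filt))
    return -np.log(np.clip(smoothed, 1e-8, None)) / mu
```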
NASA Tech Briefs, December 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics include: A Deep Space Network Portable Radio Science Receiver; Detecting Phase Boundaries in Hard-Sphere Suspensions; Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery; Very-Long-Distance Remote Hearing and Vibrometry; Using GPS to Detect Imminent Tsunamis; Stream Flow Prediction by Remote Sensing and Genetic Programming; Pilotless Frame Synchronization Using LDPC Code Constraints; Radiometer on a Chip; Measuring Luminescence Lifetime With Help of a DSP; Modulation Based on Probability Density Functions; Ku Telemetry Modulator for Suborbital Vehicles; Photonic Links for High-Performance Arraying of Antennas; Reconfigurable, Bi-Directional Flexfet Level Shifter for Low-Power, Rad-Hard Integration; Hardware-Efficient Monitoring of I/O Signals; Video System for Viewing From a Remote or Windowless Cockpit; Spacesuit Data Display and Management System; IEEE 1394 Hub With Fault Containment; Compact, Miniature MMIC Receiver Modules for an MMIC Array Spectrograph; Waveguide Transition for Submillimeter-Wave MMICs; Magnetic-Field-Tunable Superconducting Rectifier; Bonded Invar Clip Removal Using Foil Heaters; Fabricating Radial Groove Gratings Using Projection Photolithography; Gratings Fabricated on Flat Surfaces and Reproduced on Non-Flat Substrates; Method for Measuring the Volume-Scattering Function of Water; Method of Heating a Foam-Based Catalyst Bed; Small Deflection Energy Analyzer for Energy and Angular Distributions; Polymeric Bladder for Storing Liquid Oxygen; Pyrotechnic Simulator/Stray-Voltage Detector; Inventions Utilizing Microfluidics and Colloidal Particles; RuO2 Thermometer for Ultra-Low Temperatures; Ultra-Compact, High-Resolution LADAR System for 3D Imaging; Dual-Channel Multi-Purpose Telescope; Objective Lens Optimized for Wavefront Delivery, Pupil Imaging, and Pupil Ghosting; CMOS Camera Array With Onboard Memory; Quickly Approximating the Distance Between Two Objects; Processing Images of Craters for Spacecraft Navigation; Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System; Rover Slip Validation and Prediction Algorithm; Safety and Quality Training Simulator; Supply-Chain Optimization Template; Algorithm for Computing Particle/Surface Interactions; Cryogenic Pupil Alignment Test Architecture for Aberrated Pupil Images; and Thermal Transport Model for Heat Sink Design.
Building the Case for SNAP: Creation of Multi-Band, Simulated Images With Shapelets
NASA Technical Reports Server (NTRS)
Ferry, Matthew A.
2005-01-01
Dark energy has simultaneously been the most elusive and most important phenomenon in the shaping of the universe. A case for a proposed space telescope called SNAP (SuperNova Acceleration Probe) is being built, a crucial component of which is image simulations. One method for this is "Shapelets," developed at Caltech. Shapelets form an orthonormal basis and are uniquely able to represent realistic space images and create new images based on real ones. Previously, simulations were created using the Hubble Deep Field (HDF) as a basis set in one band. In this project, image simulations are created using the 4 bands of the Hubble Ultra Deep Field (UDF) as a basis set. This provides a better basis for simulations because (1) the survey is deeper, (2) they have a higher resolution, and (3) this is a step closer to simulating the 9 bands of SNAP. Image simulations are achieved by detecting sources in the UDF, decomposing them into shapelets, tweaking their parameters in realistic ways, and recomposing them into new images. Morphological tests were also run to verify the realism of the simulations. They have a wide variety of uses, including the ability to create weak gravitational lensing simulations.
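A hedged sketch of the shapelet machinery mentioned above: dimensionful 1-D Cartesian shapelet (Gauss-Hermite) basis functions, whose products give the 2-D basis used to decompose and recompose source cut-outs. The scale beta, the orders, and the function names are illustrative assumptions.

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def shapelet_1d(n, x, beta):
    """n-th dimensionful Cartesian shapelet: phi_n(x/beta) / sqrt(beta)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                        # select the physicists' Hermite polynomial H_n
    u = np.asarray(x) / beta
    norm = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi) * beta)
    return norm * hermval(u, coeffs) * np.exp(-0.5 * u ** 2)

def shapelet_2d(n1, n2, x, y, beta):
    """Separable 2-D basis function B_{n1,n2}(x, y; beta) on a grid (rows = y, cols = x)."""
    return np.outer(shapelet_1d(n1, y, beta), shapelet_1d(n2, x, beta))

# A source cut-out can then be decomposed by projecting onto B_{n1,n2} up to some maximum
# order, its coefficients perturbed in realistic ways, and the basis summed back up.
```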
Bessel light sheet structured illumination microscopy
NASA Astrophysics Data System (ADS)
Noshirvani Allahabadi, Golchehr
Biomedical study researchers using animals to model disease and treatment need fast, deep, noninvasive, and inexpensive multi-channel imaging methods. Traditional fluorescence microscopy meets those criteria to an extent. Specifically, two-photon and confocal microscopy, the two most commonly used methods, are limited in penetration depth, cost, resolution, and field of view. In addition, two-photon microscopy has limited ability in multi-channel imaging. Light sheet microscopy, a fast developing 3D fluorescence imaging method, offers attractive advantages over traditional two-photon and confocal microscopy. Light sheet microscopy is much more applicable for in vivo 3D time-lapsed imaging, owing to its selective illumination of tissue layer, superior speed, low light exposure, high penetration depth, and low levels of photobleaching. However, standard light sheet microscopy using Gaussian beam excitation has two main disadvantages: 1) the field of view (FOV) of light sheet microscopy is limited by the depth of focus of the Gaussian beam. 2) Light-sheet images can be degraded by scattering, which limits the penetration of the excitation beam and blurs emission images in deep tissue layers. While two-sided sheet illumination, which doubles the field of view by illuminating the sample from opposite sides, offers a potential solution, the technique adds complexity and cost to the imaging system. We investigate a new technique to address these limitations: Bessel light sheet microscopy in combination with incoherent nonlinear Structured Illumination Microscopy (SIM). Results demonstrate that, at visible wavelengths, Bessel excitation penetrates up to 250 microns deep in the scattering media with single-side illumination. Bessel light sheet microscope achieves confocal level resolution at a lateral resolution of 0.3 micron and an axial resolution of 1 micron. Incoherent nonlinear SIM further reduces the diffused background in Bessel light sheet images, resulting in confocal quality images in thick tissue. The technique was applied to live transgenic zebra fish tg(kdrl:GFP), and the sub-cellular structure of fish vasculature genetically labeled with GFP was captured in 3D. The superior speed of the microscope enables us to acquire signal from 200 layers of a thick sample in 4 minutes. The compact microscope uses exclusively off-the-shelf components and offers a low-cost imaging solution for studying small animal models or tissue samples.
NASA Technical Reports Server (NTRS)
1997-01-01
Clouds and hazes at various altitudes within the dynamic Jovian atmosphere are revealed by multi-color imaging taken by the Near-Infrared Mapping Spectrometer (NIMS) onboard the Galileo spacecraft. These images were taken during the second orbit (G2) on September 5, 1996 from an early-morning vantage point 2.1 million kilometers (1.3 million miles) above Jupiter. They show the planet's appearance as viewed at various near-infrared wavelengths, with distinct differences due primarily to variations in the altitudes and opacities of the cloud systems. The top left and right images, taken at 1.61 microns and 2.73 microns respectively, show relatively clear views of the deep atmosphere, with clouds down to a level about three times the atmospheric pressure at the Earth's surface.
By contrast, the middle image in the top row, taken at 2.17 microns, shows only the highest altitude clouds and hazes. This wavelength is severely affected by the absorption of light by hydrogen gas, the main constituent of Jupiter's atmosphere. Therefore, only the Great Red Spot, the highest equatorial clouds, a small feature at mid-northern latitudes, and thin, high photochemical polar hazes can be seen. In the lower left image, at 3.01 microns, deeper clouds can be seen dimly against gaseous ammonia and methane absorption. In the lower middle image, at 4.99 microns, the light observed is the planet's own indigenous heat from the deep, warm atmosphere. The false color image (lower right) succinctly shows the various cloud and haze levels seen in the Jovian atmosphere. This image indicates the temperature and altitude at which the light being observed is produced. Thermally-rich red areas denote high temperatures from photons in the deep atmosphere leaking through minimal cloud cover; green denotes cool temperatures of the tropospheric clouds; blue denotes the cold of the upper troposphere and lower stratosphere. The polar regions appear purplish, because small-particle hazes allow leakage and reflectivity, while yellowish regions at temperate latitudes may indicate tropospheric clouds with small particles which also allow leakage. A mix of high and low-altitude aerosols causes the aqua appearance of the Great Red Spot and equatorial region. The Jet Propulsion Laboratory manages the Galileo mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web Galileo mission home page at http://galileo.jpl.nasa.gov.
Object-Part Attention Model for Fine-Grained Image Classification
NASA Astrophysics Data System (ADS)
Peng, Yuxin; He, Xiangteng; Zhao, Junjie
2018-03-01
Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: (1) Relying on object or part annotations which are heavily labor consuming. (2) Ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification, and the main novelties are: (1) Object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotions. (2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative, and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Comparing with more than 10 state-of-the-art methods on 4 widely-used datasets, our OPAM approach achieves the best performance.
Recognition-by-Components: A Theory of Human Image Understanding.
ERIC Educational Resources Information Center
Biederman, Irving
1987-01-01
The theory proposed (recognition-by-components) hypothesizes the perceptual recognition of objects to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components. Experiments on the perception of briefly presented pictures support the theory. (Author/LMO)
The Hubble Deep UV Legacy Survey (HDUV)
NASA Astrophysics Data System (ADS)
Montes, Mireia; Oesch, Pascal
2015-08-01
Deep HST imaging has shown that the overall star formation density and UV light density at z>3 is dominated by faint, blue galaxies. Remarkably, very little is known about the equivalent galaxy population at lower redshifts. Understanding how these galaxies evolve across the epoch of peak cosmic star-formation is key to a complete picture of galaxy evolution. Here, we present a new HST WFC3/UVIS program, the Hubble Deep UV (HDUV) legacy survey. The HDUV is a 132 orbit program to obtain deep imaging in two filters (F275W and F336W) over the two CANDELS Deep fields. We will cover ~100 arcmin2, sampling the rest-frame far-UV at z>~0.5; this will provide a unique legacy dataset with exquisite HST multi-wavelength imaging as well as ancillary HST grism NIR spectroscopy for a detailed study of faint, star-forming galaxies at z~0.5-2. The HDUV will enable a wealth of research by the community, which includes tracing the evolution of the FUV luminosity function over the peak of the star formation rate density from z~3 down to z~0.5, measuring the physical properties of sub-L* galaxies, and characterizing resolved stellar populations to decipher the build-up of the Hubble sequence from sub-galactic clumps. This poster provides an overview of the HDUV survey and presents the reduced data products and catalogs which will be released to the community, reaching down to 27.5-28.0 mag at 5 sigma.
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD): its decomposition process uses an adding-window principle, which effectively resolves the signal-concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the Sum-Modified-Laplacian was used and a scheme based on visual feature contrast was adopted; for the residue coefficients, pixel values were selected based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with those of three other fusion methods. The experimental results show that the proposed fusion approach is effective and fuses multi-focus images better than some traditional methods.
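A hedged sketch of a Sum-Modified-Laplacian (SML) focus measure used to pick, per pixel, the sharper of two source images. The paper applies this idea to WEMD BIMF components; here it is shown directly on grayscale inputs as a simplified illustration, with the window size and step as assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, window=5):
    """Per-pixel focus measure: windowed sum of the squared modified Laplacian."""
    ml = (np.abs(2 * img - np.roll(img, step, axis=0) - np.roll(img, -step, axis=0)) +
          np.abs(2 * img - np.roll(img, step, axis=1) - np.roll(img, -step, axis=1)))
    return uniform_filter(ml ** 2, size=window)

def fuse_multifocus(img_a, img_b):
    sml_a, sml_b = sum_modified_laplacian(img_a), sum_modified_laplacian(img_b)
    mask = sml_a >= sml_b                          # True where img_a is in better focus
    return np.where(mask, img_a, img_b)
```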
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained on the output of a multi-channeled optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer with different thickness, or different index of refraction, along the layer. The technique is based on synthesis of multiple peak spatial degree of coherence. This degree of coherence enables us to scan simultaneously different sample points on different altitudes, and thus decreases the acquisition time. The same multi peak degree of coherence is also used for imaging through the absorbing layer. Our entire experiments are performed with a quasi-monochromatic light source. Therefore problems of dispersion and inhomogeneous absorption are avoided.
NASA Astrophysics Data System (ADS)
Treu, Tommaso; Abramson, L.; Bradac, M.; Brammer, G.; Fontana, A.; Henry, A.; Hoag, A.; Huang, K.; Mason, C.; Morishita, T.; Pentericci, L.; Wang, X.
2017-11-01
We propose a carefully designed set of observations of the lensing cluster Abell 2744 to study intrinsically faint magnified galaxies from the epoch of reionization to a redshift of 1, demonstrating and characterizing complementary spectroscopic modes with NIRSPEC and NIRISS. The observations are designed to address the questions: 1) When did reionization happen and what were the sources of reionizing photons? 2) How do baryons cycle in and out of galaxies? This dataset with deep spectroscopy on the cluster and deep multiband NIRCAM imaging in parallel will enable a wealth of investigations and will thus be of interest to a broad section of the astronomical community. The dataset will illustrate the power and challenges of: 1) combining rest frame UV and optical NIRSPEC spectroscopy for galaxies at the epoch of reionization, 2) obtaining spatially resolved emission line maps with NIRISS, 3) combining NIRISS and NIRSPEC spectroscopy. Building on our extensive experience with HST slitless spectroscopy and imaging in clusters of galaxies as part of the GLASS, WISP, SURFSUP, and ASTRODEEP projects, we will provide the following science-enabling products to the community: 1) quantitative comparison of spatially resolved (NIRISS) and spectrally resolved (NIRSPEC) spectroscopy, 2) object-based interactive exploration tools for multi-instrument datasets, 3) an interface for easy forced extraction of slitless spectra based on coordinates, 4) UV-optical spectroscopic templates of high-redshift galaxies, 5) NIRCAM parallel catalogs and a list of 26 z>=9 dropouts for spectroscopic follow-up in Cycle-2.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take account not only of the global semantics, such as correcting color hue and brightness imbalances, but also of the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
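A minimal, hedged sketch of the base/detail idea behind such enhancement. It uses a simple Gaussian multi-scale decomposition (rather than the paper's edge-aware operator), lifts under-exposed regions in the base layer, and boosts the detail layers; all gains and scales are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, sigmas=(2.0, 8.0), detail_gain=1.5, shadow_gain=0.3):
    """img: float array in [0, 1]. Returns an enhanced image in [0, 1]."""
    base = img
    details = []
    for s in sigmas:                       # coarse-to-fine base/detail decomposition
        blurred = gaussian_filter(base, sigma=s)
        details.append(base - blurred)
        base = blurred
    base = base + shadow_gain * (1.0 - base) * base   # lift under-exposed regions
    out = base + sum(detail_gain * d for d in details)
    return np.clip(out, 0.0, 1.0)
```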
Multi-class segmentation of neuronal electron microscopy images using deep learning
NASA Astrophysics Data System (ADS)
Khobragade, Nivedita; Agarwal, Chirag
2018-03-01
Study of connectivity of neural circuits is an essential step towards a better understanding of functioning of the nervous system. With the recent improvement in imaging techniques, high-resolution and high-volume images are being generated requiring automated segmentation techniques. We present a pixel-wise classification method based on Bayesian SegNet architecture. We carried out multi-class segmentation on serial section Transmission Electron Microscopy (ssTEM) images of Drosophila third instar larva ventral nerve cord, labeling the four classes of neuron membranes, neuron intracellular space, mitochondria and glia / extracellular space. Bayesian SegNet was trained using 256 ssTEM images of 256 x 256 pixels and tested on 64 different ssTEM images of the same size, from the same serial stack. Due to high class imbalance, we used a class-balanced version of Bayesian SegNet by re-weighting each class based on their relative frequency. We achieved an overall accuracy of 93% and a mean class accuracy of 88% for pixel-wise segmentation using this encoder-decoder approach. On evaluating the segmentation results using similarity metrics like SSIM and Dice Coefficient, we obtained scores of 0.994 and 0.886 respectively. Additionally, we used the network trained using the 256 ssTEM images of Drosophila third instar larva for multi-class labeling of ISBI 2012 challenge ssTEM dataset.
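A hedged sketch of the class re-weighting step mentioned above, using median-frequency balancing: each class weight is the median class frequency divided by that class's frequency, so rare classes (e.g. mitochondria) contribute more to the loss. The exact balancing scheme used in the paper may differ.

```python
import numpy as np

def median_frequency_weights(label_masks, num_classes=4):
    """label_masks: iterable of integer label images with values in [0, num_classes)."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for mask in label_masks:
        counts += np.bincount(mask.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    weights = np.median(freq) / np.maximum(freq, 1e-12)
    return weights   # pass as per-class weights to the cross-entropy loss
```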
Maximizing fluorescence collection efficiency in multiphoton microscopy
Zinter, Joseph P.; Levene, Michael J.
2011-01-01
Understanding fluorescence propagation through a multiphoton microscope is of critical importance in designing high performance systems capable of deep tissue imaging. Optical models of a scattering tissue sample and the Olympus 20X 0.95NA microscope objective were used to simulate fluorescence propagation as a function of imaging depth for physiologically relevant scattering parameters. The spatio-angular distribution of fluorescence at the objective back aperture derived from these simulations was used to design a simple, maximally efficient post-objective fluorescence collection system. Monte Carlo simulations corroborated by data from experimental tissue phantoms demonstrate collection efficiency improvements of 50% – 90% over conventional, non-optimized fluorescence collection geometries at large imaging depths. Imaging performance was verified by imaging layer V neurons in mouse cortex to a depth of 850 μm. PMID:21934897
NASA Astrophysics Data System (ADS)
Chia, Thomas H.
Multiphoton microscopy is a laser-scanning fluorescence imaging method with extraordinary potential. We describe three innovative multiphoton microscopy techniques across various disciplines. Traditional in vivo fluorescence microscopy of the mammalian brain has a limited penetration depth (<400 microm). We present a method of imaging 1 mm deep into mouse neocortex by using a glass microprism to relay the excitation and emission light. This technique enables simultaneous imaging of multiple cortical layers, including layer V, at an angle typical of slice preparations. At high-magnification imaging using an objective with 1-mm of coverglass correction, resolution was sufficient to resolve dendritic spines on layer V GFP neurons. Functional imaging of blood flow at various neocortical depths is also presented, allowing for quantification of red blood cell flux and velocity. Multiphoton fluorescence lifetime imaging (FLIM) of NADH reveals information on neurometabolism. NADH, an intrinsic fluorescent molecule and ubiquitous metabolic coenzyme, has a lifetime dependent on enzymatic binding. A novel NADH FLIM algorithm is presented that produces images showing spatially distinct NADH fluorescence lifetimes in mammalian brain slices. This program provides advantages over traditional FLIM processing of multi-component lifetime data. We applied this technique to a GFP-GFAP pilocarpine mouse model of temporal lobe epilepsy. Results indicated significant changes in the neurometabolism of astrocytes and neuropil in the cell and dendritic layers of the hippocampus when compared to control tissue. Data obtained with NADH FLIM were subsequently interpreted based on the abnormal activity reported in epileptic tissue. Genuine U.S. Federal Reserve Notes have a consistent, two-component intrinsic fluorescence lifetime. This allows for detection of counterfeit paper money because of its significant differences in fluorescence lifetime when compared to genuine paper money. We used scanning multiphoton laser excitation to sample a ˜4 mm2 region from 54 genuine Reserve Notes. Three types of counterfeit samples were tested. Four out of the nine counterfeit samples fit to a one-component decay. Five out of nine counterfeit samples fit to a two-component model, but are identified as counterfeit due to significant deviations in the longer lifetime component compared to genuine bills.
Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary
2011-08-01
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
Multi-spectral confocal microendoscope for in-vivo imaging
NASA Astrophysics Data System (ADS)
Rouse, Andrew Robert
The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom-built slit-scan confocal microscope fitted with a custom-built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly. The design and performance of the miniature objective and focus assembly are discussed. The 3 mm diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3 μm lateral resolution and 30 μm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range. Each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue are presented to demonstrate the potential of in-vivo multi-spectral imaging.
Multi-dimension feature fusion for action recognition
NASA Astrophysics Data System (ADS)
Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin
2018-04-01
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimensional information in video data. In order to take these characteristics into account simultaneously, we present a novel method that fuses multiple dimensions of features, such as chromatic images, depth and optical flow fields. We built our model on multi-stream deep convolutional networks with the help of temporal segment networks, and we extract discriminative spatial and temporal features by fusing the multi-dimensional ConvNet towers, in which different feature weights are assigned in order to take full advantage of this multi-dimensional information. Our architecture is trained and evaluated on the currently largest and most challenging benchmark, the NTU RGB-D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.
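A hedged sketch of weighted fusion of multi-dimensional streams (e.g. RGB, depth, optical flow): each stream's scores are combined with learned weights. The per-stream backbones are placeholders, not the paper's exact networks.

```python
import torch
import torch.nn as nn

class WeightedStreamFusion(nn.Module):
    def __init__(self, streams, num_classes):
        super().__init__()
        self.streams = nn.ModuleList(streams)               # one ConvNet tower per modality
        self.weights = nn.Parameter(torch.ones(len(streams)))

    def forward(self, inputs):                              # inputs: list of per-stream tensors
        w = torch.softmax(self.weights, dim=0)              # normalised fusion weights
        logits = [net(x) for net, x in zip(self.streams, inputs)]
        return sum(wi * li for wi, li in zip(w, logits))
```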
Present situation and trend of precision guidance technology and its intelligence
NASA Astrophysics Data System (ADS)
Shang, Zhengguo; Liu, Tiandong
2017-11-01
This paper first introduces the basic concepts of precision guidance technology and artificial intelligence technology. It then gives a brief introduction to intelligent precision guidance technology and, with reference to foreign projects developing intelligent weapons based on deep learning (the LRASM missile project, the TRACE project, and the BLADE project), provides an overview of the current state of precision guidance technology abroad. Finally, the future development trend of intelligent precision guidance technology is summarized, mainly covering multi-objective engagement, intelligent classification, weak-target detection and recognition, intelligent anti-jamming in complex environments, and multi-source, multi-missile cooperative engagement.
Mission planning optimization of video satellite for ground multi-object staring imaging
NASA Astrophysics Data System (ADS)
Cui, Kaikai; Xiang, Junhua; Zhang, Yulin
2018-03-01
This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.
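A highly simplified, hedged sketch of an ant-colony planner with per-ant tabu lists for a staring-imaging problem of this kind: targets have a revenue, a visibility window, and a required stare duration, and moving between targets costs maneuver time. The data structures, the heuristic, and the weight lam are illustrative assumptions and not the paper's Tabu-ACO.

```python
import random

def aco_schedule(targets, maneuver, ants=20, iters=100, alpha=1.0, beta=2.0,
                 rho=0.1, lam=0.01, seed=0):
    """targets: list of dicts {revenue, window:(t0,t1), dur}; maneuver[i][j]: slew time."""
    rng = random.Random(seed)
    n = len(targets)
    tau = [[1.0] * n for _ in range(n)]                     # pheromone on "i followed by j"
    best_seq, best_score = [], float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            t, prev, seq, tabu, revenue, slew = 0.0, None, [], set(), 0.0, 0.0
            while True:
                cand = []
                for j in range(n):
                    if j in tabu:
                        continue
                    move = 0.0 if prev is None else maneuver[prev][j]
                    start = max(t + move, targets[j]["window"][0])
                    if start + targets[j]["dur"] <= targets[j]["window"][1]:
                        heur = targets[j]["revenue"] / (move + 1.0)
                        w = (tau[prev][j] if prev is not None else 1.0) ** alpha * heur ** beta
                        cand.append((j, move, start, w))
                if not cand:
                    break
                r, acc = rng.random() * sum(c[3] for c in cand), 0.0
                for j, move, start, w in cand:               # roulette-wheel selection
                    acc += w
                    if acc >= r:
                        break
                seq.append(j); tabu.add(j)
                revenue += targets[j]["revenue"]; slew += move
                t, prev = start + targets[j]["dur"], j
            score = revenue - lam * slew                     # revenue traded against slewing
            if score > best_score:
                best_score, best_seq = score, seq
        for i in range(n):                                    # evaporation + elitist deposit
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for a, b in zip(best_seq, best_seq[1:]):
            tau[a][b] += 1.0
    return best_seq, best_score
```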
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2006-06-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2008-07-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6 field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
Deepest X-Rays Ever Reveal Universe Teeming With Black Holes
NASA Astrophysics Data System (ADS)
2001-03-01
For the first time, astronomers believe they have proof that black holes of all sizes once ruled the universe. NASA's Chandra X-ray Observatory provided the deepest X-ray images ever recorded, and those pictures deliver a novel look at the past 12 billion years of black holes. Two independent teams of astronomers today presented images that contain the faintest X-ray sources ever detected, which include an abundance of active supermassive black holes. "The Chandra data show us that giant black holes were much more active in the past than at present," said Riccardo Giacconi, of Johns Hopkins University and Associated Universities, Inc., Washington, DC. The exposure is known as the "Chandra Deep Field South" since it is located in the Southern Hemisphere constellation of Fornax. "In this million-second image, we also detect relatively faint X-ray emission from galaxies, groups, and clusters of galaxies." The images, known as the Chandra Deep Fields, were obtained during many long exposures over the course of more than a year. Data from the Chandra Deep Field South will be placed in a public archive for scientists beginning today. "For the first time, we are able to use X-rays to look back to a time when normal galaxies were several billion years younger," said Ann Hornschemeier, Pennsylvania State University, University Park. The group's 500,000-second exposure included the Hubble Deep Field North, allowing scientists the opportunity to combine the power of Chandra and the Hubble Space Telescope, two of NASA's Great Observatories. The Penn State team recently acquired an additional 500,000 seconds of data, creating another one-million-second Chandra Deep Field, located in the constellation of Ursa Major. The images are called Chandra Deep Fields because they are comparable to the famous Hubble Deep Field in being able to see further and fainter objects than any image of the universe taken at X-ray wavelengths. Both Chandra Deep Fields are comparable in observation time to the Hubble Deep Fields, but cover a much larger area of the sky. "In essence, it is like seeing galaxies similar to our own Milky Way at much earlier times in their lives," Hornschemeier added. "These data will help scientists better understand star formation and how stellar-sized black holes evolve." Combining infrared and X-ray observations, the Penn State team also found that veils of dust and gas are common around young black holes. Another discovery to emerge from the Chandra Deep Field South is the detection of an extremely distant X-ray quasar, shrouded in gas and dust. "The discovery of this object, some 12 billion light years away, is key to understanding how dense clouds of gas form galaxies, with massive black holes at their centers," said Colin Norman of Johns Hopkins University. The Chandra Deep Field South results were complemented by the extensive use of deep optical observations supplied by the Very Large Telescope of the European Southern Observatory in Garching, Germany. The Penn State team obtained optical spectroscopy and imaging using the Hobby-Eberly Telescope in Ft. Davis, TX, and the Keck Observatory atop Mauna Kea, HI. Chandra's Advanced CCD Imaging Spectrometer was developed for NASA by Penn State and Massachusetts Institute of Technology under the leadership of Penn State Professor Gordon Garmire. NASA's Marshall Space Flight Center, Huntsville, AL, manages the Chandra program for the Office of Space Science, Washington, DC.
TRW, Inc., Redondo Beach, California, is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, MA. More information is available on the Internet at: http://chandra.harvard.edu AND http://chandra.nasa.gov
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometric regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. Tests were performed on remote sensing images and simulated multi-focus images; the experimental results show that the performance of the new method is better than that of the tested methods according to objective evaluation indexes and subjective visual effects.
Li, Weitao; Huang, Dong; Zhang, Yan; Liu, Yangyang; Gu, Yueqing; Qian, Zhiyu
2016-09-01
Photodynamic therapy (PDT) is an effective noninvasive method for tumor treatment. The major challenge in current PDT research is how to quantitatively evaluate therapy effects. To the best of our knowledge, this is the first time multi-parameter detection methods have been combined in PDT. More specifically, we have developed a system that includes high-sensitivity measurement of singlet oxygen, oxygen partial pressure and fluorescence imaging. In this paper, the detection ability of the system was validated with different concentrations of carbon quantum dots. Moreover, the correlation between singlet oxygen and oxygen partial pressure under laser irradiation was observed. The system could detect the signal up to a 0.5 cm tissue depth with 660 nm irradiation and a 1 cm tissue depth with 980 nm irradiation by using up-conversion nanoparticles during PDT in vitro. Furthermore, we obtained the relationship among the concentration of singlet oxygen, oxygen partial pressure and tumor cell viability under certain conditions. The results indicate that the multi-parameter detection system is a promising asset for evaluating deep tumor therapy during PDT. Moreover, the system might potentially be used for further studies in biology and molecular imaging.
A deep learning framework to discern and count microscopic nematode eggs.
Akintayo, Adedotun; Tylka, Gregory L; Singh, Asheesh K; Ganapathysubramanian, Baskar; Singh, Arti; Sarkar, Soumik
2018-06-14
In order to identify and control the menace of destructive pests via microscopic image-based identification, a state-of-the-art deep learning architecture is demonstrated on the parasitic worm, the soybean cyst nematode (SCN), Heterodera glycines. Soybean yield loss is negatively correlated with the density of SCN eggs present in the soil. While there has been progress in automating the extraction of egg-filled cysts and eggs from soil samples, counting SCN eggs obtained from soil samples using computer vision techniques has proven to be an extremely difficult challenge. Here we show that a deep learning architecture developed for rare object identification in clutter-filled images can identify and count SCN eggs. The architecture is trained with expert-labeled data to effectively build a machine learning model for quantifying SCN eggs via microscopic image analysis. We show dramatic improvements in the quantification time of eggs while maintaining human-level accuracy and avoiding inter-rater and intra-rater variabilities. The nematode eggs are correctly identified even in complex, debris-filled images that are often difficult for experts to assess quickly. Our results illustrate the remarkable promise of applying deep learning approaches to phenotyping for pest assessment and management.
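A hedged sketch of a final counting step of the kind implied above: once a network has produced a per-pixel egg-probability map, candidate eggs can be counted as connected components above a probability threshold and a minimum size. The thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def count_eggs(prob_map, threshold=0.5, min_pixels=30):
    """prob_map: 2-D array of per-pixel egg probabilities in [0, 1]."""
    mask = prob_map > threshold
    labels, n = ndimage.label(mask)                       # connected-component labelling
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(sizes >= min_pixels))               # discard debris-sized blobs
```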
NASA Astrophysics Data System (ADS)
Deka, Gitanjal; Nishida, Kentaro; Mochizuki, Kentaro; Ding, Hou-Xian; Fujita, Katsumasa; Chu, Shi-Wei
2018-03-01
Recently, many resolution enhancing techniques are demonstrated, but most of them are severely limited for deep tissue applications. For example, wide-field based localization techniques lack the ability of optical sectioning, and structured light based techniques are susceptible to beam distortion due to scattering/aberration. Saturated excitation (SAX) microscopy, which relies on temporal modulation that is less affected when penetrating into tissues, should be the best candidate for deep-tissue resolution enhancement. Nevertheless, although fluorescence saturation has been successfully adopted in SAX, it is limited by photobleaching, and its practical resolution enhancement is less than two-fold. Recently, we demonstrated plasmonic SAX which provides bleaching-free imaging with three-fold resolution enhancement. Here we show that the three-fold resolution enhancement is sustained throughout the whole working distance of an objective, i.e., 200 μm, which is the deepest super-resolution record to our knowledge, and is expected to extend into deeper tissues. In addition, SAX offers the advantage of background-free imaging by rejecting unwanted scattering background from biological tissues. This study provides an inspirational direction toward deep-tissue super-resolution imaging and has the potential in tumor monitoring and beyond.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms which allow to include prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state of the art iterative reconstruction methods.
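A hedged sketch of the direct deep-learning reconstruction idea: a small residual CNN that maps an artifact-laden initial reconstruction to an artifact-reduced image in a single forward pass. The architectures in the paper differ; this is illustrative only.

```python
import torch
import torch.nn as nn

class PATArtifactRemover(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)        # residual: the network learns the artifact pattern

# Training pairs (sparse-data reconstruction, reference reconstruction) would be used with
# an L2 loss; at test time a single forward pass replaces the iterative solver.
```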
NASA Astrophysics Data System (ADS)
Harmon, Nicholas; Rychert, Catherine A.
2015-11-01
Continental crust formed billions of years ago but cannot be explained by a simple evolution of primary mantle magmas. A multi-step process is required that likely includes re-melting of wet metamorphosed basalt at high pressures. Such a process could occur at depth in oceanic crust that has been thickened by a large magmatic event. In Central America, variations in geologically inferred, pre-existing oceanic crustal thickness beneath the arc provides an excellent opportunity to study its effect on magma storage, re-melting of meta-basalts, and the potential for creating continental crust. We use surface waves derived from ambient noise tomography to image 6% radially anisotropic structures in the thickened oceanic plateau crust of Costa Rica that likely represent deep crustal melt sills. In Nicaragua, where the arc is forming on thinner oceanic crust, we do not image these deep crustal melt sills. The presence of these deep sills correlates with more felsic arc outputs from the Costa Rican Arc suggesting pre-existing thickened crust accelerates processing of primary basalts to continental compositions. In the Archean, reprocessing thickened oceanic crust by subsequent hydrated hotspot volcanism or subduction zone volcanism may have similarly enhanced formation of early continental crust. This mechanism may have been particularly important if subduction did not initiate until 3 Ga.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
NASA Astrophysics Data System (ADS)
2001-04-01
A Window towards the Distant Universe
Summary: The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI), a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree² OmegaCam facility.
PR Photo 15a/01: Colour composite of the OACDF2 field. PR Photo 15b/01: Interacting galaxies in the OACDF2 field. PR Photo 15c/01: Spiral galaxy and nebulous object in the OACDF2 field. PR Photo 15d/01: A galaxy cluster in the OACDF2 field. PR Photo 15e/01: Another galaxy cluster in the OACDF2 field. PR Photo 15f/01: An elliptical galaxy in the OACDF2 field.
The Capodimonte Deep Field
ESO PR Photo 15a/01 - Caption: This three-colour image of about 1/4 of the Capodimonte Deep Field (OACDF) was obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin² (about the size of the full moon), and it is one of the "deepest" wide-field images ever obtained. Technical information about this photo is available below.
With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outmost limits in all directions. Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative for the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances, and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent, successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98) and the "Chandra Deep Field" (ESO PR 05/01).
In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI), a 67-million pixel (8k x 8k) digital camera that is installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than, the OmegaCam wide-field camera that will be installed at the VST.
The field selection for the OACDF was based on the following criteria:
* There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera. No Solar System planets should be near the field during the observations;
* It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction;
* It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites;
* There should be little interstellar material in this direction that may obscure the view towards the distant Universe;
* Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes.
Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg² in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF). The above photo (PR Photo 15a/01) covers one-quarter of the full field (Subfield No. 2 - OACDF2) - some of the objects seen in this area are shown below in more detail. More than 35,000 objects have been found in this area; the faintest are nearly 100 million times fainter than what can be perceived with the unaided eye in the dark sky.
Selected objects in the Capodimonte Deep Field. ESO PR Photo 15b/01, caption: Enlargement of the interacting galaxies that are seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01. The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin² in the sky. The lower spiral is itself an interacting double. ESO PR Photo 15c/01, caption: Enlargement of a spiral galaxy and a nebulous object in this area. The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin² in the sky. Note the very red objects next to the two bright stars in the lower-right corner. The colours of these objects are consistent with those of spheroidal galaxies at intermediate distances (redshifts). ESO PR Photo 15d/01, caption: A further enlargement of a galaxy cluster of which most members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object, located near the centre of the field and resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low-mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin². ESO PR Photo 15e/01, caption: Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field.
The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin² in the sky. ESO PR Photo 15f/01, caption: Enlargement of the elliptical galaxy that is located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin².
Technical Information about the OACDF Survey. The observations for the OACDF project were performed in three different ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg² short-exposure ("shallow") survey and then a 0.5 x 1 deg² "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters. Four adjacent 30 x 30 arcmin² fields, together covering a 1 x 1 deg² field in the sky, were observed for the shallow survey. Two of these fields were chosen for the 0.5 x 1 deg² deep survey; OACDF2 shown above is one of these. The deep survey was performed in the B, V, R broad bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 MHz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics.
Notes. [1]: The team members are: Massimo Capaccioli, Juan M. Alcala', Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella. [2]: This is a preliminary result by Juan Alcala', Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa, based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical.
Technical information about the photos. PR Photo 15a/01 has been obtained by the combination of the B, V, and R stacked images of the OACDF2 field. The total exposure times in the three bands are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R indicate 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, thereby also demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.
Imaging through turbulence using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2015-09-01
Atmospheric turbulence can significantly affect imaging through paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under circumstances of deep or strong turbulence, the object is hard to recognize through direct imaging. Conventional imaging methods cannot handle these problems efficiently: the time required for lucky imaging increases significantly, and image-processing approaches require much more complex and iterative de-blurring algorithms. We propose an alternative approach using a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array to form a mini Keplerian telescope array. Therefore, the image obtained by a conventional method is separated into an array of images that contain multiple copies of the object's image and less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be performed on the video collected by the plenoptic sensor. The corresponding algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, the retrieval of the object's image and the extraction of the turbulence effect can be performed simultaneously.
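The pixel-selection step described above lends itself to a compact illustration. The sketch below assumes the registered views behind each microlens have already been stacked into one array; the `lucky_reconstruct` helper and its input layout are hypothetical, not the authors' code. For every pixel it keeps the temporal mean of the microlens cell whose intensity fluctuates least over the recorded video.

```python
import numpy as np

def lucky_reconstruct(cells):
    """Reconstruct a 'lucky' image from a plenoptic video.

    cells: array of shape (n_cells, n_frames, H, W) holding the registered
    image copies behind each microlens (hypothetical input layout).
    For every pixel, the cell whose intensity fluctuates least over time is
    treated as the least turbulence-disturbed view, and its temporal mean
    is used in the reconstruction.
    """
    temporal_var = cells.var(axis=1)            # (n_cells, H, W)
    temporal_mean = cells.mean(axis=1)          # (n_cells, H, W)
    best_cell = temporal_var.argmin(axis=0)     # (H, W): index of steadiest cell
    rows, cols = np.indices(best_cell.shape)
    return temporal_mean[best_cell, rows, cols]

# toy usage: a 5x3 microlens array, 32 frames of 64x64 pixels
video = np.random.rand(15, 32, 64, 64)
recon = lucky_reconstruct(video)
print(recon.shape)  # (64, 64)
```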
NASA Astrophysics Data System (ADS)
Davis, Brynmor J.
Fluorescence microscopy is an important and ubiquitous tool in biological imaging due to the high specificity with which fluorescent molecules can be attached to an organism and the subsequent nondestructive in-vivo imaging allowed. Focused-light microscopies allow three-dimensional fluorescence imaging but their resolution is restricted by diffraction. This effect is particularly limiting in the axial dimension as the diffraction-limited focal volume produced by a lens is more extensive along the optical axis than perpendicular to it. Approaches such as confocal microscopy and 4Pi microscopy have been developed to improve the axial resolution. Spectral Self-Interference Fluorescence Microscopy (SSFM) is another high-axial-resolution technique and is the principal subject of this dissertation. Nanometer-precision localization of a single fluorescent layer has been demonstrated using SSFM. This accuracy compares favorably with the axial resolutions given by confocal and 4Pi systems at similar operating parameters (these resolutions are approximately 350nm and 80nm respectively). This theoretical work analyzes the expected performance of the SSFM system when imaging a general object, i.e. an arbitrary fluorophore density function rather than a single layer. An existing model of SSFM is used in simulations to characterize the system's resolution. Several statistically-based reconstruction methods are applied to show that the expected resolution for SSFM is similar to 4Pi microscopy for a general object but does give very high localization accuracy when the object is known to consist of a limited number of layers. SSFM is then analyzed in a linear systems framework and shown to have strong connections, both physically and mathematically, to a multi-channel 4Pi microscope. Fourier-domain analysis confirms that SSFM cannot be expected to outperform this multi-channel 4Pi instrument. Differences between the channels in spatial-scanning, multi-channel microscopies are then exploited to show that such instruments can operate at a sub-Nyquist scanning rate but still produce images largely free of aliasing effects. Multi-channel analysis is also used to show how light typically discarded in confocal and 4Pi systems can be collected and usefully incorporated into the measured image.
DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.
2017-12-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 min per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The key to success for most of the existing algorithms depends on spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster, and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multi-spectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event predictions.
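As a rough illustration of the encoder-decoder idea behind a binary pixel-wise cloud/shadow classifier, the PyTorch sketch below downsamples the multispectral input and upsamples back to per-pixel class logits. The layer sizes, band count, and class count are placeholders; this is not the CloudCNN architecture.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Minimal encoder-decoder for binary cloud/shadow segmentation.
    Channel counts and depth are illustrative, not the CloudCNN design."""
    def __init__(self, in_bands=16, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),   # per-pixel class logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyEncoderDecoder()
logits = model(torch.randn(1, 16, 128, 128))   # (1, 2, 128, 128)
mask = logits.argmax(dim=1)                    # predicted cloud/shadow mask
```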
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels.
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R
2018-01-01
Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. In this study, a deep learning (DL)-based nuclei segmentation approach is investigated that gathers localized information through the generation of superpixels using a simple linear iterative clustering algorithm and trains a convolutional neural network. The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
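A minimal sketch of the superpixel-driven patch extraction that such a pipeline relies on is shown below, using scikit-image's SLIC implementation. The patch size, segment count, and the downstream nuclei/non-nuclei classifier are assumptions for illustration, not the parameters of the cited study.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def superpixel_patches(image, patch=32, n_segments=400):
    """Yield fixed-size patches centred on SLIC superpixels (illustrative
    pipeline; parameters are not those of the cited study)."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    half = patch // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    for region in regionprops(segments):
        r, c = (int(v) for v in region.centroid)
        # each patch would be fed to a CNN nuclei/non-nuclei classifier
        yield padded[r:r + patch, c:c + patch]

# toy usage on a random RGB tile
tile = np.random.rand(256, 256, 3)
patches = list(superpixel_patches(tile))
print(len(patches), patches[0].shape)
```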
Multi-angle lensless digital holography for depth resolved imaging on a chip.
Su, Ting-Wei; Isikman, Serhan O; Bishara, Waheb; Tseng, Derek; Erlinger, Anthony; Ozcan, Aydogan
2010-04-26
A multi-angle lensfree holographic imaging platform that can accurately characterize both the axial and lateral positions of cells located within multi-layered micro-channels is introduced. In this platform, lensfree digital holograms of the micro-objects on the chip are recorded at different illumination angles using partially coherent illumination. These digital holograms start to shift laterally on the sensor plane as the illumination angle of the source is tilted. Since the exact amount of this lateral shift of each object hologram can be calculated with an accuracy that beats the diffraction limit of light, the height of each cell from the substrate can be determined over a large field of view without the use of any lenses. We demonstrate the proof of concept of this multi-angle lensless imaging platform by using light-emitting diodes to characterize microparticles of various sizes located on a chip, with sub-micron axial and lateral localization over an approximately 60 mm² field of view. Furthermore, we successfully apply this lensless imaging approach to simultaneously characterize blood samples located in multi-layered micro-channels in terms of the counts, individual thicknesses and volumes of the cells at each layer. Because this platform does not require any lenses, lasers or other bulky optical/mechanical components, it provides a compact and high-throughput alternative to conventional approaches for cytometry and diagnostics applications involving lab-on-a-chip systems.
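To first order, the height recovery reduces to simple geometry: the lateral hologram shift grows with object height as the illumination is tilted. A hedged sketch of that relation (ignoring refraction and the other corrections the platform applies) is:

```python
import numpy as np

def height_from_shift(shift_um, theta_deg):
    """Simplified geometric estimate: a hologram that shifts laterally by
    shift_um when the illumination is tilted by theta_deg from normal
    corresponds to an object height of roughly shift / tan(theta).
    The published platform includes additional corrections, so this is only
    an illustrative first-order relation."""
    return shift_um / np.tan(np.radians(theta_deg))

print(height_from_shift(10.0, 45.0))  # ~10 micrometres above the substrate
```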
Multiphoton Intravital Calcium Imaging.
Cheetham, Claire E J
2018-06-26
Multiphoton intravital calcium imaging is a powerful technique that enables high-resolution longitudinal monitoring of cellular and subcellular activity hundreds of microns deep in the living organism. This unit addresses the application of 2-photon microscopy to imaging of genetically encoded calcium indicators (GECIs) in the mouse brain. The protocols in this unit enable real-time intravital imaging of intracellular calcium concentration simultaneously in hundreds of neurons, or at the resolution of single synapses, as mice respond to sensory stimuli or perform behavioral tasks. Protocols are presented for implantation of a cranial imaging window to provide optical access to the brain and for 2-photon image acquisition. Protocols for implantation of both open skull and thinned skull windows for single or multi-session imaging are described. © 2018 by John Wiley & Sons, Inc.
NASA Technical Reports Server (NTRS)
2005-01-01
This image shows NASA's Deep Impact impactor spacecraft while it was being built at Ball Aerospace & Technologies Corporation, Boulder, Colo. On July 2, at 10:52 p.m. Pacific time (1:52 a.m. Eastern time, July 3), the impactor will be released from Deep Impact's flyby spacecraft. One day later, it will collide with Tempel 1. The impactor cannot directly talk to Earth, so it will communicate via the flyby spacecraft during its final day. The two spacecraft communicate at 'S-band' frequency. The impactor's S-band antenna is the rectangle-shaped object seen on the top of the impactor in this image.
IDL Object Oriented Software for Hinode/XRT Image Analysis
NASA Astrophysics Data System (ADS)
Higgins, P. A.; Gallagher, P. T.
2008-09-01
We have developed a set of object oriented IDL routines that enable users to search, download and analyse images from the X-Ray Telescope (XRT) on-board Hinode. In this paper, we give specific examples of how the object can be used and how multi-instrument data analysis can be performed. The XRT object is a highly versatile and powerful IDL object, which will prove to be a useful tool for solar researchers. This software utilizes the generic Framework object available within the GEN branch of SolarSoft.
HDU Deep Space Habitat (DSH) Overview
NASA Technical Reports Server (NTRS)
Kennedy, Kriss J.
2011-01-01
This paper gives an overview of the National Aeronautics and Space Administration (NASA) led multi-center Habitat Demonstration Unit (HDU) project Deep Space Habitat (DSH) analog that will be field-tested during the 2011 Desert Research and Technologies Studies (D-RATS) field tests. The HDU project is a technology pull project that integrates technologies and innovations from multiple NASA centers. This project will repurpose the HDU Pressurized Excursion Module (PEM) that was field tested in the 2010 D-RATS, adding habitation functionality to the prototype unit. The 2010 configuration of the HDU-PEM consisted of a lunar surface laboratory module that was used to bring over 20 habitation-related technologies together in a single platform that could be tested as an advanced habitation analog in the context of mission architectures and surface operations. The 2011 HDU-DSH configuration will build upon the PEM work, and emphasize validity of crew operations (habitation and living, etc), EVA operations, mission operations, logistics operations, and science operations that might be required in a deep space context for Near Earth Object (NEO) exploration mission architectures. The HDU project consists of a multi-center team brought together in a skunkworks approach to quickly build and validate hardware in analog environments. The HDU project is part of the strategic plan from the Exploration Systems Mission Directorate (ESMD) Directorate Integration Office (DIO) and the Exploration Mission Systems Office (EMSO) to test destination elements in analog environments. The 2011 analog field test will include Multi Mission Space Exploration Vehicles (MMSEV) and the DSH among other demonstration elements to be brought together in a mission architecture context. This paper will describe overall objectives, various habitat configurations, strategic plan, and technology integration as it pertains to the 2011 field tests.
Quicksilver: Fast predictive image registration - A deep learning approach.
Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc
2017-09-01
This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.
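The patch-wise prediction strategy can be sketched independently of the network itself. Assuming a trained predictor `predict_patch` that maps a pair of image patches to a momentum patch (a placeholder, not the released Quicksilver model), overlapping patches can be predicted and averaged back into a full momentum field roughly as follows:

```python
import numpy as np

def predict_patchwise(moving, target, predict_patch, patch=32, stride=16):
    """Patch-wise prediction with overlap averaging (illustrative scheme,
    not the released Quicksilver code). `predict_patch` stands for a trained
    encoder-decoder mapping a pair of image patches to a momentum patch."""
    H, W = moving.shape
    out = np.zeros((2, H, W))          # 2 momentum components for a 2-D example
    weight = np.zeros((H, W))
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            m = predict_patch(moving[r:r+patch, c:c+patch],
                              target[r:r+patch, c:c+patch])   # (2, patch, patch)
            out[:, r:r+patch, c:c+patch] += m
            weight[r:r+patch, c:c+patch] += 1.0
    return out / np.maximum(weight, 1.0)

# toy usage with a dummy predictor returning zeros
dummy = lambda a, b: np.zeros((2, a.shape[0], a.shape[1]))
momentum = predict_patchwise(np.random.rand(128, 128), np.random.rand(128, 128), dummy)
print(momentum.shape)  # (2, 128, 128)
```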
Spectroscopic Surveys with the ELT: A Gigantic Step into the Deep Universe
NASA Astrophysics Data System (ADS)
Evans, C.; Puech, M.; Hammer, F.; Gallego, J.; Sánchez, A.; García, L.; Iglesias, J.
2018-03-01
The Phase A design of MOSAIC, a powerful multi-object spectrograph intended for ESO's Extremely Large Telescope, concluded in late 2017. With the design complete, a three-day workshop was held last October in Toledo to discuss the breakthrough spectroscopic surveys that MOSAIC can deliver across a broad range of contemporary astronomy.
Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong
2011-01-01
Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem in which target objects of arbitrary shape mutually interact with terrain-like surfaces, a situation that widely exists in the medical imaging field. The approach incorporates context information used during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in low-order polynomial time. The performance of the method was evaluated in robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 +/- 0.10) was improved to 0.84 +/- 0.05 when employing our new method for pulmonary tumor segmentation.
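The core computational step, a single s-t maximum flow / minimum cut, can be illustrated on a toy graph. The sketch below uses networkx on four "pixels" with unary capacities to the source/sink and pairwise smoothness arcs; the weighted inter-graph arcs that encode the object-surface interaction in the paper are omitted, and all capacities are made up for illustration.

```python
import networkx as nx

# Toy s-t graph cut: 4 pixels, unary terms as capacities from source/sink,
# pairwise smoothness terms as capacities between neighbouring pixels.
G = nx.DiGraph()
unary = {"p0": (9, 1), "p1": (8, 2), "p2": (2, 8), "p3": (1, 9)}  # (to object, to background)
for p, (obj, bkg) in unary.items():
    G.add_edge("s", p, capacity=obj)
    G.add_edge(p, "t", capacity=bkg)
for a, b in [("p0", "p1"), ("p1", "p2"), ("p2", "p3")]:           # neighbour smoothness
    G.add_edge(a, b, capacity=3)
    G.add_edge(b, a, capacity=3)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(sorted(source_side - {"s"}))  # pixels labelled as object
```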
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
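A much-simplified matrix surrogate of that low-rank-plus-sparse split is sketched below: frames are flattened into columns and separated by alternating singular-value thresholding and soft thresholding. The tensor machinery, Tikhonov term, and ECT forward model of the paper are not reproduced; the sizes and regularization weight are arbitrary.

```python
import numpy as np

def lowrank_plus_sparse(Y, lam=0.05, n_iter=50):
    """Split a stack of reconstructed frames into a slowly varying (low-rank)
    part and a rapidly changing (sparse) part. This is a simplified matrix
    surrogate of the paper's third-order tensor model: frames are flattened
    into the columns of Y, and L and S are updated by singular-value and
    soft thresholding respectively."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        # low-rank update: singular-value thresholding of the residual
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U * np.maximum(s - lam, 0.0)) @ Vt
        # sparse update: element-wise soft thresholding
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

frames = np.random.rand(812, 20)   # e.g. 812-pixel ECT images over 20 frames
L, S = lowrank_plus_sparse(frames)
```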
VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images.
Chen, Hao; Dou, Qi; Yu, Lequan; Qin, Jing; Heng, Pheng-Ann
2018-04-15
Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved the first place in the challenge out of 37 competitors including several state-of-the-art brain segmentation methods. Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical. Copyright © 2017 Elsevier Inc. All rights reserved.
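The residual-learning ingredient can be illustrated with a single voxelwise residual block in PyTorch; the channel count and layer ordering below are illustrative choices, not the published 25-layer configuration.

```python
import torch
import torch.nn as nn

class VoxResBlock(nn.Module):
    """Illustrative voxelwise residual block (BN-ReLU-Conv3d twice with a
    skip connection); sizes are examples, not the published configuration."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # identity skip mitigates the degradation problem

x = torch.randn(1, 64, 16, 32, 32)   # (batch, channels, depth, height, width)
print(VoxResBlock()(x).shape)
```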
Deep Boreholes Seals Subjected to High P, T conditions – Preliminary Experimental Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caporuscio, Florie Andre; Norskog, Katherine Elizabeth; Maner, James Lavada
The objective of this planned experimental work is to evaluate physio-chemical processes for 'seal' components and materials relevant to deep borehole disposal. These evaluations will encompass multi-laboratory efforts for the development of seals concepts and the application of Thermal-Mechanical-Chemical (TMC) modeling work to assess barrier material interactions with subsurface fluids, their stability at high temperatures, and the implications of these processes to the evaluation of thermal limits. Deep borehole experimental work will constrain the pressure and temperature (P, T) conditions which seal materials will experience in deep borehole crystalline rock repositories. The rocks of interest to this study include the silicic (granitic gneiss) end members. The experiments will systematically add components to capture discrete changes in both water and EBS component chemistries.
NASA Technical Reports Server (NTRS)
Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.
2012-01-01
The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 [(B8); 412 nm], have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method, developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group, to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements in the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging Spectroradiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented for the operational processing of the Terra/MODIS Deep Blue aerosol products.
Multi-energy method of digital radiography for imaging of biological objects
NASA Astrophysics Data System (ADS)
Ryzhikov, V. D.; Naydenov, S. V.; Opolonin, O. D.; Volkov, V. G.; Smith, C. F.
2016-03-01
This work has been dedicated to the search for new possibilities to use multi-energy digital radiography (MER) for medical applications. Our work has included both theoretical and experimental investigations of 2-energy (2E) and 3-energy (3E) radiography for imaging the structure of biological objects. Using special simulation methods and digital analysis based on the X-ray interaction energy dependence for each element of importance to medical applications in the X-ray energy range up to 150 keV, we have implemented a quasi-linear approximation for the energy dependence of the X-ray linear mass absorption coefficient μm(E) that permits us to determine the intrinsic structure of biological objects. Our measurements utilize multiple X-ray tube voltages (50, 100, and 150 kV) with Al and Cu filters of different thicknesses to achieve 3-energy X-ray examination of objects. By doing so, we are able to achieve significantly improved imaging quality of the structure of the subject biological objects. To reconstruct and visualize the final images, we use both two-dimensional (2D) and three-dimensional (3D) palettes of identification. The result is a 2E and/or 3E representation of the object with color coding of each pixel according to the data outputs. Following the experimental measurements and post-processing, we produce a 3D image of the biological object - in the case of our trials, fragments or parts of chicken and turkey.
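The quasi-linear, energy-dependent attenuation model underpins a per-pixel material decomposition: with two energies and two basis materials, each pixel yields a 2x2 linear system. The sketch below uses made-up basis coefficients (not values from the study) to show the idea.

```python
import numpy as np

# Two-energy decomposition sketch: at each pixel the measured attenuation
# at a low and a high energy is modelled as a mix of two basis materials
# (e.g. soft tissue and bone). Coefficients are placeholders, not measured
# values from the cited work.
mu = np.array([[0.25, 0.55],    # [mu_tissue(E_low),  mu_bone(E_low)]
               [0.18, 0.30]])   # [mu_tissue(E_high), mu_bone(E_high)]

def decompose(att_low, att_high):
    """Solve the 2x2 system per pixel for equivalent basis thicknesses."""
    measurements = np.stack([att_low.ravel(), att_high.ravel()])
    thickness = np.linalg.solve(mu, measurements)
    return thickness.reshape((2,) + att_low.shape)

low = np.random.rand(64, 64)
high = np.random.rand(64, 64)
t_tissue, t_bone = decompose(low, high)
print(t_tissue.shape, t_bone.shape)
```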
Discovery of Compact Quiescent Galaxies at Intermediate Redshifts in DEEP2
NASA Astrophysics Data System (ADS)
Blancato, Kirsten; Chilingarian, Igor; Damjanov, Ivana; Moran, Sean; Katkov, Ivan
2015-01-01
Compact quiescent galaxies in the redshift range 0.6 < z < 1.1 are the missing link needed to complete the evolutionary histories of these objects from the high-redshift z ≥ 2 Universe to the local z ~ 0 Universe. We identify the first intermediate-redshift compact quiescent galaxies by searching a sample of 1,089 objects in the DEEP2 Redshift Survey that have multi-band photometry, spectral fitting, and readily available structural parameters. We find 27 compact quiescent candidates between z = 0.6 and z = 1.1, where each candidate galaxy has archival Hubble Space Telescope (HST) imaging and is visually confirmed to be early-type. The candidates have half-light radii ranging from 0.83 < Re,c < 7.14 kpc (median Re,c = 1.77 kpc) and virial masses ranging from 2.2E10 < Mdyn < 5.6E11 Msun (median Mdyn = 7.7E10 Msun). Of our 27 compact quiescent candidates, 13 are truly compact, with sizes at most half of the size of their z ~ 0 counterparts of the same mass. In addition to their structural properties bridging the gap between their high- and low-redshift counterparts, our sample of intermediate-redshift quiescent galaxies spans a large range of ages but is drawn from two distinct epochs of galaxy formation: formation at z > 2, which suggests these objects may be the relics of the observed high-redshift compact galaxies, and formation at z ≤ 2, which suggests there is an additional population of more recently formed massive compact galaxies. This work is supported in part by the NSF REU and DOD ASSURE programs under NSF grant no. 1262851 and by the Smithsonian Institution.
Astrometry with LSST: Objectives and Challenges
NASA Astrophysics Data System (ADS)
Casetti-Dinescu, D. I.; Girard, T. M.; Méndez, R. A.; Petronchak, R. M.
2018-01-01
The forthcoming Large Synoptic Survey Telescope (LSST) is an optical telescope with an effective aperture of 6.4 m and a field of view of 9.6 square degrees. Thus, LSST will have an étendue larger than any other optical telescope, performing wide-field, deep imaging of the sky. There are four broad categories of science objectives: 1) dark energy and dark matter, 2) transients, 3) the Milky Way and its neighbours, and 4) the Solar System. In particular, for the Milky Way science case, astrometry will make a critical contribution; therefore, special attention must be devoted to extracting the maximum amount of astrometric information from the LSST data. Here, we outline the astrometric challenges posed by such a massive survey. We also present some current examples of ground-based, wide-field, deep imagers used for astrometry, as precursors of the LSST.
The Large UV/Optical/Infrared Surveyor (LUVOIR): Decadal Mission concept design update
NASA Astrophysics Data System (ADS)
Bolcar, Matthew R.; Aloezos, Steve; Bly, Vincent T.; Collins, Christine; Crooke, Julie; Dressing, Courtney D.; Fantano, Lou; Feinberg, Lee D.; France, Kevin; Gochar, Gene; Gong, Qian; Hylan, Jason E.; Jones, Andrew; Linares, Irving; Postman, Marc; Pueyo, Laurent; Roberge, Aki; Sacks, Lia; Tompkins, Steven; West, Garrett
2017-09-01
In preparation for the 2020 Astrophysics Decadal Survey, NASA has commissioned the study of four large mission concepts, including the Large Ultraviolet / Optical / Infrared (LUVOIR) Surveyor. The LUVOIR Science and Technology Definition Team (STDT) has identified a broad range of science objectives including the direct imaging and spectral characterization of habitable exoplanets around sun-like stars, the study of galaxy formation and evolution, the epoch of reionization, star and planet formation, and the remote sensing of Solar System bodies. NASA's Goddard Space Flight Center (GSFC) is providing the design and engineering support to develop executable and feasible mission concepts that are capable of the identified science objectives. We present an update on the first of two architectures being studied: a 15-meter-diameter segmented-aperture telescope with a suite of serviceable instruments operating over a range of wavelengths between 100 nm and 2.5 μm. Four instruments are being developed for this architecture: an optical / near-infrared coronagraph capable of 10^-10 contrast at inner working angles as small as 2 λ/D; the LUVOIR UV Multi-object Spectrograph (LUMOS), which will provide low- and medium-resolution UV (100 - 400 nm) multi-object imaging spectroscopy in addition to far-UV imaging; the High Definition Imager (HDI), a high-resolution wide-field-of-view NUV-Optical-IR imager; and a UV spectro-polarimeter being contributed by Centre National d'Etudes Spatiales (CNES). A fifth instrument, a multi-resolution optical-NIR spectrograph, is planned as part of a second architecture to be studied in late 2017.
NASA Astrophysics Data System (ADS)
Chen, Shaojie; Sivanandam, Suresh; Moon, Dae-Sik
2016-08-01
We discuss the optical design of an infrared multi-object spectrograph (MOS) concept that is designed to take advantage of the multi-conjugate adaptive optics (MCAO) corrected field at the Gemini South telescope. This design employs a unique, cryogenic MEMS-based focal plane mask to select target objects for spectroscopy by utilizing the Micro-Shutter Array (MSA) technology originally developed for the Near Infrared Spectrometer (NIRSpec) of the James Webb Space Telescope (JWST). The optical design is based on all-spherical refractive optics, which serves both imaging and spectroscopic modes across the wavelength range of 0.9-2.5 μm. The optical system consists of a reimaging system, MSA, collimator, volume phase holographic (VPH) grisms, and spectrograph camera optics. The VPH grisms, which are VPH gratings sandwiched between two prisms, provide high dispersing efficiencies, and a set of several VPH grisms provides broad spectral coverage at high throughput. The imaging mode is implemented by removing the MSA and the dispersing unit from the beam. We optimize both the imaging and spectrographic modes simultaneously, while paying special attention to the performance of the pupil imaging at the cold stop. Our current design provides a 1' × 1' and a 0.5' × 1' field of view for the imaging and spectroscopic modes, respectively, on a 2048 × 2048 pixel HAWAII-2RG detector array. The spectrograph's slit width and spectral resolving power are 0.18'' and 3,000, respectively, and spectra of up to 100 objects can be obtained simultaneously. We present the overall results of simulated performance using the optical model we designed.
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach to multi-object recognition for homeland security and defense based intelligent sensor networks. Unlike the conventional way of information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve the goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects will come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
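The fixed-dimension feature scaling followed by SVM learning can be sketched briefly. In the snippet below, rescaling to a common grid stands in for the paper's linear/nonlinear scaling and sampling step, and the toy regions and labels are invented for illustration.

```python
import numpy as np
from skimage.transform import resize
from sklearn.svm import SVC

def fixed_length_feature(region_pixels, size=(24, 24)):
    """Rescale a segmented region of arbitrary shape to a fixed grid so that
    every object is represented by the same number of dimensions (a simple
    stand-in for the paper's scaling and sampling step)."""
    return resize(region_pixels, size, anti_aliasing=True).ravel()

# toy training data: regions cropped by an upstream segmentation step
regions = [np.random.rand(40, 30), np.random.rand(55, 60), np.random.rand(20, 25)]
labels = [0, 1, 0]                       # e.g. 0 = soldier, 1 = tank (illustrative)
X = np.stack([fixed_length_feature(r) for r in regions])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(fixed_length_feature(np.random.rand(33, 44))[None, :]))
```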
NASA Astrophysics Data System (ADS)
Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.
2014-12-01
Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from multi-period-band waveform data with a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal source rupture behavior of this event in more detail, we introduce a new fault surface model with finer sub-fault size and estimate the source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by the 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 x 16 km^2. The estimated source models in the multiple period bands show the following source image: (1) a first deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves; (2) a shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long-period (50-100 s) seismic waves; (3) a second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture; (4) a deep rupture off Fukushima at 90-135 s. The dominant-period difference of the seismic-wave radiation between the two deep ruptures off Miyagi may result from a mechanism in which small-scale heterogeneities on the fault are removed by the first rupture. This difference can also be interpreted by the concept of multi-scale dynamic rupture (Ide & Aochi, 2005).
Multispectral THz-VIS passive imaging system for hidden threats visualization
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw
2013-10-01
Terahertz imaging is the latest entry into the crowded field of imaging technologies. Many applications are emerging for this relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, clothes, wood, and ceramics that are usually opaque at optical wavelengths. T-rays have large potential in the field of hidden-object detection because they are not harmful to humans. The main difficulty in THz imaging systems is low image quality; it is therefore justified to combine THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems. Many imaging systems use imaging devices working in various spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.
NIFTE: The Near Infrared Faint-Object Telescope Experiment
NASA Technical Reports Server (NTRS)
Bock, James J.; Lange, Andrew E.; Matsumoto, T.; Eisenhardt, Peter B.; Hacking, Perry B.; Schember, Helene R.
1994-01-01
The high sensitivity of large-format InSb arrays can be used to obtain deep images of the sky at 3-5 micrometers. In this spectral range, cool or highly redshifted objects (e.g., brown dwarfs and protogalaxies) that are not visible at shorter wavelengths may be observed. Sensitivity at these wavelengths in ground-based observations is severely limited by the thermal flux from the telescope and from the earth's atmosphere. The Near Infrared Faint-Object Telescope Experiment (NIFTE), a 50 cm cooled rocket-borne telescope combined with large-format, high-performance InSb arrays, can reach a limiting flux of less than 1 micro-Jy (1-sigma) over a large field of view in a single flight. In comparison, the Infrared Space Observatory (ISO) will require days of observation to reach a sensitivity more than one order of magnitude worse over a similar area of the sky. The deep 3-5 micrometer images obtained by the rocket-borne telescope will assist in determining the nature of faint red objects detected by ground-based telescopes at 2 micrometers, and by ISO at wavelengths longer than 5 micrometers.
NASA Technical Reports Server (NTRS)
Wilson, K.; Parvin, B.; Fugate, R.; Kervin, P.; Zingales, S.
2003-01-01
Future NASA deep space missions will fly advanced high-resolution imaging instruments that will require high-bandwidth links to return the huge data volumes generated by these instruments. Optical communications is a key technology for returning these large data volumes from deep space probes. Yet to cost-effectively realize the high bandwidth potential of the optical link will require deployment of ground receivers in diverse locations to provide high link availability. A recent analysis of GOES weather satellite data showed that a network of ground stations located in Hawaii and the Southwest continental US can provide an average of 90% availability for the deep space optical link. JPL and AFRL are exploring the use of large telescopes in Hawaii, California, and Albuquerque to support the Mars Telesat laser communications demonstration. Designed to demonstrate multi-Mbps communications from Mars, the mission will investigate key operational strategies of a future deep space optical communications network.
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.
Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping
2018-03-23
Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 mild cognitive impairment (MCI; 76 pMCI + 128 sMCI) subjects and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data, the following processes were conducted in this study to determine the optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.
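One way to picture the scale-selection step is to score candidate segmentations by within-segment homogeneity and between-segment heterogeneity and keep the best-scoring scale. The sketch below does this with scikit-image's Felzenszwalb segmentation and a toy score; it is only a stand-in for the paper's improved weighted mean-variance method and its WorldView-2 workflow.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def scale_score(gray, labels):
    """Toy scale-selection score: area-weighted within-segment variance
    (homogeneity) minus the variance of segment means (heterogeneity).
    A stand-in for the paper's improved weighted mean-variance method."""
    seg_means, seg_vars, sizes = [], [], []
    for lab in np.unique(labels):
        vals = gray[labels == lab]
        seg_means.append(vals.mean())
        seg_vars.append(vals.var())
        sizes.append(vals.size)
    w = np.array(sizes) / gray.size
    return float(np.sum(w * np.array(seg_vars)) - np.var(seg_means))

gray = np.random.rand(128, 128)
scores = {s: scale_score(gray, felzenszwalb(gray, scale=s)) for s in (50, 100, 200, 400)}
best_scale = min(scores, key=scores.get)   # pick the scale with the lowest score
print(best_scale)
```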
Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias
2018-05-16
There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
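Reducing pre-softmax CNN activations to a 2-D map with t-SNE is straightforward with scikit-learn. In the sketch below the feature matrix and predicted labels are random placeholders standing in for penultimate-layer activations of sampled image tiles; the perplexity and other settings are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder data: 500 image tiles with 512-dimensional CNN features and
# 5 predicted classes (not data from the cited study).
features = np.random.rand(500, 512)
classes = np.random.randint(0, 5, size=500)

embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
plt.scatter(embedding[:, 0], embedding[:, 1], c=classes, s=8, cmap="tab10")
plt.title("CNN feature space reduced to 2-D with t-SNE")
plt.show()
```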
Friend or foe: exploiting sensor failures for transparent object localization and classification
NASA Astrophysics Data System (ADS)
Seib, Viktor; Barthen, Andreas; Marohn, Philipp; Paulus, Dietrich
2017-02-01
In this work we address the problem of detecting and recognizing transparent objects using depth images from an RGB-D camera. Using this type of sensor usually prohibits the localization of transparent objects since the structured light pattern of these cameras is not reflected by transparent surfaces. Instead, transparent surfaces often appear as undefined values in the resulting images. However, these erroneous sensor readings form characteristic patterns that we exploit in the presented approach. The sensor data is fed into a deep convolutional neural network that is trained to classify and localize drinking glasses. We evaluate our approach with four different types of transparent objects. To the best of our knowledge, no datasets offering depth images of transparent objects exist so far. With this work we aim at closing this gap by providing our data to the public.
Sunspot drawings handwritten character recognition method based on deep learning
NASA Astrophysics Data System (ADS)
Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li
2016-05-01
High-accuracy recognition of handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the drawings in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has proved highly successful in training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences and obtain the daily full-disc sunspot numbers and sunspot areas from the sunspot drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
Deep learning methods for CT image-domain metal artifact reduction
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge
2017-09-01
Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
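A minimal sketch of an image-domain correction of this sort, assuming paired training slices are available (NMAR-corrected inputs and artifact-free monoenergetic targets); the layer sizes are illustrative and not the study's network:

    import torch
    import torch.nn as nn

    class MARRefineNet(nn.Module):
        """Small CNN that predicts a residual correction on top of an NMAR slice."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, nmar_slice):
            return nmar_slice + self.net(nmar_slice)

    # Training step with hypothetical tensors:
    # loss = nn.MSELoss()(model(nmar_batch), clean_batch)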
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. For traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
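A minimal sketch of an N-scale MSR with histogram truncation for a single-channel image; the scale values, equal weights, and clip percentile are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multi_scale_retinex(img, sigmas=(15, 80, 200, 300), clip=1.0):
        """N-scale Retinex followed by percentile clipping and remapping to 8 bits."""
        img = img.astype(np.float64) + 1.0                 # avoid log(0)
        msr = np.zeros_like(img)
        for sigma in sigmas:                               # equal weight per scale
            msr += (np.log(img) - np.log(gaussian_filter(img, sigma))) / len(sigmas)
        lo, hi = np.percentile(msr, [clip, 100 - clip])    # histogram truncation
        msr = np.clip(msr, lo, hi)
        return ((msr - lo) / (hi - lo) * 255).astype(np.uint8)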
Binary Detection using Multi-Hypothesis Log-Likelihood, Image Processing
2014-03-27
geosynchronous orbit and other scenarios important to the USAF. 2 1.3 Research objectives The question posed in this thesis is how well, if at all, can a...is important to compare them to another modern technique. The third objective is to compare results from another image detection method, specifically...Although adaptive optics is an important technique in moving closer to diffraction limited imaging, it is not currently a practical solution for all
Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.
Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann
2017-04-01
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking the first in classification and the second in segmentation among 25 teams and 28 teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
The Hubble Deep UV Legacy Survey (HDUV): Survey Overview and First Results
NASA Astrophysics Data System (ADS)
Oesch, Pascal; Montes, Mireia; HDUV Survey Team
2015-08-01
Deep HST imaging has shown that the overall star formation density and UV light density at z>3 is dominated by faint, blue galaxies. Remarkably, very little is known about the equivalent galaxy population at lower redshifts. Understanding how these galaxies evolve across the epoch of peak cosmic star-formation is key to a complete picture of galaxy evolution. Here, we present a new HST WFC3/UVIS program, the Hubble Deep UV (HDUV) legacy survey. The HDUV is a 132 orbit program to obtain deep imaging in two filters (F275W and F336W) over the two CANDELS Deep fields. We will cover ~100 arcmin2, reaching down to 27.5-28.0 mag at 5 sigma. By directly sampling the rest-frame far-UV at z>~0.5, this will provide a unique legacy dataset with exquisite HST multi-wavelength imaging as well as ancillary HST grism NIR spectroscopy for a detailed study of faint, star-forming galaxies at z~0.5-2. The HDUV will enable a wealth of research by the community, which includes tracing the evolution of the FUV luminosity function over the peak of the star formation rate density from z~3 down to z~0.5, measuring the physical properties of sub-L* galaxies, and characterizing resolved stellar populations to decipher the build-up of the Hubble sequence from sub-galactic clumps. This poster provides an overview of the HDUV survey and presents the reduced data products and catalogs which will be released to the community.
Deep Radio Imaging with MERLIN of the Supernova Remnants in M82
NASA Astrophysics Data System (ADS)
Muxlow, T. W. B.; Pedlar, A.; Riley, J. D.; McDonald, A. R.; Beswick, R. J.; Wills, K. A.
An 8 day MERLIN deep integration at 5 GHz of the central region of the starburst galaxy M82 has been used to investigate the radio structure of a number of supernova remnants in unprecedented detail revealing new shells and partial shell structures for the first time. In addition, by comparing the new deep 2002 image with an astrometrically aligned image from 36 hours of data taken in 1992, it has been possible to directly measure the expansion velocities of 4 of the most compact remnants in M82. For the two most compact remnants, 41.95+575 and 43.31+592, expansion velocities of 2800 ± 300 km s⁻¹ and 8750 ± 400 km s⁻¹ have been derived. These confirm and refine the measured expansion velocities which have been derived from VLBI multi-epoch studies. For remnants 43.18+583 and 44.01+596, expansion velocities of 10500 ± 750 km s⁻¹ and 2400 ± 250 km s⁻¹ have been measured for the first time. In addition, the peak of the radio emission for SNR 45.17+612 has moved between the two epochs implying velocities around 7500 km s⁻¹. The relatively compact remnants in M82 are thus found to be expanding over a wide range of velocities which appear unrelated to their size. The new 2002 map is the most sensitive high-resolution image yet made of M82, achieving an rms noise level of 17 µJy beam⁻¹. This establishes a first epoch for subsequent deep studies of expansion velocities for many SNR within M82.
Design and evaluation of an ultra-slim objective for in-vivo deep optical biopsy
Landau, Sara M.; Liang, Chen; Kester, Robert T.; Tkaczyk, Tomasz S.; Descour, Michael R.
2010-01-01
An estimated 1.6 million breast biopsies are performed in the US each year. In order to provide real-time, in-vivo imaging with sub-cellular resolution for optical biopsies, we have designed an ultra-slim objective to fit inside the 1-mm-diameter hypodermic needles currently used for breast biopsies to image tissue stained by the fluorescent probe proflavine. To ensure high-quality imaging performance, experimental tests were performed to characterize the fiber bundle's light-coupling efficiency and simulations were performed to evaluate the impact of candidate lens materials' autofluorescence. A prototype of the NA = 0.4, 250-µm-field-of-view, ultra-slim objective optics was built and tested, yielding diffraction-limited performance and an estimated resolution of 0.9 µm. When used in conjunction with a commercial coherent fiber bundle to relay the image formed by the objective, the measured resolution was 2.5 µm. PMID:20389489
Efficient generation of image chips for training deep learning algorithms
NASA Astrophysics Data System (ADS)
Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd
2017-05-01
Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with the simulated images, especially when obtaining sufficient real data was particularly challenging.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in main applications fields in remote sensing, including object identification, classification, change detection and maneuvering targets tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) Improvements of fusion algorithms; (2) Development of "algorithm fusion" methods; (3) Establishment of an automatic quality assessment scheme.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2010-07-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27 × 27) mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6 field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4 × 4) imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5 × 0.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support. Over the past two years the LBC and the first LUCIFER instrument have been brought into routine scientific operation and MODS1 commissioning is set to begin in the fall of 2010.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
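For orientation, a multi-frame Richardson-Lucy style update under the Poisson model conveys the flavor of the joint maximum-likelihood estimation; this sketch assumes the per-frame PSFs are known and normalized to unit sum, whereas the paper additionally selects frames, estimates the PSF, and regularizes:

    import numpy as np
    from scipy.signal import fftconvolve

    def multi_frame_rl(frames, psfs, n_iter=30):
        """Jointly deconvolve several frames of the same object (known PSFs)."""
        est = np.full_like(frames[0], frames[0].mean(), dtype=np.float64)
        for _ in range(n_iter):
            update = np.zeros_like(est)
            for y, h in zip(frames, psfs):
                blurred = fftconvolve(est, h, mode="same") + 1e-12
                update += fftconvolve(y / blurred, h[::-1, ::-1], mode="same")
            est *= update / len(frames)                    # averaged multiplicative ML update
        return est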
Fly Eye radar: detection through high scattered media
NASA Astrophysics Data System (ADS)
Molchanov, Pavlo; Gorwara, Ashok
2017-05-01
Longer radio-frequency waves penetrate highly scattering media better than millimeter waves, but imaging resolution is limited by diffraction at longer wavelengths. At the same time, the frequency and amplitude of diffracted waves (frequency-domain measurements) provide information about the object. The phase shift of diffracted waves (the phase front in the time domain) carries information about the shape of the object and can be used to reconstruct the object shape, or even an image, by recording a multi-frequency digital hologram. The spectral signature of refracted waves allows the object content to be identified. Application of the monopulse method with overlapping, closely spaced antenna patterns provides high-accuracy measurement of amplitude, phase, and direction to the signal source. Digitizing the received signals separately in each antenna relative to processor time provides phase/frequency independence. The Fly Eye non-scanning multi-frequency radar system provides simultaneous, continuous observation of multiple targets and wide possibilities for stepped-frequency, simultaneous-frequency, and chaotic frequency sweeping (CFS) waveforms, as well as polarization modulation, for reliable object detection. The proposed C-band Fly Eye radar demonstrated human detection through a 40 cm concrete brick wall, together with human and wall material spectral signatures, and can be applied to through-wall human detection, detection of landmines and improvised explosive devices, and imaging of underground or camouflaged objects.
PdBI cold dust imaging of two extremely red H – [4.5] > 4 galaxies discovered with SEDS and CANDELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caputi, K. I.; Popping, G.; Spaans, M.
2014-06-20
We report Plateau de Bure Interferometer (PdBI) 1.1 mm continuum imaging toward two extremely red H – [4.5] > 4 (AB) galaxies at z > 3, which we have previously discovered making use of Spitzer SEDS and Hubble Space Telescope CANDELS ultra-deep images of the Ultra Deep Survey field. One of our objects is detected on the PdBI map with a 4.3σ significance, corresponding to S_ν(1.1 mm) = 0.78 ± 0.18 mJy. By combining this detection with the Spitzer 8 and 24 μm photometry for this source, and SCUBA2 flux density upper limits, we infer that this galaxy is a composite active galactic nucleus/star-forming system. The infrared (IR)-derived star formation rate is SFR ≈ 200 ± 100 M_⊙ yr⁻¹, which implies that this galaxy is a higher-redshift analogue of the ordinary ultra-luminous infrared galaxies more commonly found at z ∼ 2-3. In the field of the other target, we find a tentative 3.1σ detection on the PdBI 1.1 mm map, but 3.7 arcsec away from our target position, so it likely corresponds to a different object. In spite of the lower significance, the PdBI detection is supported by a close SCUBA2 3.3σ detection. No counterpart is found on either the deep SEDS or CANDELS maps, so, if real, the PdBI source could be similar in nature to the submillimeter source GN10. We conclude that the analysis of ultra-deep near- and mid-IR images offers an efficient, alternative route to discover new sites of powerful star formation activity at high redshifts.
Davidson, Benjamin; Kalitzeos, Angelos; Carroll, Joseph; Dubra, Alfredo; Ourselin, Sebastien; Michaelides, Michel; Bergeles, Christos
2018-05-21
We present a robust deep learning framework for the automatic localisation of cone photoreceptor cells in Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) split-detection images. Monitoring cone photoreceptors with AOSLO imaging grants an excellent view into retinal structure and health, provides new perspectives into well known pathologies, and allows clinicians to monitor the effectiveness of experimental treatments. The MultiDimensional Recurrent Neural Network (MDRNN) approach developed in this paper is the first method capable of reliably and automatically identifying cones in both healthy retinas and retinas afflicted with Stargardt disease. Therefore, it represents a leap forward in the computational image processing of AOSLO images, and can provide clinical support in on-going longitudinal studies of disease progression and therapy. We validate our method using images from healthy subjects and subjects with the inherited retinal pathology Stargardt disease, which significantly alters image quality and cone density. We conduct a thorough comparison of our method with current state-of-the-art methods, and demonstrate that the proposed approach is both more accurate and appreciably faster in localizing cones. As further validation to the method's robustness, we demonstrate it can be successfully applied to images of retinas with pathologies not present in the training data: achromatopsia, and retinitis pigmentosa.
NASA Astrophysics Data System (ADS)
Marchesini, Danilo
2015-10-01
We propose to construct public multi-wavelength and value-added catalogs for the HST Frontier Fields (HFF), a multi-cycle imaging program of 6 deep fields centered on strong lensing galaxy clusters and 6 deep blank fields. Whereas the main goal of the HFF is to explore the first billion years of galaxy evolution, this dataset has a unique combination of area and depth that will propel forward our knowledge of galaxy evolution down to and including the foreground cluster redshift (z=0.3-0.5). However, such scientific exploitation requires high-quality, homogeneous, multi-wavelength (from the UV to the mid-infrared) photometric catalogs, supplemented by photometric redshifts, rest-frame colors and luminosities, stellar masses, star-formation rates, and structural parameters. We will use our expertise and existing infrastructure - created for the 3D-HST and CANDELS projects - to build such a data product for the 12 fields of the HFF, using all available imaging data (from HST, Spitzer, and ground-based facilities) as well as all available HST grism data (e.g., GLASS). A broad range of research topics will benefit from such a public database, including but not limited to the faint end of the cluster mass function, the field mass function at z>2, and the build-up of the quiescent population at z>4. In addition, our work will provide an essential basis for follow-up studies and future planning with, for example, ALMA and JWST.
Positron emission imaging device and method of using the same
Bingham, Philip R.; Mullens, James Allen
2013-01-15
An imaging system and method of imaging are disclosed. The imaging system can include an external radiation source producing pairs of substantially simultaneous radiation emissions, a picturization emission and a verification emission, at an emission angle. The imaging system can also include a plurality of picturization sensors and at least one verification sensor for detecting the picturization and verification emissions, respectively. The imaging system also includes an object stage arranged such that a picturization emission can pass through an object supported on the object stage before being detected by one of the plurality of picturization sensors. A coincidence system and a reconstruction system can also be included. The coincidence system can receive information from the picturization and verification sensors and determine whether a detected picturization emission is direct radiation or scattered radiation. The reconstruction system can produce a multi-dimensional representation of an object imaged with the imaging system.
2006-10-19
This image shows NASA's Deep Impact spacecraft being built at Ball Aerospace & Technologies Corporation, Boulder, Colo., on July 2, 2005. The impactor S-band antenna is the rectangle-shaped object seen on the top of the impactor.
VizieR Online Data Catalog: AKARI NEP Survey sources at 18um (Pearson+, 2014)
NASA Astrophysics Data System (ADS)
Pearson, C. P.; Serjeant, S.; Oyabu, S.; Matsuhara, H.; Wada, T.; Goto, T.; Takagi, T.; Lee, H. M.; Im, M.; Ohyama, Y.; Kim, S. J.; Murata, K.
2015-04-01
The NEP-Deep survey at 18 μm in the IRC-L18W band is constructed from a total of 87 individual pointed observations taken between May 2006 and August 2007, using the IRC Astronomical Observing Template (AOT) designed for deep observations (IRC05), with approximately 2500 second exposures per IRC filter in all mid-infrared bands. The deep imaging IRC05 AOT has no explicit dithering built into the AOT operation; therefore, dithering is achieved by layering separate pointed observations on at least three positions on a given piece of sky. The NEP-Wide survey consists of 446 pointed observations with ~300 second exposures for each filter. The NEP-Wide survey uses the shallower IRC03 AOT optimized for large area multi-band mapping with the dithering included within the AOT. Note that for both surveys, although images are taken simultaneously in all three IRC channels, the target area of sky in the MIR-L channel is offset from the corresponding area of sky in the NIR/MIR-S channel by ~20 arcmin. (2 data files).
A Robust Deep Model for Improved Classification of AD/MCI Patients
Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang
2015-01-01
Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V.; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R.
2018-01-01
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods. PMID:29619277
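A hedged sketch of the superpixel step: SLIC superpixels are generated and a fixed-size patch around each superpixel centroid is cropped for nucleus/non-nucleus classification by a trained CNN; the patch size and SLIC settings are assumptions, not the paper's values:

    import numpy as np
    from skimage.segmentation import slic
    from skimage.measure import regionprops

    def superpixel_patches(image, n_segments=1200, patch=32):
        """Return patches centred on superpixel centroids (to be scored by a CNN)."""
        segments = slic(image, n_segments=n_segments, compactness=10, start_label=1)
        half = patch // 2
        patches, centroids = [], []
        for region in regionprops(segments):
            r, c = map(int, region.centroid)
            if half <= r < image.shape[0] - half and half <= c < image.shape[1] - half:
                patches.append(image[r - half:r + half, c - half:c + half])
                centroids.append((r, c))
        return np.array(patches), centroids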
NASA Astrophysics Data System (ADS)
Shipley, Heath V.; Lange-Vagle, Daniel; Marchesini, Danilo; Brammer, Gabriel B.; Ferrarese, Laura; Stefanon, Mauro; Kado-Fong, Erin; Whitaker, Katherine E.; Oesch, Pascal A.; Feinstein, Adina D.; Labbé, Ivo; Lundgren, Britt; Martis, Nicholas; Muzzin, Adam; Nedkova, Kalina; Skelton, Rosalind; van der Wel, Arjen
2018-03-01
We present Hubble multi-wavelength photometric catalogs, including (up to) 17 filters with the Advanced Camera for Surveys and Wide Field Camera 3 from the ultra-violet to near-infrared for the Hubble Frontier Fields and associated parallels. We have constructed homogeneous photometric catalogs for all six clusters and their parallels. To further expand these data catalogs, we have added ultra-deep K S -band imaging at 2.2 μm from the Very Large Telescope HAWK-I and Keck-I MOSFIRE instruments. We also add post-cryogenic Spitzer imaging at 3.6 and 4.5 μm with the Infrared Array Camera (IRAC), as well as archival IRAC 5.8 and 8.0 μm imaging when available. We introduce the public release of the multi-wavelength (0.2–8 μm) photometric catalogs, and we describe the unique steps applied for the construction of these catalogs. Particular emphasis is given to the source detection band, the contamination of light from the bright cluster galaxies (bCGs), and intra-cluster light (ICL). In addition to the photometric catalogs, we provide catalogs of photometric redshifts and stellar population properties. Furthermore, this includes all the images used in the construction of the catalogs, including the combined models of bCGs and ICL, the residual images, segmentation maps, and more. These catalogs are a robust data set of the Hubble Frontier Fields and will be an important aid in designing future surveys, as well as planning follow-up programs with current and future observatories to answer key questions remaining about first light, reionization, the assembly of galaxies, and many more topics, most notably by identifying high-redshift sources to target.
Confidence level estimation in multi-target classification problems
NASA Astrophysics Data System (ADS)
Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia
2018-04-01
This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
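As a simplified, hedged stand-in for the learned joint probability mass function, one can maintain a smoothed count table over (true class, predicted class) pairs, update it recursively as labeled outcomes arrive, and compute the expected confidence of the next classification before a new image is taken:

    import numpy as np

    class ConfidenceModel:
        def __init__(self, n_classes):
            self.counts = np.ones((n_classes, n_classes))      # Laplace-smoothed counts

        def update(self, true_class, predicted_class):
            self.counts[true_class, predicted_class] += 1      # recursive update

        def confidence(self, predicted_class):
            """P(true class == predicted class | this prediction)."""
            col = self.counts[:, predicted_class]
            return col[predicted_class] / col.sum()

        def expected_confidence(self, class_prior):
            """Expected confidence of the next classification, before imaging."""
            pred_given_true = self.counts / self.counts.sum(axis=1, keepdims=True)
            joint = class_prior[:, None] * pred_given_true     # P(true=i, pred=j)
            return sum(self.confidence(j) * joint[:, j].sum()
                       for j in range(len(class_prior)))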
Wirth, Wolfgang; Maschek, Susanne; Eckstein, Felix
2016-01-01
Compositional measures of articular cartilage are accessible in vivo by magnetic resonance imaging (MRI) based relaxometry and cartilage spin-spin transverse relaxation time (T2) has been related to tissue hydration, collagen content and orientation, and mechanical (functional) properties of articular cartilage. The objective of the current study was therefore to evaluate subregional variation, and sex- and age-differences, in laminar (deep and superficial) femorotibial cartilage T2 relaxation time in healthy adults. To this end, we studied the right knees of 92 healthy subjects from the Osteoarthritis Initiative reference cohort (55 women, 37 men; age range 45–78 years; BMI 24.4±3.1) without knee pain, radiographic signs, or risk factors of knee osteoarthritis in either knee. T2 of the deep and superficial femorotibial cartilages was determined in 16 femorotibial subregions, using a multi-echo spin-echo (MESE) MRI sequence. Significant subregional variation in femorotibial cartilage T2 was observed for the superficial and for the deep (both p<0.001) cartilage layer (Friedman test). Yet, layer- and region-specific femorotibial T2 did not differ between men and women, or between healthy adults below and above the median age (54y). In conclusion, this first study to report subregional (layer-specific) compositional variation of femorotibial cartilage T2 in healthy adults identifies significant differences in both superficial and deep cartilage T2 between femorotibial subregions. However, no relevant sex- or age-dependence of cartilage T2 was observed between age 45–78y. The findings suggest that a common, non-sex-specific set of layer- and region-specific T2 reference values can be used to identify compositional pathology in joint disease for this age group. PMID:27836800
Multi-level deep supervised networks for retinal vessel segmentation.
Mo, Juan; Zhang, Lei
2017-12-01
Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. A deep supervised fully convolutional network is developed by leveraging multi-level hierarchical features of the deep networks. To improve the discriminative capability of features in lower layers of the deep network and guide the gradient back propagation to overcome gradient vanishing, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, the transferred knowledge learned from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. We evaluate the proposed method on three publicly available databases, the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves better or comparable performance to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. The proposed approach segments retinal vessels accurately with a much faster processing speed and can be easily applied to other biomedical segmentation tasks.
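A minimal PyTorch sketch of the deep supervision idea: auxiliary 1x1-convolution classifiers are attached to intermediate feature maps and their losses are added, with a weight, to the main loss; channel counts and the weight are illustrative assumptions, not the paper's architecture:

    import torch
    import torch.nn as nn

    class DeeplySupervisedFCN(nn.Module):
        def __init__(self):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
            self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
            self.aux1 = nn.Conv2d(32, 1, 1)    # auxiliary vessel-probability map
            self.aux2 = nn.Conv2d(64, 1, 1)
            self.head = nn.Conv2d(64, 1, 1)    # main prediction

        def forward(self, x):
            f1 = self.block1(x)
            f2 = self.block2(f1)
            return self.head(f2), self.aux1(f1), self.aux2(f2)

    def deeply_supervised_loss(outputs, target, aux_weight=0.4):
        bce = nn.BCEWithLogitsLoss()
        main, *aux = outputs
        return bce(main, target) + aux_weight * sum(bce(a, target) for a in aux)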
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based investigations of the degradation characteristics of diffractive imaging and of corresponding image restoration methods remain relatively scarce. In this paper, the image quality degradation model for the diffractive imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solving approach for the equation with coexisting multiple norms and multiple regularization parameters (prior parameters) is presented. Subsequently, a space-variant PSF image restoration method for large aperture diffractive imaging systems is proposed, combined with a blockwise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and preservation of image detail, and produces satisfactory visual quality. This can provide a scientific basis for applications and shows potential for future space applications of diffractive membrane imaging technology.
International Deep Planet Survey, 317 stars to determine the wide-separated planet frequency
NASA Astrophysics Data System (ADS)
Galicher, R.; Marois, C.; Macintosh, B.; Zuckerman, B.; Song, I.; Barman, T.; Patience, J.
2013-09-01
Since 2000, more than 300 nearby young stars have been observed for the International Deep Planet Survey with adaptive optics systems at Gemini (NIRI/NICI), Keck (NIRC2), and VLT (NaCo). Massive young AF stars were included in our sample although they have generally been neglected in first generation surveys because the contrast and target distances are less favorable for imaging substellar companions. The most significant discovery of the campaign is the now well-known HR 8799 multi-planet system. This remarkable finding allows, for the first time, an estimate of the Jovian planet population at large separations (beyond a few AU) instead of deriving upper limits. During my presentation, I will present the survey, showing images of multiple stars and planets. I will then propose a statistical study of the observed stars, deriving constraints on the Jupiter-like planet frequency at large separations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favazza, C; Yu, L; Leng, S
2015-06-15
Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model to reduce the number of repeated scans for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ∼5 cm). The rods were submerged in a 35 × 25 cm² iodine-doped water filled phantom, yielding −15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO's detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed to 100 images. Results: Artificially shifting the object's center position by as much as 3 pixels in any direction relative to the Gabor channel filters had insignificant impact on object detectability. An inter-slice pixel correlation of >∼0.2 yielded positive bias in the model's performance. Incorporating multi-slice image data yielded slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions. This project was supported by National Institutes of Health grants R01 EB017095 and U01 EB017185 from the National Institute of Biomedical Imaging and Bioengineering.
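For reference, a hedged sketch of the detectability index computed by a channelized Hotelling observer from signal-present and signal-absent images; the Gabor channel bank is assumed to be built elsewhere and is not the study's exact configuration:

    import numpy as np

    def detectability_index(present, absent, channels):
        """present/absent: (n_images, n_pixels); channels: (n_channels, n_pixels)."""
        v_p = present @ channels.T                       # channelized image data
        v_a = absent @ channels.T
        s = v_p.mean(axis=0) - v_a.mean(axis=0)          # mean channel signal
        cov = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
        w = np.linalg.solve(cov, s)                      # Hotelling template
        return float(np.sqrt(s @ w))                     # d' = sqrt(s^T K^-1 s)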
Exploring Asteroid Interiors: The Deep Interior Mission Concept
NASA Technical Reports Server (NTRS)
Asphaug, E.; Belton, M. J. S.; Cangahuala, A.; Keith, L.; Klaasen, K.; McFadden, L.; Neumann, G.; Ostro, S. J.; Reinert, R.; Safaeinili, A.
2003-01-01
Deep Interior is a mission to determine the geophysical properties of near-Earth objects, including the first volumetric image of the interior of an asteroid. Radio reflection tomography will image the 3D distribution of complex dielectric properties within the 1 km rendezvous target and hence map structural, density or compositional variations. Laser altimetry and visible imaging will provide high-resolution surface topography. Smart surface pods culminating in blast experiments, imaged by the high frame rate camera and scanned by lidar, will characterize active mechanical behavior and structure of surface materials, expose unweathered surface for NIR analysis, and may enable some characterization of bulk seismic response. Multiple flybys en route to this target will characterize a diversity of asteroids, probing their interiors with non-tomographic radar reflectance experiments. Deep Interior is a natural follow-up to the NEAR Shoemaker mission and will provide essential guidance for future in situ asteroid and comet exploration. While our goal is to learn the interior geology of small bodies and how their surfaces behave, the resulting science will enable pragmatic technologies required for hazard mitigation and resource utilization.
NASA Astrophysics Data System (ADS)
Jang, Sun-Joo; Park, Taejin; Shin, Inho; Park, Hyun Sang; Shin, Paul; Oh, Wang-Yuhl
2016-02-01
Optical coherence tomography (OCT) is a useful imaging method for in vivo tissue imaging with deep penetration and high spatial resolution. However, imaging of the beating mouse heart is still challenging due to limited temporal resolution or penetration depth. Here, we demonstrate a multifunctional OCT system for a beating mouse heart, providing various types of visual information about heart pathophysiology with high spatiotemporal resolution and deep tissue imaging. Angiographic imaging and polarization-sensitive (PS) imaging were implemented with the electrocardiogram (ECG)-triggered beam scanning scheme on the high-speed OCT platform (A-line rate: 240 kHz). Depth-resolved local birefringence and the local orientation of the mouse myocardial fiber were visualized from the PS-OCT. ECG-triggered angiographic OCT (AOCT) with the custom-built motion stabilization imaging window provided myocardial vasculature of a beating mouse heart. Mice underwent coronary artery ligation to derive myocardial infarction (MI) and were imaged with the multifunctional OCT system at multiple time points. AOCT and PS-OCT visualize change of functionality of coronary vessels and myocardium respectively at different phases (acute and chronic) of MI in an ischemic mouse heart. Taken together, the integrated imaging of PS-OCT and AOCT would play an important role in study of MI providing multi-dimensional information of the ischemic mouse heart in vivo.
Steinman, Joe; Koletar, Margaret M.; Stefanovic, Bojana; Sled, John G.
2017-01-01
Ex vivo 2-photon fluorescence microscopy (2PFM) with optical clearing enables vascular imaging deep into tissue. However, optical clearing may also produce spherical aberrations if the objective lens is not index-matched to the clearing material, while the perfusion, clearing, and fixation procedure may alter vascular morphology. We compared in vivo and ex vivo 2PFM in mice, focusing on apparent differences in microvascular signal and morphology. Following in vivo imaging, the mice (four total) were perfused with a fluorescent gel and their brains fructose-cleared. The brain regions imaged in vivo were imaged ex vivo. Vessels were segmented in both images using an automated tracing algorithm that accounts for the spatially varying PSF in the ex vivo images. This spatial variance is induced by spherical aberrations caused by imaging fructose-cleared tissue with a water-immersion objective. Alignment of the ex vivo image to the in vivo image through a non-linear warping algorithm enabled comparison of apparent vessel diameter, as well as differences in signal. Shrinkage varied as a function of diameter, with capillaries rendered smaller ex vivo by 13%, while penetrating vessels shrunk by 34%. The pial vasculature attenuated in vivo microvascular signal by 40% 300 μm below the tissue surface, but this effect was absent ex vivo. On the whole, ex vivo imaging was found to be valuable for studying deep cortical vasculature. PMID:29053753
Optimal Dictionaries for Sparse Solutions of Multi-frame Blind Deconvolution
2014-09-01
object is the Hubble Space Telescope (HST). As stated above, the dictionary training used the first 100 of the total of the simulated PSFs. The second set...diffraction-limited Hubble image and HubbleRE is the reconstructed image from the 100 simulated atmospheric turbulence degraded images of the HST
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei; Pan, Bin
2018-07-01
Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is in fact an ℓ0 problem, which is NP-hard, so a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective based method, which is a non-convex approach. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectra and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is therefore challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization item. This regularization item is able to enforce individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experiments, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.
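A hedged sketch of the core moves of such a search: a binary membership vector over the spectral library is mutated by bit flips, and each candidate is scored on two objectives, reconstruction error (via non-negative least squares on the selected endmembers) and the ℓ0 count; SMoSU's Tchebycheff decomposition and spectral-correlation regularizer are not reproduced here:

    import numpy as np
    from scipy.optimize import nnls

    def objectives(selection, library, pixel):
        """Return (reconstruction error, number of active endmembers)."""
        idx = np.flatnonzero(selection)
        if idx.size == 0:
            return np.linalg.norm(pixel), 0
        abundances, residual = nnls(library[:, idx], pixel)
        return residual, idx.size

    def bit_flip(selection, n_flips=1, rng=np.random):
        """Generate a new individual by flipping library-membership bits."""
        child = selection.copy()
        flips = rng.choice(selection.size, size=n_flips, replace=False)
        child[flips] ^= True
        return child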
Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing
NASA Astrophysics Data System (ADS)
Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.
2009-05-01
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.
Handheld microwave bomb-detecting imaging system
NASA Astrophysics Data System (ADS)
Gorwara, Ashok; Molchanov, Pavlo
2017-05-01
The proposed novel imaging technique will provide all-weather, high-resolution imaging and recognition capability for RF/microwave signals, with good penetration through highly scattering media: fog, snow, dust, smoke, even foliage, camouflage, walls, and ground. Image resolution in the proposed imaging system is not limited by diffraction and is instead determined by the processor and the sampling frequency. The proposed imaging system can simultaneously cover a wide field of view and detect multiple targets, and it can be multi-frequency and multi-function. Directional antennas in the imaging system can be closely positioned and installed in a cell-phone-sized handheld device, on a small aircraft, or distributed around a protected border or object. A non-scanning monopulse system allows a dramatic decrease in transmitted power and at the same time provides increased imaging range by integrating 2-3 orders of magnitude more signals than regular scanning imaging systems.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
The research of multi-frame target recognition based on laser active imaging
NASA Astrophysics Data System (ADS)
Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan
2013-09-01
Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and poor visibility. It can also be used to detect a faint target at long range or a small target in deep space, with the advantages of high definition and good contrast; in short, it is largely immune to the environment. However, because of the long distance, limited laser energy, and atmospheric backscatter, it is impossible to illuminate the whole scene at the same time, which means that the target in every single frame is unevenly or only partly illuminated, making recognition more difficult. At the same time, the speckle noise that is common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. Firstly, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining homomorphic filtering with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, sharp sub-images. These sub-images are then registered and stitched to form a completely and uniformly illuminated scene image. After that, a new target recognition method based on contour moments is proposed: the Canny operator is used to obtain contours, seven invariant Hu moments are calculated for each contour to generate feature vectors, and finally the feature vectors are input into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
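A hedged sketch of the recognition stage, assuming 8-bit grayscale input and OpenCV 4: Canny edges, per-contour Hu moments as 7-dimensional feature vectors, and a two-hidden-layer perceptron (scikit-learn's MLPClassifier standing in for the BP network):

    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def hu_features(gray_image, low=50, high=150):
        """Seven invariant Hu moments for each contour found in the Canny edge map."""
        edges = cv2.Canny(gray_image, low, high)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        feats = [cv2.HuMoments(cv2.moments(c)).ravel() for c in contours if len(c) >= 5]
        return np.array(feats)

    # With precomputed feature vectors X (n_samples, 7) and labels y:
    # clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000).fit(X, y)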
a Region-Based Multi-Scale Approach for Object-Based Image Analysis
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.
2016-06-01
Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important to increase the classification accuracy, which depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
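A hedged sketch of an LV-RoC style analysis: mean within-segment standard deviation (local variance) is computed across candidate scales and its rate of change located; scikit-image's felzenszwalb segmentation is used here only as a stand-in for the multiresolution segmentation behind ESP-2:

    import numpy as np
    from skimage.segmentation import felzenszwalb

    def lv_roc_curve(image, scales):
        """Return local variance (LV) and its rate of change (RoC) over candidate scales."""
        gray = image.mean(axis=2) if image.ndim == 3 else image
        lv = []
        for scale in scales:
            labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
            lv.append(np.mean([gray[labels == l].std() for l in np.unique(labels)]))
        lv = np.array(lv)
        roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]    # peaks suggest candidate scale parameters
        return lv, roc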
Focal ratio degradation in lightly fused hexabundles
NASA Astrophysics Data System (ADS)
Bryant, J. J.; Bland-Hawthorn, J.; Fogarty, L. M. R.; Lawrence, J. S.; Croom, S. M.
2014-02-01
We are now moving into an era where multi-object wide-field surveys, which traditionally use single fibres to observe many targets simultaneously, can exploit compact integral field units (IFUs) in place of single fibres. Current multi-object integral field instruments such as the Sydney-AAO Multi-object Integral field spectrograph have driven the development of new imaging fibre bundles (hexabundles) for multi-object spectrographs. We have characterized the performance of hexabundles with different cladding thicknesses and compared it to that of the same type of bare fibre, across the range of fill fractions and input f-ratios likely in an IFU instrument. Hexabundles with 7 and 61 cores were tested for focal ratio degradation (FRD), throughput and cross-talk when fed with inputs from F/3.4 to >F/8. The five 7-core bundles have cladding thicknesses ranging from 1 to 8 μm, and the 61-core bundles have 5 μm cladding. As expected, the FRD improves as the input focal ratio decreases. We find that the FRD and throughput of the cores in the hexabundles match the performance of single fibres of the same material at low input f-ratios. The performance results presented can be used to set a limit on the f-ratio of a system based on the maximum loss allowable for a planned instrument. Our results confirm that hexabundles are a successful alternative for fibre imaging devices for multi-object spectroscopy on wide-field telescopes and have prompted further development of hexabundle designs with hexagonal packing and square cores.
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) have shown breakthrough performance in computer vision problems such as object detection and classification. Direct application of RGB-CNN to the IR ATR problem fails because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem, IR variations, is mitigated by the proposed shifted ramp function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. Experimental results on synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
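The shifted ramp intensity transformation is only named in the abstract, so the sketch below assumes a simple piecewise-linear form: values below a shift point are suppressed toward zero and values above it are stretched to full range. The shift value is a placeholder, not the paper's parameter.

```python
# Hedged sketch of a shifted ramp intensity transformation: background below the
# shift point maps to zero, intensities above it are linearly stretched. The shift
# and output range are assumptions for illustration only.
import numpy as np

def shifted_ramp(image, shift=0.4, out_max=1.0):
    """Apply a ramp that starts at `shift` (inputs assumed scaled to [0, 1])."""
    img = np.clip(image.astype(float), 0.0, 1.0)
    ramp = (img - shift) / (1.0 - shift)          # linear ramp above the shift point
    return np.clip(ramp, 0.0, None) * out_max      # values below the shift map to 0

# Example with a synthetic IR frame:
frame = np.random.rand(128, 128)
enhanced = shifted_ramp(frame, shift=0.4)
```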
Lu, Donghuan; Popuri, Karteek; Ding, Gavin Weiguang; Balachandar, Rakesh; Beg, Mirza Faisal
2018-04-09
Alzheimer's Disease (AD) is a progressive neurodegenerative disease where biomarkers based on pathophysiology may be able to provide objective measures for disease diagnosis and staging. Neuroimaging scans acquired with MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple different image modalities providing complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD utilizing a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying individuals with mild cognitive impairment (MCI) who will convert to AD 3 years prior to conversion (86.4% combined accuracy for conversion within 1-3 years), a 94.23% sensitivity in classifying individuals with a clinical diagnosis of probable AD, and an 86.3% specificity in classifying non-demented controls, improving upon results in the published literature.
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
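One plausible way to fit the corresponding David and Kinect objects, as a stand-in for the ICP implementation mentioned above, is a basic point-to-point ICP with a Kabsch least-squares alignment, sketched below; the point-cloud variables are placeholders, not the authors' data.

```python
# A minimal point-to-point ICP sketch; it is not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iters=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)             # nearest target point per source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Usage (arrays are placeholders for the David and Kinect point clouds):
# aligned = icp(david_points, kinect_points)
```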
GIS-based technology for marine geohazards in LW3-1 Gas Field of the South China Sea
NASA Astrophysics Data System (ADS)
Su, Tianyun; Liu, Lejun; Li, Xishuang; Hu, Guanghai; Liu, Haixing; Zhou, Lin
2013-04-01
The exploration and exploitation of deep-water oil and gas are prone to high-risk geohazards such as submarine landslides, soft clay creep, shallow gas, excess pore-water pressure, mud volcanoes or mud diapirs, salt domes and so on. It is therefore necessary to survey the seafloor topography, identify the unfavourable geological risks and investigate their environment and mechanisms before exploiting deep-water oil and gas. Because of the complex environment, submarine phenomena and features such as marine geohazards cannot be recognized directly; multi-disciplinary data are acquired and analysed comprehensively in order to gain a clearer understanding of the submarine processes. The data include multi-beam bathymetry data, sidescan sonar images, seismic data, shallow sub-bottom profiling images, boring data, etc. Such data sets now grow rapidly to large volumes, but they may be heterogeneous and have different resolutions, and it is difficult to manage and utilize them well with traditional means. GIS technology provides efficient and powerful tools and services for spatial data management, processing, analysis and visualization, and thereby promotes submarine scientific research and engineering development. The Liwan 3-1 Gas Field, the first deep-water gas field in China, is located in the Zhu II Depression of the Zhujiang Basin along the continental slope of the northern South China Sea. The exploitation of this field is designed to establish a subsea wellhead and to use a submarine pipeline for the transportation of oil. The deep-water section of the pipeline route in the gas field is to pass through the northern continental slope of the South China Sea. To avoid huge economic losses and ecological damage, it is necessary to evaluate the geohazards for the establishment and safe operation of the pipeline. Based on previous scientific research results, several survey cruises have been carried out with ships and AUVs to collect multidisciplinary and massive submarine data such as multi-beam bathymetric data, sidescan sonar images, shallow sub-bottom profiling images, high-resolution multi-channel seismic data and boring test data. In order to make good use of these precious data, GIS technology is used in our research. A data model is designed to describe the structure, organization and relationships of the multidisciplinary submarine data. With these data models, a database is established to manage and share the attribute and spatial data effectively. Spatial datasets, such as contours, TIN models and DEM models, can be generated, and submarine characteristics such as slope, aspect, curvature and landslide volume can be calculated and extracted with spatial analysis tools. Thematic maps can be produced easily from the database and the generated spatial datasets. Through thematic maps, the spatial relationships among the multidisciplinary data can be established easily, providing helpful information for regional submarine geohazard identification, assessment and prediction. The thematic maps produced for the LW3-1 Gas Field reveal that the strike of the seafloor topography is NE to SW. Five geomorphological zones have been delineated: the outer continental shelf margin zone with sand waves and mega-ripples, the continental slope zone with coral reefs and sand waves, the continental slope zone with a monocline shape, the continental slope zone with fault terraces, and the continental slope zone with turbidity current deposits.
A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery over Urban Areas
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
In this paper a new object-based framework to detect shadow areas in high-resolution satellite images is proposed. To produce a pixel-level shadow map, state-of-the-art supervised machine learning algorithms are employed. Automatic ground-truth generation based on Otsu thresholding of shadow and non-shadow indices is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection results is applied within each object. A GeoEye-1 multi-spectral image over an urban area in the city of Qom, Iran, is used in the experiments. The results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
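The two-stage idea of an Otsu-thresholded pixel map followed by per-object majority voting can be sketched as below. The shadow index and the segment map are assumed inputs, and the convention that darker index values indicate shadow is an assumption for illustration, not the paper's exact feature set.

```python
# Hedged sketch: Otsu threshold on a shadow index gives a pixel-level mask, then
# each image object takes the majority label of its pixels. Inputs are placeholders.
import numpy as np
from skimage.filters import threshold_otsu

def object_level_shadow_map(shadow_index, segments):
    """shadow_index: 2-D float array; segments: 2-D int array of object labels."""
    pixel_mask = shadow_index < threshold_otsu(shadow_index)   # darker = shadow (assumed)
    object_mask = np.zeros_like(pixel_mask)
    for label in np.unique(segments):
        region = segments == label
        # Majority vote of the pixel-level decision inside each object.
        object_mask[region] = pixel_mask[region].mean() > 0.5
    return object_mask
```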
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
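The first stage of the framework, k-means clustering of the spectral bands, can be illustrated with a short sketch; the band count and number of clusters are arbitrary choices, and the subsequent fuzzy road-cluster identification is not reproduced.

```python
# Minimal sketch of spectral k-means segmentation of a multi-spectral image; the
# road cluster would then be picked out by a fuzzy classifier (not shown here).
import numpy as np
from sklearn.cluster import KMeans

def spectral_kmeans(image, k=8, seed=0):
    """image: (rows, cols, bands) multi-spectral array -> (rows, cols) cluster map."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(rows, cols)

# Example on a synthetic 4-band image:
img = np.random.rand(64, 64, 4)
clusters = spectral_kmeans(img, k=8)
```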
3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand
NASA Astrophysics Data System (ADS)
Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.
2015-08-01
In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter; hence it is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the pipeline is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC open-source software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs and printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored after deterioration over time, natural disasters, etc.
New MR imaging assessment tool to define brain abnormalities in very preterm infants at term.
Kidokoro, H; Neil, J J; Inder, T E
2013-01-01
WM injury is the dominant form of injury in preterm infants. However, other cerebral structures, including the deep gray matter and the cerebellum, can also be affected by injury and/or impaired growth. Current MR imaging injury assessment scales are subjective and are challenging to apply. Thus, we developed a new assessment tool and applied it to MR imaging studies obtained from very preterm infants at term age. MR imaging scans from 97 very preterm infants (< 30 weeks' gestation) and 22 healthy term-born infants were evaluated retrospectively. The severity of brain injury (defined by signal abnormalities) and impaired brain growth (defined with biometrics) was scored in the WM, cortical gray matter, deep gray matter, and cerebellum. Perinatal variables for clinical risks were collected. In very preterm infants, brain injury was observed in the WM (n=23), deep GM (n=5), and cerebellum (n=23). Combining measures of injury and impaired growth showed moderate to severe abnormalities most commonly in the WM (n=38) and cerebellum (n=32) but still notable in the cortical gray matter (n=16) and deep gray matter (n=11). WM signal abnormalities were associated with a reduced deep gray matter area but not with cerebellar abnormality. Intraventricular and/or parenchymal hemorrhage was associated with cerebellar signal abnormality and volume reduction. Multiple clinical risk factors, including prolonged intubation, prolonged parenteral nutrition, postnatal corticosteroid use, and postnatal sepsis, were associated with increased global abnormality on MR imaging. Very preterm infants demonstrate a high prevalence of injury and growth impairment in both the WM and gray matter. This MR imaging scoring system provides a more comprehensive and objective classification of the nature and extent of abnormalities than existing measures.
Selections from 2017: Hubble Survey Explores Distant Galaxies
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. CANDELS Multi-Wavelength Catalogs: Source Identification and Photometry in the CANDELS COSMOS Survey Field. Published January 2017. Main takeaway: A publication led by Hooshang Nayyeri (UC Irvine and UC Riverside) early this year details a catalog of sources built using the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), a survey carried out by cameras on board the Hubble Space Telescope. The catalog lists the properties of 38,000 distant galaxies visible within the COSMOS field, a two-square-degree equatorial field explored in depth to answer cosmological questions. Why it's interesting: Illustration showing the three-dimensional map of the dark matter distribution in the COSMOS field. [Adapted from NASA/ESA/R. Massey (California Institute of Technology)] The depth and resolution of the CANDELS observations are useful for addressing several major science goals, including the following: studying the most distant objects in the universe at the epoch of reionization in the cosmic dawn; understanding galaxy formation and evolution during the peak epoch of star formation in the cosmic high noon; and studying star formation from deep ultraviolet observations and cosmology from supernova observations. Why CANDELS is a major endeavor: CANDELS is the largest multi-cycle treasury program ever approved on the Hubble Space Telescope, using over 900 orbits between 2010 and 2013 with two cameras on board the spacecraft to study galaxy formation and evolution throughout cosmic time. The CANDELS images are all publicly available, and the new catalog represents an enormous source of information about distant objects in our universe. Citation: H. Nayyeri et al 2017 ApJS 228 7. doi:10.3847/1538-4365/228/1/7
A Balanced Comparison of Object Invariances in Monkey IT Neurons.
Ratan Murty, N Apurva; Arun, Sripati P
2017-01-01
Our ability to recognize objects across variations in size, position, or rotation is based on invariant object representations in higher visual cortex. However, we know little about how these invariances are related. Are some invariances harder than others? Do some invariances arise faster than others? These comparisons can be made only upon equating image changes across transformations. Here, we targeted invariant neural representations in the monkey inferotemporal (IT) cortex using object images with balanced changes in size, position, and rotation. Across the recorded population, IT neurons generalized across size and position both more strongly and more quickly than across rotations in the image plane as well as in depth. We obtained a similar ordering of invariances in deep neural networks but not in low-level visual representations. Thus, invariant neural representations dynamically evolve in a temporal order reflective of their underlying computational complexity.
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
Report Of The HST Strategy Panel: A Strategy For Recovery
1991-01-01
orbit change out: the Wide Field/Planetary Camera II (WFPC II), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Space ...are the Space Telescope Imaging Spectrograph (STIS), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), and the second Wide Field and...expected to fail to lock due to duplicity was 20%; on-orbit data indicates that 10% may be a better estimate, but the guide stars were preselected
Phylogenetic convolutional neural networks in metagenomics.
Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare
2018-03-08
Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case for pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided into 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbours of each sample, thus mimicking the case of image data, transparently to the user.
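A rough sketch of the embedding step follows: a precomputed patristic distance matrix is mapped into Euclidean space with metric MDS, and each feature's nearest phylogenetic neighbours are ranked, mimicking the neighbourhood list passed to the convolutional layer. The sparsified MDS and the custom Keras layer of Ph-CNN are not reproduced here, and the distance matrix is synthetic.

```python
# Hedged sketch of embedding patristic distances and ranking neighbours per feature.
import numpy as np
from sklearn.manifold import MDS

def phylo_neighbourhoods(patristic, n_components=2, k=4, seed=0):
    """patristic: (n, n) distance matrix -> (n, k) indices of each feature's neighbours."""
    coords = MDS(n_components=n_components, dissimilarity="precomputed",
                 random_state=seed).fit_transform(patristic)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]   # skip self (distance 0)

# Toy example with a random symmetric distance matrix:
rng = np.random.default_rng(0)
D = rng.random((6, 6)); D = (D + D.T) / 2; np.fill_diagonal(D, 0.0)
print(phylo_neighbourhoods(D, k=3))
```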
Shah, Sheel; Lubeck, Eric; Schwarzkopf, Maayan; He, Ting-Fang; Greenbaum, Alon; Sohn, Chang Ho; Lignell, Antti; Choi, Harry M T; Gradinaru, Viviana; Pierce, Niles A; Cai, Long
2016-08-01
Accurate and robust detection of mRNA molecules in thick tissue samples can reveal gene expression patterns in single cells within their native environment. Preserving spatial relationships while accessing the transcriptome of selected cells is a crucial feature for advancing many biological areas - from developmental biology to neuroscience. However, because of the high autofluorescence background of many tissue samples, it is difficult to detect single-molecule fluorescence in situ hybridization (smFISH) signals robustly in opaque thick samples. Here, we draw on principles from the emerging discipline of dynamic nucleic acid nanotechnology to develop a robust method for multi-color, multi-RNA imaging in deep tissues using single-molecule hybridization chain reaction (smHCR). Using this approach, single transcripts can be imaged using epifluorescence, confocal or selective plane illumination microscopy (SPIM) depending on the imaging depth required. We show that smHCR has high sensitivity in detecting mRNAs in cell culture and whole-mount zebrafish embryos, and that combined with SPIM and PACT (passive CLARITY technique) tissue hydrogel embedding and clearing, smHCR can detect single mRNAs deep within thick (0.5 mm) brain slices. By simultaneously achieving ∼20-fold signal amplification and diffraction-limited spatial resolution, smHCR offers a robust and versatile approach for detecting single mRNAs in situ, including in thick tissues where high background undermines the performance of unamplified smFISH. © 2016. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Matsui, Toshihisa; Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Satoh, Masaki; Hashino, Tempei; Kubota, Takuji
2016-01-01
A 14-year climatology of Tropical Rainfall Measuring Mission (TRMM) collocated multi-sensor signal statistics reveals a distinct land-ocean contrast as well as geographical variability of precipitation type, intensity, and microphysics. Microphysics information inferred from the TRMM precipitation radar and Microwave Imager (TMI) shows a large land-ocean contrast for the deep category, suggesting continental convective vigor. Over land, TRMM shows higher echo-top heights and larger maximum echoes, suggesting taller storms and more intense precipitation, as well as larger microwave scattering, suggesting the presence of more/larger frozen convective hydrometeors. This strong land-ocean contrast in deep convection is invariant over seasonal and multi-year time-scales. Consequently, relatively short-term simulations from two global storm-resolving models can be evaluated in terms of their land-ocean statistics using the TRMM Triple-sensor Three-step Evaluation via a satellite simulator. The models evaluated are the NASA Multi-scale Modeling Framework (MMF) and the Non-hydrostatic Icosahedral Cloud Atmospheric Model (NICAM). While both simulations can represent convective land-ocean contrasts in warm precipitation to some extent, near-surface conditions over land are relatively moister in NICAM than in the MMF, which appears to be the key driver in the divergent warm precipitation results between the two models. Both the MMF and NICAM produced similar frequencies of large CAPE between land and ocean. The dry MMF boundary layer enhanced microwave scattering signals over land, but only NICAM had an enhanced deep convection frequency over land. Neither model could reproduce a realistic land-ocean contrast in deep convective precipitation microphysics. A realistic contrast between land and ocean remains an issue in global storm-resolving modeling.
VizieR Online Data Catalog: Photometry of LBGs, LAEs and GNBs at z~2.85 (Mostardi+, 2013)
NASA Astrophysics Data System (ADS)
Mostardi, R. E.; Shapley, A. E.; Nestor, D. B.; Steidel, C. C.; Reddy, N. A.; Trainor, R. F.
2017-11-01
We performed multi-object spectroscopy in 2011 May on the Keck 1 telescope, using the blue side of LRIS. We observed four slitmasks with exposure times of 16560, 9000, 8400, and 8100 s, respectively. For all masks, we used the 400 line/mm grism blazed at 3400 Å, achieving a spectral resolution of R=800 for 1.2" slits. The "d500" dichroic beam splitter was used for the first mask (originally designed for deep LyC spectroscopy) and the "d560" dichroic was used for the three additional masks (designed to acquire redshifts). The conditions during the observing run were suboptimal, with intermittent clouds and a seeing FWHM of 0.7"-1.0" during clear spells. When designing the slitmasks, we targeted both LBGs and LAEs with NB3420 detections. Slits were centered on the coordinates of the V (NB4670) centroid for LBGs (LAEs). While most LAEs were selected using the V-band image as the continuum band, a small fraction (20%) were selected using the G-band (henceforth referred to as GNBs). Overall, we observed 46 objects on the four slitmasks, 29 of which had repeat observations. (4 data files).
Chen, Zhixing; Wei, Lu; Zhu, Xinxin; Min, Wei
2012-08-13
It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing a spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focal signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the created disparity of bright-dark quantum states in space. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.
Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang
2017-11-16
In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues showed similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images acquired without eye staining were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) a combined classification using CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and the length-scale parameter were able to classify with up to 100% accuracy, particularly with CNNs and CNNs-SVM, when employing 80% of the data for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.
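Approach (i), histogram features classified with an RBF-kernel SVM on an 80/20 split, can be sketched as below; the histogram binning, the synthetic data and the SVM hyper-parameters are placeholders rather than the study's settings.

```python
# Hedged sketch of histogram-feature classification with an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def histogram_features(images, bins=64):
    """images: list of 2-D arrays -> (n, bins) normalised intensity histograms."""
    feats = [np.histogram(im, bins=bins, range=(0.0, 1.0), density=True)[0]
             for im in images]
    return np.vstack(feats)

# Synthetic stand-in for the 25 images and their normal/abnormal labels:
rng = np.random.default_rng(0)
images = [rng.random((32, 32)) for _ in range(25)]
labels = rng.integers(0, 2, size=25)
X = histogram_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```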
Development of a fast multi-line x-ray CT detector for NDT
NASA Astrophysics Data System (ADS)
Hofmann, T.; Nachtrab, F.; Schlechter, T.; Neubauer, H.; Mühlbauer, J.; Schröpfer, S.; Ernst, J.; Firsching, M.; Schweiger, T.; Oberst, M.; Meyer, A.; Uhlmann, N.
2015-04-01
Typical X-ray detectors for non-destructive testing (NDT) are line detectors or area detectors, e.g. flat panel detectors. Multi-line detectors are currently only available in medical Computed Tomography (CT) scanners. Compared to flat panel detectors, line and multi-line detectors can achieve much higher frame rates. This allows time-resolved 3D CT scans of an object under investigation. Also, an improved image quality can be achieved due to reduced scattered radiation from the object and the detector itself. Another benefit of line and multi-line detectors is that very wide detectors can be assembled easily, while flat panel detectors are usually limited to an imaging field with a size of approx. 40 × 40 cm2 at maximum. The big disadvantage of line detectors is the limited number of object slices that can be scanned simultaneously, which leads to long scan times for large objects. Volume scans with a multi-line detector are much faster, with almost the same image quality. Due to the promising properties of multi-line detectors, their application outside of medical CT would also be very interesting for NDT. However, medical CT multi-line detectors are optimized for the scanning of human bodies, and many non-medical applications require higher spatial resolutions and/or higher X-ray energies. For those non-medical applications we are developing a fast multi-line X-ray detector. In the scope of this work, we present the current state of the development of the novel detector, which includes several outstanding properties such as an adjustable curved design for variable focus-detector distances, preserving nearly uniform perpendicular irradiation over the entire detector width. The basis of the detector is a specifically designed, radiation-hard CMOS imaging sensor with a pixel pitch of 200 μm. Each pixel has an automatic in-pixel gain adjustment, which allows for both very high sensitivity and a wide dynamic range. The final detector is planned to have 256 lines of pixels. By using a modular assembly of the detector, the width can be chosen as multiples of 512 pixels. With a frame rate of up to 300 frames/s (full resolution) or 1200 frames/s (analog binning to 400 μm pixel pitch), time-resolved 3D CT applications become possible. Two versions of the detector are in development, one with a high-resolution scintillator and one with a thick, structured and very efficient scintillator (pitch 400 μm). This way the detector can even work with X-ray energies up to 450 kVp.
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields such as medicine, communications, satellite and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that permits the quality of underwater video sequences to be enhanced without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
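The two objective metrics used above are readily computed with scikit-image, as in this small sketch; the reference and reconstructed frames are synthetic stand-ins for an underwater sequence.

```python
# Sketch of PSNR and SSIM computation on a reference/reconstructed frame pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(240, 320)                 # stand-in for the reference frame
reconstructed = np.clip(reference + 0.02 * np.random.randn(240, 320), 0.0, 1.0)

psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```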
Buried object remote detection technology for law enforcement
NASA Astrophysics Data System (ADS)
del Grande, Nancy K.; Clark, Gregory A.; Durbin, Philip F.; Fields, David J.; Hernandez, Jose E.; Sherwood, Robert J.
1991-08-01
A precise airborne temperature-sensing technology to detect buried objects for use by law enforcement is developed. Demonstrations have imaged the sites of buried foundations, walls and trenches; mapped underground waterways and aquifers; and been used to locate underground military objects. The methodology is incorporated in a commercially available, high signal-to-noise, dual-band infrared scanner with real-time, 12-bit digital image processing software and display. The method creates color-coded images based on surface temperature variations of 0.2 °C. Unlike other less-sensitive methods, it maps true (corrected) temperatures by removing the (decoupled) surface emissivity mask equivalent to 1 °C or 2 °C; this mask hinders interpretation of apparent (blackbody) temperatures. Once removed, it is possible to identify surface temperature patterns from small diffusivity changes at buried object sites which heat and cool differently from their surroundings. Objects made of different materials and buried at different depths are identified by their unique spectral, spatial, thermal, temporal, emissivity and diffusivity signatures. The authors have successfully located the sites of buried (inert) simulated land mines 0.1 to 0.2 m deep; sod-covered rock pathways alongside dry ditches, deeper than 0.2 m; pavement covered burial trenches and cemetery structures as deep as 0.8 m; and aquifers more than 6 m and less than 60 m deep. The technology could be adapted for drug interdiction and pollution control. For the former, buried tunnels, underground structures built beneath typical surface structures, roof-tops disguised by jungle canopies, and covered containers used for contraband would be located. For the latter, buried waste containers, sludge migration pathways from faulty containers, and the juxtaposition of groundwater channels, if present, nearby, would be depicted. The precise airborne temperature-sensing technology has a promising potential to detect underground epicenters of smuggling and pollution.
Teamwork Reasoning and Multi-Satellite Missions
NASA Technical Reports Server (NTRS)
Marsella, Stacy C.; Plaunt, Christian (Technical Monitor)
2002-01-01
NASA is rapidly moving towards the use of spatially distributed multiple satellites operating in near Earth orbit and Deep Space. Effective operation of such multi-satellite constellations raises many key research issues. In particular, the satellites will be required to cooperate with each other as a team that must achieve common objectives with a high degree of autonomy from ground based operations. The multi-agent research community has made considerable progress in investigating the challenges of realizing such teamwork. In this report, we discuss some of the teamwork issues that will be faced by multi-satellite operations. The basis of the discussion is a particular proposed mission, the Magnetospheric MultiScale mission to explore Earth's magnetosphere. We describe this mission and then consider how multi-agent technologies might be applied in the design and operation of these missions. We consider the potential benefits of these technologies as well as the research challenges that will be raised in applying them to NASA multi-satellite missions. We conclude with some recommendations for future work.
VizieR Online Data Catalog: Spectroscopy of 104 objects in the ONC (Ingraham+, 2014)
NASA Astrophysics Data System (ADS)
Ingraham, P.; Albert, L.; Doyon, R.; Artigau, E.
2016-03-01
In 2003 December, we obtained six nights on CFHT to perform MOS observations of faint objects in the central region of the Orion Trapezium cluster. The observations used the infrared imager and multi-object spectrograph SIMON (Spectromètre Infrarouge de Montréal). The optical design is fully achromatic between 0.8 and 2.5μm and features a HAWAII-I 1024*1024 HgCdTe detector with an image scale of 0.2'' on CFHT. SIMON utilizes a low-dispersion Amici prism enabling multi-object low-resolution (R~30) spectroscopy over the wavelength range of 0.9-2.4μm. The slit width, in the spectral direction, was chosen to be 0.6'' (3 pixels), resulting in a spectral resolution of R~30. In total, spectra for 240 point sources were obtained. Here, we present only the 104 objects (see Table 5) with low-extinction (AV<8) spectra having well constrained spectral types. (2 data files).
Deep learning of symmetrical discrepancies for computer-aided detection of mammographic masses
NASA Astrophysics Data System (ADS)
Kooi, Thijs; Karssemeijer, Nico
2017-03-01
When humans identify objects in images, context is an important cue; a cheetah is more likely to be a domestic cat when a television set is recognised in the background. Similar principles apply to the analysis of medical images. The detection of diseases that manifest unilaterally in symmetrical organs or organ pairs can in part be facilitated by a search for symmetrical discrepancies in or between the organs in question. During a mammographic exam, images are recorded of each breast and absence of a certain structure around the same location in the contralateral image will render the area under scrutiny more suspicious and conversely, the presence of similar tissue less so. In this paper, we present a fusion scheme for a deep Convolutional Neural Network (CNN) architecture with the goal to optimally capture such asymmetries. The method is applied to the domain of mammography CAD, but can be relevant to other medical image analysis tasks where symmetry is important such as lung, prostate or brain images.
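A minimal sketch of the fusion idea, assuming shared convolutional weights for the patch under scrutiny and the patch at the mirrored location in the contralateral image, is given below in PyTorch; the layer sizes and patch size are illustrative and not the paper's architecture.

```python
# Hedged sketch of a two-stream fusion network: both views pass through shared
# convolutional layers and their features are concatenated before classification.
import torch
import torch.nn as nn

class SymmetryFusionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # shared weights for both views
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * 32 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, patch, contralateral_patch):
        f1 = self.features(patch)
        f2 = self.features(contralateral_patch)
        fused = torch.cat([f1, f2], dim=1)        # fuse the two symmetric views
        return self.classifier(fused)

# Example forward pass on random 64x64 patches:
model = SymmetryFusionCNN()
logits = model(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64))
```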
Triple Asteroid System Triples Asteroid Observers Interest
2009-08-06
NASA Deep Space Network Goldstone radar images show the triple asteroid 1994 CC, which consists of a central object approximately 700 meters (2,300 feet) in diameter and two smaller moons that orbit the central body. Animation available at the Photojournal.
NASA Astrophysics Data System (ADS)
Ando, Yoriko; Sawahata, Hirohito; Kawano, Takeshi; Koida, Kowa; Numano, Rika
2018-02-01
Bundled fiber optics allow in vivo imaging at deep sites in the body. The intrinsic optical contrast reveals detailed structures in blood vessels and organs. We developed a bundled-fiber-coupled endomicroscope enabling stereoscopic three-dimensional (3-D) reflectance imaging with a multipositional illumination scheme. Two illumination sites were attached to obtain reflectance images with left and right illumination. Depth was estimated from the horizontal disparity between the two images under alternating illumination and was calibrated using targets with known depths. This depth reconstruction was applied to an animal model to obtain the 3-D structure of blood vessels of the cerebral cortex (Cereb cortex) and preputial gland (Pre gla). The 3-D endomicroscope could be instrumental for microlevel reflectance imaging, improving the precision of subjective depth perception, spatial orientation, and identification of anatomical structures.
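The calibration step can be sketched as a simple fit from measured disparity to known depth; the disparity and depth values below are invented placeholders, and a low-order polynomial is an assumed calibration model rather than the authors' procedure.

```python
# Hedged sketch: fit a polynomial mapping from horizontal disparity to depth using
# targets at known depths, then apply it to new disparity measurements.
import numpy as np

# Disparities (pixels) measured under left/right illumination for targets at known depths (mm):
disparity_px = np.array([2.0, 3.5, 5.1, 6.8, 8.2])
depth_mm     = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

coeffs = np.polyfit(disparity_px, depth_mm, deg=2)   # disparity -> depth calibration curve

def estimate_depth(disparity):
    return np.polyval(coeffs, disparity)

print(estimate_depth(4.0))   # depth estimate for a newly measured disparity
```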
HUBBLE SPIES BROWN DWARFS IN NEARBY STELLAR NURSERY
NASA Technical Reports Server (NTRS)
2002-01-01
Probing deep within a neighborhood stellar nursery, NASA's Hubble Space Telescope uncovered a swarm of newborn brown dwarfs. The orbiting observatory's near-infrared camera revealed about 50 of these objects throughout the Orion Nebula's Trapezium cluster [image at right], about 1,500 light-years from Earth. Appearing like glistening precious stones surrounding a setting of sparkling diamonds, more than 300 fledgling stars and brown dwarfs surround the brightest, most massive stars [center of picture] in Hubble's view of the Trapezium cluster's central region. All of the celestial objects in the Trapezium were born together in this hotbed of star formation. The cluster is named for the trapezoidal alignment of those central massive stars. Brown dwarfs are gaseous objects with masses so low that their cores never become hot enough to fuse hydrogen, the thermonuclear fuel stars like the Sun need to shine steadily. Instead, these gaseous objects fade and cool as they grow older. Brown dwarfs around the age of the Sun (5 billion years old) are very cool and dim, and therefore are difficult for telescopes to find. The brown dwarfs discovered in the Trapezium, however, are youngsters (1 million years old). So they're still hot and bright, and easier to see. This finding, along with observations from ground-based telescopes, is further evidence that brown dwarfs, once considered exotic objects, are nearly as abundant as stars. The image and results appear in the Sept. 20 issue of the Astrophysical Journal. The brown dwarfs are too dim to be seen in a visible-light image taken by the Hubble telescope's Wide Field and Planetary Camera 2 [picture at left]. This view also doesn't show the assemblage of infant stars seen in the near-infrared image. That's because the young stars are embedded in dense clouds of dust and gas. The Hubble telescope's near-infrared camera, the Near Infrared Camera and Multi-Object Spectrometer, penetrated those clouds to capture a view of those objects. The brown dwarfs are the faintest objects in the image. Surveying the cluster's central region, the Hubble telescope spied brown dwarfs with masses equaling 10 to 80 Jupiters. Researchers think there may be less massive brown dwarfs that are beyond the limits of Hubble's vision. The near-infrared image was taken Jan. 17, 1998. Two near-infrared filters were used to obtain information on the colors of the stars at two wavelengths (1.1 and 1.6 microns). The Trapezium picture is 1 light-year across. This composite image was made from a 'mosaic' of nine separate, but adjoining images. In this false-color image, blue corresponds to warmer, more massive stars, and red to cooler, less massive stars and brown dwarfs, and stars that are heavily obscured by dust. The visible-light data were taken in 1994 and 1995. Credits for near-infrared image: NASA; K.L. Luhman (Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.); and G. Schneider, E. Young, G. Rieke, A. Cotera, H. Chen, M. Rieke, R. Thompson (Steward Observatory, University of Arizona, Tucson, Ariz.) Credits for visible-light picture: NASA, C.R. O'Dell and S.K. Wong (Rice University)
An intelligent framework for medical image retrieval using MDCT and multi SVM.
Balan, J A Alex Rajju; Rajan, S Edward
2014-01-01
Volumes of medical images are rapidly generated in the medical field, and managing them effectively has become a great challenge. This paper studies the development of innovative medical image retrieval based on texture features and accuracy. The objective of the paper is to analyse image retrieval for diagnosis in healthcare management systems, estimating both the image texture features and the retrieval accuracy. The texture features of medical images are extracted using MDCT and multi-SVM. Both the theoretical approach and the simulation results revealed interesting observations, and they were corroborated using MDCT coefficients and the SVM methodology. All attempts to extract data about an image in response to a query were computed successfully, and perfect image retrieval performance was obtained. Experimental results on a database of 100 trademark medical images show that an integrated texture feature representation results in 98% of the images being retrieved using MDCT and multi-SVM. Thus we have studied an SVM-based multi-classification technique that is well suited to medical images. The results show retrieval accuracies of 98% and 99% for different sets of medical images with respect to the image class.
Generation and evaluation of an ultra-high-field atlas with applications in DBS planning
NASA Astrophysics Data System (ADS)
Wang, Brian T.; Poirier, Stefan; Guo, Ting; Parrent, Andrew G.; Peters, Terry M.; Khan, Ali R.
2016-03-01
Purpose: Deep brain stimulation (DBS) is a common treatment for Parkinson's disease (PD) and involves the use of brain atlases or intrinsic landmarks to estimate the location of target deep brain structures, such as the subthalamic nucleus (STN) and the globus pallidus pars interna (GPi). However, these structures can be difficult to localize with conventional clinical magnetic resonance imaging (MRI), and thus targeting can be prone to error. Ultra-high-field imaging at 7T has the ability to clearly resolve these structures, and thus atlases built with these data have the potential to improve targeting accuracy. Methods: T1- and T2-weighted images of 12 healthy control subjects were acquired using a 7T MR scanner. These images were then used with groupwise registration to generate an unbiased average template with T1w and T2w contrast. Deep brain structures were manually labelled in each subject by two raters and rater reliability was assessed. We compared the use of this unbiased atlas with two other methods of atlas-based segmentation (single-template and multi-template) for subthalamic nucleus (STN) segmentation on 7T MRI data. We also applied this atlas to clinical DBS data acquired at 1.5T to evaluate its efficacy for DBS target localization as compared to using a standard atlas. Results: The unbiased templates provide superb detail of subcortical structures. Through one-way ANOVA tests, the unbiased template is significantly (p < 0.05) more accurate than a single template in atlas-based segmentation and DBS target localization tasks. Conclusion: The generated unbiased averaged templates provide better visualization of deep brain nuclei and an increase in accuracy over single-template and lower-field-strength atlases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehghan, S.; Johnston-Hollitt, M.; Franzen, T. M. O.
2014-11-01
Using the 1.4 GHz Australia Telescope Large Area Survey, supplemented by the 1.4 GHz Very Large Array images, we undertook a search for bent-tailed (BT) radio galaxies in the Chandra Deep Field South. Here we present a catalog of 56 detections, which include 45 BT sources, 4 diffuse low-surface-brightness objects (1 relic, 2 halos, and 1 unclassified object), and a further 7 complex, multi-component sources. We report BT sources with rest-frame powers in the range 10^22 ≤ P_1.4GHz ≤ 10^26 W Hz^-1, with redshifts up to 2 and linear extents from tens of kiloparsecs up to about 1 Mpc. This is the first systematic study of such sources down to such low powers and high redshifts and demonstrates the complementary nature of searches in deep, limited-area surveys as compared to shallower, large surveys. Of the sources presented here, one is the most distant BT source yet detected, at a redshift of 2.1688. Two of the sources are found to be associated with known clusters: a wide-angle tail source in A3141 and a putative radio relic which appears at the infall region between the galaxy group MZ 00108 and the galaxy cluster AMPCC 40. Further observations are required to confirm the relic detection, which, if successful, would demonstrate this to be the least powerful relic yet seen, with P_1.4GHz = 9 × 10^22 W Hz^-1. Using these data, we predict that future 1.4 GHz all-sky surveys with a resolution of ~10 arcsec and a sensitivity of 10 μJy will detect of the order of 560,000 extended low-surface-brightness radio sources, of which 440,000 will have a BT morphology.
Holographic leaky-wave metasurfaces for dual-sensor imaging.
Li, Yun Bo; Li, Lian Lin; Cai, Ben Geng; Cheng, Qiang; Cui, Tie Jun
2015-12-10
Metasurfaces have huge potential for developing new types of imaging systems owing to their ability to control electromagnetic waves. Here, we propose a new method for dual-sensor imaging based on cross-like holographic leaky-wave metasurfaces composed of hybrid isotropic and anisotropic surface-impedance textures. The holographic leaky-wave radiation is generated by special impedance modulation of the surface waves excited by the sensor ports. For one independent sensor, the main leaky-wave radiation beam can be frequency-scanned in one spatial dimension, while frequency scanning in the orthogonal spatial dimension is accomplished by the other sensor. Thus, for a probed object, the imaging plane can be illuminated adequately to obtain the two-dimensional backward scattered fields through the dual sensor for reconstructing the object. The correlation between beams at different frequencies is very low because of the frequency-scanning beam behaviour, rather than random beam radiation as a function of frequency, and such multi-illumination with low correlation is well suited to a multi-mode imaging method with high resolution and noise resistance. Good reconstruction results are given to validate the proposed imaging method.
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging via illuminating specimens with a separate thin sheet of laser. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of low power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implication in congenital heart diseases.
Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology
Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted
2014-01-01
The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active-contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we use the refined object identification produced in the first variant to perform the standard MIA with exact dilation radius as the multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure during tissue development to near-infrared light (NIR, 670 nm), and the lack of adverse effects due to exposure to NIR light. PMID:25071966
Multi-camera digital image correlation method with distributed fields of view
NASA Astrophysics Data System (ADS)
Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata
2017-11-01
A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects performed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
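Stitching data from non-overlapping Stereo-DIC units hinges on estimating the rigid transformation between each local coordinate system and the global (laser-tracker) frame from corresponding fiducial-marker positions. The following is a minimal sketch of such an estimation using the standard SVD-based (Kabsch) least-squares solution; it illustrates the general idea and is not the authors' calibration code.

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Least-squares rotation R and translation t with global ~= R @ local + t.

    local_pts, global_pts: (N, 3) arrays of the same fiducial markers measured
    in the local (Stereo-DIC) and global (laser tracker) coordinate systems.
    """
    c_l = local_pts.mean(axis=0)
    c_g = global_pts.mean(axis=0)
    H = (local_pts - c_l).T @ (global_pts - c_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_g - R @ c_l
    return R, t

# Every point measured by a unit is then mapped into the global frame with
# x_global = R @ x_local + t before the displacement fields are merged.
```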
Chandra and the VLT Jointly Investigate the Cosmic X-Ray Background
NASA Astrophysics Data System (ADS)
2001-03-01
Summary: Important scientific advances often happen when complementary investigational techniques are brought together. In the present case, X-ray and optical/infrared observations with some of the world's foremost telescopes have provided the crucial information needed to solve a 40-year-old cosmological riddle. Very detailed observations of a small field in the southern sky have recently been carried out, with the space-based NASA Chandra X-Ray Observatory as well as with several ground-based ESO telescopes, including the Very Large Telescope (VLT) at the Paranal Observatory (Chile). Together, they have provided the "deepest" combined view at X-ray and visual/infrared wavelengths ever obtained into the distant Universe. The concerted observational effort has already yielded significant scientific results. This is primarily due to the possibility to 'identify' most of the X-ray emitting objects detected by the Chandra X-ray Observatory on ground-based optical/infrared images and then to determine their nature and distance by means of detailed (spectral) observations with the VLT. In particular, there is now little doubt that the so-called 'X-ray background', a seemingly diffuse short-wave radiation first detected in 1962, in fact originates in a vast number of powerful black holes residing in active nuclei of distant galaxies. Moreover, the present investigation has made it possible to identify and study in some detail a prime example of a hitherto little-known type of object, a distant, so-called 'Type II Quasar', in which the central black hole is deeply embedded in surrounding gas and dust. These achievements are just the beginning of a most fruitful collaboration between "space" and "ground". It is yet another impressive demonstration of the rapid progress of modern astrophysics, due to the recent emergence of a new generation of extremely powerful instruments. PR Photo 09a/01: Images of a small part of the Chandra Deep Field South, obtained with ESO telescopes in three different wavebands. PR Photo 09b/01: A VLT/FORS1 spectrum of a 'Type II Quasar' discovered during this programme. The 'Chandra Deep Field South' and the X-Ray Background. Caption: PR Photo 09a/01 shows optical/infrared images in three wavebands ('Blue', 'Red', 'Infrared') from ESO telescopes of the Type II Quasar CXOCDFS J033229.9-275106 (at the centre), one of the distant X-ray sources identified in the Chandra Deep Field South (CDFS) area during the present study. Technical information about these photos is available below. The 'Chandra Deep Field South' (CDFS) is a small sky area in the southern constellation Fornax (The Oven). It measures about 16 arcmin across, or roughly half the diameter of the full moon. There is unusually little gas and dust within the Milky Way in this direction, and observations towards the distant Universe within this field thus profit from a particularly clear view. That is exactly why this sky area was selected by an international team of astronomers [1] to carry out an ultra-deep survey of X-ray sources with the orbiting Chandra X-Ray Observatory. In order to detect the faintest possible sources, NASA's satellite telescope looked in this direction during an unprecedented total of almost 1 million seconds of exposure time (11.5 days).
The main scientific goal of this survey is to understand the nature and evolution of the elusive sources that make up the 'X-ray background'. This diffuse glare in the X-ray sky was discovered by Riccardo Giacconi and his collaborators during a pioneering rocket experiment in 1962. The excellent imaging quality of Chandra (the angular resolution is about 1 arcsec) makes it possible to do extremely deep exposures without encountering problems introduced by the "confusion effect". This refers to the overlapping of images of sources that are seen close to each other in the sky and thus are difficult to study individually. Previous X-ray satellites were not able to obtain sufficiently sharp X-ray images, and the earlier deep X-ray surveys therefore suffered severely from this effect. Moreover, Chandra has much better sensitivity at shorter wavelengths (higher energies), which are less affected by obscuration effects. It can therefore better detect faint sources that emit very energetic ("hard") X-rays. X-ray and optical surveys in the Chandra Deep Field South: The one-million-second Chandra observations were completed in December 2000. In parallel, a group of astronomers based at institutes in Europe and the USA (the CDFS team [1]) has been collecting deep images and extensive spectroscopic data with the VLT during the past 2 years (cf. PR Photo 09a/01). Their aim was to 'identify' the Chandra X-ray sources, i.e., to unveil their nature and measure their distances. For the identification of these sources, the team has also made extensive use of the observations that were carried out as a part of the comprehensive ESO Imaging Survey Project (EIS). More than 300 X-ray sources were detected in the CDFS by Chandra. A significant fraction of these objects shine so faintly in the optical and near-infrared wavebands that only long-exposure observations with the VLT have been able to detect them. During five observing nights with the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope in October and November 2000, the CDFS team was able to identify and obtain spectra of more than one hundred of the X-ray sources registered by Chandra. Nature of the X-ray sources: The first results from this study have now confirmed that the 'hard' X-ray background is mainly due to Active Galactic Nuclei (AGN). The observations also reveal that a large fraction of them are of comparatively low brightness (referred to as 'low-luminosity AGN'), heavily enshrouded by dust and located at distances of 8,000-9,000 million light-years (corresponding to a redshift of about 1 and a look-back time of 57% of the age of the Universe [2]). It is generally believed that all these sources are powered by massive black holes at their centres. Previous X-ray surveys missed most of these objects because they were too faint to be observed by the telescopes then available, in particular at short X-ray wavelengths ('hard X-ray photons') where more radiation from the highly active centres is able to pass through the surrounding, heavily absorbing gas and dust clouds. Other types of well-known X-ray sources, e.g., QSOs ('quasars' = high-luminosity AGN) as well as clusters or groups of galaxies, were also detected during these observations. Studies of all classes of objects in the CDFS are also being carried out by several other European groups. This sky field, already a standard reference in the southern hemisphere, will be the subject of several multi-wavelength investigations for many years to come.
A prime example will be the Great Observatories Origins Deep Survey (GOODS), which will be carried out by the NASA SIRTF infrared satellite in 2003. Discovery of a distant Type II Quasar: Caption: PR Photo 09b/01 displays the optical spectrum of the distant Type II Quasar CXOCDFS J033229.9-275106 in the Chandra Deep Field South (CDFS), obtained with the FORS1 multi-mode instrument at VLT ANTU. Strong, redshifted emission lines of Hydrogen and ionised Helium, Oxygen, Nitrogen and Carbon are marked. Technical information about this photo is available below. One particular X-ray source that was identified with the VLT during the present investigation has attracted much attention - it is the discovery of a dust-enshrouded quasar (QSO) at very high redshift (z = 3.7, corresponding to a distance of about 12,000 million light-years; [2]), cf. PR Photo 09a/01 and PR Photo 09b/01. It is the first very distant representative of this elusive class of objects (referred to as 'Type II Quasars'), which are believed to account for approximately 90% of the black-hole-powered quasars in the distant Universe. The 'sum' of the identified Chandra X-ray sources in the CDFS was found to match both the intensity and the spectral properties of the observed X-ray background. This important result is a significant step forward towards the definitive resolution of this long-standing cosmological problem. Naturally, ESO astronomer Piero Rosati and his colleagues are thrilled: "It is clearly the combination of the new and detailed Chandra X-ray observations and the enormous light-gathering power of the VLT that has been instrumental to this success." However, he says, "the identification of the remaining Chandra X-ray sources will be the next challenge for the VLT since they are extremely faint. This is because they are either heavily obscured by dust or because they are extremely distant". More Information: This Press Release is issued simultaneously with a NASA Press Release (see also the Harvard site). Some of the first results are described in a research paper ("First Results from the X-ray and Optical Survey of the Chandra Deep Field South"), available on the web at astro-ph/0007240. More information about science results from the Chandra X-Ray Observatory may be found at http://asc.harvard.edu/. The optical survey of the CDFS at ESO with the Wide-Field Imager is described in connection with PR Photos 46a-b/99 ('100,000 galaxies at a glance'). An image of the Chandra Deep Field South is available at the ESO website on the EIS Image Gallery webpage. Notes: [1] The Chandra Team is led by Riccardo Giacconi (Association of Universities Inc. [AUI], Washington, USA) and includes: Piero Rosati, Jacqueline Bergeron, Roberto Gilmozzi, Vincenzo Mainieri, Peter Shaver (European Southern Observatory [ESO]), Paolo Tozzi, Mario Nonino, Stefano Borgani (Osservatorio Astronomico, Trieste, Italy), Guenther Hasinger, Gyula Szokoly (Astrophysical Institute Potsdam [AIP], Germany), Colin Norman, Roberto Gilli, Lisa Kewley, Wei Zheng, Andrew Zirm, JungXian Wang (Johns Hopkins University [JHU], Baltimore, USA), Ken Kellerman (National Radio Astronomy Observatory [NRAO], Charlottesville, USA), Ethan Schreier, Anton Koekemoer and Norman Grogin (Space Telescope Science Institute [STScI], Baltimore, USA).
[2] In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy or quasar gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with the distance, the velocity is itself a function (the Hubble relation) of the distance to the object. Redshifts of 1 and 3.7 correspond to when the Universe was about 43% and 12% of its present age. The distances indicated in this Press Release depend on the cosmological model chosen and are based on an age of 19,000 million years. Technical information about the photos: PR Photo 09a/01 shows B-, R- and I-band images of a 20 x 20 arcsec^2 area within the CDFS, centred on the Type II Quasar CXOCDFS J033229.9-275106. They were obtained with the MPG/ESO 2.2-m telescope and the Wide-Field Imager (WFI) at La Silla (B-band; 8 hrs exposure time) and the 8.2-m VLT ANTU telescope with the FORS1 multi-mode instrument at Paranal (R- and I-bands; each 2 hrs exposure). The measured magnitudes are R=23.5 and I=22.7. The overlaid contours show the associated Chandra X-ray source (smoothed with a sigma = 1 arcsec Gaussian profile). North is up and East is left. The spectrum shown in PR Photo 09b/01 was obtained on November 25, 2000, with VLT ANTU and FORS1 in the multislit mode (150-I grism, 1.2 arcsec slit). The exposure time was 3 hours.
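As note [2] explains, a redshift z stretches every emitted wavelength by a factor (1 + z). A short illustration of why a z = 3.7 quasar can be identified with an optical spectrograph, assuming only that relation (the exact line list and cosmology are immaterial here):

```python
# Observed wavelength of a spectral line emitted at lambda_rest for a source at redshift z.
def observed_wavelength(lambda_rest_nm, z):
    return lambda_rest_nm * (1.0 + z)

# Lyman-alpha (121.6 nm, far-UV at rest) for the z = 3.7 Type II Quasar:
print(observed_wavelength(121.6, 3.7))   # ~571 nm, i.e. shifted into visible light
```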
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale-based fusion methods can often produce fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focused areas, and a decision map is obtained. The map is then used to guide the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
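The Sum-Modified-Laplacian used above as a focus/contrast measure has a simple closed form: the modified Laplacian |2I(x,y) − I(x−s,y) − I(x+s,y)| + |2I(x,y) − I(x,y−s) − I(x,y+s)| summed over a small window. The sketch below is a minimal implementation; the step s and window size are illustrative defaults, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml(image, step=1, window=3):
    """Sum-Modified-Laplacian focus measure for a grayscale image."""
    img = image.astype(float)
    pad = np.pad(img, step, mode="edge")
    h, w = img.shape
    center = pad[step:step + h, step:step + w]
    # Modified Laplacian: absolute second differences along x and y.
    ml = (np.abs(2 * center - pad[step:step + h, :w] - pad[step:step + h, 2 * step:2 * step + w])
          + np.abs(2 * center - pad[:h, step:step + w] - pad[2 * step:2 * step + h, step:step + w]))
    # Sum over a local window (box mean times window area).
    return uniform_filter(ml, size=window) * window**2
```

In a multi-focus fusion rule, the source whose SML (or SML-weighted contrast) is larger at a given coefficient or pixel is typically the one whose content is kept.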
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
A multi-scale convolutional neural network for phenotyping high-content cellular images.
Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian
2017-07-01
Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. Contact: william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
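The core idea of the M-CNN, parallel convolutional branches that see the same image at several spatial scales before their features are merged for classification, can be sketched compactly. The layer widths, number of scales, and pooling choices below are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCNN(nn.Module):
    def __init__(self, in_channels=1, n_phenotypes=4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One small conv branch per scale, each operating on a downsampled copy.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for _ in scales])
        self.classifier = nn.Linear(32 * len(scales), n_phenotypes)

    def forward(self, x):
        feats = []
        for s, branch in zip(self.scales, self.branches):
            xs = x if s == 1 else F.avg_pool2d(x, s)   # coarser view of the cells
            feats.append(branch(xs).flatten(1))
        return self.classifier(torch.cat(feats, dim=1))  # phenotype logits
```

Softmaxed outputs of such a classifier can be read as per-phenotype probabilities, which is how the abstract relates network confidence to chemical treatment concentration.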
Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan
2017-01-01
Background: Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases, and novel digital image analysis algorithms can be utilized to automate sample analysis. Objective: To evaluate the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and to train a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. Methods: A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole-slide scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and, for the stool samples, by analysis with a deep learning-based image analysis algorithm. Results were compared between the digital and visual analysis of the images showing helminth eggs. Results: Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. Conclusions: In this proof-of-concept study, the imaging performance of a mobile digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images. PMID:28838305
Multi-wavelength Polarimetry of the GF9-2 YSO
NASA Astrophysics Data System (ADS)
Clemens, Dan P.; El-Batal, Adham M.; Montgomery, Jordan; Kressy, Sophia; Schroeder, Genevieve; Pillai, Thushara
2018-06-01
Our new SOFIA/HAWC+ 214 μm polarimetry of the cloud core containing the young stellar object GF9-2 (IRAS 20503+6006, aka L1082C) has been combined with deep near-infrared H- and K-band polarimetry of the cloud's core, obtained with the Mimir instrument. Additionally, Planck 870 μm and published optical polarimetry are included to provide context at larger size scales. We follow the direction and structure of the plane-of-sky magnetic field from the smallest physical scales (~10 arcsec or 4,000 AU) traced by SOFIA/HAWC+ to the Mimir field of view (10 arcmin, or 1.3 pc) and compare the B-field orientation with that of a faint reflection nebula seen in WISE and Spitzer images. The importance, or lack thereof, of the B-field in this nascent star-forming region is assessed through estimates of the mass-to-flux (M/Φ) ratio. This work has been supported by NSF AST14-12269, NASA NNX15AE51G, and USRA/SOF 04-0014 grants.
A Multi-Wavelength Survey of Intermediate-Mass Star-Forming Regions
NASA Astrophysics Data System (ADS)
Lundquist, Michael J.; Kobulnicky, Henry A.; Kerton, Charles R.
2015-01-01
Current research into Galactic star formation has focused on either massive star-forming regions or nearby low-mass regions. We present results from a survey of Galactic intermediate-mass star-forming regions (IM SFRs). These regions were selected from IRAS colors that specify cool dust and large PAH contribution, suggesting that they produce stars up to but not exceeding about 8 solar masses. Using WISE data we have classified 984 candidate IM SFRs as star-like objects, galaxies, filamentary structures, or blobs/shells based on their mid-infrared morphologies. Focusing on the blobs/shells, we combined follow-up observations of deep near-infrared (NIR) imaging with optical and NIR spectroscopy to study the stellar content, confirming the intermediate-mass nature of these regions. We also gathered CO data from OSO and APEX to study the molecular content and dynamics of these regions. We compare these results to those of high-mass star formation in order to better understand their role in the star-formation paradigm.
Visual feature extraction from voxel-weighted averaging of stimulus images in 2 fMRI studies.
Hart, Corey B; Rose, William J
2013-11-01
Multiple studies have provided evidence for distributed object representation in the brain, with several recent experiments leveraging basis function estimates for partial image reconstruction from fMRI data. Using a novel combination of statistical decomposition, generalized linear models, and stimulus averaging on previously examined image sets and Bayesian regression of recorded fMRI activity during presentation of these data sets, we identify a subset of relevant voxels that appear to code for covarying object features. Using a technique we term "voxel-weighted averaging," we isolate image filters that these voxels appear to implement. The results, though very cursory, appear to have significant implications for hierarchical and deep-learning-type approaches toward the understanding of neural coding and representation.
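The "voxel-weighted averaging" idea, estimating the image filter a voxel implements by averaging the stimulus images weighted by that voxel's responses, reduces to a very small computation. The sketch below is an illustrative reconstruction of that core operation only; it is not the authors' full estimator, which also involves statistical decomposition, generalized linear models, and Bayesian regression.

```python
import numpy as np

def voxel_weighted_average(stimuli, responses):
    """Approximate the linear filter a single voxel implements.

    stimuli:   (n_images, H, W) array of stimulus images (e.g. z-scored).
    responses: (n_images,) array of that voxel's responses to each image.
    Returns an (H, W) response-weighted average of the stimuli.
    """
    w = responses - responses.mean()            # center so baseline activity cancels
    filt = np.tensordot(w, stimuli, axes=(0, 0))
    return filt / (np.abs(w).sum() + 1e-12)
```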
The DEEP2 Galaxy Redshift Survey: Design, Observations, Data Reduction, and Redshifts
NASA Technical Reports Server (NTRS)
Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Coil, Alison L; Guhathakurta, Puraga; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Dutton, Aaron A.;
2013-01-01
We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z approx. 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = -20 at z approx. 1 via approx. 90 nights of observation on the Keck telescope. The survey covers an area of 2.8 sq. deg divided into four separate fields observed to a limiting apparent magnitude of R_AB = 24.1. Objects with z approx. < 0.7 are readily identifiable using BRI photometry and rejected in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted approx. 2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z approx. 1.45, where the [O ii] 3727 Ang. doublet lies in the infrared. The DEIMOS 1200 line mm^-1 grating used for the survey delivers high spectral resolution (R approx. 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other artifacts that in some cases remain after data reduction. Redshift errors and catastrophic failure rates are assessed through more than 2000 objects with duplicate observations. Sky subtraction is essentially photon-limited even under bright OH sky lines; we describe the strategies that permitted this, based on high image stability, accurate wavelength solutions, and powerful B-spline modeling methods. We also investigate the impact of targets that appear to be single objects in ground-based targeting imaging but prove to be composite in Hubble Space Telescope data; they constitute several percent of targets at z approx. 1, approaching approx. 5%-10% at z > 1.5. Summary data are given that demonstrate the superiority of DEEP2 over other deep high-precision redshift surveys at z approx. 1 in terms of redshift accuracy, sample number density, and amount of spectral information. We also provide an overview of the scientific highlights of the DEEP2 survey thus far.
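The pre-selection described above amounts to a cut in the (B−R, R−I) color-color plane that isolates likely z > 0.7 galaxies before spectroscopy. The sketch below shows the general shape of such a selection; the numerical coefficients are placeholders with roughly the right qualitative behaviour and are not the survey's actual values, which should be taken from the DEEP2 papers.

```python
import numpy as np

def probable_high_z(B, R, I, r_limit=24.1):
    """Flag objects likely to lie at z > ~0.7 from BRI photometry.

    B, R, I: apparent magnitudes (array-like). The color boundary below is an
    illustrative placeholder, not the published DEEP2 selection function.
    """
    B, R, I = map(np.asarray, (B, R, I))
    bright_enough = R < r_limit                    # survey magnitude limit
    color_cut = (B - R) < 2.35 * (R - I) - 0.45    # placeholder boundary
    red_enough = (R - I) > 1.15                    # placeholder boundary
    return bright_enough & (color_cut | red_enough)
```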
DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.
Wachinger, Christian; Reuter, Martin; Klein, Tassilo
2018-04-15
We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
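The combination described above, a 3D patch classifier whose input is augmented with a spatial encoding of where the patch sits in the brain, can be sketched as follows. The layer widths, patch handling, and the simple concatenation of coordinate features are deliberate simplifications of the published DeepNAT design (which uses Laplace-Beltrami eigenfunction coordinates, multi-task neighbor prediction, and a hierarchical pair of networks).

```python
import torch
import torch.nn as nn

class PatchSegmenter(nn.Module):
    """Toy DeepNAT-style classifier: 3D patch features + location features."""
    def __init__(self, n_classes=26, n_coord_feats=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3), nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1))
        self.fc = nn.Sequential(
            nn.Linear(32 + n_coord_feats, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes))

    def forward(self, patch_vol, coords):
        # patch_vol: (N, 1, D, H, W) intensity patch; coords: (N, n_coord_feats)
        feats = self.conv(patch_vol).flatten(1)
        return self.fc(torch.cat([feats, coords], dim=1))  # per-patch class logits
```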
Morphology and astrometry of Infrared-Faint Radio Sources
NASA Astrophysics Data System (ADS)
Middelberg, Enno; Norris, Ray; Randall, Kate; Mao, Minnie; Hales, Christopher
2008-10-01
Infrared-Faint Radio Sources, or IFRS, are an unexpected class of object discovered in the Australia Telescope Large Area Survey, ATLAS. They are compact 1.4GHz radio sources with no visible counterparts in co-located (relatively shallow) Spitzer infrared and optical images. We have detected two of these objects with VLBI, indicating the presence of an AGN. These observations and our ATLAS data indicate that IFRS are extended on scales of arcseconds, and we wish to image their morphologies to obtain clues about their nature. These observations will also help us to select optical counterparts from very deep, and hence crowded, optical images which we have proposed. With these data in hand, we will be able to compare IFRS to known object types and to apply for spectroscopy to obtain their redshifts.
The Geomatics Contribution for the Valorisation Project in the Rocca of San Silvestro Landscape Site
NASA Astrophysics Data System (ADS)
Brocchini, D.; Chiabrando, F.; Colucci, E.; Sammartano, G.; Spanò, A.; Teppati Losè, L.; Villa, A.
2017-05-01
This paper proposes an emblematic project where several multi-sensor strategies for spatial data acquisition and management, range based and image based, were combined to create a series of integrated territorial- and architectural-scale products characterized by a rich multi-content nature. The work presented here was carried out at a test site composed of an ensemble of diversified cultural deposits; the objects that were surveyed and modelled range from the landscape with its widespread mining sites, through the main tower with its defensive role, the urban configuration of the settlement, and the building systems and techniques, to a medieval mine. For this reason, the Rocca of San Silvestro represented a perfect test case, due to its complex and multi-stratified character. This archaeological site is a medieval fortified village near the municipality of Campiglia Marittima (LI), Italy. The Rocca is part of an Archaeological Mines Park and is included in the Parchi della Val di Cornia (a system of archaeological parks, natural parks and museums in the south-west of Tuscany). The fundamental role of deep knowledge of a cultural artefact before planning a restoration and valorisation project is globally recognized; the qualitative and quantitative knowledge provided by geomatics techniques is part of this process. The paper presents the different techniques that were used and the products that were obtained, and focuses on some mapping and WEB GIS applications, the analyses that were performed, and the considerations that were made.
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to the event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts in each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.
Solar System Science with LSST
NASA Astrophysics Data System (ADS)
Jones, R. L.; Chesley, S. R.; Connolly, A. J.; Harris, A. W.; Ivezic, Z.; Knezevic, Z.; Kubica, J.; Milani, A.; Trilling, D. E.
2008-09-01
The Large Synoptic Survey Telescope (LSST) will provide a unique tool to study moving objects throughout the solar system, creating massive catalogs of Near Earth Objects (NEOs), asteroids, Trojans, TransNeptunian Objects (TNOs), comets and planetary satellites with well-measured orbits and high quality, multi-color photometry accurate to 0.005 magnitudes for the brightest objects. In the baseline LSST observing plan, back-to-back 15-second images will reach a limiting magnitude as faint as r=24.7 in each 9.6 square degree image, twice per night; a total of approximately 15,000 square degrees of the sky will be imaged in multiple filters every 3 nights. This time sampling will continue throughout each lunation, creating a huge database of observations. (Fig. 1: Sky coverage of LSST over 10 years; separate panels for each of the 6 LSST filters; color bars indicate number of observations per filter.) The catalogs will include more than 80% of the potentially hazardous asteroids larger than 140m in diameter within the first 10 years of LSST operation, millions of main-belt asteroids and perhaps 20,000 Trans-Neptunian Objects. Objects with diameters as small as 100m in the Main Belt and <100km in the Kuiper Belt can be detected in individual images. Specialized 'deep drilling' observing sequences will detect KBOs down to 10s of kilometers in diameter. Long period comets will be detected at larger distances than previously possible, constraining models of the Oort cloud. With the large number of objects expected in the catalogs, it may be possible to observe a pristine comet begin outgassing on its first journey into the inner solar system. By observing fields over a wide range of ecliptic longitudes and latitudes, including large separations from the ecliptic plane, not only will these catalogs greatly increase the numbers of known objects, but the characterization of the inclination distributions of these populations will also be much improved. Derivation of proper elements for main belt and Trojan asteroids will allow ever more resolution of asteroid families and their size-frequency distribution, as well as the study of the long-term dynamics of the individual asteroids and the asteroid belt as a whole. (Fig. 2: Orbital parameters of Main Belt Asteroids, color-coded according to ugriz colors measured by SDSS; the left panel shows osculating elements, the right panel proper elements - note the asteroid families visible as clumps in parameter space [1].) By obtaining multi-color ugrizy data for a substantial fraction of objects, relationships between color and dynamical history can be established. This will also enable taxonomic classification of asteroids, provide further links between diverse populations such as irregular satellites and TNOs or planetary Trojans, and enable estimates of asteroid diameter with rms uncertainty of 30%. With the addition of light-curve information, rotation periods and phase curves can be measured for large fractions of each population, leading to new insight on physical characteristics. Photometric variability information, together with sparse lightcurve inversion, will allow spin state and shape estimation for up to two orders of magnitude more objects than presently known. This will leverage physical studies of asteroids by constraining the size-strength relationship, which has important implications for the internal structure (solid, fractured, rubble pile) and in turn the collisional evolution of the asteroid belt.
Similar information can be gained for other solar system bodies. [1] Parker, A., Ivezic
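The diameter estimates mentioned above follow from the standard relation between an asteroid's absolute magnitude H and its geometric albedo p_V, D(km) ≈ 1329 p_V^(-1/2) 10^(-H/5); colors constrain the taxonomic class and hence the assumed albedo. A minimal illustration (the albedo value is an assumption, and uncertainty in that assumption is precisely what drives the ~30% rms quoted above):

```python
def asteroid_diameter_km(H, albedo=0.14):
    """Diameter from absolute magnitude H and an assumed geometric albedo p_V."""
    return 1329.0 / albedo**0.5 * 10 ** (-H / 5.0)

# Example: H = 22 with a moderate albedo corresponds to roughly a 140 m object,
# the size threshold quoted above for potentially hazardous asteroids.
print(asteroid_diameter_km(22.0))
```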
Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; van Uden, Inge W M; Sanchez, Clara I; Litjens, Geert; de Leeuw, Frank-Erik; van Ginneken, Bram; Marchiori, Elena; Platel, Bram
2017-07-11
The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).
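The Dice score reported above is the standard overlap measure 2|A ∩ B| / (|A| + |B|) between the predicted and reference segmentations. A minimal computation for binary masks (illustrative helper, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice overlap between two binary masks (numpy arrays of the same shape)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0
```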
Deep g'r'i'z' GMOS Imaging of the Dwarf Irregular Galaxy Kar 50
NASA Astrophysics Data System (ADS)
Davidge, T. J.
2002-11-01
Images obtained with the Gemini Multi-Object Spectrograph (GMOS) are used to investigate the stellar content and distance of the dwarf irregular galaxy Kar 50. The brightest object is an H II region, and the bright stellar content is dominated by stars with g'-r'<0. The tips of the main sequence and the red giant branch (RGB) are tentatively identified near r'=24.9 and i'=25.5, respectively. The galaxy has a blue integrated color and no significant color gradient, and we conclude that Kar 50 has experienced a recent galaxy-wide episode of star formation. The distance estimated from the brightest blue stars indicates that Kar 50 is behind the M81 group, and this is consistent with the tentative RGB-tip brightness. Kar 50 has a remarkably flat central surface brightness profile, even at wavelengths approaching 1 μm, although there is no evidence of a bar. In the absence of another large star-forming episode, Kar 50 will evolve into a very low surface brightness galaxy. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Particle Physics and Astronomy Research Council (United Kingdom), the National Research Council of Canada (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNPq (Brazil), and CONICET (Argentina).
Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.
Ratan Murty, N Apurva; Arun, S P
2018-04-03
Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, rotation, etc. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.
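The product-versus-sum comparison can be made concrete with a response matrix R indexed by object identity and image attribute (e.g., size): a separable multiplicative model predicts R ≈ a bᵀ (a rank-1 outer product), whereas an additive model predicts R ≈ a·1ᵀ + 1·bᵀ. The sketch below fits both to a non-negative response matrix and compares explained variance; it illustrates the modeling logic only and is not the paper's exact fitting procedure.

```python
import numpy as np

def compare_mixing(R):
    """R: (n_objects, n_attributes) matrix of mean firing rates (non-negative)."""
    # Multiplicative (separable) fit: best rank-1 approximation via SVD.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    mult_fit = s[0] * np.outer(U[:, 0], Vt[0])
    # Additive fit: row effect + column effect (two-way ANOVA-style means).
    grand = R.mean()
    add_fit = R.mean(axis=1, keepdims=True) + R.mean(axis=0, keepdims=True) - grand
    ss_tot = ((R - grand) ** 2).sum()
    r2 = lambda fit: 1.0 - ((R - fit) ** 2).sum() / ss_tot
    return {"multiplicative_R2": r2(mult_fit), "additive_R2": r2(add_fit)}
```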
Processing of chromatic information in a deep convolutional neural network.
Flachot, Alban; Gegenfurtner, Karl R
2018-04-01
Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.
Imaging with a small number of photons
Morris, Peter A.; Aspden, Reuben S.; Bell, Jessica E. C.; Boyd, Robert W.; Padgett, Miles J.
2015-01-01
Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. A high-quality digital camera based on a multi-megapixel array will typically record an image by collecting of order 10^5 photons per pixel, but by how much could this photon flux be reduced? In this work we demonstrate a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying principles of image compression and associated image reconstruction, we obtain high-quality images of objects from raw data formed from an average of fewer than one detected photon per image pixel. PMID:25557090
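At these light levels an image is built up by accumulating individual heralded photon detections into a sparse count map and then regularizing it; the paper uses image-compression priors for that second step, whereas the sketch below substitutes a simple Gaussian smoothing as a stand-in. It illustrates only the accumulation idea under Poisson statistics and is not the authors' reconstruction algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def accumulate_photons(photon_xy, shape=(128, 128), sigma=1.5):
    """Build an image estimate from a list of detected photon coordinates.

    photon_xy: (N, 2) integer array of (row, col) detection positions; with
    heralding, nearly all N counts are signal rather than background.
    """
    counts = np.zeros(shape)
    np.add.at(counts, (photon_xy[:, 0], photon_xy[:, 1]), 1.0)  # sparse count map
    return gaussian_filter(counts, sigma)   # crude regularization stand-in
```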