CNN universal machine as classification platform: an ART-like clustering algorithm.
Bálya, David
2003-12-01
Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation, and is then extended to supervised classification. The presented binary feature vector classification is implemented on the existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
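The behaviour the abstract highlights (tunable sensitivity and automatic new-class creation for binary feature vectors) can be sketched with an ART-1-style loop. The first-fit matching, the AND learning rule and the vigilance value below are illustrative simplifications, not the paper's analogic CNN algorithm.

```python
def art_cluster(vectors, vigilance=0.6):
    """ART-1-style unsupervised clustering of binary feature vectors.

    Each vector joins the first stored prototype that passes the vigilance
    (match) test; otherwise a new class is created automatically. Learning
    uses the 'fast' rule: the prototype becomes its AND with the input.
    """
    prototypes, labels = [], []
    for v in vectors:
        for i, p in enumerate(prototypes):
            overlap = sum(a & b for a, b in zip(v, p))
            if overlap / sum(v) >= vigilance:          # tunable sensitivity
                prototypes[i] = [a & b for a, b in zip(v, p)]
                labels.append(i)
                break
        else:
            prototypes.append(list(v))                  # automatic new class
            labels.append(len(prototypes) - 1)
    return labels, prototypes

labels, protos = art_cluster([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
```

Raising the vigilance makes the match test stricter, so more new classes are created; lowering it merges more inputs into existing prototypes.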
Li, Siqi; Jiang, Huiyan; Pang, Wenbo
2017-05-01
Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of a fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which sufficiently considers the multi-scale contextual information of deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rachmadi, Muhammad Febrian; Valdés-Hernández, Maria Del C; Agan, Maria Leonora Fatimah; Di Perri, Carol; Komura, Taku
2018-06-01
We propose an adaptation of a convolutional neural network (CNN) scheme proposed for segmenting brain lesions with considerable mass-effect, to segment white matter hyperintensities (WMH) characteristic of brains with no or mild vascular pathology in routine clinical brain magnetic resonance images (MRI). This is a rather difficult segmentation problem because of the small area (i.e., volume) of the WMH and their similarity to non-pathological brain tissue. We investigate the effectiveness of the 2D CNN scheme by comparing its performance against those obtained from another deep learning approach: Deep Boltzmann Machine (DBM); two conventional machine learning approaches: Support Vector Machine (SVM) and Random Forest (RF); and a public toolbox: Lesion Segmentation Tool (LST), all reported to be useful for segmenting WMH in MRI. We also introduce a way to incorporate spatial information at the convolution level of the CNN for WMH segmentation, named global spatial information (GSI). Analysis of covariance corroborated known associations between WMH progression, as assessed by all methods evaluated, and demographic and clinical data. Deep learning algorithms outperform conventional machine learning algorithms by excluding MRI artefacts and pathologies that appear similar to WMH. Our proposed approach of incorporating GSI also successfully helped the CNN to achieve better automatic WMH segmentation regardless of the network settings tested. The mean Dice Similarity Coefficient (DSC) values for LST-LGA, SVM, RF, DBM, CNN and CNN-GSI were 0.2963, 0.1194, 0.1633, 0.3264, 0.5359 and 0.5389 respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan
2017-12-01
This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was the convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81~85% ACC and 0.87~0.92 AUC, which were significantly higher than the results of the texture features. CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of CNN and the best classical method. The sensitivity, specificity, and ACC of human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than human doctors. The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images.
Because CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the important diagnostic features, which have been proven more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research.
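The "10 times 10-fold cross-validation" protocol used above can be sketched as a generic index splitter; the function name and the single-seed shuffle are illustrative, not the study's exact implementation.

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation.

    Indices are shuffled once with the given seed; repeating the procedure
    with ten different seeds gives a '10 times 10-fold' protocol.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Small demonstration: 20 samples, 5 folds.
splits = list(kfold_indices(n=20, k=5))
```

Each sample appears in exactly one test fold, so metrics such as ACC and AUC can be averaged over all k test folds (and over the repeated shuffles).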
Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.
Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar
2018-05-29
Manual brain tumor segmentation is a challenging task that requires the use of machine learning techniques. One of the machine learning techniques that has been given much attention is the convolutional neural network (CNN). The performance of the CNN can be enhanced by combining it with other data analysis tools such as the wavelet transform. In this study, one of the famous implementations of CNN, the fully convolutional network (FCN), was used in brain tumor segmentation and its architecture was enhanced by the wavelet transform. In this combination, the wavelet transform was used as a complementary and enhancing tool for the CNN in brain tumor segmentation. Comparing the performance of the basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of the enhanced architecture in brain tumor segmentation tasks. Enhancing tools such as the wavelet transform and other mathematical functions can improve the performance of CNN in image processing tasks such as segmentation and classification.
World of intelligence defense object detection-machine learning (artificial intelligence)
NASA Astrophysics Data System (ADS)
Gupta, Anitya; Kumar, Akhilesh; Bhushan, Vinayak
2018-04-01
This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the deep VGG16 network 9× faster than R-CNN, is 213× faster at test time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3× faster, tests 10× faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License.
Evaluation of CNN as anthropomorphic model observer
NASA Astrophysics Data System (ADS)
Massanes, Francesc; Brankov, Jovan G.
2017-03-01
Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We compare the CNN MO to alternative MO currently proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of the CNN, and other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.
Metaheuristic Algorithms for Convolution Neural Network
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
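As a hedged sketch of one of the three metaheuristics named above, the loop below applies simulated annealing with geometric cooling to a black-box objective. In the paper the objective would be the CNN's error as a function of its parameters; here a toy quadratic stands in for it, and the schedule, step size and names are illustrative, not the authors' settings.

```python
import math
import random

def simulated_annealing(loss, init, step=0.1, t0=1.0, cooling=0.95,
                        iters=200, seed=0):
    """Minimize a black-box loss by simulated annealing."""
    rng = random.Random(seed)
    x = best = init
    fx = fbest = loss(x)
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = loss(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for validation error as a function of log10(learning rate).
toy_loss = lambda lr: (lr + 2.5) ** 2 + 0.1
best_lr, err = simulated_annealing(toy_loss, init=0.0)
```

The same black-box interface accepts differential evolution or harmony search as drop-in replacements, since none of them needs gradients of the objective.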
A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification
NASA Astrophysics Data System (ADS)
Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.
2018-06-01
The contextual-based convolutional neural network (CNN) with deep architecture and the pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. Consequently, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters, such as the uncertainty in object boundary partition and the loss of useful fine spatial resolution detail, were compensated for. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, the spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively addressing the complicated problem of VFSR image classification.
Classification of crystal structure using a convolutional neural network
Park, Woon Bae; Chung, Jiyong; Jung, Jaeyoung; Sohn, Keemin; Singh, Satendra Pal; Pyo, Myoungho; Shin, Namsoo; Sohn, Kee-Sun
2017-01-01
A deep machine-learning technique based on a convolutional neural network (CNN) is introduced. It has been used for the classification of powder X-ray diffraction (XRD) patterns in terms of crystal system, extinction group and space group. About 150 000 powder XRD patterns were collected and used as input for the CNN with no handcrafted engineering involved, and thereby an appropriate CNN architecture was obtained that allowed determination of the crystal system, extinction group and space group. In sharp contrast with the traditional use of powder XRD pattern analysis, the CNN never treats powder XRD patterns as a deconvoluted and discrete peak position or as intensity data, but instead the XRD patterns are regarded as nothing but a pattern similar to a picture. The CNN interprets features that humans cannot recognize in a powder XRD pattern. As a result, accuracy levels of 81.14, 83.83 and 94.99% were achieved for the space-group, extinction-group and crystal-system classifications, respectively. The well trained CNN was then used for symmetry identification of unknown novel inorganic compounds. PMID:28875035
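The abstract's point that the network consumes the raw pattern "as a picture" rather than as extracted peak positions can be illustrated with one 1-D convolution-ReLU-pooling stage, the basic building block of such a CNN. The kernel and the toy pattern below are illustrative, not the paper's trained architecture.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation, the basic CNN layer operation."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def maxpool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# Toy "powder XRD pattern": two peaks on a flat background, fed in whole,
# with no peak deconvolution or intensity extraction beforehand.
pattern = [0.1] * 8 + [0.5, 2.0, 0.5] + [0.1] * 6 + [0.4, 1.5, 0.4] + [0.1] * 6
peak_kernel = [-1.0, 2.0, -1.0]  # responds to sharp local maxima
features = maxpool(relu(conv1d(pattern, peak_kernel)))
```

Stacking many such learned kernels, rather than one fixed kernel, is what lets the real network pick up pattern features a human analyst would not isolate.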
Medical image processing using neural networks based on multivalued and universal binary neurons
NASA Astrophysics Data System (ADS)
Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.
1998-06-01
Cellular Neural Networks (CNN) have become a very effective means of solving many kinds of image processing problems. CNN based on multi-valued neurons (CNN-MVN) and CNN based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVN and UBN are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output described by the MVN, and an arbitrary (not only threshold) Boolean function (UBN). A great advantage of the CNN is the ability to implement any linear and many nonlinear filters in the spatial domain. Together with noise removal, CNN can implement filters that amplify high and medium frequencies. Such filters are well suited to image enhancement and to extracting details against a complex background, so CNN make it possible to organize the entire processing chain from filtering to extraction of the important details. The organization of this process for medical image processing is considered in the paper. Particular attention is given to the processing of X-ray and ultrasound images corresponding to different oncological (or near-oncological) pathologies. Additionally, we consider a new neural network structure for solving the problem of differential diagnosis of breast cancer.
Three-Class Mammogram Classification Based on Descriptive CNN Features
Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel
2017-01-01
In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into their four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale invariant feature (DSIFT) descriptors are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
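A minimal sketch of the 2D-DWT step, assuming a one-level unnormalized Haar decomposition (the abstract does not name the wavelet family): it splits an even-sized grayscale patch into the four subbands (LL, LH, HL, HH) that the CNN-DW pipeline consumes.

```python
def haar2d(img):
    """One level of a 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
    rows, cols = len(img), len(img[0])
    # Row pass: pairwise averages in the left half, differences in the right.
    row_t = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(cols // 2)] +
             [(r[2 * j] - r[2 * j + 1]) / 2 for j in range(cols // 2)]
             for r in img]
    # Column pass on the row-transformed image.
    full = ([[(row_t[2 * i][j] + row_t[2 * i + 1][j]) / 2 for j in range(cols)]
             for i in range(rows // 2)] +
            [[(row_t[2 * i][j] - row_t[2 * i + 1][j]) / 2 for j in range(cols)]
             for i in range(rows // 2)])
    h, w = rows // 2, cols // 2
    LL = [r[:w] for r in full[:h]]
    LH = [r[w:] for r in full[:h]]
    HL = [r[:w] for r in full[h:]]
    HH = [r[w:] for r in full[h:]]
    return LL, LH, HL, HH

# A flat 4x4 patch: all detail subbands are zero, LL keeps the intensity.
LL, LH, HL, HH = haar2d([[1.0] * 4 for _ in range(4)])
```

A production pipeline would use a wavelet library with the paper's chosen filters; the structure of the four half-resolution subbands is the same.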
Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation
NASA Astrophysics Data System (ADS)
Zhou, XueFei
2018-04-01
With the development of computer technology, the applications of machine learning are more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). CNN is one of the most common algorithms in image recognition, and it is important for every scholar interested in this field to understand its theory and structure. CNN is mainly used in computer identification, especially in voice and text recognition and other applications. It utilizes a hierarchical structure with different layers to accelerate computing speed. In addition, the greatest features of CNNs are weight sharing and dimension reduction, which consolidate the high effectiveness and efficiency of CNNs, with ideal computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several scenarios for machine learning, especially for deep learning. Based on a general introduction to the background and the core CNN method, this paper focuses on summarizing how gradient descent and backpropagation work, and how they contribute to the high performance of CNNs. Some practical applications are also discussed in the following parts. The last section presents the conclusion and some perspectives on future work.
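The mechanics summarized above can be made concrete with a tiny two-layer network trained by gradient descent, where the backward pass applies the chain rule through each sigmoid. The layer sizes, seed, learning rate and the XOR task are illustrative choices, not taken from the paper.

```python
import math
import random

sig = lambda z: 1.0 / (1.0 + math.exp(-z))

random.seed(1)
W1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def mse():
    err = 0.0
    for (x1, x2), t in data:
        h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(4)]
        y = sig(sum(W2[j] * h[j] for j in range(4)) + b2)
        err += (y - t) ** 2
    return err / len(data)

loss_before = mse()
lr = 1.0
for _ in range(2000):
    for (x1, x2), t in data:
        # Forward pass.
        h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(4)]
        y = sig(sum(W2[j] * h[j] for j in range(4)) + b2)
        # Backward pass: chain rule through the output and hidden sigmoids.
        dy = (y - t) * y * (1 - y)
        for j in range(4):
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x1
            W1[j][1] -= lr * dh * x2
            b1[j] -= lr * dh
        b2 -= lr * dy
loss_after = mse()
```

A convolutional layer adds weight sharing on top of exactly this update rule: every spatial position contributes a gradient to the same shared kernel weights.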
NASA Astrophysics Data System (ADS)
Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.
2017-09-01
Recent advances in remote sensing have produced a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis and classification due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correctness and incorrectness on the map. The correct classification regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area in Bournemouth, United Kingdom. The MLP-CNN, capturing well the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way to fully automatic and effective VHR image classification.
NASA Astrophysics Data System (ADS)
Kestur, Ramesh; Farooq, Shariq; Abdal, Rameen; Mehraj, Emad; Narasipura, Omkar; Mudigere, Meenavathi
2018-01-01
Road extraction in imagery acquired by low altitude remote sensing (LARS) carried out using an unmanned aerial vehicle (UAV) is presented. LARS is carried out using a fixed wing UAV with a high spatial resolution vision spectrum (RGB) camera as the payload. Deep learning techniques, particularly the fully convolutional network (FCN), are adopted to extract roads by dense semantic segmentation. The proposed model, UFCN (U-shaped FCN), is an FCN architecture comprising a stack of convolutions followed by a corresponding stack of mirrored deconvolutions, with skip connections in between for preserving the local information. The limited dataset (76 images and their ground truths) is subjected to real-time data augmentation during the training phase to increase its effective size. Classification performance is evaluated using precision, recall, accuracy, F1 score, and Brier score parameters. The performance is compared with a support vector machine (SVM) classifier, a one-dimensional convolutional neural network (1D-CNN) model, and a standard two-dimensional CNN (2D-CNN). The UFCN model outperforms the SVM, 1D-CNN, and 2D-CNN models across all the performance parameters. Further, the prediction time of the proposed UFCN model is comparable with the SVM, 1D-CNN, and 2D-CNN models.
Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models.
AlDahoul, Nouar; Md Sabri, Aznul Qalid; Mansoor, Ali Mohammed
2018-01-01
Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. On the other hand, feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with softmax and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds. Learning in S-CNN takes 770 seconds with a high performance Graphical Processing Unit (GPU).
Using CNN Newsroom in Advanced Listening Classes.
ERIC Educational Resources Information Center
Vann, Samuel
A university teacher of English as a Second Language describes the use of CNN Newsroom materials to teach listening skills. The basic news broadcast materials, including video and audio tapes, are provided by CNN, and have been developed by the teacher into instructional units. A classroom guide is available on the Internet. The instruction is…
Cross-Modal Retrieval With CNN Visual Features: A New Baseline.
Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng
2017-02-01
Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from the CNN model, which is pretrained on ImageNet with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of CNN visual features, based on the pretrained CNN model on ImageNet, a fine-tuning step is performed by using the open source Caffe CNN library for each target data set. Besides, we propose a deep semantic matching method to address the cross-modal retrieval problem with respect to samples which are annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets well demonstrate the superiority of CNN visual features for cross-modal retrieval.
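At retrieval time, the cross-modal step described above reduces to ranking one modality's embeddings by similarity to the other's. The cosine ranking below is a generic sketch, with toy 4-D vectors standing in for the much higher-dimensional CNN features.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def retrieve(query, gallery, k=2):
    """Return the names of the k gallery vectors most similar to the query."""
    ranked = sorted(gallery, key=lambda name: cosine(query, gallery[name]),
                    reverse=True)
    return ranked[:k]

# Toy 4-D stand-ins; off-the-shelf CNN features are typically thousands-dimensional.
image_feature = [0.9, 0.1, 0.0, 0.2]
text_features = {
    "dog": [0.8, 0.2, 0.1, 0.1],
    "car": [0.0, 0.9, 0.1, 0.0],
    "cat": [0.7, 0.0, 0.1, 0.3],
}
top2 = retrieve(image_feature, text_features)
```

The fine-tuning and semantic matching steps in the paper change how the vectors are produced, not this ranking step, which is why a strong shared embedding is the key ingredient.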
Diverse Region-Based CNN for Hyperspectral Image Classification.
Zhang, Mengmeng; Li, Wei; Du, Qian
2018-06-01
Convolutional neural networks (CNN) are of great interest in machine learning and have demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representations to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, exploiting diverse region-based inputs to learn contextual interaction features, is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method surpasses other conventional deep learning-based classifiers and other state-of-the-art classifiers.
Younghak Shin; Balasingham, Ilangko
2017-07-01
Colonoscopy is a standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. From the experimental results, we show that the CNN based deep learning framework achieves better classification performance than the hand-crafted feature based method, with over 90% classification accuracy, sensitivity, specificity and precision.
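The accuracy, sensitivity, specificity and precision figures reported in abstracts like this one all derive from a binary confusion matrix; a small helper makes the definitions explicit. The counts below are made up for illustration, not the study's results.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only: 100 polyp frames, 100 non-polyp frames.
m = binary_metrics(tp=90, fp=10, tn=85, fn=15)
```

Reporting all four together matters because, as the lymph node study above shows, a classifier can trade specificity for sensitivity while accuracy barely moves.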
NASA Astrophysics Data System (ADS)
Anding, K.; Kuritcyn, P.; Garten, D.
2016-11-01
In this paper a new method for the automatic visual inspection of metallic surfaces is proposed using Convolutional Neural Networks (CNNs). Different combinations of network parameters were developed and tested. The CNN results were analysed and compared with the results of our previous investigations, which used color and texture features as input parameters for a support vector machine. Advantages and disadvantages of the different classification methods are explained.
NASA Astrophysics Data System (ADS)
Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun
2018-01-01
Many methods for hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and the CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN, and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves classification accuracy and enhances robustness to water-vapor absorption bands. Experiments are performed on the Indian Pines and Pavia University scenes. The classification performance of DV-CNN is compared with state-of-the-art methods, including CNN variants, traditional methods, and other deep learning methods. A performance analysis of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection effectively improves classification accuracy.
Zhu, Qile; Li, Xiaolin; Conesa, Ana; Pereira, Cécile
2018-05-01
Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features or task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, the same architectures do not yield competitive performance compared with conventional machine learning models. We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages local context through n-gram character and word embeddings via a Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around the word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can, in principle, be applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. These results put GRAM-CNN among the leading biomedical NER methods. To the best of our knowledge, we are the first to apply CNN-based structures to BioNER problems. The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN. andyli@ece.ufl.edu or aconesa@ufl.edu. Supplementary data are available at Bioinformatics online.
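GRAM-CNN builds on n-gram character embeddings; a minimal sketch of boundary-marked character n-gram extraction (the `<`/`>` markers and this helper are illustrative assumptions, not the released code):

```python
def char_ngrams(word, n_values=(2, 3)):
    """Character n-grams of a token, with word-boundary markers.

    '<' and '>' mark the start and end of the word so that prefix and
    suffix grams are distinguishable from word-internal ones. Each gram
    would then be looked up in an embedding table and fed to the CNN.
    """
    padded = "<" + word + ">"
    grams = []
    for n in n_values:
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

print(char_ngrams("p53", n_values=(2,)))
```

For a biomedical token like `p53`, the bigrams capture the letter-digit pattern that hand-crafted BioNER features traditionally encoded by rule.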
NASA Astrophysics Data System (ADS)
Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.
2018-06-01
Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.
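The Haar cascade owes its speed to integral images, which turn any rectangular-sum feature into an O(1) lookup; a sketch of a two-rectangle Haar-like feature under that scheme (illustrative, not the authors' implementation):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0, y0)-(x1, y1), in O(1)."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    left = rect_sum(ii, x, y, x + half - 1, y + h - 1)
    right = rect_sum(ii, x + half, y, x + w - 1, y + h - 1)
    return left - right
```

A cascade evaluates many such features per window, rejecting most windows after a handful of cheap tests, which is what makes it so much faster than a CNN pass.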
Khellal, Atmane; Ma, Hongbin; Fei, Qing
2018-05-09
The success of deep learning models, notably convolutional neural networks (CNNs), makes them the favored solution for object recognition systems in both the visible and infrared domains. However, the lack of training data in the case of maritime ship research leads to poor performance due to overfitting. In addition, the back-propagation algorithm used to train a CNN is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm that trains the CNN for discriminative feature extraction with an ELM-based ensemble for classification. Experimental results on the VAIS dataset, the largest dataset of maritime ships, confirm that the proposed approach outperforms state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than traditional back-propagation based training of convolutional neural networks, primarily for low-level feature extraction.
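The core ELM idea this approach relies on, random hidden weights with closed-form output weights, can be sketched as follows (a toy illustration with synthetic features standing in for CNN activations; sizes and seeds are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, hidden=64):
    """Extreme Learning Machine: random hidden layer, solved output layer.

    X: (n, d) feature matrix (e.g. CNN activations); T: (n, c) one-hot targets.
    W and b stay random; only beta is learned, via the Moore-Penrose
    pseudo-inverse -- a single least-squares solve, no backpropagation.
    """
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)          # random nonlinear projection
    beta = np.linalg.pinv(H) @ T    # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# toy two-class problem: well-separated Gaussian clusters
X = np.vstack([rng.standard_normal((50, 5)) + 2,
               rng.standard_normal((50, 5)) - 2])
labels = [0] * 50 + [1] * 50
T = np.eye(2)[labels]
W, b, beta = elm_train(X, T)
acc = (elm_predict(X, W, b, beta) == labels).mean()
```

The single linear solve is why ELM training is orders of magnitude faster than backprop, at the cost of leaving the hidden representation unoptimized.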
Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque
2017-01-01
Decoding human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular feature-extraction method due to its high accuracy; however, it needs a lot of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode brain behavior for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and with MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
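Feature selection by t-test, as used above, amounts to keeping voxels whose two-sample t-statistic across conditions exceeds a threshold; a minimal sketch (the Welch form and the threshold value are illustrative assumptions):

```python
import math

def t_statistic(a, b):
    """Two-sample Welch t-statistic for one voxel's values in two conditions."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def select_voxels(cond_a, cond_b, threshold=2.0):
    """Keep indices of voxels whose |t| across conditions exceeds threshold.

    cond_a[i] and cond_b[i] are the i-th voxel's samples in each condition.
    """
    return [i for i, (a, b) in enumerate(zip(cond_a, cond_b))
            if abs(t_statistic(a, b)) > threshold]
```

Only the surviving voxels are passed on to the MVPA classifier, which is what keeps the computational burden manageable with limited data.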
Deep Learning Methods for Underwater Target Feature Extraction and Recognition
Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang
2018-01-01
The classification and recognition of underwater acoustic signals have always been important research topics in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed. An automatic feature extraction method for underwater acoustic signals using a deep convolutional network is presented, and an underwater target recognition classifier is built on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. First, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate; compared with traditional Mel frequency cepstral coefficients and Hilbert-Huang features, the recognition rate improved greatly. PMID:29780407
Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho
2017-03-01
Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.
Arandjelovic, Relja; Gronat, Petr; Torii, Akihiko; Pajdla, Tomas; Sivic, Josef
2018-06-01
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
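The VLAD aggregation at the heart of the NetVLAD layer sums soft-assigned residuals to cluster centers and normalizes the result; a NumPy sketch of that aggregation (the distance-based soft-assignment parameterization here is a simplification of the trainable layer):

```python
import numpy as np

def vlad(descriptors, centers, alpha=10.0):
    """Soft-assignment VLAD aggregation, the core of the NetVLAD layer.

    descriptors: (n, d) local CNN descriptors; centers: (k, d) cluster centers.
    Each descriptor's residual to every center is accumulated, weighted by a
    softmax-like assignment, then the flattened (k*d) vector is L2-normalized.
    """
    # soft assignment proportional to exp(-alpha * squared distance)
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    a = np.exp(-alpha * d2)
    a /= a.sum(axis=1, keepdims=True)
    residuals = descriptors[:, None, :] - centers[None, :, :]   # (n, k, d)
    V = (a[:, :, None] * residuals).sum(axis=0)                 # (k, d)
    v = V.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

desc = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
cents = np.array([[1.0, 0.0], [0.0, 1.0]])
v = vlad(desc, cents)
```

In NetVLAD the assignment weights and centers become trainable parameters, which is what makes the whole pipeline end-to-end learnable with the ranking loss.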
NASA Astrophysics Data System (ADS)
Pereira, Carina; Dighe, Manjiri; Alessio, Adam M.
2018-02-01
Various Computer Aided Diagnosis (CAD) systems have been developed that characterize thyroid nodules using the features extracted from the B-mode ultrasound images and Shear Wave Elastography images (SWE). These features, however, are not perfect predictors of malignancy. In other domains, deep learning techniques such as Convolutional Neural Networks (CNNs) have outperformed conventional feature extraction based machine learning approaches. In general, fully trained CNNs require substantial volumes of data, motivating several efforts to use transfer learning with pre-trained CNNs. In this context, we sought to compare the performance of conventional feature extraction, fully trained CNNs, and transfer learning based, pre-trained CNNs for the detection of thyroid malignancy from ultrasound images. We compared these approaches applied to a data set of 964 B-mode and SWE images from 165 patients. The data were divided into 80% training/validation and 20% testing data. The highest accuracies achieved on the testing data for the conventional feature extraction, fully trained CNN, and pre-trained CNN were 0.80, 0.75, and 0.83 respectively. In this application, classification using a pre-trained network yielded the best performance, potentially due to the relatively limited sample size and sub-optimal architecture for the fully trained CNN.
Lin, Chin; Hsu, Chia-Jung; Lou, Yu-Sheng; Yeh, Shih-Jen; Lee, Chia-Cheng; Su, Sui-Lung; Chen, Hsiang-Cheng
2017-11-06
Automated disease code classification using free-text medical information is important for public health surveillance. However, traditional natural language processing (NLP) pipelines are limited, so we propose a method combining word embedding with a convolutional neural network (CNN). Our objective was to compare the performance of traditional pipelines (NLP plus supervised machine learning models) with that of word embedding combined with a CNN in conducting a classification task identifying International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes in discharge notes. We used 2 classification methods: (1) extracting from discharge notes some features (terms, n-gram phrases, and SNOMED CT categories) that we used to train a set of supervised machine learning models (support vector machine, random forests, and gradient boosting machine), and (2) building a feature matrix, by a pretrained word embedding model, that we used to train a CNN. We used these methods to identify the chapter-level ICD-10-CM diagnosis codes in a set of discharge notes. We conducted the evaluation using 103,390 discharge notes covering patients hospitalized from June 1, 2015 to January 31, 2017 in the Tri-Service General Hospital in Taipei, Taiwan. We used the receiver operating characteristic curve as an evaluation measure, and calculated the area under the curve (AUC) and F-measure as the global measure of effectiveness. In 5-fold cross-validation tests, our method had a higher testing accuracy (mean AUC 0.9696; mean F-measure 0.9086) than traditional NLP-based approaches (mean AUC range 0.8183-0.9571; mean F-measure range 0.5050-0.8739). A real-world simulation that split the training sample and the testing sample by date verified this result (mean AUC 0.9645; mean F-measure 0.9003 using the proposed method). 
Further analysis showed that the convolutional layers of the CNN effectively identified a large number of keywords and automatically extracted enough concepts to predict the diagnosis codes. Word embedding combined with a CNN showed outstanding performance compared with traditional methods, needing very little data preprocessing. This shows that future studies will not be limited by incomplete dictionaries. A large amount of unstructured information from free-text medical writing will be extracted by automated approaches in the future, and we believe that the health care field is about to enter the age of big data. ©Chin Lin, Chia-Jung Hsu, Yu-Sheng Lou, Shih-Jen Yeh, Chia-Cheng Lee, Sui-Lung Su, Hsiang-Cheng Chen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.11.2017.
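Building the feature matrix from a pretrained word embedding, as in method (2), can be sketched as follows (the toy vocabulary, vector dimension, and zero-padding scheme are illustrative assumptions, not the study's model):

```python
def embed_note(tokens, embedding, max_len=6, dim=3):
    """Build the fixed-size feature matrix a CNN consumes from a note.

    embedding: dict word -> vector, standing in for a pretrained model.
    Out-of-vocabulary words and padding positions map to zero vectors,
    and notes longer than max_len are truncated.
    """
    zero = [0.0] * dim
    rows = [list(embedding.get(t, zero)) for t in tokens[:max_len]]
    rows += [zero] * (max_len - len(rows))
    return rows

# tiny stand-in vocabulary
emb = {"fever": [0.1, 0.2, 0.3], "cough": [0.4, 0.5, 0.6]}
m = embed_note(["fever", "cough", "unknownword"], emb)
```

The convolutional filters then slide over rows of this matrix, which is how the network picks up keyword patterns without any dictionary or preprocessing pipeline.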
A CNN based Hybrid approach towards automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal V.; Katiyar, Sunil K.
2013-06-01
Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration has witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as Vector Machines, Cellular Neural Networks (CNN), SIFT, coresets, and Cellular Automata. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimisation, adaptive resampling and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically used spectral and spatial information to represent contextual knowledge via a CNN-prolog approach. The methodology was also shown to be effective in providing intelligent interpretation and adaptive resampling.
CNN Newsroom Classroom Guides. May 1-31, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of May, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: Chelsea Clinton decides to attend Stanford University, Zaire's president and rebel…
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa
2017-03-01
This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in evaluations on synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
Convolutional neural networks with balanced batches for facial expressions recognition
NASA Astrophysics Data System (ADS)
Battini Sönmez, Elena; Cangelosi, Angelo
2017-03-01
This paper considers the issue of fully automatic emotion classification from 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to produce a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNNs), which are built as collections of interconnected processing elements loosely modeled on the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm that uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN that can yield state-of-the-art accuracy on the very competitive Extended Cohn-Kanade benchmark database.
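Balanced batch construction, as described, draws an equal number of samples per class, oversampling minority classes so every batch stays balanced; a sketch under that assumption (not the authors' code):

```python
import random

def balanced_batches(samples_by_class, per_class, seed=0):
    """Yield batches holding exactly `per_class` samples from each class.

    samples_by_class: dict label -> list of samples. Classes with fewer
    samples are drawn with replacement, so imbalanced datasets still
    produce balanced batches -- the trick used to counter class skew.
    """
    rng = random.Random(seed)
    n_batches = max(len(v) for v in samples_by_class.values()) // per_class
    for _ in range(max(n_batches, 1)):
        batch = []
        for label, pool in samples_by_class.items():
            batch.extend((label, rng.choice(pool)) for _ in range(per_class))
        rng.shuffle(batch)
        yield batch
```

With expression datasets, where neutral faces typically dwarf rarer emotions, this keeps the gradient signal from being dominated by the majority class.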
Haenssle, H A; Fink, C; Schneiderbauer, R; Toberer, F; Buhl, T; Blum, A; Kalloo, A; Hassen, A Ben Hadj; Thomas, L; Enk, A; Uhlmann, L
2018-05-28
Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN's diagnostic performance to larger groups of dermatologists are lacking. Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists' diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN's performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge. In level-I dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. More clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P = 0.19) and specificity to 75.7% (±11.7%, P < 0.05). The CNN ROC curve revealed a higher specificity of 82.5% when compared with dermatologists in level-I (71.3%, P < 0.01) and level-II (75.7%, P < 0.01) at their sensitivities of 86.6% and 88.9%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P < 0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge. For the first time we compared a CNN's diagnostic performance with a large international group of 58 dermatologists, including 30 experts. 
Most dermatologists were outperformed by the CNN. Irrespective of any physicians' experience, they may benefit from assistance by a CNN's image classification. This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks_web/).
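The ROC AUC used above to compare the CNN with dermatologists can be computed directly from scores via the rank (Mann-Whitney) formulation; a minimal sketch:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank formulation.

    Equals the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case; ties count one half.
    labels: 1 for positive (e.g. melanoma), 0 for negative.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because the AUC summarizes every operating point, it lets a classifier with continuous scores be compared fairly against readers who each sit at a single sensitivity/specificity point.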
Urban, Gregor; Tripathi, Priyam; Alkayali, Talal; Mittal, Mohit; Jalali, Farid; Karnes, William; Baldi, Pierre
2018-06-18
The benefit of colonoscopy for colorectal cancer prevention depends on the adenoma detection rate (ADR). The ADR should reflect adenoma prevalence rate, estimated to be greater than 50% among the screening-age population. Yet the rate of adenoma detection by colonoscopists varies from 7% to 53%. It is estimated that every 1% increase in ADR reduces the risk of interval colorectal cancers by 3-6%. New strategies are needed to increase the ADR during colonoscopy. We tested the ability of computer-assisted image analysis, with convolutional neural networks (a deep learning model for image analysis), to improve polyp detection, a surrogate of ADR. We designed and trained deep convolutional neural networks (CNN) to detect polyps using a diverse and representative set of 8641 hand labeled images from screening colonoscopies collected from over 2000 patients. We tested the models on 20 colonoscopy videos with a total duration of 5 hours. Expert colonoscopists were asked to identify all polyps in 9 de-identified colonoscopy videos, selected from archived video studies, either with or without benefit of the CNN overlay. Their findings were compared with those of the CNN, using CNN-assisted expert review as the reference. When tested on manually labeled images, the CNN identified polyps with an area under the receiver operating characteristic curve (ROC-AUC) of 0.991 and an accuracy of 96.4%. In the analysis of colonoscopy videos in which 28 polyps were removed, 4 expert reviewers identified 8 additional polyps without CNN assistance that had not been removed and identified an additional 17 polyps with CNN assistance (45 in total). All polyps removed and identified by expert review were detected by the CNN. The CNN had a false-positive rate of 7%. In a set of 8641 colonoscopy images containing 4088 unique polyps the CNN identified polyps with a cross-validation accuracy of 96.4% and ROC-AUC value of 0.991. 
The CNN system can detect and localize polyps well within real-time constraints using an ordinary desktop machine with a contemporary graphics processing unit. This system could increase ADR and reduce interval colorectal cancers but requires validation in large multicenter trials. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus
2017-05-01
For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electrooptical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real-time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNNs) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data.
A comparison to legacy target recognition approaches shows the impressive performance increase by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.
A comparison study between MLP and convolutional neural network models for character recognition
NASA Astrophysics Data System (ADS)
Ben Driss, S.; Soua, M.; Kachouri, R.; Akil, M.
2017-05-01
Optical Character Recognition (OCR) systems have been designed to operate on text contained in scanned documents and images. They include text detection and character recognition, in which characters are described and then classified. In the classification step, characters are identified according to their features or template descriptions; a given classifier is then employed to identify characters. In this context, we have proposed the unified character descriptor (UCD) to represent characters based on their features, with matching employed to perform the classification. This recognition scheme achieves good OCR accuracy on homogeneous scanned documents; however, it cannot discriminate characters with high font variation and distortion. To improve recognition, classifiers based on neural networks can be used. The multilayer perceptron (MLP) ensures high recognition accuracy when robustly trained. Moreover, the convolutional neural network (CNN) is nowadays gaining a lot of popularity for its high performance. However, both the CNN and the MLP may suffer from the large amount of computation in the training phase. In this paper, we establish a comparison between the MLP and the CNN. We provide the MLP with the UCD descriptor and an appropriate network configuration. For the CNN, we employ the convolutional network designed for handwritten and machine-printed character recognition (LeNet-5) and adapt it to support 62 classes, covering both digits and letters. In addition, GPU parallelization is studied to speed up both the MLP and CNN classifiers. Based on our experiments, we demonstrate that the real-time CNN used is twice as relevant as the MLP when classifying characters.
Deep learning analyzes Helicobacter pylori infection by upper gastrointestinal endoscopy images.
Itoh, Takumi; Kawahira, Hiroshi; Nakashima, Hirotaka; Yata, Noriko
2018-02-01
Helicobacter pylori (HP)-associated chronic gastritis can cause mucosal atrophy and intestinal metaplasia, both of which increase the risk of gastric cancer. The accurate diagnosis of HP infection during routine medical checks is important. We aimed to develop a convolutional neural network (CNN), a deep-learning-based machine-learning algorithm, capable of recognizing specific features of gastric endoscopy images. The goal behind developing such a system was to detect HP infection early, thus preventing gastric cancer. For the development of the CNN, we used 179 upper gastrointestinal endoscopy images obtained from 139 patients (65 were HP-positive: ≥ 10 U/mL and 74 were HP-negative: < 3 U/mL on HP IgG antibody assessment). Of the 179 images, 149 were used as training images, and the remaining 30 (15 from HP-negative patients and 15 from HP-positive patients) were set aside as test images. The 149 training images were subjected to data augmentation, which yielded 596 images. We used the CNN to create a learning tool that would recognize HP infection and assessed the decision accuracy of the CNN on the 30 test images by calculating the sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC). The sensitivity and specificity of the CNN for the detection of HP infection were both 86.7%, and the AUC was 0.956. CNN-aided diagnosis of HP infection seems feasible and is expected to facilitate and improve diagnosis during health check-ups.
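The evaluation metrics quoted above (sensitivity, specificity, AUC) can be computed from test-set labels and classifier scores with a short, framework-free sketch (function names are illustrative):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP). Labels are 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half): the Mann-Whitney U formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With only 30 test images, the pairwise AUC formulation above is perfectly practical; larger sets would use a sorted-rank implementation.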
NASA Astrophysics Data System (ADS)
Olory Agomma, R.; Vázquez, C.; Cresson, T.; De Guise, J.
2018-02-01
Most algorithms that detect and identify anatomical structures in medical images require either to be initialized close to the target structure, to know that the structure is present in the image, or to be trained on a homogeneous database (e.g. all full body or all lower limbs). Detecting these structures when there is no guarantee that the structure is present in the image, or when the image database is heterogeneous (mixed configurations), is a challenge for automatic algorithms. In this work we compared two state-of-the-art machine learning techniques in order to determine which one is the most appropriate for predicting target locations based on image patches. Knowing the positions of thirteen landmark points, labelled by an expert on EOS frontal radiographs, we learn the displacement between salient points detected in the image and these thirteen landmarks. The learning step is carried out with two machine learning methods: the Convolutional Neural Network (CNN) and the Random Forest (RF). The automatic detection of the thirteen landmark points in a new image is then obtained by averaging the positions of each landmark as estimated from all the salient points in the new image. We obtain, for CNN and RF respectively, an average prediction error (mean ± standard deviation, in mm) of 29 ± 18 and 30 ± 21 over the thirteen landmark points, indicating the approximate location of anatomical regions. On the other hand, the learning time is 9 days for the CNN versus 80 minutes for the RF. We provide a comparison of the results between the two machine learning approaches.
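The averaging step described above, in which each salient point casts a vote for a landmark via its learned displacement, can be sketched as follows (names are illustrative; the CNN or RF displacement regressor is assumed already trained):

```python
def predict_landmark(salient_points, displacements):
    """Each salient point votes for the landmark position by adding its
    predicted displacement; the final estimate averages all votes.
    salient_points and displacements are parallel lists of (x, y) pairs."""
    votes = [(sx + dx, sy + dy)
             for (sx, sy), (dx, dy) in zip(salient_points, displacements)]
    n = len(votes)
    return (sum(x for x, _ in votes) / n, sum(y for _, y in votes) / n)
```

Averaging many noisy votes is what makes the scheme robust to individual mispredicted displacements.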
Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN
NASA Astrophysics Data System (ADS)
Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo
2017-10-01
Recently, hyperspectral imaging systems (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer have been widely used owing to their strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and the unclear characteristics of low-concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). For both the DNN and CNN, spectral signal preprocessing (e.g., offset, noise, and baseline removal) is carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of a five-layer DNN, which is trained by a stochastic gradient descent (SGD) algorithm (batch size 50) with dropout regularization (ratio 0.7). In the CNN algorithm, the preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
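The 1 × 3 convolution and 1 × 2 max-pooling operations applied to the preprocessed spectra can be sketched in plain Python (a minimal illustration of the layer mechanics, not the authors' implementation):

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (strictly, cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def maxpool1d(signal, width=2):
    """Non-overlapping 1-D max pooling (trailing remainder is dropped)."""
    return [max(signal[i:i + width])
            for i in range(0, len(signal) - width + 1, width)]
```

A 1 × 3 kernel such as [1, 0, -1] acts as an edge detector along the spectral axis, which is why small 1-D kernels suit absorption-line features.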
Chinese Sentence Classification Based on Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Gu, Chengwei; Wu, Ming; Zhang, Chuang
2017-10-01
Sentence classification is one of the significant issues in Natural Language Processing (NLP), and feature extraction is often regarded as its key point. Traditional machine learning methods, such as the naive Bayes model, cannot take high-level features into consideration. Neural networks for sentence classification can make use of contextual information to achieve better results. In this paper, we focus on classifying Chinese sentences and propose a novel Convolutional Neural Network (CNN) architecture for the task. In particular, while most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to obtain a better result, and we use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
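A minimal sketch of the margin-based (hinge) loss that replaces softmax in the model above; this is a Crammer-Singer-style multiclass form, and the exact loss used in the paper may differ:

```python
def multiclass_hinge_loss(scores, true_class, margin=1.0):
    """Margin-based loss for an SVM output layer: penalize every wrong
    class whose score comes within `margin` of the true class's score."""
    correct = scores[true_class]
    return sum(max(0.0, margin + s - correct)
               for i, s in enumerate(scores) if i != true_class)
```

Unlike cross-entropy, this loss is exactly zero once every competing class is beaten by the margin, which tends to produce sparser gradients.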
Mining key elements for severe convection prediction based on CNN
NASA Astrophysics Data System (ADS)
Liu, Ming; Pan, Ning; Zhang, Changan; Sha, Hongzhou; Zhang, Bolei; Liu, Liang; Zhang, Meng
2017-04-01
Severe convective weather is a type of weather disaster accompanied by heavy rainfall, gusty wind, hail, etc. Along with recent developments in remote sensing and numerical modeling, high-volume and long-term observational and modeling data have accumulated, capturing massive severe convective events over particular areas and time periods. With these high-volume and high-variety weather data, most existing studies and methods investigate the dynamical laws, cause analysis, potential rules, and prediction enhancement by utilizing the governing equations from fluid dynamics and thermodynamics. In this study, a key-element mining method is proposed for severe convection prediction based on a convolutional neural network (CNN). It aims to identify the key areas and key elements from huge amounts of historical weather data, including conventional measurements, weather radar and satellite observations, as well as numerical modeling and/or reanalysis data. In this manner, the machine-learning-based method can help human forecasters in their decision-making on operational forecasts of severe convective weather by extracting key information from real-time and historical weather big data. The method first utilizes computer vision technology to complete the preprocessing of the meteorological variables. Then, it uses information such as radar maps and expert knowledge to annotate all images automatically. Finally, using the CNN model, it can analyze and evaluate each weather element (e.g., particular variables, patterns, features, etc.) and identify the key areas of those critical weather elements, helping forecasters quickly screen out the key elements from huge amounts of observational data under the current weather conditions.
Based on rich weather measurement and model data (up to 10 years) over Fujian province in China, where severe convective weather is very active during the summer months, experimental tests are conducted with the new machine-learning method via CNN models. Based on the analysis of the experimental results and case studies, the proposed method has the following benefits for severe convection prediction: (1) it helps forecasters narrow down the scope of analysis and saves lead time for high-impact severe convection; (2) it processes huge amounts of weather big data with machine learning methods rather than relying on traditional theory and knowledge, providing a new way to explore and quantify severe convective weather; and (3) it provides machine-learning-based end-to-end analysis and processing with considerable scalability in data volume, accomplishing the analysis without human intervention.
Zhang, Jianhua; Li, Sunan; Wang, Rubin
2017-01-01
In this paper, we deal with the Mental Workload (MWL) classification problem based on the measured physiological data. First we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined and a resampling strategy was used to enhance the diversity of individual CNN models. The results of MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and is featured by entirely automatic feature extraction and MWL classification, when compared with traditional machine learning methods.
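Two of the aggregation schemes examined above, majority voting and weighted averaging, reduce to a few lines (an illustrative sketch, not the authors' code; stacking would train a meta-classifier instead):

```python
def majority_vote(predictions):
    """predictions: list of per-model class labels for one sample."""
    return max(set(predictions), key=predictions.count)

def weighted_average(prob_lists, weights):
    """Combine per-model class-probability vectors using model weights."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_lists)) / total
            for c in range(n_classes)]
```

The resampling strategy mentioned in the abstract matters precisely because these combiners only help when the individual CNNs make different errors.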
NASA Astrophysics Data System (ADS)
Zhang, Wei; Jiang, Ling; Han, Lei
2018-04-01
Convective storm nowcasting refers to the prediction of convective weather initiation, development, and decay on a very short term (typically 0-2 h). Despite marked progress over the past years, severe convective storm nowcasting still remains a challenge. With the boom of machine learning, it has been well applied in various fields, especially the convolutional neural network (CNN). In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal of convective storm nowcasting is to predict whether there will be a convective storm within 30 min. We compress the VDRAS reanalysis data to low-dimensional data with the CNN, use it as the observation vector of the HMM, and then obtain the development trend of strong convective weather in the form of a time series. Results show that our method can extract robust features without any artificial selection of features and can capture the development trend of strong convective storms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta
This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
Intelligent Machine Learning Approaches for Aerospace Applications
NASA Astrophysics Data System (ADS)
Sathyan, Anoop
Machine Learning is a type of artificial intelligence that provides machines or networks the ability to learn from data without the need to explicitly program them. There are different kinds of machine learning techniques. This thesis discusses the applications of two of these approaches: Genetic Fuzzy Logic and Convolutional Neural Networks (CNN). The Fuzzy Logic System (FLS) is a powerful tool that can be used for a wide variety of applications. An FLS is a universal approximator that reduces the need for complex mathematics, replacing it with expert knowledge of the system to produce an input-output mapping using If-Then rules. Expert knowledge of a system can help in obtaining the parameters for small-scale FLSs, but for larger networks we need sophisticated approaches that can automatically train the network to meet the design requirements. This is where Genetic Algorithms (GA) and EVE come into the picture. Both GA and EVE can tune the FLS parameters to minimize a cost function that is designed to meet the requirements of the specific problem. EVE is an artificial intelligence developed by Psibernetix that is trained to tune large-scale FLSs. The parameters of an FLS can include the membership functions and rulebase of the inherent Fuzzy Inference Systems (FISs). The main issue with using a genetic fuzzy system (GFS) is that the number of parameters in a FIS increases exponentially with the number of inputs, making them increasingly harder to tune. To reduce this issue, the FLSs discussed in this thesis consist of 2-input-1-output FISs in cascade (Chapter 4) or as a layer of parallel FISs (Chapter 7). We have obtained extremely good results using the GFS for different applications at a reduced computational cost compared to other algorithms that are commonly used to solve the corresponding problems.
In this thesis, GFSs have been designed for controlling an inverted double pendulum, a task allocation problem of clustering targets amongst a set of UAVs, a fire detection problem, and the aircraft conflict resolution problem. During the last decade, CNNs have become increasingly popular in the domain of image and speech processing. CNNs have far more parameters than GFSs; they typically have hundreds of thousands or even millions of parameters, which are tuned using the back-propagation algorithm with common cost functions such as integral squared error, softmax loss, etc. Chapter 5 discusses a classification problem of labeling images as containing humans or not, and Chapter 6 discusses a regression task using a CNN to produce an approximate near-optimal route for the Traveling Salesman Problem (TSP), which is regarded as one of the most complicated decision-making problems. Both the GFS and the CNN are used to develop intelligent systems specific to each application, providing computational efficiency, robustness in the face of uncertainties, and scalability.
NASA Astrophysics Data System (ADS)
Jenuwine, Natalia M.; Mahesh, Sunny N.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Early detection of lung nodules from CT scans is key to improving lung cancer treatment, but poses a significant challenge for radiologists due to the high throughput required of them. Computer-Aided Detection (CADe) systems aim to automatically detect these nodules with computer algorithms, thus improving diagnosis. These systems typically use a candidate selection step, which identifies all objects that resemble nodules, followed by a machine learning classifier which separates true nodules from false positives. We create a CADe system that uses a 3D convolutional neural network (CNN) to detect nodules in CT scans without a candidate selection step. Using data from the LIDC database, we train a 3D CNN to analyze subvolumes from anywhere within a CT scan and output the probability that each subvolume contains a nodule. Once trained, we apply our CNN to detect nodules from entire scans, by systematically dividing the scan into overlapping subvolumes which we input into the CNN to obtain the corresponding probabilities. By enabling our network to process an entire scan, we expect to streamline the detection process while maintaining its effectiveness. Our results imply that with continued training using an iterative training scheme, the one-step approach has the potential to be highly effective.
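The subvolume tiling described above, systematically dividing a scan into overlapping windows for the CNN, can be sketched as follows (window and stride values are illustrative, not taken from the paper):

```python
def subvolume_origins(shape, window, stride):
    """Origins of overlapping cubic subvolumes covering a 3-D scan.
    The final window along each axis is clamped so the scan edge is
    always covered even when stride does not divide the size evenly."""
    def axis_starts(size):
        starts = list(range(0, max(size - window, 0) + 1, stride))
        if starts[-1] != size - window:
            starts.append(size - window)
        return starts
    return [(x, y, z)
            for x in axis_starts(shape[0])
            for y in axis_starts(shape[1])
            for z in axis_starts(shape[2])]
```

Each origin indexes a subvolume that is fed to the CNN for a nodule probability; a stride smaller than the window gives the overlap that prevents nodules from being split across tiles.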
Chedjou, Jean Chamberlain; Kyamakya, Kyandoghere
2015-04-01
This paper develops and validates a comprehensive and universally applicable computational concept for solving nonlinear differential equations (NDEs) through a neurocomputing concept based on cellular neural networks (CNNs). High precision, stability, convergence, and the lowest possible memory requirements are ensured by the CNN processor architecture. A significant challenge solved in this paper is that all these computing features are ensured in all system states (regular or chaotic) and in all bifurcation conditions that may be experienced by NDEs. One particular quintessence of this paper is to develop and demonstrate a solver concept that shows and ensures that CNN processors (realized either in hardware or in software) are universal solvers of NDE models. The solving logic or algorithm for given NDEs (possible examples are: Duffing, Mathieu, Van der Pol, Jerk, Chua, Rössler, Lorenz, Burgers, and the transport equations) on a CNN processor system is provided by a set of templates that are computed by our comprehensive template calculation technique, which we call nonlinear adaptive optimization. This paper is therefore a significant contribution and represents a cutting-edge real-time computational engineering approach, especially considering the various scientific and engineering applications of this ultrafast, energy- and memory-efficient, and high-precision NDE solver concept. For illustration purposes, three NDE models are demonstratively solved, and related CNN templates are derived and used: the periodically excited Duffing equation, the Mathieu equation, and the transport equation.
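For reference, the periodically excited Duffing equation mentioned above, x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t), can be integrated conventionally with a Runge-Kutta step. This is a classical baseline sketch, not the CNN-template solver itself, and the coefficient values are illustrative:

```python
import math

def duffing_rk4(x, v, t, dt, delta=0.2, alpha=1.0, beta=1.0,
                gamma=0.3, omega=1.0):
    """One RK4 step of the Duffing oscillator written as the first-order
    system x' = v, v' = gamma*cos(omega*t) - delta*v - alpha*x - beta*x**3."""
    def f(x, v, t):
        return v, gamma * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3
    k1x, k1v = f(x, v, t)
    k2x, k2v = f(x + dt / 2 * k1x, v + dt / 2 * k1v, t + dt / 2)
    k3x, k3v = f(x + dt / 2 * k2x, v + dt / 2 * k2v, t + dt / 2)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v, t + dt)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))
```

Such a step-by-step integrator is the kind of conventional reference solution against which a template-driven CNN solver would be benchmarked.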
NASA Astrophysics Data System (ADS)
Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong
2017-10-01
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%. It also outperformed the standard machine learning algorithm, obtaining 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
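The K-means clustering step used in the field segmentation above can be sketched on scalar features with Lloyd's algorithm (the evenly spaced initialization is an illustrative assumption):

```python
def kmeans_1d(values, k, iters=20):
    """Plain Lloyd's K-means on scalar features (e.g. per-pixel intensity),
    sketching the clustering used to split a field into k regions.
    Centers start at k evenly spaced sorted values (an assumption)."""
    vals = sorted(values)
    centers = [vals[i * (len(vals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda c: abs(v - centers[c]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

In the paper the features would be multi-dimensional pixel descriptors rather than scalars, but the assign-then-recompute loop is the same.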
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as the convolutional neural network (CNN) and the deep belief network (DBN). However, they suffer from problems such as local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely, the deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and the fast training of ELM. It uses multiple alternating convolution layers and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
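The stochastic pooling used in DC-ELM's last hidden layer picks an activation with probability proportional to its magnitude. A minimal single-region sketch, assuming non-negative activations (e.g. after ReLU):

```python
import random

def stochastic_pool(activations, rng=random):
    """Stochastic pooling over one pooling region: sample an activation
    with probability proportional to its (non-negative) magnitude."""
    total = sum(activations)
    if total == 0:
        return 0.0
    r = rng.random() * total
    acc = 0.0
    for a in activations:
        acc += a
        if r <= acc:
            return a
    return activations[-1]
```

Unlike max pooling, small activations keep a small chance of being selected, which acts as a regularizer during training.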
Brain tumor segmentation with Deep Neural Networks.
Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo
2017-01-01
In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high-capacity DNN while being extremely efficient. Here, we give a description of the different model choices that we found necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e., DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision: our CNN exploits both local features and more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer, which allows a 40-fold speed-up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data set reveal that our architecture improves over the currently published state of the art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pasquet-Itam, J.; Pasquet, J.
2018-04-01
We have applied a convolutional neural network (CNN) to classify and detect quasars in the Sloan Digital Sky Survey Stripe 82 and also to predict the photometric redshifts of quasars. The network takes the variability of objects into account by converting light curves into images. The width of the images, noted w, corresponds to the five magnitudes ugriz and the height of the images, noted h, represents the date of the observation. The CNN provides good results, with a precision of 0.988 for a recall of 0.90, compared to a precision of 0.985 at the same recall with a random forest classifier. Moreover, 175 new quasar candidates are found with the CNN at a fixed recall of 0.97. The combination of the probabilities given by the CNN and the random forest improves the performance further, with a precision of 0.99 for a recall of 0.90. For the redshift predictions, the CNN presents excellent results, higher than those obtained with a feature extraction step and different classifiers (a K-nearest-neighbors, a support vector machine, a random forest and a Gaussian process classifier). Indeed, the accuracy of the CNN reaches 78.09% within |Δz| < 0.1, 86.15% within |Δz| < 0.2, and 91.2% within |Δz| < 0.3, and the root mean square (rms) error is 0.359. The performance of the KNN is lower over the three |Δz| regions: its accuracy within |Δz| < 0.1, |Δz| < 0.2, and |Δz| < 0.3 is 73.72%, 82.46%, and 90.09% respectively, and its rms amounts to 0.395. The CNN thus successfully reduces the dispersion and the catastrophic redshifts of quasars. This new method is very promising for the future of big databases such as the Large Synoptic Survey Telescope. A table of the candidates is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A97
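The |Δz| accuracy and rms figures quoted above can be computed as follows (a sketch; some photometric-redshift papers normalize Δz by 1 + z, which is not assumed here):

```python
def photo_z_metrics(z_true, z_pred, thresholds=(0.1, 0.2, 0.3)):
    """Fraction of predictions with |dz| below each threshold, plus the
    root-mean-square of dz = z_pred - z_true."""
    dz = [p - t for t, p in zip(z_true, z_pred)]
    n = len(dz)
    acc = {th: sum(1 for d in dz if abs(d) < th) / n for th in thresholds}
    rms = (sum(d * d for d in dz) / n) ** 0.5
    return acc, rms
```

Catastrophic redshifts are exactly the predictions falling outside the widest |Δz| band, so these accuracies and the rms together summarize both dispersion and outlier rate.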
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules that adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extent areas we introduce several submodules: first, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers based on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
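The hashing submodule above, which trims redundant examples before model adaptation, can be caricatured with a quantize-and-hash filter (the one-decimal quantization below is an illustrative stand-in for a real locality-sensitive hash):

```python
def dedup_by_hash(samples):
    """Keep one representative per quantized feature signature: a minimal
    sketch of a hashing module that removes near-duplicate examples.
    Rounding features to one decimal place is an illustrative choice."""
    seen, kept = set(), []
    for features in samples:
        key = hash(tuple(round(f, 1) for f in features))
        if key not in seen:
            seen.add(key)
            kept.append(features)
    return kept
```

In the workflow this runs over CNN feature vectors of mass-selected candidate examples, so that the adaptation set stays small and diverse.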
Component Pin Recognition Using Algorithms Based on Machine Learning
NASA Astrophysics Data System (ADS)
Xiao, Yang; Hu, Hong; Liu, Ze; Xu, Jiangchang
2018-04-01
The purpose of machine vision for a plug-in machine is to improve the machine's stability and accuracy, and recognition of the component pin is an important part of the vision system. This paper focuses on component pin recognition using three different techniques. The first technique involves traditional image processing using the core algorithm of binary large object (BLOB) analysis. The second technique uses the histogram of oriented gradients (HOG) to experimentally compare the effects of the support vector machine (SVM) and the adaptive boosting (AdaBoost) meta-algorithm as classifiers. The third technique is the use of a deep learning method known as the convolutional neural network (CNN), which identifies the pin by comparing a sample against its training data. The main purpose of the research presented in this paper is to increase the knowledge of learning methods used in the plug-in machine industry in order to achieve better results.
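The HOG features fed to the SVM and AdaBoost classifiers rest on per-cell orientation histograms. A minimal sketch for one cell (unsigned 0-180° orientations, 9 bins, no block normalization or bin interpolation):

```python
import math

def orientation_histogram(gx, gy, n_bins=9):
    """Histogram of oriented gradients over one cell: each pixel votes
    its gradient magnitude into an unsigned-orientation bin."""
    hist = [0.0] * n_bins
    for dx, dy in zip(gx, gy):
        mag = math.hypot(dx, dy)
        ang = math.degrees(math.atan2(dy, dx)) % 180.0
        hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist
```

A full HOG descriptor would add per-block contrast normalization and soft bin interpolation; this cell-level histogram is the core idea.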
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
NASA Astrophysics Data System (ADS)
Govorov, Michael; Gienko, Gennady; Putrenko, Viktor
2018-05-01
In this paper, several supervised machine learning algorithms were explored to define homogeneous regions of concentration of uranium in surface waters in Ukraine using multiple environmental parameters. The previous study focused on finding the primary environmental parameters related to uranium in ground waters using several methods of spatial statistics and unsupervised classification. At this step, we refined the regionalization using Artificial Neural Network (ANN) techniques including the Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Convolutional Neural Network (CNN). The study focuses on building local ANN models, which may significantly improve the prediction results of machine learning algorithms by taking into consideration non-stationarity and autocorrelation in spatial data.
Relative location prediction in CT scan images using convolutional neural networks.
Guo, Jiajia; Du, Hongwei; Zhu, Jianyue; Yan, Ting; Qiu, Bensheng
2018-07-01
Relative location prediction in computed tomography (CT) scan images is a challenging problem. Many traditional machine learning methods have been applied in attempts to alleviate this problem. However, the accuracy and speed of these methods cannot meet the requirements of medical scenarios. In this paper, we propose a regression model based on one-dimensional convolutional neural networks (CNN) to determine the relative location of a CT scan image both quickly and precisely. In contrast to other common CNN models that use a two-dimensional image as input, the input of this CNN model is a feature vector extracted by a shape context algorithm with spatial correlation. Normalization via z-score is first applied as a pre-processing step. Then, in order to prevent overfitting and improve the model's performance, 20% of the elements of the feature vectors are randomly set to zero. This CNN model consists primarily of three one-dimensional convolutional layers, three dropout layers and two fully-connected layers with appropriate loss functions. A public dataset is employed to validate the performance of the proposed model using 5-fold cross validation. Experimental results demonstrate excellent performance of the proposed model compared with contemporary techniques, achieving a median absolute error of 1.04 cm and a mean absolute error of 1.69 cm. The time taken for each relative location prediction is approximately 2 ms. The results indicate that the proposed CNN method can contribute to quick and accurate relative location prediction in CT scan images, which can improve the efficiency of the medical picture archiving and communication system in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
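The two pre-processing steps described above (z-score normalization followed by randomly zeroing 20% of the feature-vector elements as a form of input dropout) can be sketched as follows; the synthetic data and seed are placeholders, not the paper's dataset:

```python
import numpy as np

def preprocess(vectors, drop_rate=0.2, seed=0):
    """Z-score each feature column, then randomly zero a fraction of the
    elements (input dropout) to discourage overfitting."""
    mu = vectors.mean(axis=0)
    sigma = vectors.std(axis=0) + 1e-8       # avoid division by zero
    z = (vectors - mu) / sigma
    rng = np.random.default_rng(seed)
    mask = rng.random(z.shape) >= drop_rate  # keep ~80% of the elements
    return z * mask

X = np.random.default_rng(1).normal(5.0, 2.0, size=(100, 32))  # toy features
Xp = preprocess(X)
print(Xp.shape)
```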
Opinion mining on book review using CNN-L2-SVM algorithm
NASA Astrophysics Data System (ADS)
Rozi, M. F.; Mukhlash, I.; Soetrisno; Kimura, M.
2018-03-01
A review of a product can reflect the quality of the product itself, and extracting information from such reviews reveals the sentiment of the opinions they express. The process of extracting useful information from user reviews is called opinion mining. A review-extraction approach gaining ground today is the deep learning model, which many researchers have used to obtain excellent performance in natural language processing. In this research, one deep learning model, the Convolutional Neural Network (CNN), is used for feature extraction, with an L2 Support Vector Machine (SVM) as the classifier. These methods are applied to determine the sentiment of book review data. The method shows state-of-the-art performance, with 83.23% accuracy in the training phase and 64.6% in the testing phase.
Generating description with multi-feature fusion and saliency maps of image
NASA Astrophysics Data System (ADS)
Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo
2018-04-01
Generating a description for an image can be regarded as visual understanding; the task spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use one to generate natural-sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network), trained on ImageNet, to extract image features. However, such features cannot adequately capture the content of images, as they may focus only on the object area. We therefore add scene information to the image feature using a CNN trained on Places205. Experiments show that the model with multi-feature input extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights to the images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.
Applying machine-learning techniques to Twitter data for automatic hazard-event classification.
NASA Astrophysics Data System (ADS)
Filgueira, R.; Bee, E. J.; Diaz-Doce, D.; Poole, J., Sr.; Singh, A.
2017-12-01
The constant flow of information offered by tweets provides valuable information about all sorts of events at high temporal and spatial resolution. Over the past year we have been analyzing geological hazards/phenomena in real time, such as earthquakes, volcanic eruptions, landslides, floods or the aurora, as part of the GeoSocial project, by geo-locating tweets filtered by keywords on a web map. However, not all the filtered tweets are related to hazard/phenomenon events. This work explores two classification techniques for automatic hazard-event categorization based on tweets about the "Aurora". First, tweets were filtered using aurora-related keywords, removing stop words and selecting those written in English. For classifying the remainder into "aurora-event" or "no-aurora-event" categories, we compared two state-of-the-art techniques: Support Vector Machine (SVM) and deep Convolutional Neural Network (CNN) algorithms. Both approaches belong to the family of supervised learning algorithms, which make predictions based on a labelled training dataset. Therefore, we created a training dataset by tagging 1200 tweets into the two categories. We compared the performance of four different classifiers (Linear Regression, Logistic Regression, Multinomial Naïve Bayes and Stochastic Gradient Descent) provided by the Scikit-Learn library, using our training dataset to build the classifier. The results showed that Logistic Regression (LR) achieves the best accuracy (87%), so we selected the SVM-LR classifier to categorise a large collection of tweets using the "dispel4py" framework. Later, we developed a CNN classifier, where the first layer embeds words into low-dimensional vectors. The next layer performs convolutions over the embedded word vectors. Results from the convolutional layer are max-pooled into a long feature vector, which is classified using a softmax layer.
The CNN's accuracy (83%) is lower than that of the SVM-LR, since the algorithm needs a bigger training dataset to reach its full accuracy. We used the TensorFlow framework to apply the CNN classifier to the same collection of tweets. In the future we will modify both classifiers to work with other geo-hazards, use larger training datasets and apply them in real time.
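The CNN text-classifier structure described above (word embeddings, convolution over the embedded word vectors, max-over-time pooling, softmax) can be illustrated with a minimal NumPy forward pass; all dimensions and random weights here are toy assumptions, not the authors' TensorFlow model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def text_cnn_forward(token_ids, emb, conv_w, conv_b, fc_w, fc_b):
    """Minimal text-CNN forward pass: embed tokens, convolve over the
    word-vector sequence, ReLU + max-pool over time, classify with softmax."""
    x = emb[token_ids]                       # (seq_len, emb_dim)
    seq_len, _ = x.shape
    n_filters, width, _ = conv_w.shape       # (n_filters, width, emb_dim)
    conv_out = np.zeros((seq_len - width + 1, n_filters))
    for t in range(seq_len - width + 1):
        window = x[t:t + width]              # (width, emb_dim)
        conv_out[t] = np.tensordot(conv_w, window,
                                   axes=([1, 2], [0, 1])) + conv_b
    pooled = np.maximum(conv_out, 0).max(axis=0)   # max over time
    return softmax(fc_w @ pooled + fc_b)

rng = np.random.default_rng(0)
vocab, emb_dim, n_filters, width, n_classes = 50, 8, 6, 3, 2
probs = text_cnn_forward(
    rng.integers(0, vocab, size=12),                    # toy token ids
    rng.standard_normal((vocab, emb_dim)) * 0.1,        # embedding table
    rng.standard_normal((n_filters, width, emb_dim)) * 0.1,
    np.zeros(n_filters),
    rng.standard_normal((n_classes, n_filters)) * 0.1,
    np.zeros(n_classes),
)
print(probs)
```

The two output probabilities correspond to the "aurora-event" / "no-aurora-event" categories in a trained version of such a model.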
NASA Astrophysics Data System (ADS)
Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo
2017-03-01
Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. Although CNNs are well known to extract features effectively from the original image, the image set was composed of the original mammogram together with an eigen-image able to capture the X-ray characteristics. The 100 test images, which were not used in training, were used to validate the performance. The correlation coefficient between the breast-density estimates by the CNN and those by the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.
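One plausible post-processing step, assumed here for illustration rather than taken from the paper, is turning the per-patch dense/fatty CNN labels into a percent-density estimate:

```python
import numpy as np

def percent_density(tissue_labels, dense=1, fatty=0):
    """Percent density from per-patch classification labels:
    dense patches / (dense + fatty) patches, as a percentage."""
    labels = np.asarray(tissue_labels)
    n_dense = np.sum(labels == dense)
    n_fatty = np.sum(labels == fatty)
    return 100.0 * n_dense / (n_dense + n_fatty)

# hypothetical labels for 10 sub-images of one mammogram
pd = percent_density([1, 1, 0, 0, 0, 0, 0, 1, 0, 0])
print(pd)  # 30.0
```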
AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images.
Albarqouni, Shadi; Baur, Christoph; Achilles, Felix; Belagiannis, Vasileios; Demirci, Stefanie; Navab, Nassir
2016-05-01
The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large-scale databases for real-world images, its application for biomedical purposes requires a deeper understanding and hence a more precise definition of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations introducing disagreement between users. Despite being a valuable resource for learning annotation models from crowdsourcing, conventional machine-learning methods may have difficulties dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN) via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API realizing image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and prove the necessity of data aggregation integration.
NASA Astrophysics Data System (ADS)
Huertas-Company, M.; Primack, J. R.; Dekel, A.; Koo, D. C.; Lapiner, S.; Ceverino, D.; Simons, R. C.; Snyder, G. F.; Bernardi, M.; Chen, Z.; Domínguez-Sánchez, H.; Lee, C. T.; Margalef-Bentabol, B.; Tuccillo, D.
2018-05-01
We use machine learning to identify in color images of high-redshift galaxies an astrophysical phenomenon predicted by cosmological simulations. This phenomenon, called the blue nugget (BN) phase, is the compact star-forming phase in the central regions of many growing galaxies that follows an earlier phase of gas compaction and is followed by a central quenching phase. We train a convolutional neural network (CNN) with mock "observed" images of simulated galaxies at three phases of evolution—pre-BN, BN, and post-BN—and demonstrate that the CNN successfully retrieves the three phases in other simulated galaxies. We show that BNs are identified by the CNN within a time window of ∼0.15 Hubble times. When the trained CNN is applied to observed galaxies from the CANDELS survey at z = 1–3, it successfully identifies galaxies at the three phases. We find that the observed BNs are preferentially found in galaxies at a characteristic stellar mass range, 10^9.2–10^10.3 M⊙, at all redshifts. This is consistent with the characteristic galaxy mass for BNs as detected in the simulations and is meaningful because it is revealed in the observations when the direct information concerning the total galaxy luminosity has been eliminated from the training set. This technique can be applied to the classification of other astrophysical phenomena for improved comparison of theory and observations in the era of large imaging surveys and cosmological simulations.
Cellular neural network-based hybrid approach toward automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar
2013-01-01
Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale-invariant feature transform (SIFT), coreset, and cellular automata is proposed. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network-based SIFT feature point optimization, adaptive resampling, and intelligent object modeling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.
Kim, Han Byul; Lee, Woong Woo; Kim, Aryun; Lee, Hong Ji; Park, Hye Young; Jeon, Hyo Seon; Kim, Sang Kyong; Jeon, Beomseok; Park, Kwang S
2018-04-01
Tremor is a commonly observed symptom in patients with Parkinson's disease (PD), and accurate measurement of tremor severity is essential in prescribing appropriate treatment to relieve its symptoms. We propose a tremor assessment system based on the use of a convolutional neural network (CNN) to differentiate the severity of symptoms as measured in data collected from a wearable device. Tremor signals were recorded from 92 PD patients using a custom-developed device (SNUMAP) equipped with an accelerometer and gyroscope mounted on a wrist module. Neurologists assessed the tremor symptoms on the Unified Parkinson's Disease Rating Scale (UPDRS) from simultaneously recorded video footage. The measured data were transformed into the frequency domain and used to construct a two-dimensional image for training the network, and the CNN model was trained by convolving tremor signal images with kernels. The proposed CNN architecture was compared to previously studied machine learning algorithms and found to outperform them (accuracy = 0.85, linear weighted kappa = 0.85). More precise monitoring of PD tremor symptoms in daily life could be possible using our proposed method. Copyright © 2018 Elsevier Ltd. All rights reserved.
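The signal-to-image transform described above (frequency-domain frames of a wrist-sensor trace stacked into a 2D image for the CNN) can be sketched as follows; the frame size, hop length and sampling rate are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def tremor_image(signal, frame=64, hop=32):
    """Turn a 1D accelerometer trace into a 2D time-frequency image:
    windowed frames are transformed with an FFT and their magnitudes
    stacked into a (freq_bins, time_frames) array."""
    frames = [signal[s:s + frame] * np.hanning(frame)
              for s in range(0, len(signal) - frame + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return spec.T

t = np.arange(1024) / 128.0                  # assume 128 Hz sampling
sig = np.sin(2 * np.pi * 5.0 * t)            # synthetic 5 Hz tremor-like signal
img = tremor_image(sig)
print(img.shape)  # (33, 31)
```

An image like this, one per recording window, would then be fed to the CNN as a training sample.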
EEG-Based Detection of Braking Intention Under Different Car Driving Conditions
Hernández, Luis G.; Mozos, Oscar Martinez; Ferrández, José M.; Antelis, Javier M.
2018-01-01
The anticipatory recognition of braking is essential to prevent traffic accidents. For instance, driving assistance systems can be useful to properly respond to emergency braking situations. Moreover, the response time to emergency braking situations can be affected and even increased by different driver cognitive states caused by stress, fatigue, and extra workload. This work investigates the detection of emergency braking from the driver's electroencephalographic (EEG) signals that precede the brake pedal actuation. Bioelectrical signals were recorded while participants were driving in a car simulator while avoiding potential collisions by performing emergency braking. In addition, participants were subjected to stress, workload, and fatigue. EEG signals were classified using support vector machines (SVM) and convolutional neural networks (CNN) in order to discriminate between braking intention and normal driving. Results showed significant recognition of emergency braking intention, which was on average 71.1% for SVM and 71.8% for CNN. In addition, the classification accuracy for the best participant was 80.1% and 88.1% for SVM and CNN, respectively. These results show the feasibility of incorporating recognizable driver bioelectrical responses into advanced driver-assistance systems to carry out early detection of emergency braking situations, which could be useful to reduce car accidents. PMID:29910722
Convolutional neural network features based change detection in satellite images
NASA Astrophysics Data System (ADS)
Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong
2016-07-01
With the popular use of high-resolution remote sensing (HRRS) satellite images, considerable research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, thereby avoiding the limited performance of hand-crafted features. First, CNN features are extracted from different convolutional layers. Then, after a normalization step, the features are concatenated into a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
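The final step, a change map computed from the pixel-wise Euclidean distance between the two feature maps, can be sketched as follows; the thresholding rule is an assumption for illustration, as the letter does not specify one:

```python
import numpy as np

def change_map(feat_a, feat_b, threshold=None):
    """Pixel-wise Euclidean distance between two (H, W, C) feature maps;
    optionally thresholded into a binary change mask."""
    d = np.sqrt(((feat_a - feat_b) ** 2).sum(axis=-1))
    if threshold is None:
        threshold = d.mean() + d.std()       # simple data-driven threshold
    return d, d > threshold

rng = np.random.default_rng(0)
fa = rng.random((16, 16, 32))                # features of image at time 1
fb = fa.copy()
fb[4:8, 4:8] += 2.0                          # simulate a changed region
dist, mask = change_map(fa, fb)
print(mask[5, 5], mask[0, 0])  # True False
```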
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-03-01
With the advancement of VLSI technology nodes, lithographic hotspots caused by light diffraction have become a serious problem affecting manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with extreme scaling of transistor feature size and increasingly complicated layout patterns, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. In this paper, we present a deep convolutional neural network (CNN) targeting representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyper-parameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always minorities in VLSI mask design, the training data set is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from high false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply minority upsampling and random mirror flipping before training the network. Experimental results show that our proposed neural network model achieves highly comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
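The imbalance-handling step (minority upsampling plus random mirror flipping before training) can be sketched like this; the class sizes and images are synthetic placeholders, not the ICCAD benchmark data:

```python
import numpy as np

def balance_with_flips(majority, minority, seed=0):
    """Upsample the minority class to the majority-class size, augmenting
    the duplicated samples with random horizontal/vertical mirror flips."""
    rng = np.random.default_rng(seed)
    n_extra = len(majority) - len(minority)
    idx = rng.integers(0, len(minority), size=n_extra)
    extra = []
    for i in idx:
        img = minority[i]
        if rng.random() < 0.5:
            img = img[:, ::-1]               # horizontal mirror
        if rng.random() < 0.5:
            img = img[::-1, :]               # vertical mirror
        extra.append(img)
    return np.concatenate([minority, np.array(extra)], axis=0)

rng = np.random.default_rng(1)
hotspots = rng.random((10, 12, 12))          # minority class (toy clips)
non_hotspots = rng.random((100, 12, 12))     # majority class (toy clips)
balanced = balance_with_flips(non_hotspots, hotspots)
print(len(balanced))  # 100
```

Mirror flips are a natural augmentation here because a flipped layout clip remains a plausible layout pattern.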
Hasan, Mehedi; Kotov, Alexander; Carcone, April; Dong, Ming; Naar, Sylvie; Hartlieb, Kathryn Brogan
2016-08-01
This study examines the effectiveness of state-of-the-art supervised machine learning methods in conjunction with different feature types for the task of automatic annotation of fragments of clinical text based on codebooks with a large number of categories. We used a collection of motivational interview transcripts consisting of 11,353 utterances, which were manually annotated by two human coders as the gold standard, and experimented with state-of-the-art classifiers, including Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), Random Forest (RF), AdaBoost, DiscLDA, Conditional Random Fields (CRF) and Convolutional Neural Network (CNN) in conjunction with lexical, contextual (label of the previous utterance) and semantic (distribution of words in the utterance across the Linguistic Inquiry and Word Count dictionaries) features. We found that, when the number of classes is large, the performance of CNN and CRF is inferior to SVM. When only lexical features were used, interview transcripts were automatically annotated by SVM with the highest classification accuracy among all classifiers: 70.8%, 61% and 53.7% based on the codebooks consisting of 17, 20 and 41 codes, respectively. Using contextual and semantic features, as well as their combination, in addition to lexical ones, improved the accuracy of SVM for annotation of utterances in motivational interview transcripts with a codebook consisting of 17 classes to 71.5%, 74.2%, and 75.1%, respectively. Our results demonstrate the potential of using machine learning methods in conjunction with lexical, semantic and contextual features for automatic annotation of clinical interview transcripts with near-human accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
Single-trial EEG RSVP classification using convolutional neural networks
NASA Astrophysics Data System (ADS)
Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William
2016-05-01
Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adeli, Hojjat
2017-09-27
An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, is limited by technical artifact, provides variable results depending on the reader's expertise level, and is limited in identifying abnormalities. Therefore, it is essential to develop a computer-aided diagnosis (CAD) system to automatically distinguish the classes of these EEG signals using machine learning techniques. This is the first study to employ the convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00% and 95.00%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
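The three reported figures can be reproduced from a confusion matrix; the function below is a generic evaluation sketch with made-up labels, not the authors' code (shown here in the binary view: seizure-positive vs. not):

```python
import numpy as np

def metrics(y_true, y_pred, positive=1):
    """Accuracy, specificity and sensitivity from true/predicted labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == positive) & (y_pred == positive))
    tn = np.sum((y_true != positive) & (y_pred != positive))
    fp = np.sum((y_true != positive) & (y_pred == positive))
    fn = np.sum((y_true == positive) & (y_pred != positive))
    accuracy = (tp + tn) / len(y_true)
    specificity = tn / (tn + fp)    # true negative rate
    sensitivity = tp / (tp + fn)    # true positive rate (recall)
    return accuracy, specificity, sensitivity

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]    # hypothetical ground truth
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]    # hypothetical CNN output
print(metrics(y_true, y_pred))
```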
Deep hierarchical attention network for video description
NASA Astrophysics Data System (ADS)
Li, Shuohao; Tang, Min; Zhang, Jun
2018-03-01
Pairing video with natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms state-of-the-art techniques.
1996-01-01
Automated Teller Machine networks malfunction in Georgia 2000 May 20 CNN off air for 12 minutes; issues special report 2000 May 20 worm...password combinations, social security and credit card numbers, account information, health status, and innumerable other sensitive information...as follows: TW/AA Issues Recommended Technical Response Possible Implementation Obstacles 1. (re Tactical Warning) • Place automated software
Generative Models for Similarity-based Classification
2007-01-01
NC), local nearest centroid (local NC), k-nearest neighbors (kNN), and condensed nearest neighbors (CNN) are all similarity-based classifiers which...vector machine to the k nearest neighbors of the test sample [80]. The SVM-KNN method was developed to address the robustness and dimensionality...concerns that afflict nearest neighbors and SVMs. Similarly to the nearest-means classifier, the SVM-KNN is a hybrid local and global classifier developed
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain joint spectral-spatial information. In the feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D vectors. Thus DVCNN can not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement of DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
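The core DVCNN idea, 3D kernels that turn a spectral-spatial patch into 1D vectors by sliding only along the band axis, can be sketched as follows; the patch and kernel sizes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def conv3d_to_1d(patch, kernels):
    """Slide 3D kernels over a (H, W, B) spectral-spatial patch whose
    spatial extent equals the kernel's, so each kernel produces one 1D
    vector along the spectral axis (valid convolution over bands only)."""
    h, w, b = patch.shape
    vectors = []
    for k in kernels:                        # k: (h, w, kb)
        kb = k.shape[2]
        v = np.array([np.sum(patch[:, :, z:z + kb] * k)
                      for z in range(b - kb + 1)])
        vectors.append(v)
    return np.stack(vectors)                 # (n_kernels, b - kb + 1)

rng = np.random.default_rng(0)
patch = rng.random((5, 5, 20))               # 5x5 spatial window, 20 bands
kernels = [rng.standard_normal((5, 5, 4)) for _ in range(3)]
vecs = conv3d_to_1d(patch, kernels)
print(vecs.shape)  # (3, 17)
```

The resulting 2D matrix of 1D vectors is what the remaining (conventional) CNN layers would then process.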
Komeda, Yoriaki; Handa, Hisashi; Watanabe, Tomohiro; Nomura, Takanobu; Kitahashi, Misaki; Sakurai, Toshiharu; Okamoto, Ayana; Minami, Tomohiro; Kono, Masashi; Arizumi, Tadaaki; Takenaka, Mamoru; Hagiwara, Satoru; Matsui, Shigenaga; Nishida, Naoshi; Kashida, Hiroshi; Kudo, Masatoshi
2017-01-01
Computer-aided diagnosis (CAD) is becoming a next-generation tool for the diagnosis of human disease. CAD for colon polyps has been suggested as a particularly useful tool for trainee colonoscopists, as the use of a CAD system avoids the complications associated with endoscopic resections. In addition to conventional CAD, convolutional neural network (CNN) systems utilizing artificial intelligence (AI) have been developing rapidly over the past 5 years. We attempted to generate a unique CNN-CAD system with an AI function that studied endoscopic images extracted from movies obtained with colonoscopes used in routine examinations. Here, we report our preliminary results of this novel CNN-CAD system for the diagnosis of colon polyps. A total of 1,200 images from cases of colonoscopy performed between January 2010 and December 2016 at Kindai University Hospital were used. These images were extracted from the video of actual endoscopic examinations. Additional video images from 10 cases of unlearned processes were retrospectively assessed in a pilot study. Polyps were simply diagnosed as either adenomatous or nonadenomatous. The ratio of images used by the AI to learn to distinguish adenomatous from nonadenomatous polyps was 1,200:600. The size of each image was adjusted to 256 × 256 pixels. A 10-fold cross-validation was carried out. The accuracy of the 10-fold cross-validation is 0.751, where accuracy is the ratio of the number of correct answers to the total number of answers produced by the CNN. The decisions of the CNN were correct in 7 of 10 cases. A CNN-CAD system using routine colonoscopy might be useful for the rapid diagnosis of colorectal polyp classification. Further prospective studies in an in vivo setting are required to confirm the effectiveness of a CNN-CAD system in routine colonoscopy. © 2017 S. Karger AG, Basel.
Computer aided lung cancer diagnosis with deep learning algorithms
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Qian, Wei
2016-03-01
Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating we acquired 174,412 samples of 52 × 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented: Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by DBNs are 4% larger than those mislabeled by the traditional CADx; this might result from the down-sampling process losing some size information about the nodules.
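The fixed-size patch preparation mentioned above can be sketched as a down-sampling step to 52 × 52 pixels. Nearest-neighbour resampling is an assumption here; the abstract does not state which interpolation was used.

```python
def downsample(image, out_h=52, out_w=52):
    """Nearest-neighbour resize of a 2-D list-of-lists image to out_h x out_w."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]  # map output pixel to source pixel
             for c in range(out_w)]
            for r in range(out_h)]

# toy 104x104 "nodule patch" with a simple gradient pattern
patch = [[(r + c) % 256 for c in range(104)] for r in range(104)]
small = downsample(patch)
```

Rotation-based augmentation, as in the study, would then be applied to each resized patch to multiply the sample count.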
Phylogenetic convolutional neural networks in metagenomics.
Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare
2018-03-08
Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided into 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and to a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer that passes to the following convolutional layer not only the data but also the ranked list of neighbours of each sample, thus mimicking the case of image data, transparently to the user.
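The patristic distance used above as Ph-CNN's proximity measure is the sum of branch lengths along the path between two leaves of the phylogenetic tree. A minimal sketch, assuming a toy tree encoded as parent pointers with branch lengths (the tree and node names are illustrative, not from the paper):

```python
def patristic_distance(parent, length, a, b):
    """Path length between leaves a and b.
    parent: child -> parent node; length: child -> branch length to its parent."""
    def dist_to_ancestors(node):
        dist = {node: 0.0}
        d = 0.0
        while node in parent:
            d += length[node]
            node = parent[node]
            dist[node] = d
        return dist
    da, db = dist_to_ancestors(a), dist_to_ancestors(b)
    # the path runs through the lowest common ancestor: take the cheapest shared node
    return min(da[n] + db[n] for n in da if n in db)

# toy tree:  root -> x -> {A, B},  root -> C
parent = {"A": "x", "B": "x", "x": "root", "C": "root"}
length = {"A": 1.0, "B": 2.0, "x": 0.5, "C": 3.0}
d_ab = patristic_distance(parent, length, "A", "B")
d_ac = patristic_distance(parent, length, "A", "C")
```

In Ph-CNN the resulting distance matrix is then embedded into Euclidean space via sparsified MultiDimensional Scaling before convolution.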
Chen, Zhao; Cao, Yanfeng; He, Shuaibing; Qiao, Yanjiang
2018-01-01
Action ("gongxiao" in Chinese) of traditional Chinese medicine (TCM) is the high-level recapitulation of therapeutic and health-preserving effects under the guidance of TCM theory. TCM-defined herbal properties ("yaoxing" in Chinese) were used in this research. A TCM herbal property (TCM-HP) is a high-level generalization and summary of actions, both of which come from long-term effective clinical practice over two thousand years in China. However, the specific relationship between TCM-HPs and the actions of TCM is complex and unclear from a scientific perspective. Research on this relationship is conducive to expounding the connotation of TCM-HP theory and is of significance for the development of that theory. One hundred and thirty-three herbs, including 88 heat-clearing herbs (HCHs) and 45 blood-activating stasis-resolving herbs (BASRHs), were collected from reputable TCM literature, and their corresponding TCM-HP/action information was collected from the Chinese pharmacopoeia (2015 edition). The Kennard-Stone (K-S) algorithm was used to split the 133 herbs into 100 calibration samples and 33 validation samples. Then, machine learning methods including support vector machine (SVM) and k-nearest neighbor (kNN), and deep learning methods including deep belief network (DBN) and convolutional neural network (CNN), were adopted to develop action classification models based on TCM-HP theory. To ensure robustness, these four classification methods were evaluated using tenfold cross validation and 20 external validation samples for prediction. As a result, 72.7-100% of the 33 validation samples, including 17 HCHs and 16 BASRHs, were correctly predicted by these four types of methods. The DBN and CNN methods gave the best results; their sensitivity, specificity, precision, and accuracy were all 100.00%.
In particular, the predicted results on the external validation set showed that the deep learning methods (DBN, CNN) performed better than the traditional machine learning methods (kNN, SVM) in terms of sensitivity, specificity, precision, and accuracy. Moreover, the distribution patterns of the TCM-HPs of HCHs and BASRHs were analyzed to detect the featured TCM-HPs of these two types of herbs. The result showed that the featured TCM-HPs of HCHs were cold, bitter, and liver and stomach meridians entered, while those of BASRHs were warm, bitter and pungent, and liver meridian entered. The deep learning classification methods showed better generalization ability and accuracy when predicting the heat-clearing and blood-activating stasis-resolving actions based on TCM-HP theory. Such methods should help improve our understanding of the relationship between herbal property and action, as well as enrich and develop the theory of TCM-HP scientifically.
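The Kennard-Stone split mentioned above selects a calibration set by farthest-point coverage: seed with the two most distant samples, then repeatedly add the remaining sample whose minimum distance to the selected set is largest. A small sketch under assumed 2-D feature vectors (the real herbs would be encoded as TCM-HP descriptor vectors):

```python
def kennard_stone(points, n_select):
    """Return indices of n_select calibration samples chosen by the K-S criterion."""
    n = len(points)
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))  # squared Euclidean
    # seed with the pair of samples at maximum distance
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: dist(points[ij[0]], points[ij[1]]))
    selected = [i0, j0]
    remaining = set(range(n)) - set(selected)
    while len(selected) < n_select:
        # farthest-point criterion: maximize the minimum distance to the selected set
        nxt = max(remaining,
                  key=lambda r: min(dist(points[r], points[s]) for s in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

pts = [(0, 0), (10, 0), (5, 5), (1, 1), (9, 1)]
calib = kennard_stone(pts, 3)
```

The unselected indices form the validation set, mirroring the paper's 100/33 split of the 133 herbs.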
NASA Astrophysics Data System (ADS)
Alom, Md. Zahangir; Awwal, Abdul A. S.; Lowe-Webb, Roger; Taha, Tarek M.
2017-08-01
Deep-learning methods are gaining popularity because of their state-of-the-art performance in image classification tasks. In this paper, we explore classification of laser-beam images from the National Ignition Facility (NIF) using a novel deep-learning approach. NIF is the world's largest, most energetic laser. It has nearly 40,000 optics that precisely guide, reflect, amplify, and focus 192 laser beams onto a fusion target. NIF utilizes four petawatt lasers called the Advanced Radiographic Capability (ARC) to produce backlighting X-ray illumination to capture implosion dynamics of NIF experiments with picosecond temporal resolution. In the current operational configuration, four independent short-pulse ARC beams are created and combined in a split-beam configuration in each of two NIF apertures at the entry of the pre-amplifier. The subaperture beams then propagate through the NIF beampath up to the ARC compressor. Each ARC beamlet is separately compressed with a dedicated set of four gratings and recombined as sub-apertures for transport to the parabola vessel, where the beams are focused using parabolic mirrors and pointed to the target. Small angular errors in the compressor gratings can cause the sub-aperture beams to diverge from one another and prevent accurate alignment through the transport section between the compressor and parabolic mirrors. This is an off-normal condition that must be detected and corrected. The goal of the off-normal check is to determine whether the ARC beamlets are sufficiently overlapped into a merged single spot or diverged into two distinct spots. Thus, the objective of the current work is three-fold: developing a simple algorithm to perform off-normal classification, exploring the use of a Convolutional Neural Network (CNN) for the same task, and understanding the inter-relationship of the two approaches.
The CNN recognition results are compared with other machine-learning approaches, such as Deep Neural Network (DNN) and Support Vector Machine (SVM). The experimental results show around 96% classification accuracy using the CNN; the CNN approach also provides recognition results comparable to the present feature-based off-normal detection. The feature-based solution was developed to capture the expertise of a human expert in classifying the images. The misclassified results are further studied to explain the differences and discover any discrepancies or inconsistencies in the current classification.
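The off-normal check above reduces to deciding whether a beam image contains one merged spot or two diverged spots. One simple way to illustrate that decision (an illustrative stand-in, not the paper's feature-based algorithm or its CNN) is to count connected bright regions in a thresholded image:

```python
def count_spots(image, threshold=0):
    """Count 4-connected regions of pixels brighter than threshold."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    spots = 0
    for r in range(h):
        for c in range(w):
            if image[r][c] > threshold and not seen[r][c]:
                spots += 1
                stack = [(r, c)]          # flood-fill one bright region
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return spots

merged = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 0, 0, 0]]      # beamlets overlapped
diverged = [[9, 0, 0, 9], [9, 0, 0, 9], [0, 0, 0, 0]]    # beamlets separated
```

A count of 1 maps to the normal (overlapped) class and 2 to the off-normal (diverged) class; real ARC images would need denoising and an intensity-dependent threshold first.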
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-07-01
With the advancement of very large scale integrated circuits (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacture yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
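The imbalance countermeasure described above (upsampling the minority hotspot clips plus mirror flipping) can be sketched as a simple augmentation step. The replication factor and the choice of horizontal flips are assumptions for illustration; the paper uses random mirroring during training.

```python
def balance(hotspots, non_hotspots):
    """Augment the minority (hotspot) class with mirrored copies and replicate
    it until it roughly matches the majority class size."""
    mirrored = [[row[::-1] for row in clip] for clip in hotspots]  # horizontal flip
    augmented = hotspots + mirrored
    reps = max(1, len(non_hotspots) // len(augmented))  # crude upsampling factor
    return augmented * reps, non_hotspots

# toy 2x2 layout clips: 1 hotspot vs 6 non-hotspots
hs = [[[1, 0], [0, 0]]]
nhs = [[[0, 0], [0, 0]]] * 6
pos, neg = balance(hs, nhs)
```

Balancing the classes this way reduces the false-negative (missed hotspot) rate that a model trained on the raw, imbalanced data would exhibit.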
NASA Astrophysics Data System (ADS)
Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.
2017-03-01
Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms is the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260 × 200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, presenting only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN; therefore overfitting might have influenced the results even though we cross-validated the network.
Thus, although we presented a promising method for extracting features and classifying breast density, a larger database is still required to evaluate the results.
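The second stage described above feeds the 8 deep-layer activations to a single-hidden-layer network that scores four breast-density classes. A minimal forward-pass sketch; the weights, hidden-layer size, and activation choices below are placeholder assumptions, not trained values from the study.

```python
import math

def mlp_forward(features, w_hidden, w_out):
    """Single-hidden-layer network: tanh hidden units, softmax over classes."""
    hidden = [math.tanh(sum(f * w for f, w in zip(features, col)))
              for col in w_hidden]                       # hidden layer
    scores = [sum(h * w for h, w in zip(hidden, col)) for col in w_out]
    exp = [math.exp(s - max(scores)) for s in scores]    # stable softmax
    total = sum(exp)
    return [e / total for e in exp]

features = [0.1] * 8                      # 8 CNN activations (assumed values)
w_hidden = [[0.5] * 8 for _ in range(5)]  # 5 hidden units (assumed size)
w_out = [[0.2] * 5 for _ in range(4)]     # 4 density classes
probs = mlp_forward(features, w_hidden, w_out)
```

With uniform placeholder weights every class scores equally, so the softmax returns a uniform distribution; trained weights would separate the four density classes.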
Spoof Detection for Finger-Vein Recognition System Using NIR Camera.
Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung
2017-10-01
Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lower likelihood of theft and of inaccuracies caused by skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, known as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors) based on the researchers' observations about the differences between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and has delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition, and image classification. In this paper, we propose a PAD method for a near-infrared (NIR) camera-based finger-vein recognition system using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods through a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method, using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification.
Through extensive experiments, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
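The PCA step described above projects the CNN features onto a lower-dimensional space before the SVM. A sketch of projecting onto the top principal component, found by power iteration on the centered data (the toy feature vectors are assumptions, and the SVM stage is omitted):

```python
def top_component(X, iters=50):
    """Return the dominant principal direction of X and the column means."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[row[j] - mean[j] for j in range(d)] for row in X]  # centered data
    v = [1.0] * d
    for _ in range(iters):                 # power iteration on C^T C
        s = [sum(c * vj for c, vj in zip(row, v)) for row in C]
        v = [sum(C[i][j] * s[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v, mean

# toy features whose variance lies almost entirely along the first axis
X = [[0, 0], [2, 0.1], [4, -0.1], [6, 0]]
v, mean = top_component(X)
# 1-D PCA projection of each sample, ready for an SVM
proj = [sum((x - m) * c for x, m, c in zip(row, mean, v)) for row in X]
```

In the paper's pipeline, several leading components (not just one) would be kept and the projected vectors passed to the SVM classifier.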
Automated Analysis of ARM Binaries using the Low-Level Virtual Machine Compiler Framework
2011-03-01
president to insist on keeping his smartphone [CNN09]. A self-proclaimed BlackBerry addict, President Obama fought hard to keep his mobile device after his... smartphone but renders a device non-functional on installation [FSe09][Hof07]. Complex interactions between hardware and software components both within... smartphone (which is a big assumption), the phone may still be vulnerable if the hardware or software does not correctly implement the design
Exploring convolutional neural networks for drug–drug interaction extraction
Segura-Bedmar, Isabel; Martínez, Paloma
2017-01-01
Drug–drug interaction (DDI), which is a specific type of adverse drug reaction, occurs when a drug influences the level or activity of another drug. Natural language processing techniques can provide health-care professionals with a novel way of reducing the time spent reviewing the literature for potential DDIs. The current state-of-the-art for the extraction of DDIs is based on feature-engineering algorithms (such as support vector machines), which usually require considerable time and effort. One possible alternative to these approaches is deep learning. This technique aims to automatically learn the best feature representation from the input data for a given task. The purpose of this paper is to examine whether a convolutional neural network (CNN), which only uses word embeddings as input features, can be applied successfully to classify DDIs from biomedical texts. Proposed herein is a CNN architecture with only one hidden layer, thus making the model more computationally efficient, and we perform detailed experiments in order to determine the best settings of the model. The goal is to determine the best parameters of this basic CNN that should be considered for future research. The experimental results show that the proposed approach is promising because it attained the second position in the 2013 rankings of the DDI extraction challenge. However, it obtained worse results than previous works using neural networks with more complex architectures. PMID:28605776
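The one-hidden-layer CNN described above convolves filters over word-embedding vectors and max-pools over time. A minimal sketch with ReLU activations; the embedding values, filter weights, and window width are toy assumptions, not the paper's trained parameters.

```python
def conv_max_pool(embeddings, filters, width=2):
    """1-D convolution over a sentence of embedding vectors, then
    max-over-time pooling: one feature per filter."""
    feats = []
    for filt in filters:
        activations = []
        for start in range(len(embeddings) - width + 1):
            # flatten the window of `width` consecutive word vectors
            window = [x for vec in embeddings[start:start + width] for x in vec]
            activations.append(max(0.0, sum(w * x for w, x in zip(filt, window))))
        feats.append(max(activations))     # max-over-time pooling
    return feats

sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 words, 2-dim embeddings
filters = [[0.5, 0.5, 0.5, 0.5], [1.0, -1.0, 1.0, -1.0]]
features = conv_max_pool(sentence, filters)
```

The pooled feature vector would then go through a softmax layer to decide whether the sentence describes an interaction between the two candidate drugs.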
Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L
2016-07-01
Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text
Toolkits and Libraries for Deep Learning.
Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth
2017-08-01
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
[Computer aided diagnosis model for lung tumor based on ensemble convolutional neural network].
Wang, Yuanyuan; Zhou, Tao; Lu, Huiling; Wu, Cuiying; Yang, Pengfei
2017-08-01
The convolutional neural network (CNN) can be used for computer-aided diagnosis of lung tumors with positron emission tomography (PET)/computed tomography (CT); it provides accurate quantitative analysis to compensate for visual inertia and defects in gray-scale sensitivity, and helps doctors diagnose accurately. Firstly, a parameter migration method is used to build three CNNs (CT-CNN, PET-CNN, and PET/CT-CNN) for lung tumor recognition in CT, PET, and PET/CT images, respectively. Then, using CT-CNN, we obtain appropriate model parameters for CNN training through analysis of the influence of model parameters such as epochs, batch size, and image scale on recognition rate and training time. Finally, the three single CNNs are used to construct an ensemble CNN, lung tumor PET/CT recognition is completed through a relative majority vote method, and the performance of the ensemble CNN is compared with that of the single CNNs. The experimental results show that the ensemble CNN is better than a single CNN for computer-aided diagnosis of lung tumors.
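The relative-majority (plurality) vote that fuses the three single CNNs above can be sketched directly; the model names and toy predictions are illustrative stand-ins for the CT-CNN, PET-CNN, and PET/CT-CNN outputs.

```python
from collections import Counter

def plurality_vote(predictions):
    """predictions: list of per-model label lists, one label per sample.
    Returns the most frequent label per sample (ties broken by first seen)."""
    fused = []
    for sample_preds in zip(*predictions):
        fused.append(Counter(sample_preds).most_common(1)[0][0])
    return fused

ct = ["tumor", "normal", "tumor"]      # assumed CT-CNN predictions
pet = ["tumor", "tumor", "normal"]     # assumed PET-CNN predictions
petct = ["normal", "tumor", "tumor"]   # assumed PET/CT-CNN predictions
ensemble = plurality_vote([ct, pet, petct])
```

Because each sample's label needs only a relative majority, two agreeing models outvote the third, which is how the ensemble can outperform any single CNN.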
CTC Sentinel. Volume 9, Issue 4. April 2016
2016-09-21
stakes are high. Since the Islamic State, unlike al-Qa`ida and its various regional affiliates, places such great emphasis on its image as state...not going to bomb all the banks in Mosul or starve the economy of millions of people. There are material constraints to what we can do while ISIS...close to the investigation told CNN that the laptop bomb was sophisticated and went undetected by an airport X-ray machine.c In a statement released on
Eisman, Robert C.; Phelps, Melissa A. S.; Kaufman, Thomas
2015-01-01
The formation of the pericentriolar matrix (PCM) and a fully functional centrosome in syncytial Drosophila melanogaster embryos requires the rapid transport of Cnn during initiation of the centrosome replication cycle. We show a Cnn and Polo kinase interaction is apparently required during embryogenesis and involves the exon 1A-initiating coding exon, suggesting a subset of Cnn splice variants is regulated by Polo kinase. During PCM formation exon 1A Cnn-Long Form proteins likely bind Polo kinase before phosphorylation by Polo for Cnn transport to the centrosome. Loss of either of these interactions in a portion of the total Cnn protein pool is sufficient to remove native Cnn from the pool, thereby altering the normal localization dynamics of Cnn to the PCM. Additionally, Cnn-Short Form proteins are required for polar body formation, a process known to require Polo kinase after the completion of meiosis. Exon 1A Cnn-LF and Cnn-SF proteins, in conjunction with Polo kinase, are required at the completion of meiosis and for the formation of functional centrosomes during early embryogenesis. PMID:26447129
Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-08-04
Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning. This can be done by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected.
The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
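The signal-to-image step described above turns an inertial-sensor trace into a plot image that an image CNN (e.g. a pretrained Inception) can be retrained on. A minimal rasterization sketch; the grid size and the line-plot-as-binary-mask encoding are assumptions, since the paper renders ordinary time-series plots.

```python
import math

def rasterize(signal, rows=8, cols=16):
    """Render a 1-D signal as a rows x cols binary grid: one marked cell per column."""
    lo, hi = min(signal), max(signal)
    grid = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        x = signal[c * (len(signal) - 1) // (cols - 1)]   # sample the trace
        r = int((hi - x) / (hi - lo) * (rows - 1))        # map value to a row (top = max)
        grid[r][c] = 1
    return grid

# assumed stand-in for an accelerometer trace during one exercise repetition
trace = [math.sin(2 * math.pi * t / 50) for t in range(50)]
img = rasterize(trace)
```

In the actual workflow these images are saved to label-named folders and the final layer of the pretrained network is retrained on them, avoiding hand-crafted DSP features entirely.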
The end of a monolith: Deconstructing the Cnn-Polo interaction.
Eisman, Robert C; Phelps, Melissa A S; Kaufman, Thomas C
2016-04-02
In Drosophila melanogaster a functional pericentriolar matrix (PCM) at mitotic centrosomes requires Centrosomin-Long Form (Cnn-LF) proteins. Moreover, tissue culture cells have shown that the centrosomal localization of both Cnn-LF and Polo kinase are co-dependent, suggesting a direct interaction. Our recent study found Cnn potentially binds to and is phosphorylated by Polo kinase at 2 residues encoded by Exon1A, the initiating exon of a subset of Cnn isoforms. These interactions are required for the centrosomal localization of Cnn-LF in syncytial embryos and a mutation of either phosphorylation site is sufficient to block localization of both mutant and wild-type Cnn when they are co-expressed. Immunoprecipitation experiments show that Cnn-LF interacts directly with mitotically activated Polo kinase and requires the 2 phosphorylation sites in Exon1A. These IP experiments also show that Cnn-LF proteins form multimers. Depending on the stoichiometry between functional and mutant peptides, heteromultimers exhibit dominant negative or positive trans-complementation (rescue) effects on mitosis. Additionally, following the completion of meiosis, Cnn-Short Form (Cnn-SF) proteins are required for polar body formation in embryos, a process previously shown to require Polo kinase. These findings, when combined with previous work, clearly demonstrate the complexity of cnn and show that a view of cnn as encoding a single peptide is too simplistic.
Thapa, Kriti Shrestha; Oldani, Amanda; Pagliuca, Cinzia; De Wulf, Peter; Hazbun, Tony R
2015-05-01
Kinetochores are conserved protein complexes that bind the replicated chromosomes to the mitotic spindle and then direct their segregation. To better comprehend Saccharomyces cerevisiae kinetochore function, we dissected the phospho-regulated dynamic interaction between conserved kinetochore protein Cnn1(CENP-T), the centromere region, and the Ndc80 complex through the cell cycle. Cnn1 localizes to kinetochores at basal levels from G1 through metaphase but accumulates abruptly at anaphase onset. How Cnn1 is recruited and which activities regulate its dynamic localization are unclear. We show that Cnn1 harbors two kinetochore-localization activities: a C-terminal histone-fold domain (HFD) that associates with the centromere region and a N-terminal Spc24/Spc25 interaction sequence that mediates linkage to the microtubule-binding Ndc80 complex. We demonstrate that the established Ndc80 binding site in the N terminus of Cnn1, Cnn1(60-84), should be extended with flanking residues, Cnn1(25-91), to allow near maximal binding affinity to Ndc80. Cnn1 localization was proposed to depend on Mps1 kinase activity at Cnn1-S74, based on in vitro experiments demonstrating the Cnn1-Ndc80 complex interaction. We demonstrate that from G1 through metaphase, Cnn1 localizes via both its HFD and N-terminal Spc24/Spc25 interaction sequence, and deletion or mutation of either region results in anomalous Cnn1 kinetochore levels. At anaphase onset (when Mps1 activity decreases) Cnn1 becomes enriched mainly via the N-terminal Spc24/Spc25 interaction sequence. In sum, we provide the first in vivo evidence of Cnn1 preanaphase linkages with the kinetochore and enrichment of the linkages during anaphase. Copyright © 2015 by the Genetics Society of America.
Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.
Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming
2017-10-20
A convolutional neural network (CNN) driven by image recognition has been shown to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such a CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel responses were visualized as the specific pixel patterns that drove them, revealing the distinct representations of individual cortical locations; cortical activation was synthesized from natural images with high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
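The encoding-model direction described above, mapping CNN feature time courses to voxel responses, is commonly fit with regularized linear regression. The closed-form ridge sketch below uses synthetic data; the paper's actual estimator, feature dimensions, and voxel counts are not specified here and the names are illustrative.

```python
import numpy as np

def ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression W = (X'X + aI)^-1 X'Y, mapping CNN
    feature time courses X to voxel response time courses Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # 200 time points, 10 CNN features
W_true = rng.normal(size=(10, 3))    # 3 synthetic "voxels"
Y = X @ W_true + 0.01 * rng.normal(size=(200, 3))  # weak noise
W = ridge(X, Y, alpha=0.1)           # recovers W_true closely
```

In a real encoding analysis the regularization strength would be selected per voxel by cross-validation rather than fixed.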
Similarity estimation for reference image retrieval in mammograms using convolutional neural network
NASA Astrophysics Data System (ADS)
Muramatsu, Chisako; Higuchi, Shunichi; Morita, Takako; Oiwa, Mikinao; Fujita, Hiroshi
2018-02-01
Periodic breast cancer screening with mammography is considered effective in decreasing breast cancer mortality. For screening programs to be successful, an intelligent image analytic system may support radiologists' efficient image interpretation. In our previous studies, we investigated image retrieval schemes for diagnostic references of breast lesions on mammograms and ultrasound images. Using a machine learning method, reliable similarity measures that agree with radiologists' similarity were determined and relevant images could be retrieved. However, our previous method included a feature-extraction step in which handcrafted features were determined based on manual outlines of the masses. Obtaining manual outlines of masses is not practical in clinical settings, and such data would be operator-dependent. In this study, we investigated a similarity estimation scheme using a convolutional neural network (CNN) to skip this procedure and to determine data-driven similarity scores. Using the CNN as a feature extractor, with the extracted features fed to a conventional 3-layered neural network to determine similarity measures, the determined measures correlated well with the subjective ratings, and the precision of retrieving diagnostically relevant images was comparable with that of the conventional method using handcrafted features. Using the CNN to determine the similarity measure directly, the results were also comparable. By optimizing the network parameters, results may be further improved. The proposed method has potential usefulness in determining similarity measures without precise lesion outlines for retrieval of similar mass images on mammograms.
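A data-driven similarity score over CNN features can be illustrated with the simplest possible stand-in, cosine similarity over feature vectors. The paper's learned measure (a trained 3-layered network) is more elaborate; the functions and vectors below are illustrative assumptions only.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, a simple stand-in
    for a learned similarity measure over CNN features."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, library, k=3):
    """Rank library feature vectors by similarity to the query and
    return the indices of the top-k most similar entries."""
    order = sorted(range(len(library)),
                   key=lambda i: cosine_similarity(query, library[i]),
                   reverse=True)
    return order[:k]

# Toy 3-D "CNN features" for a query lesion and a reference library.
ranked = retrieve([1, 0, 0], [[0, 1, 0], [1, 0.1, 0], [1, 0, 0]], k=2)
```

Replacing `cosine_similarity` with a small trained regressor over concatenated feature pairs would approximate the scheme the abstract describes.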
A hybrid deep learning approach to predict malignancy of breast lesions using mammograms
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Heidari, Morteza; Mirniaharikandehei, Seyedehnafiseh; Gong, Jing; Qian, Wei; Qiu, Yuchen; Zheng, Bin
2018-03-01
Applying deep learning technology to the medical imaging informatics field has recently attracted extensive research interest. However, the limited size of medical image datasets often reduces the performance and robustness of deep learning based computer-aided detection and/or diagnosis (CAD) schemes. In an attempt to address this technical challenge, this study aims to develop and evaluate a new hybrid deep learning based CAD approach to predict the likelihood of a breast lesion detected on a mammogram being malignant. In this approach, a deep Convolutional Neural Network (CNN) was first pre-trained on the ImageNet dataset and served as a feature extractor. A pseudo-color Region of Interest (ROI) method was used to generate ROIs with RGB channels from the mammographic images as the input to the pre-trained deep network. The transferred CNN features from different layers of the CNN were then obtained and a linear support vector machine (SVM) was trained for the prediction task. Applied to a dataset involving 301 suspicious breast lesions with a leave-one-case-out validation method, the areas under the ROC curve (AUC) were 0.762 and 0.792 for the traditional CAD scheme and the proposed deep learning based CAD scheme, respectively. An ensemble classifier that combines the classification scores generated by the two schemes yielded an improved AUC value of 0.813. The study results demonstrate the feasibility and potentially improved performance of applying a new hybrid deep learning approach to develop a CAD scheme using a relatively small dataset of medical images.
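The score-level ensemble step can be illustrated with a small sketch. AUC is computed here via the rank-sum (Mann-Whitney) identity; the per-lesion scores and labels are invented for illustration and are not the study's data.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a random positive outscores a random negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-lesion malignancy scores from two CAD schemes.
labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
cad_a = np.array([.2, .4, .6, .8, .7, .3, .25, .1])   # traditional scheme
cad_b = np.array([.3, .2, .9, .45, .8, .5, .7, .2])   # CNN-feature scheme
ensemble = 0.5 * (cad_a + cad_b)                      # score averaging
```

On these toy scores the averaged ensemble separates the classes better than either scheme alone, which mirrors the AUC improvement the abstract reports.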
Zhang, Kai; Long, Erping; Cui, Jiangtao; Zhu, Mingmin; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni
2017-01-01
Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located in an automated manner in the original image using two successive applications of Canny edge detection and the Hough transform; the detected regions are cropped, resized to a fixed size and used to form pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and implement automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted by the CNN, we investigate the features combined with a support vector machine (SVM) and a softmax classifier and compare these with traditional representative methods. The qualitative and quantitative experimental results demonstrate that our proposed method offers exceptional mean accuracy, sensitivity and specificity: classification (97.07%, 97.28%, and 96.83%) and a three-degree grading area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%) and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed a potential automatic diagnostic software for ophthalmologists and patients in clinical applications to implement the validated model. PMID:28306716
Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil
2018-01-01
With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environment phenomena using tiny, low-power cameras on resource-limited embedded devices, can be considered an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has a direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well classified by a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
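The pre-processing idea, integer-factor downscaling plus uniform color quantization, can be sketched in a few lines of NumPy. The image size, scale factor, and bit depth below are illustrative assumptions; the paper's actual pipeline and parameters may differ.

```python
import numpy as np

def quantize(img, bits=3):
    """Uniform color quantization: keep only the top `bits` bits per
    channel, shrinking the symbol alphabet before transmission."""
    shift = 8 - bits
    return ((img >> shift) << shift).astype(np.uint8)

def downscale(img, factor=2):
    """Naive box-filter downscale by an integer factor (area averaging)."""
    h, w, c = img.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor,
                                   w2 // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
small = downscale(img, 2)          # 4x fewer pixels
coarse = quantize(small, bits=3)   # at most 8 levels per channel
```

The quantized image compresses far better under a generic entropy coder, which is the transmission saving the abstract exploits.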
Using convolutional neural networks to estimate time-of-flight from PET detector waveforms
NASA Astrophysics Data System (ADS)
Berg, Eric; Cherry, Simon R.
2018-01-01
Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity in obtaining ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors, and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS s-1 using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and 23% compared to constant fraction discrimination (242 ps versus 185 ps). 
By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional filter size and number of feature maps, had only a minor influence.
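The leading-edge-discrimination baseline that the CNN is compared against can be sketched directly. The pulse shape, sampling rate, and threshold below are illustrative assumptions, not the experimental waveforms.

```python
import numpy as np

def leading_edge_pickoff(t, v, threshold):
    """Timing pick-off by leading edge discrimination: the first
    threshold crossing, refined by linear interpolation between samples."""
    i = np.nonzero(v >= threshold)[0][0]
    if i == 0:
        return t[0]
    # Interpolate between the last sample below and first sample above.
    frac = (threshold - v[i - 1]) / (v[i] - v[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Synthetic scintillation-like pulse sampled at 10 GS/s (0.1 ns bins):
# fast rise starting at t = 10 ns, slow exponential decay.
t = np.arange(0, 50, 0.1)  # ns
pulse = np.where(t >= 10,
                 (1 - np.exp(-(t - 10) / 2)) * np.exp(-(t - 10) / 30),
                 0.0)
t0 = leading_edge_pickoff(t, pulse, threshold=0.2)
```

The time-of-flight baseline is then the difference of two such pick-offs; the CNN approach instead regresses the time difference from both raw waveforms jointly.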
Turkki, Riku; Linder, Nina; Kovanen, Panu E; Pellinen, Teijo; Lundin, Johan
2016-01-01
Immune cell infiltration in tumors is an emerging prognostic biomarker in breast cancer. The gold standard for quantification of immune cells in tissue sections is visual assessment through a microscope, which is subjective and semi-quantitative. In this study, we propose and evaluate an approach based on antibody-guided annotation and deep learning to quantify immune cell-rich areas in hematoxylin and eosin (H&E) stained samples. Consecutive sections of formalin-fixed paraffin-embedded samples obtained from the primary tumors of twenty breast cancer patients were cut and stained with H&E and the pan-leukocyte CD45 antibody. The stained slides were digitally scanned, and a training set of immune cell-rich and cell-poor tissue regions was annotated in H&E whole-slide images using the CD45 expression as a guide. In the analysis, the images were divided into small homogeneous regions, superpixels, from which features were extracted using a pretrained convolutional neural network (CNN) and classified with a support vector machine. The CNN approach was compared to texture-based classification and to visual assessments performed by two pathologists. In a set of 123,442 labeled superpixels, the CNN approach achieved an F-score of 0.94 (range: 0.92-0.94) in discrimination of immune cell-rich and cell-poor regions, as compared to an F-score of 0.88 (range: 0.87-0.89) obtained with the texture-based classification. When compared to visual assessment of 200 images, an agreement of 90% (κ = 0.79) in quantifying immune infiltration was achieved with the CNN approach, while the inter-observer agreement between pathologists was 90% (κ = 0.78). Our findings indicate that deep learning can be applied to quantify immune cell infiltration in breast cancer samples using a basic morphology staining only. Good discrimination of immune cell-rich areas was achieved, well in concordance with both leukocyte antigen expression and pathologists' visual assessment.
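The agreement statistic reported above, Cohen's κ, corrects raw percent agreement for chance and can be computed directly. A minimal sketch with invented ratings (the function name and toy data are illustrative, not the study's):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters corrected for chance,
    as used to compare an automated approach against pathologists."""
    n = len(r1)
    labels = sorted(set(r1) | set(r2))
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in labels)
    return (observed - expected) / (1 - expected)

# Toy per-image calls: 1 = immune cell-rich, 0 = cell-poor.
cnn_calls = [1, 1, 1, 0, 0, 0]
pathologist = [1, 1, 0, 0, 0, 1]
kappa = cohens_kappa(cnn_calls, pathologist)
```

Here 4/6 raw agreement shrinks to κ = 1/3 once the 50% chance-agreement baseline is removed, which is why κ is preferred over plain agreement percentages.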
Wang, Zhaodi; Hu, Menghan; Zhai, Guangtao
2018-04-07
Deep learning has become a widely used powerful tool in many research fields, though not yet as widely in agricultural technologies. In this work, two deep convolutional neural networks (CNN), viz. the Residual Network (ResNet) and its improved version named ResNeXt, are used to detect internal mechanical damage of blueberries using hyperspectral transmittance data. The original structure and size of the hypercubes are adapted for the deep CNN training. To ensure that the models are applicable to hypercubes, we adjust the number of filters in the convolutional layers. Moreover, a total of 5 traditional machine learning algorithms, viz. Sequential Minimal Optimization (SMO), Linear Regression (LR), Random Forest (RF), Bagging and Multilayer Perceptron (MLP), are run as comparison experiments. For model assessment, k-fold cross validation is used to show that model performance does not vary with different partitions of the dataset. In real-world application, selling damaged berries leads to a greater loss than discarding sound ones. Thus, precision, recall, and F1-score are also used as evaluation indicators alongside accuracy to quantify the false positive rate. The first three indicators are seldom used by investigators in the agricultural engineering domain. Furthermore, ROC curves and Precision-Recall curves are plotted to visualize the performance of the classifiers. The fine-tuned ResNet/ResNeXt achieve average accuracy and F1-score of 0.8844/0.8784 and 0.8952/0.8905, respectively. The classifiers SMO/LR/RF/Bagging/MLP obtain average accuracy and F1-score of 0.8082/0.7606/0.7314/0.7113/0.7827 and 0.8268/0.7796/0.7529/0.7339/0.7971, respectively. The two deep learning models achieve better classification performance than the traditional machine learning methods. Classification of each testing sample takes only 5.2 ms and 6.5 ms for ResNet and ResNeXt, respectively, indicating that the deep learning framework has great potential for online fruit sorting. The results of this study demonstrate the potential of applying deep CNNs to analyze internal mechanical damage of fruit.
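The evaluation indicators above can be reproduced from first principles. A short sketch with invented labels (1 = damaged berry, 0 = sound berry); in practice a library such as scikit-learn would supply these metrics.

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for the `positive` class; alongside
    accuracy these expose the false-positive cost of selling damaged fruit."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical predictions over eight berries.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
p, r, f = prf1(y_true, y_pred)
```

Because one sound berry is flagged as damaged and one damaged berry is missed, precision, recall, and F1 all come out at 0.75 here while plain accuracy would read 6/8.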
Overview of deep learning in medical imaging.
Suzuki, Kenji
2017-09-01
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
How Are Television Networks Involved in Distance Learning?
ERIC Educational Resources Information Center
Bucher, Katherine
1996-01-01
Reviews the involvement of various television networks in distance learning, including public broadcasting stations, Cable in the Classroom, Arts and Entertainment Network, Black Entertainment Television, C-SPAN, CNN (Cable News Network), The Discovery Channel, The Learning Channel, Mind Extension University, The Weather Channel, National Teacher…
2008-02-01
It is important that a CNN understands their employer's local policies, procedures and clinical protocols. To enable a CNN to practise, they must be appropriately trained, have clinical supervision and work in partnership with others. A CNN must maintain client confidentiality, and act accordingly with all partnership communications. A CNN has a duty of care to themselves, the clients, colleagues and the employer.
Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks.
Savalia, Shalin; Emamian, Vahid
2018-05-04
The electrocardiogram (ECG) plays an imperative role in the medical field, as it records the heart's signal over time and is used to discover numerous cardiovascular diseases. If a documented ECG signal has a certain irregularity in its predefined features, this is called arrhythmia; its types include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular arrhythmias. This has encouraged us to do research that consists of distinguishing between several arrhythmias by using deep neural network algorithms such as the multi-layer perceptron (MLP) and the convolutional neural network (CNN). The TensorFlow library, established by Google for deep learning and machine learning, is used in Python to implement the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed algorithms consist of four hidden layers with weights and biases in the MLP, and a four-layer convolutional neural network that maps ECG samples to the different classes of arrhythmia. The accuracy of the algorithms surpasses the performance of current algorithms developed by other cardiologists in both sensitivity and precision.
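A minimal NumPy sketch of the forward pass of such a four-hidden-layer MLP follows. The layer sizes, the 187-sample beat length, and the random weights are illustrative assumptions, not the paper's trained model; the actual work uses TensorFlow for training as well.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def mlp_forward(x, weights, biases):
    """Forward pass of an MLP: ReLU hidden layers, then a softmax
    output over arrhythmia classes."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return softmax(x @ weights[-1] + biases[-1])

rng = np.random.default_rng(1)
sizes = [187, 64, 64, 32, 32, 5]   # beat samples -> 4 hidden -> 5 classes
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
probs = mlp_forward(rng.normal(size=(10, 187)), weights, biases)
```

Each row of `probs` is a probability distribution over the five assumed arrhythmia classes; training would adjust `weights` and `biases` by backpropagation.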
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivas, Nisha; Rose, Derek C; Bolme, David S
This paper examines the difficulty associated with performing machine-based automatic demographic prediction on a sub-population of Asian faces. We introduce the Wild East Asian Face dataset (WEAFD), a new and unique dataset for the research community. This dataset consists primarily of labeled face images of individuals from East Asian countries, including Vietnam, Burma, Thailand, China, Korea, Japan, Indonesia, and Malaysia. East Asian Turk annotators were uniquely used to judge the age and fine-grained ethnicity attributes to reduce the impact of the other-race effect and improve the quality of annotations. We focus on predicting the age, gender and fine-grained ethnicity of an individual by providing baseline results with a convolutional neural network (CNN). Fine-grained ethnicity prediction refers to predicting the ethnicity of an individual by country or sub-region (Chinese, Japanese, Korean, etc.) of the East Asian continent. Performance for two CNN architectures is presented, highlighting the difficulty of these tasks and showcasing potential design considerations that ease network optimization by promoting region-based feature extraction.
Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; van Uden, Inge W M; Sanchez, Clara I; Litjens, Geert; de Leeuw, Frank-Erik; van Ginneken, Bram; Marchiori, Elena; Platel, Bram
2017-07-11
The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).
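The Dice score used for the segmentation evaluation above is simple to state: twice the overlap divided by the total mask size. A short sketch with two toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice score between two binary segmentation masks:
    2|A intersect B| / (|A| + |B|). Empty-vs-empty is defined as 1."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 4x4 squares offset by one voxel inside an 8x8 image.
pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1   # 16 voxels
ref = np.zeros((8, 8), int); ref[3:7, 3:7] = 1     # 16 voxels, shifted
```

With a 3x3 overlap the score is 2*9/(16+16) = 0.5625, illustrating how the reported 0.792 vs 0.805 values quantify closeness to the human observer's masks.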
Audio-based deep music emotion recognition
NASA Astrophysics Data System (ADS)
Liu, Tong; Han, Li; Ma, Liangkai; Guo, Dongwei
2018-05-01
With the rapid development of multimedia networking, more and more songs are issued through the Internet and stored in large digital music libraries. However, music information retrieval on these libraries can be very hard, and the recognition of musical emotion is especially challenging. In this paper, we report a strategy to recognize the emotion contained in songs by classifying their spectrograms, which contain both time and frequency information, with a convolutional neural network (CNN). The experiments conducted on the 1000-song dataset indicate that the proposed model outperforms traditional machine learning methods.
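The spectrogram input can be sketched with a short-time FFT in plain NumPy; the window length, hop size, and test tone below are illustrative choices, not the paper's settings.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window:
    the 2-D time-frequency image fed to the CNN."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time frames)

sr = 8000
t = np.arange(sr) / sr                 # one second of audio
tone = np.sin(2 * np.pi * 440 * t)     # a 440 Hz test tone
spec = spectrogram(tone)
```

A pure 440 Hz tone lights up a single frequency row (bin 440/(8000/256) = 14), and a song's spectrogram is this same image with richer structure for the CNN to classify.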
Understanding of Object Detection Based on CNN Family and YOLO
NASA Astrophysics Data System (ADS)
Du, Juan
2018-04-01
As a key application of image processing, object detection has boomed along with the unprecedented advancement of the Convolutional Neural Network (CNN) and its variants since 2012. As the CNN series developed to the Faster Region-based CNN (Faster R-CNN), the Mean Average Precision (mAP) reached 76.4, whereas the Frames Per Second (FPS) of Faster R-CNN remained at 5 to 18, far slower than real time. Thus, the most urgent requirement for improving object detection is to accelerate its speed. After a general introduction to the background and the core CNN solution, this paper presents one of the best CNN representatives, You Only Look Once (YOLO), which breaks with the CNN family's tradition and introduces a completely new, simple and highly efficient way of solving object detection. Its fastest configuration achieves an unparalleled 155 FPS, and its mAP can reach up to 78.6, both of which greatly surpass the performance of Faster R-CNN. Additionally, compared with the latest, most advanced solutions, YOLOv2 achieves an excellent tradeoff between speed and accuracy as well as an object detector with strong generalization ability to represent the whole image.
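Underlying the mAP figures quoted above is the intersection-over-union (IoU) overlap test between a detected box and a ground-truth box, which can be written in a few lines (box format and threshold conventions are the usual ones, assumed here for illustration):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2); the overlap criterion behind mAP scoring."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = ix * iy
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; mAP then averages precision over recall levels and classes.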
Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks
Xu, Xin; Gui, Rong; Pu, Fangling
2018-01-01
Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNNs, more and more CNN-based methods are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel at a time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously; when classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use this interrelation to improve the classification results. FFS-CNN is evaluated on a Chinese Gaofen-3 PolSAR image and two other real PolSAR images. Experimental results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499
Image processing for a tactile/vision substitution system using digital CNN.
Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng
2006-01-01
In view of the parallel processing and easy implementation properties of CNNs, we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is packaged as an intellectual property (IP) core and implemented on a XILINX VIRTEX II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, proving the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.
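Half-toning as used above can be illustrated with classic Floyd-Steinberg error diffusion; this sketch is a conventional serial implementation for intuition, not the paper's CNN-template version:

```python
import numpy as np

def halftone(img):
    """Floyd-Steinberg error diffusion: grayscale in [0, 1] -> binary dots."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # diffuse the quantization error onto unvisited neighbors
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.25)   # flat patch at 25% intensity
dots = halftone(gray)            # binary image whose dot density tracks intensity
```

The appeal of a CNN implementation is that the same effect is obtained by local template operations running in parallel across all pixels, rather than by this sequential scan.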
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
To meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that the new approach is amenable to parallel and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and is often used as a valuable cue in important applications. Complex environmental factors, such as illumination, weather, and noise, cause wide diversity in the visual characteristics of vehicle color, making vehicle color recognition in complex environments a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels, and background, contain no color information, which negatively affects recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method determines the color region of the vehicle image and excludes the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy greatly reduces the dimensionality of the deep features learnt by the CNN, which mitigates the storage and computational burden of subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over state-of-the-art methods.
"What is relevant in a text document?": An interpretable machine learning approach
Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert
2017-01-01
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate very large text collections, more than a human could process in a lifetime. Besides predicting a text's category very accurately, it is also highly desirable to understand how and why the categorization takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This makes it possible to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619
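The relevance decomposition described above can be illustrated for a single linear layer with the epsilon-stabilized LRP rule; the toy weights and the choice of output scores as initial relevance are assumptions for illustration, not the authors' network:

```python
import numpy as np

def lrp_linear(x, W, b, relevance_out, eps=1e-6):
    """Epsilon-rule LRP: redistribute output relevance onto the inputs."""
    z = x @ W + b                         # forward pre-activations
    denom = z + eps * np.sign(z)          # stabilized denominator
    # contribution of input i to output j, scaled by upstream relevance
    return ((x[:, None] * W) / denom * relevance_out).sum(axis=1)

x = np.array([1.0, 2.0, 0.5])             # e.g. three word features
W = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 2.0]])
b = np.zeros(2)
R_out = x @ W + b            # take the output scores as the initial relevance
R_in = lrp_linear(x, W, b, R_out)
print(R_in.sum(), R_out.sum())   # conservation: the sums (approximately) match
```

Applied layer by layer from the classifier output down to the word embeddings, this yields the per-word scores discussed in the abstract.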
Conduit, Paul T.; Feng, Zhe; Richens, Jennifer H.; Baumbach, Janina; Wainman, Alan; Bakshi, Suruchi D.; Dobbelaere, Jeroen; Johnson, Steven; Lea, Susan M.; Raff, Jordan W.
2014-01-01
Summary Centrosomes are important cell organizers. They consist of a pair of centrioles surrounded by pericentriolar material (PCM) that expands dramatically during mitosis—a process termed centrosome maturation. How centrosomes mature remains mysterious. Here, we identify a domain in Drosophila Cnn that appears to be phosphorylated by Polo/Plk1 specifically at centrosomes during mitosis. The phosphorylation promotes the assembly of a Cnn scaffold around the centrioles that is in constant flux, with Cnn molecules recruited continuously around the centrioles as the scaffold spreads slowly outward. Mutations that block Cnn phosphorylation strongly inhibit scaffold assembly and centrosome maturation, whereas phosphomimicking mutations allow Cnn to multimerize in vitro and to spontaneously form cytoplasmic scaffolds in vivo that organize microtubules independently of centrosomes. We conclude that Polo/Plk1 initiates the phosphorylation-dependent assembly of a Cnn scaffold around centrioles that is essential for efficient centrosome maturation in flies. PMID:24656740
A Novel Fault Diagnosis Method for Rotating Machinery Based on a Convolutional Neural Network
Yang, Tao; Gao, Wei
2018-01-01
Fault diagnosis is critical to ensure the safe and reliable operation of rotating machinery. Most methods used in fault diagnosis of rotating machinery extract a few feature values from vibration signals, which is a dimensionality reduction of the original signal and may omit important fault messages it contains. Thus, a novel diagnosis method is proposed in which a convolutional neural network (CNN) directly classifies the continuous wavelet transform scalogram (CWTS), a time-frequency domain transform of the original signal that retains most of the information in the vibration signals. In this method, the CWTS is formed by decomposing vibration signals of rotating machinery at different scales using the wavelet transform. The CNN is then trained to diagnose faults, with the CWTS as input. A series of experiments is conducted on a rotor experiment platform using this method. The results indicate that the proposed method can diagnose faults accurately. To verify the universality of the method, the trained CNN was also used to perform fault diagnosis on another piece of rotor equipment, and a good result was achieved. PMID:29734704
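A CWT scalogram like the one fed to the CNN can be sketched with a Morlet wavelet; the wavelet parameters and the synthetic 50 Hz "vibration" tone below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def scalogram(signal, scales, w0=6.0):
    """CWT magnitude using a Morlet wavelet at the given scales."""
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        # complex Morlet: oscillation under a Gaussian envelope
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        wavelet /= np.sqrt(s)             # energy normalization across scales
        rows.append(np.abs(np.convolve(signal, wavelet, mode='same')))
    return np.array(rows)                 # scales x time image

sr = 1000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 50 * t)          # a 50 Hz vibration component
scales = np.arange(2, 40)
S = scalogram(sig, scales)
best = scales[S.mean(axis=1).argmax()]    # scale with peak energy (~w0*sr/(2*pi*50))
```

The 2-D array S is exactly the kind of time-frequency image a CNN can classify directly, without hand-picked feature values.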
Kunikata, Toshio; Matsumoto, Yohsuke; Hanaya, Toshiharu; Harashima, Akira; Nishimoto, Tomoyuki; Ushio, Shimpei
2017-01-01
Cyclic nigerosyl nigerose (CNN) is a cyclic tetrasaccharide that exhibits properties distinct from other conventional cyclodextrins. Herein, we demonstrate that treatment of B16 melanoma with CNN results in a dose-dependent decrease in melanin synthesis, even under conditions that stimulate melanin synthesis, without significant cytotoxicity. The effects of CNN were prolonged for more than 27 days, and were gradually reversed following removal of CNN. Undigested CNN was found to accumulate within B16 cells at relatively high levels. Further, CNN showed a weak but significant direct inhibitory effect on the enzymatic activity of tyrosinase, suggesting one possible mechanism of hypopigmentation. While a slight reduction in tyrosinase expression was observed, tyrosinase expression was maintained at significant levels, processed into a mature form, and transported to late-stage melanosomes. Immunocytochemical analysis demonstrated that CNN treatment induced drastic morphological changes of Pmel17-positive and LAMP-1-positive organelles within B16 cells, suggesting that CNN is a potent organelle modulator. Colocalization of tyrosinase-positive and LAMP-1-positive regions in CNN-treated cells indicated possible degradation of tyrosinase in LAMP-1-positive organelles; however, that possibility was ruled out by subsequent inhibition experiments. Taken together, this study opens a new paradigm of functional oligosaccharides, and offers CNN as a novel hypopigmenting molecule and organelle modulator. PMID:29045474
Chen, Geng; Rogers, Alicia K.; League, Garrett P.; Nam, Sang-Chul
2011-01-01
Background: Cell polarity genes including the Crumbs (Crb) and Par complexes are essential for controlling photoreceptor morphogenesis. Among the Crb and Par complexes, Bazooka (Baz, Par-3 homolog) acts as a nodal component for other cell polarity proteins. Therefore, finding other genes interacting with Baz will help us to understand the role of cell polarity genes in photoreceptor morphogenesis. Methodology/Principal Findings: Here, we have found a genetic interaction between baz and centrosomin (cnn). Cnn is a core protein of the centrosome, which is a major microtubule-organizing center. We analyzed the effect of the cnn mutation on developing eyes to determine its role in photoreceptor morphogenesis. We found that Cnn is dispensable for retinal differentiation in eye imaginal discs during the larval stage. However, photoreceptors deficient in Cnn display dramatic morphogenesis defects, including the mislocalization of Crumbs (Crb) and Bazooka (Baz), during mid-stage pupal eye development, suggesting that Cnn is specifically required for photoreceptor morphogenesis during pupal eye development. This role of Cnn in apical domain modulation was further supported by Cnn's gain-of-function phenotype. Cnn overexpression in photoreceptors caused the expansion of the apical Crb membrane domain, Baz, and adherens junctions (AJs). Conclusions/Significance: These results strongly suggest that the interaction of Baz and Cnn is essential for apical domain and AJ modulation during photoreceptor morphogenesis, but not for the initial photoreceptor differentiation in the Drosophila photoreceptor. PMID:21253601
Shichijo, Satoki; Nomura, Shuhei; Aoyama, Kazuharu; Nishikawa, Yoshitaka; Miura, Motoi; Shinagawa, Takahide; Takiyama, Hirotoshi; Tanimoto, Tetsuya; Ishihara, Soichiro; Matsuo, Keigo; Tada, Tomohiro
2017-11-01
The role of artificial intelligence in the diagnosis of Helicobacter pylori gastritis based on endoscopic images has not been evaluated. We constructed a convolutional neural network (CNN), and evaluated its ability to diagnose H. pylori infection. A 22-layer, deep CNN was pre-trained and fine-tuned on a dataset of 32,208 images either positive or negative for H. pylori (first CNN). Another CNN was trained using images classified according to 8 anatomical locations (secondary CNN). A separate test data set (11,481 images from 397 patients) was evaluated by the CNN, and 23 endoscopists, independently. The sensitivity, specificity, accuracy, and diagnostic time were 81.9%, 83.4%, 83.1%, and 198s, respectively, for the first CNN, and 88.9%, 87.4%, 87.7%, and 194s, respectively, for the secondary CNN. These values for the 23 endoscopists were 79.0%, 83.2%, 82.4%, and 230±65min (85.2%, 89.3%, 88.6%, and 253±92min by 6 board-certified endoscopists), respectively. The secondary CNN had a significantly higher accuracy than endoscopists (by 5.3%; 95% CI, 0.3-10.2). H. pylori gastritis could be diagnosed based on endoscopic images using CNN with higher accuracy and in a considerably shorter time compared to manual diagnosis by endoscopists. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
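The sensitivity, specificity, and accuracy reported above follow from the standard confusion-matrix definitions; a minimal sketch with made-up counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # recall on infected cases
    specificity = tn / (tn + fp)                 # recall on uninfected cases
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# hypothetical counts for an H. pylori image classifier
sens, spec, acc = diagnostic_metrics(tp=80, fp=15, tn=85, fn=20)
print(sens, spec, acc)  # 0.8 0.85 0.825
```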
NASA Astrophysics Data System (ADS)
Wang, C.; Hong, Y.
2017-12-01
Infrared (IR) information from geostationary satellites can be used to retrieve precipitation at fairly high spatiotemporal resolutions. Traditional artificial intelligence (AI) methodologies, such as artificial neural networks (ANN), have been designed to build the relationship between near-surface precipitation and manually derived IR features in products including PERSIANN and PERSIANN-CCS. This study builds an automatic precipitation detection model based on IR data using a Convolutional Neural Network (CNN), implemented with the newly developed deep learning framework Caffe. The model judges whether there is rain or no rain at the pixel level. Compared with traditional ANN methods, a CNN can extract features from the raw data automatically and thoroughly. In this study, IR data from GOES satellites and precipitation estimates from the next-generation QPE (Q2) over the central United States are used as inputs and labels, respectively. The whole dataset for the study period (June to August 2012) is randomly partitioned into three subsets (train, validation, and test) to establish the model at a spatial resolution of 0.08°×0.08° and a temporal resolution of 1 hour. The experiments show great improvements of the CNN in rain identification compared to the widely used IR-based precipitation product PERSIANN-CCS. The overall gain in performance is about 30% for critical success index (CSI), 32% for probability of detection (POD), and 12% for false alarm ratio (FAR). Compared to other recent IR-based precipitation retrieval methods (e.g., PERSIANN-DL developed by the University of California, Irvine), our model is simpler, with fewer parameters, but achieves equally good or even better results. CNNs have been applied successfully in the computer vision domain, and our results prove the method is suitable for IR precipitation detection. Future studies can expand the application of CNNs from precipitation occurrence decisions to precipitation amount retrieval.
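The CSI, POD, and FAR scores cited above are computed from a 2x2 rain/no-rain contingency table; a minimal sketch with hypothetical counts:

```python
def rain_scores(hits, misses, false_alarms):
    """Standard verification scores for binary rain detection."""
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return csi, pod, far

# hypothetical pixel counts over a verification period
csi, pod, far = rain_scores(hits=700, misses=200, false_alarms=100)
print(csi, pod, far)  # 0.7 0.777... 0.125
```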
Zhang, Lu-Lu; Li, Jia-Xiang; Zhou, Guan-Qun; Tang, Ling-Long; Ma, Jun; Lin, Ai-Hua; Qi, Zhen-Yu; Sun, Ying
2017-01-01
Background: To analyze the prognostic value of cervical node necrosis (CNN) observed on pretreatment magnetic resonance imaging (MRI) in patients with nasopharyngeal carcinoma (NPC) treated with intensity-modulated radiotherapy (IMRT). Patients and Methods: The medical records of 1423 NPC patients with cervical node metastasis who underwent IMRT were retrospectively reviewed. Lymph nodes in the axial plane of pretreatment MRI were classified as follows: grade 0 CNN, no hypodense zones; grade 1 CNN, ≤33% of the area showing hypodense zones; and grade 2 CNN, >33% of the area showing hypodense zones. Results: CNN was detectable in 470/1423 (33%) patients. Of these 470 patients, 213 (15%) and 257 (18%) exhibited grade 1 and grade 2 CNN, respectively. The grade 0 and grade 1 CNN groups showed significant differences with regard to distant metastasis-free survival (DMFS), but not overall survival (OS), regional relapse-free survival (RRFS), local relapse-free survival (LRFS), or disease-free survival (DFS). Significant differences were observed between the grade 0 and grade 2 CNN groups with regard to OS, RRFS, LRFS, DMFS, and DFS. Moreover, OS, LRFS, RRFS, and DFS were significantly different between the grade 1 and grade 2 CNN groups, whereas DMFS showed no significant difference. Univariate and multivariate analyses revealed CNN on MRI as a significant negative prognostic factor for OS, LRFS, RRFS, DMFS, and DFS in NPC patients. Conclusions: NPC patients with CNN of different grades show different prognoses and failure patterns after IMRT. CNN on MRI can be adopted as a predictive factor for formulating individualized treatment plans for NPC patients.
CNN Newsroom Classroom Guides. July 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free to participating schools. The daily CNN Newsroom broadcast is supported by a Daily Classroom Guide, written by professional educators. These classroom guides are designed to accompany CNN Newsroom broadcasts for a given month,…
CNN-based ranking for biomedical entity normalization.
Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong
2017-10-03
Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, rely merely on morphological information of entity mentions and rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from the semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by the CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms the traditional rule-based method, achieving state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.
Future of Autonomous Ground Logistics: Convoys in the Department of Defense
2011-02-13
Driverless van crosses from Europe to Asia, CNN, October 27, 2010, http://edition.cnn.com/2010/TECH/innovation/10/27/driverless.car/ (accessed November 13...accessed March 13, 2011). Kent, Jo Ling. “ Driverless Van Crosses from Europe to Asia,” CNN, October 27, 2010. http://edition.cnn.com/2010/TECH
Recognizing pedestrian's unsafe behaviors in far-infrared imagery at night
NASA Astrophysics Data System (ADS)
Lee, Eun Ju; Ko, Byoung Chul; Nam, Jae-Yeal
2016-05-01
Pedestrian behavior recognition is important for early accident prevention in advanced driver assistance systems (ADAS). In particular, because most pedestrian-vehicle crashes occur between late night and early dawn, our study focuses on recognizing unsafe behavior of pedestrians using thermal images captured from a moving vehicle at night. For recognizing unsafe behavior, this study uses a convolutional neural network (CNN), which offers high recognition performance. However, because a traditional CNN requires very expensive training time and memory, we design a light CNN consisting of two convolutional layers and two subsampling layers for real-time processing in vehicle applications. In addition, we combine the light CNN with a boosted random forest (Boosted RF) classifier, so that the output of the CNN is not fully connected to the classifier but randomly connected to the boosted random forest. We call this CNN a randomly connected CNN (RC-CNN). The proposed method was successfully applied to the pedestrian unsafe behavior (PUB) dataset captured with a far-infrared camera at night, and its behavior recognition accuracy is confirmed to be higher than that of related CNN algorithms, with a shorter processing time.
Zhang, Lu-Lu; Zhou, Guan-Qun; Li, Yi-Yang; Tang, Ling-Long; Mao, Yan-Ping; Lin, Ai-Hua; Ma, Jun; Qi, Zhen-Yu; Sun, Ying
2017-12-01
This study investigated the combined prognostic value of pretreatment anemia and cervical node necrosis (CNN) in patients with nasopharyngeal carcinoma (NPC). Retrospective review of 1302 patients with newly diagnosed nonmetastatic NPC treated with intensity-modulated radiotherapy (IMRT) ± chemotherapy. Patients were classified into four groups according to anemia and CNN status. Survival was compared using the log-rank test. Independent prognostic factors were identified using the Cox proportional hazards model. The primary end-point was overall survival (OS); secondary end-points were disease-free survival (DFS), locoregional relapse-free survival (LRRFS), and distant metastasis-free survival (DMFS). Pretreatment anemia was an independent, adverse prognostic factor for DMFS; pretreatment CNN was an independent adverse prognostic factor for all end-points. Five-year survival for non-anemia and non-CNN, anemia, CNN, and anemia and CNN groups were: OS (93.1%, 87.2%, 82.9%, 76.3%, P < 0.001), DFS (87.0%, 84.0%, 73.9%, 64.6%, P < 0.001), DMFS (94.1%, 92.1%, 82.4%, 72.5%, P < 0.001), and LRRFS (92.8%, 92.4%, 88.7%, 84.0%, P = 0.012). The non-anemia and non-CNN group had best survival outcomes; anemia and CNN group, the poorest. Multivariate analysis demonstrated combined anemia and CNN was an independent prognostic factor for OS, DFS, DMFS, and LRRFS (P < 0.05). The combination of anemia and CNN is an independent adverse prognostic factor in patients with NPC treated using IMRT ± chemotherapy. Assessment of pretreatment anemia and CNN improved risk stratification, especially for patients with anemia and CNN who have poorest prognosis. This study may aid the design of individualized treatment plans to improve treatment outcomes. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
Nielsen, Anne; Hansen, Mikkel Bo; Tietze, Anna; Mouridsen, Kim
2018-06-01
Treatment options for patients with acute ischemic stroke depend on the volume of salvageable tissue. This volume assessment is currently based on fixed thresholds and single imagine modalities, limiting accuracy. We wish to develop and validate a predictive model capable of automatically identifying and combining acute imaging features to accurately predict final lesion volume. Using acute magnetic resonance imaging, we developed and trained a deep convolutional neural network (CNN deep ) to predict final imaging outcome. A total of 222 patients were included, of which 187 were treated with rtPA (recombinant tissue-type plasminogen activator). The performance of CNN deep was compared with a shallow CNN based on the perfusion-weighted imaging biomarker Tmax (CNN Tmax ), a shallow CNN based on a combination of 9 different biomarkers (CNN shallow ), a generalized linear model, and thresholding of the diffusion-weighted imaging biomarker apparent diffusion coefficient (ADC) at 600×10 -6 mm 2 /s (ADC thres ). To assess whether CNN deep is capable of differentiating outcomes of ±intravenous rtPA, patients not receiving intravenous rtPA were included to train CNN deep, -rtpa to access a treatment effect. The networks' performances were evaluated using visual inspection, area under the receiver operating characteristic curve (AUC), and contrast. CNN deep yields significantly better performance in predicting final outcome (AUC=0.88±0.12) than generalized linear model (AUC=0.78±0.12; P =0.005), CNN Tmax (AUC=0.72±0.14; P <0.003), and ADC thres (AUC=0.66±0.13; P <0.0001) and a substantially better performance than CNN shallow (AUC=0.85±0.11; P =0.063). Measured by contrast, CNN deep improves the predictions significantly, showing superiority to all other methods ( P ≤0.003). CNN deep also seems to be able to differentiate outcomes based on treatment strategy with the volume of final infarct being significantly different ( P =0.048). 
The considerable improvement in prediction accuracy over the current state of the art increases the potential for automated decision support in providing recommendations for personalized treatment plans. © 2018 American Heart Association, Inc.
Turkki, Riku; Linder, Nina; Kovanen, Panu E.; Pellinen, Teijo; Lundin, Johan
2016-01-01
Background: Immune cell infiltration in tumor is an emerging prognostic biomarker in breast cancer. The gold standard for quantification of immune cells in tissue sections is visual assessment through a microscope, which is subjective and semi-quantitative. In this study, we propose and evaluate an approach based on antibody-guided annotation and deep learning to quantify immune cell-rich areas in hematoxylin and eosin (H&E) stained samples. Methods: Consecutive sections of formalin-fixed paraffin-embedded samples obtained from the primary tumor of twenty breast cancer patients were cut and stained with H&E and the pan-leukocyte CD45 antibody. The stained slides were digitally scanned, and a training set of immune cell-rich and cell-poor tissue regions was annotated in H&E whole-slide images using the CD45 expression as a guide. In the analysis, the images were divided into small homogeneous regions, superpixels, from which features were extracted using a pretrained convolutional neural network (CNN) and classified with a support vector machine. The CNN approach was compared to texture-based classification and to visual assessments performed by two pathologists. Results: In a set of 123,442 labeled superpixels, the CNN approach achieved an F-score of 0.94 (range: 0.92–0.94) in discrimination of immune cell-rich and cell-poor regions, as compared to an F-score of 0.88 (range: 0.87–0.89) obtained with the texture-based classification. When compared to visual assessment of 200 images, an agreement of 90% (κ = 0.79) to quantify immune infiltration with the CNN approach was achieved, while the inter-observer agreement between pathologists was 90% (κ = 0.78). Conclusions: Our findings indicate that deep learning can be applied to quantify immune cell infiltration in breast cancer samples using a basic morphology staining only.
A good discrimination of immune cell-rich areas was achieved, well in concordance with both leukocyte antigen expression and pathologists’ visual assessment. PMID:27688929
Decoding of finger trajectory from ECoG using deep learning.
Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek
2018-06-01
The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of a chain of distinct stages: feature extraction, time-frequency analysis, and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNNs) and a special kind of recurrent neural network (RNN) called long short-term memory (LSTM) to address these needs. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature-extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor, and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. We predicted the finger trajectory using ECoG signals and compared results for least angle regression (LARS), CNN-LSTM, random forest, an LSTM model (LSTM_HC, for using hard-coded features), and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection, and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest states. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions.
The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.
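The CNN-plus-LSTM decoder described above hinges on an LSTM recurrence over per-window CNN features. A minimal NumPy sketch of one LSTM step follows; the dimensions, random weights, and random "CNN features" are hypothetical stand-ins, and a real decoder would learn the weights jointly by backpropagation rather than drawing them at random:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c_new = f * c + i * g      # cell state carries long-term memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a toy sequence of "CNN features" through the recurrence
rng = np.random.default_rng(0)
D, H, T = 8, 4, 10             # feature dim, hidden dim, time steps
W = rng.normal(size=(4*H, D)) * 0.1
U = rng.normal(size=(4*H, H)) * 0.1
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
trajectory_state = h   # a linear readout of h would give the finger position
```

The hidden state accumulates temporal context across windows, which is what lets such a decoder learn movement/rest transitions rather than treating each window independently.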
CNN based approach for activity recognition using a wrist-worn accelerometer.
Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R
2017-07-01
In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, conventional methods have been dominated by feature engineering, which involves the difficult process of optimal feature selection. This problem is mitigated by a novel methodology based on a deep learning framework, which automatically extracts the useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for recognition of three fundamental movements of the human forearm performed in daily life, where data are collected from four different subjects using a single wrist-worn accelerometer sensor. The proposed model is validated under different pre-processing and noisy-data conditions, evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong
2018-01-01
The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
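The manual features that the CNN approach above is compared against (mean, peak, slope, variance, kurtosis, skewness) are simple to compute from a signal epoch. A generic sketch, not the authors' code; the sampling rate and test signal are made up:

```python
import numpy as np

def hemodynamic_features(sig, fs=10.0):
    """Six features commonly extracted from an fNIRS epoch:
    mean, peak, slope, variance, excess kurtosis, skewness."""
    t = np.arange(len(sig)) / fs
    mean = sig.mean()
    peak = sig.max()
    slope = np.polyfit(t, sig, 1)[0]   # slope of a linear fit over time
    var = sig.var()
    z = (sig - mean) / sig.std()       # standardized signal
    kurtosis = (z**4).mean() - 3.0     # excess kurtosis
    skewness = (z**3).mean()
    return np.array([mean, peak, slope, var, kurtosis, skewness])

# toy hemodynamic response: a half-sine bump
feats = hemodynamic_features(np.sin(np.linspace(0, np.pi, 100)))
```

A CNN-based pipeline replaces this hand-crafted vector with learned filters, which is the source of the accuracy gains reported above.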
HCP: A Flexible CNN Framework for Multi-label Image Classification.
Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng
2015-10-26
Convolutional Neural Networks (CNNs) have demonstrated promising performance in single-label image classification tasks. However, how a CNN best copes with multi-label images remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, a shared CNN is connected with each hypothesis, and finally the CNN output results from the different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it naturally outputs multi-label prediction results. Experimental results on the Pascal VOC 2007 and VOC 2012 multi-label image datasets demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-art methods. In particular, the mAP reaches 90.5% by HCP only and 93.2% after fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
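The cross-hypothesis max pooling at the heart of HCP can be sketched in a few lines. This is a toy sketch with made-up scores and a hypothetical 0.5 decision threshold, not the paper's implementation:

```python
import numpy as np

# Scores from a shared CNN applied to each object-segment hypothesis:
# rows = hypotheses, cols = per-class confidences.
hypothesis_scores = np.array([
    [0.1, 0.8, 0.2],   # hypothesis 1
    [0.7, 0.3, 0.1],   # hypothesis 2
    [0.2, 0.4, 0.9],   # hypothesis 3 (possibly noisy)
])

# HCP-style aggregation: max pooling across hypotheses gives the
# image-level multi-label prediction. A label fires if any hypothesis
# supports it, which is why redundant or noisy proposals are tolerable.
image_scores = hypothesis_scores.max(axis=0)
predicted_labels = image_scores > 0.5
```

The max makes the aggregation order-independent and indifferent to the number of hypotheses, matching point 2) above.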
S-CNN: Subcategory-aware convolutional networks for object detection.
Chen, Tao; Lu, Shijian; Fan, Jiayuan
2017-09-26
The marriage between the deep convolutional neural network (CNN) and region proposals has made breakthroughs for object detection in recent years. While the discriminative object features are learned via a deep CNN for classification, the large intra-class variation and deformation still limit the performance of the CNN based object detection. We propose a subcategory-aware CNN (S-CNN) to solve the object intra-class variation problem. In the proposed technique, the training samples are first grouped into multiple subcategories automatically through a novel instance sharing maximum margin clustering process. A multi-component Aggregated Channel Feature (ACF) detector is then trained to produce more latent training samples, where each ACF component corresponds to one clustered subcategory. The produced latent samples together with their subcategory labels are further fed into a CNN classifier to filter out false proposals for object detection. An iterative learning algorithm is designed for the joint optimization of image subcategorization, multi-component ACF detector, and subcategory-aware CNN classifier. Experiments on INRIA Person dataset, Pascal VOC 2007 dataset and MS COCO dataset show that the proposed technique clearly outperforms the state-of-the-art methods for generic object detection.
What's so Bad about Being "Professorial"?
ERIC Educational Resources Information Center
Vaidhyanathan, Siva
2008-01-01
CNN commentator Bill Bennett's invocation of "professorial" was the latest among a string of comments about Barack Obama, who used to teach constitutional law at the University of Chicago. On September 13, the "New York Times" columnist Thomas L. Friedman wrote, "Obama may be a bit professorial, but at least he is trying to unite the country to…
NASA Astrophysics Data System (ADS)
Jin, Xiaowei; Cheng, Peng; Chen, Wen-Li; Li, Hui
2018-04-01
A data-driven model is proposed for the prediction of the velocity field around a cylinder by fusion convolutional neural networks (CNNs) using measurements of the pressure field on the cylinder. The model is based on the close relationship between the Reynolds stresses in the wake, the wake formation length, and the base pressure. Numerical simulations of flow around a cylinder at various Reynolds numbers are carried out to establish a dataset capturing the effect of the Reynolds number on various flow properties. The time series of pressure fluctuations on the cylinder is converted into a grid-like spatial-temporal topology to be handled as the input of a CNN. A CNN architecture composed of a fusion of paths with and without a pooling layer is designed. This architecture can capture both accurate spatial-temporal information and the features that are invariant of small translations in the temporal dimension of pressure fluctuations on the cylinder. The CNN is trained using the computational fluid dynamics (CFD) dataset to establish the mapping relationship between the pressure fluctuations on the cylinder and the velocity field around the cylinder. Adam (adaptive moment estimation), an efficient method for processing large-scale and high-dimensional machine learning problems, is employed to implement the optimization algorithm. The trained model is then tested over various Reynolds numbers. The predictions of this model are found to agree well with the CFD results, and the data-driven model successfully learns the underlying flow regimes, i.e., the relationship between wake structure and pressure experienced on the surface of a cylinder is well established.
NASA Astrophysics Data System (ADS)
Kose, Kivanc; Bozkurt, Alican; Ariafar, Setareh; Alessi-Fox, Christi A.; Gill, Melissa; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind
2017-02-01
In this study we present a deep learning based classification algorithm for discriminating morphological patterns that appear in RCM mosaics of melanocytic lesions collected at the dermal-epidermal junction (DEJ). These patterns are classified into 6 distinct types in the literature: background, meshwork, ring, clod, mixed, and aspecific. Clinicians typically identify these morphological patterns by examining their textural appearance at 10X magnification. To mimic this process we divided the mosaics into smaller regions, which we call tiles, and classify each tile in a deep learning framework. We used previously acquired DEJ mosaics of lesions deemed clinically suspicious, from 20 different patients, which were then labelled according to those 6 types by 2 expert users. We tried three different approaches for classification, all starting with a publicly available convolutional neural network (CNN) trained on natural images, consisting of a series of convolutional layers followed by a series of fully connected layers: (1) we fine-tuned this network using training data from the dataset; (2) we added an additional fully connected layer before the output layer and re-trained only the last two layers; (3) we used only the CNN convolutional layers as a feature extractor, encoded the features using a bag-of-words model, and trained a support vector machine (SVM) classifier. Sensitivity and specificity were generally comparable across the three methods, and in the same ranges as our previous work using SURF features with an SVM. Approach (3) was less computationally intensive to train but more sensitive to unbalanced representation of the 6 classes in the training data. However, we expect CNN performance to improve as we add more training data because both the features and the classifier are learned jointly from the data. *First two authors share first authorship.
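Approach (3) above encodes a tile's local CNN features as a bag-of-words histogram before the SVM. A minimal sketch of that encoding step, with a random stand-in codebook and random "features" (in practice the codebook would come from clustering, e.g. k-means, over training features):

```python
import numpy as np

def bag_of_words(features, codebook):
    """Encode a set of local features (N, D) as a normalized
    histogram over the nearest codebook centers (K, D)."""
    # squared distances from every feature to every codeword
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)          # nearest codeword per feature
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                 # normalize to a distribution

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 32))     # stand-in for k-means centers
tile_feats = rng.normal(size=(50, 32))   # conv-layer features of one tile
encoding = bag_of_words(tile_feats, codebook)
```

The fixed-length histogram is what lets a variable number of local features feed a conventional SVM.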
Using machine-learning to optimize phase contrast in a low-cost cellphone microscope
Wartmann, Rolf; Schadwinkel, Harald; Heintzmann, Rainer
2018-01-01
Cellphones equipped with high-quality cameras and powerful CPUs as well as GPUs are widespread. This opens new prospects for using such existing computational and imaging resources to perform medical diagnosis in developing countries at very low cost. Many relevant samples, like biological cells or waterborne parasites, are almost fully transparent. As they do not exhibit absorption but alter the light's phase only, they are almost invisible in brightfield microscopy. Expensive equipment and procedures for microscopic contrasting or sample staining are often not available. Dedicated illumination approaches, tailored to the sample under investigation, help to boost the contrast. This is achieved by a programmable illumination source, which also allows measuring the phase gradient using differential phase contrast (DPC) [1, 2] or even the quantitative phase using the derived qDPC approach [3]. By applying machine-learning techniques, such as a convolutional neural network (CNN), it is possible to learn a relationship between the samples to be examined and their optimal light source shapes, in order to increase e.g. phase contrast, from a given dataset, enabling real-time applications. For the experimental setup, we developed a 3D-printed smartphone microscope for less than $100 using off-the-shelf components only, such as a low-cost video projector. The fully automated system assures true Koehler illumination with an LCD as the condenser aperture and a reversed smartphone lens as the microscope objective. We show that the effect of a varied light source shape, using the pre-trained CNN, not only improves the phase contrast but also gives the impression of improved optical resolution without adding any special optics, as demonstrated by measurements. PMID:29494620
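The DPC measurement referenced above ([1, 2]) forms a phase-gradient image from two captures under complementary half-pupil illumination, normalized by their sum. A minimal sketch with toy intensity values (the epsilon guard and example images are ours, not from the paper):

```python
import numpy as np

def dpc(i_left, i_right, eps=1e-9):
    """Differential phase contrast from two images taken with
    complementary half-pupil (left/right) illumination."""
    return (i_left - i_right) / (i_left + i_right + eps)

# toy example: a transparent sample deflects light slightly sideways,
# so the two half-pupil images differ where the phase gradient is nonzero
i_l = np.array([[1.0, 1.2], [0.8, 1.0]])
i_r = np.array([[1.0, 0.8], [1.2, 1.0]])
contrast = dpc(i_l, i_r)
```

The normalization cancels the sample's absorption, leaving (to first order) only the phase-gradient signal, which is why transparent samples become visible.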
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes, such as fluid flow and biogeochemical reactions in a heterogeneous porous medium, may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
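The NMF at the core of the blind source separation above can be sketched with the classic multiplicative updates (a generic textbook sketch, not the MADS/Julia implementation; iteration count, seed, and the synthetic "sources" are arbitrary):

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Non-negative Matrix Factorization X ≈ W @ H via multiplicative
    updates; W and H stay elementwise non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update mixtures
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update sources
    return W, H

# mixed observations generated from two non-negative "groundwater types"
rng = np.random.default_rng(2)
sources = rng.random((2, 40))
mixing = rng.random((30, 2))
X = mixing @ sources
W, H = nmf(X, 2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the methodology above, the recovered factors are then grouped with customized k-means to identify the distinct source signatures.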
Deep-learning derived features for lung nodule classification with limited datasets
NASA Astrophysics Data System (ADS)
Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.
2018-02-01
Only a few percent of indeterminate nodules found in lung CT images are cancerous. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance of benign nodules. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist, and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature-derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach of feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature-derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to lesions of different sizes without interpolation, and performed well with a relatively small amount of training data.
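The Triplet Network mentioned above is trained with a triplet loss, which is what produces discriminative embeddings from little data. A minimal sketch; the margin value and the 2-D embeddings are hypothetical:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: pull same-class embeddings together and push
    different-class embeddings at least `margin` apart (squared distances)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class: close to anchor
n = np.array([2.0, 0.0])   # different class: far away
loss_easy = triplet_loss(a, p, n)   # constraint already satisfied
loss_hard = triplet_loss(a, n, p)   # violated triplet: positive loss
```

Gradients flow only through violated triplets, so training focuses on pairs the embedding currently confuses.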
Fast and robust segmentation of the striatum using deep convolutional neural networks.
Choi, Hongyoon; Jin, Kyong Hwan
2016-12-01
Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNNs). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global and a Local CNN: the Global CNN determined approximate locations of the striatum by performing a regression of input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted. The cropped MR volumes and the output volumes of the Global CNN were used as inputs to the Local CNN, which predicted the accurate label of all voxels. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested on another independent dataset, which showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. Comparison with existing method: segmentation performance of our proposed method was comparable with that of FreeSurfer, while the running time of our approach was approximately three seconds. We suggest a fast and accurate deep CNN-based segmentation for small brain structures which can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
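The Dice Similarity Coefficient used as the evaluation metric above has a one-line definition: twice the overlap divided by the total size of the two masks. A generic sketch on a toy 4×4 mask:

```python
import numpy as np

def dice(seg, gt):
    """Dice Similarity Coefficient between two binary masks/volumes."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1                 # ground truth: 4 voxels
seg = np.zeros((4, 4), dtype=int)
seg[1:3, 1:4] = 1                # prediction: 6 voxels, 4 overlapping
score = dice(seg, gt)            # 2*4 / (6+4) = 0.8
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which makes values like 0.893 vs. 0.786 directly comparable across methods.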
BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.
Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan
2017-02-01
We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. Copyright © 2016 Elsevier Inc. All rights reserved.
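The edge-to-edge filter described above can be illustrated with a small sketch. This is a simplified, hypothetical rendering of the row-plus-column weighting idea only: the published filters are cross-shaped convolutions with learned weights across multiple channels, whereas the matrix A and filter vectors r, c here are toy values:

```python
import numpy as np

def edge_to_edge(A, r, c):
    """Edge-to-edge response on an n×n connectivity matrix A: the output
    at edge (i, j) combines row i (all edges touching node i) and
    column j (all edges touching node j), weighted by vectors r and c."""
    n = A.shape[0]
    out = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            out[i, j] = A[i, :] @ r + A[:, j] @ c
    return out

# toy symmetric structural connectivity matrix for 3 brain regions
A = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])
out = edge_to_edge(A, np.ones(3), np.ones(3))
```

This respects the topological locality of a graph: an edge's new value depends on the edges sharing its endpoints, not on matrix entries that happen to be spatially adjacent.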
Centrioles regulate centrosome size by controlling the rate of Cnn incorporation into the PCM.
Conduit, Paul T; Brunk, Kathrin; Dobbelaere, Jeroen; Dix, Carly I; Lucas, Eliana P; Raff, Jordan W
2010-12-21
Centrosomes are major microtubule organizing centers in animal cells, and they comprise a pair of centrioles surrounded by an amorphous pericentriolar material (PCM). Centrosome size is tightly regulated during the cell cycle, and it has recently been shown that the two centrosomes in certain stem cells are often asymmetric in size. There is compelling evidence that centrioles influence centrosome size, but how centrosome size is set remains mysterious. We show that the conserved Drosophila PCM protein Cnn exhibits an unusual dynamic behavior, because Cnn molecules only incorporate into the PCM closest to the centrioles and then spread outward through the rest of the PCM. Cnn incorporation into the PCM is driven by an interaction with the conserved centriolar proteins Asl (Cep152 in humans) and DSpd-2 (Cep192 in humans). The rate of Cnn incorporation into the PCM is tightly regulated during the cell cycle, and this rate influences the amount of Cnn in the PCM, which in turn is an important determinant of overall centrosome size. Intriguingly, daughter centrioles in syncytial embryos only start to incorporate Cnn as they disengage from their mothers; this generates a centrosome size asymmetry, with mother centrioles always initially organizing more Cnn than their daughters. Centrioles can control the amount of PCM they organize by regulating the rate of Cnn incorporation into the PCM. This mechanism can explain how centrosome size is regulated during the cell cycle and also allows mother and daughter centrioles to set centrosome size independently of one another.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahiner, B.; Chan, H.P.; Petrick, N.
1996-10-01
The authors investigated the classification of regions of interest (ROIs) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROIs using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROIs containing biopsy-proven masses and 504 ROIs containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false-positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.
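The area under the ROC curve used for evaluation above can be computed directly as a rank statistic: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A generic sketch, not the authors' code; the scores and labels are toy values:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney pairwise statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # count wins (and half-credit for ties) over all positive/negative pairs
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 0, 1, 0])
a = auc(scores, labels)   # 3 of 4 positive/negative pairs ranked correctly
```

An AUC of 0.87, as reported above, means a mass ROI receives a higher CNN score than a normal-tissue ROI about 87% of the time.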
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Arévalo, John; Judkins, Alexander; Madabhushi, Anant; González, Fabio
2015-12-01
Convolutional neural networks (CNNs) have been very successful at addressing different computer vision tasks thanks to their ability to learn image representations directly from large amounts of labeled data. Features learned from one dataset can be used to represent images from a different dataset via an approach called transfer learning. In this paper we apply transfer learning to the challenging task of medulloblastoma tumor differentiation. We compare two different CNN models which were previously trained in two different domains (natural and histopathology images). The first CNN is a state-of-the-art approach in computer vision: a large and deep CNN with 16 layers, the Visual Geometry Group (VGG) CNN. The second (IBCa-CNN) is a 2-layer CNN trained for invasive breast cancer tumor classification. Both CNNs are used as visual feature extractors of histopathology image regions of anaplastic and non-anaplastic medulloblastoma tumor from digitized whole-slide images. The features from the two models are used, separately, to train a softmax classifier to discriminate between anaplastic and non-anaplastic medulloblastoma image regions. Experimental results show that the transfer learning approach produces competitive results in comparison with the state-of-the-art approaches for IBCa detection. Results also show that features extracted from the IBCa-CNN perform better than features extracted from the VGG-CNN. The former obtains 89.8% while the latter obtains 76.6% in terms of average accuracy.
NASA Astrophysics Data System (ADS)
Chen, C.; Gong, W.; Hu, Y.; Chen, Y.; Ding, Y.
2017-05-01
Automated building detection in aerial images is a fundamental problem in aerial and satellite image analysis. Recently, thanks to advances in feature description, the region-based CNN model (R-CNN) for object detection has been receiving increasing attention. Despite its excellent performance in object detection, it is problematic to directly leverage the features of the R-CNN model for building detection in a single aerial image. A single aerial image is a vertical view, and buildings possess a significant directional feature; in the R-CNN model, however, the direction of the building is ignored and detection results are represented by horizontal rectangles, which cannot describe buildings precisely. To address this problem, we propose in this paper a novel model with a key orientation-related feature, namely Oriented R-CNN (OR-CNN). Our contributions are mainly in two aspects: 1) introducing a new oriented layer network for detecting the rotation angle of a building on the basis of the successful VGG-net R-CNN model; 2) proposing the oriented rectangle to leverage the powerful R-CNN for remote-sensing building detection. In experiments, we establish a complete, brand-new data set for training our oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection data set. We demonstrate state-of-the-art results compared with previous baseline methods.
NASA Astrophysics Data System (ADS)
Zhen, Xin; Chen, Jiawei; Zhong, Zichun; Hrycushko, Brian; Zhou, Linghong; Jiang, Steve; Albuquerque, Kevin; Gu, Xuejun
2017-11-01
Better understanding of the dose-toxicity relationship is critical for safe dose escalation to improve local control in late-stage cervical cancer radiotherapy. In this study, we introduced a convolutional neural network (CNN) model to analyze rectum dose distribution and predict rectum toxicity. Forty-two cervical cancer patients treated with combined external beam radiotherapy (EBRT) and brachytherapy (BT) were retrospectively collected, including twelve toxicity patients and thirty non-toxicity patients. We adopted a transfer learning strategy to overcome the limited patient data issue. A 16-layer CNN developed by the Visual Geometry Group (VGG-16) of the University of Oxford was pre-trained on a large-scale natural image database, ImageNet, and fine-tuned with patient rectum surface dose maps (RSDMs), which were accumulated EBRT + BT doses on the unfolded rectum surface. We used the adaptive synthetic sampling approach and the data augmentation method to address the two challenges of data imbalance and data scarcity. Gradient-weighted class activation maps (Grad-CAM) were also generated to highlight the discriminative regions on the RSDM along with the prediction model. We compared different CNN coefficient fine-tuning strategies, and compared the predictive performance against the traditional dose-volume parameters, e.g. D0.1/1/2cc, and the texture features extracted from the RSDM. Satisfactory prediction performance was achieved with the proposed scheme, and we found that the mean Grad-CAM over the toxicity patient group is geometrically consistent with the statistical analysis result, indicating a possible rectum toxicity location. The evaluation results demonstrate the feasibility of building a CNN-based rectum dose-toxicity prediction model with transfer learning for cervical cancer radiotherapy.
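Grad-CAM, as used above, weights each convolutional activation map by its spatially averaged gradient and keeps only the positive part; a minimal numpy sketch, where the feature maps and gradients are assumed to be supplied by the backbone network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from the last conv layer.

    feature_maps, gradients: arrays of shape (C, H, W), the activations and
    the gradients of the target class score w.r.t. those activations.
    """
    weights = gradients.mean(axis=(1, 2))            # global-average-pool grads
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over C
    cam = np.maximum(cam, 0)                         # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                             # normalize to [0, 1]
    return cam
```

In the paper's setting, the resulting map would be overlaid on the rectum surface dose map to highlight regions driving the toxicity prediction.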
Region Based CNN for Foreign Object Debris Detection on Airfield Pavement.
Cao, Xiaoguang; Wang, Peng; Meng, Cai; Bai, Xiangzhi; Gong, Guoping; Liu, Miaoming; Qi, Jun
2018-03-01
In this paper, a novel algorithm based on a convolutional neural network (CNN) is proposed to detect foreign object debris (FOD) using optical imaging sensors. It contains two modules: an improved region proposal network (RPN) and a spatial transformer network (STN) based CNN classifier. In the improved RPN, additional selection rules are designed and deployed to generate fewer, higher-quality candidates. Moreover, the efficiency of the CNN detector is significantly improved by introducing the STN layer. Compared to Faster R-CNN and the single shot multibox detector (SSD), the proposed algorithm achieves better results for FOD detection on airfield pavement in the experiments.
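The "additional selection rules" in the improved RPN amount to filtering proposals by geometry; a hypothetical sketch with made-up size and aspect-ratio thresholds (the paper's actual rules and values are not given here):

```python
def filter_proposals(boxes, min_area=16, max_area=4096, max_aspect=3.0):
    """Keep only proposals that are plausibly FOD-sized and roughly square.

    boxes: iterable of (x1, y1, x2, y2) corner coordinates.
    Thresholds are illustrative, not the paper's values.
    """
    kept = []
    for (x1, y1, x2, y2) in boxes:
        w, h = x2 - x1, y2 - y1
        if w <= 0 or h <= 0:
            continue                      # degenerate box
        area = w * h
        aspect = max(w / h, h / w)        # >= 1 regardless of orientation
        if min_area <= area <= max_area and aspect <= max_aspect:
            kept.append((x1, y1, x2, y2))
    return kept
```

Cutting the candidate set this way is what lets the downstream STN-based classifier run on far fewer regions.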
Theorems and application of local activity of CNN with five state variables and one port.
Xiong, Gang; Dong, Xisong; Xie, Li; Yang, Thomas
2012-01-01
Coupled nonlinear dynamical systems have been widely studied recently. However, the dynamical properties of these systems are difficult to deal with. The local activity of the cellular neural network (CNN) has provided a powerful tool for studying the emergence of complex patterns in a homogeneous lattice composed of coupled cells. In this paper, the analytical criteria for local activity in a reaction-diffusion CNN with five state variables and one port are presented, consisting of four theorems that include a series of inequalities involving the CNN parameters. These theorems can be used for calculating the bifurcation diagram to determine or analyze the emergence of complex dynamic patterns, such as chaos. As a case study, a reaction-diffusion CNN of a hepatitis B virus (HBV) mutation-selection model is analyzed and simulated, and its bifurcation diagram is calculated. Using the diagram, numerical simulations of this CNN model provide reasonable explanations of complex mutant phenomena during therapy. It is thereby demonstrated that the local activity of the CNN provides a practical tool for studying the complex dynamics of some coupled nonlinear systems.
Deformable Image Registration based on Similarity-Steered CNN Regression.
Cao, Xiaohuan; Yang, Jianhua; Zhang, Jun; Nie, Dong; Kim, Min-Jeong; Wang, Qian; Shen, Dinggang
2017-09-01
Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.
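The auxiliary contextual cue above, the similarity between the input template and subject patches, can be computed as, for example, normalized cross-correlation; a small sketch (the paper does not pin down this exact metric, so treat the choice as an assumption):

```python
import numpy as np

def patch_similarity(p1, p2, eps=1e-8):
    """Normalized cross-correlation between two image patches, in [-1, 1].

    Used here as a stand-in for the similarity cue fed alongside the patch
    pair into the similarity-steered regression network.
    """
    a = p1 - p1.mean()
    b = p2 - p2.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps
    return float((a * b).sum() / denom)
```

Identical patches score near 1, contrast-inverted patches near -1; the network can use this scalar to modulate how aggressively it predicts deformation.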
Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung
2018-04-23
In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8,571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
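One stage of the architecture described above (1-D convolution, ReLU activation, then max pooling) can be sketched in numpy; the kernel weights here are arbitrary placeholders, not the trained filters:

```python
import numpy as np

def conv1d_relu_maxpool(signal, kernel, pool=2):
    """One CNN stage on a 1-D signal (e.g. an ECG segment):
    valid cross-correlation, ReLU, then non-overlapping max pooling."""
    # np.convolve flips its second argument, so pre-flipping the kernel
    # turns convolution into the cross-correlation CNNs actually compute.
    conv = np.convolve(signal, kernel[::-1], mode="valid")
    act = np.maximum(conv, 0)                       # ReLU
    n = len(act) // pool * pool                     # drop the ragged tail
    return act[:n].reshape(-1, pool).max(axis=1)    # max pooling
```

The full model stacks six such stages (with dropout between them) before the classification layers.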
Agile convolutional neural network for pulmonary nodule classification using CT images.
Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei
2018-04-01
Distinguishing benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to overcome the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed by combining the layer settings of LeNet with the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up from the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. By adjusting the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is finally obtained. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initialization. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that the proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
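The reported best settings can be collected into a configuration sketch. The kernel size is elided in the abstract ("[Formula: see text]") and is deliberately left out here, and the dropout rate shown is an assumption, not a value from the paper:

```python
import numpy as np

# Best-performing settings reported in the abstract; the kernel size is not
# recoverable from the text, and dropout_keep is an assumed placeholder.
CONFIG = {
    "learning_rate": 0.005,
    "batch_size": 32,
    "dropout_keep": 0.5,      # assumption: abstract only says dropout was used
    "init": "gaussian",
}

def gaussian_init(shape, std=0.01, seed=0):
    """Gaussian weight initialization, as named in the abstract."""
    return np.random.default_rng(seed).normal(0.0, std, size=shape)

def dropout(x, keep_prob, rng):
    """Inverted dropout: zero units at random, rescale survivors at train time."""
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob
```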
Kumar, Sumit; Sreenivas, Jayaram; Karthikeyan, Vilvapathy Senguttuvan; Mallya, Ashwin; Keshavamurthy, Ramaiah
2016-10-01
Scoring systems have been devised to predict outcomes of percutaneous nephrolithotomy (PCNL). The CROES nephrolithometry nomogram (CNN) is the latest tool devised to predict stone-free rate (SFR). We aim to compare the predictive accuracy of the CNN against the Guy stone score (GSS) for SFR and postoperative outcomes. Between January 2013 and December 2015, 313 patients undergoing PCNL were analyzed for the predictive accuracy of GSS, CNN, and stone burden (SB) for SFR, complications, operation time (OT), and length of hospitalization (LOH). We further stratified patients into risk groups based on CNN and GSS. Mean ± standard deviation (SD) SB was 298.8 ± 235.75 mm². SB, GSS, and CNN (area under the curve [AUC]: 0.662, 0.660, 0.673) were found to be predictors of SFR. However, predictability for complications was not as good (AUC: SB 0.583, GSS 0.554, CNN 0.580). Single implicated calix (Adj. OR 3.644; p = 0.027), absence of staghorn calculus (Adj. OR 3.091; p = 0.044), single stone (Adj. OR 3.855; p = 0.002), and single puncture (Adj. OR 2.309; p = 0.048) significantly predicted SFR on multivariate analysis. Charlson comorbidity index (CCI; p = 0.020) and staghorn calculus (p = 0.002) were independent predictors for complications on linear regression. SB and GSS independently predicted OT on multivariate analysis. SB and complications significantly predicted LOH, while GSS and CNN did not predict LOH. CNN offered better risk stratification for residual stones than GSS. CNN and GSS have good preoperative predictive accuracy for SFR. The number of implicated calices may affect SFR, and CCI affects complications. Studies should incorporate these factors in scoring systems and assess whether predictability of PCNL outcomes improves.
NASA Astrophysics Data System (ADS)
Min, Lequan; Chen, Guanrong
This paper establishes some generalized synchronization (GS) theorems for a coupled discrete array of difference systems (CDADS) and a coupled continuous array of differential systems (CCADS). These constructive theorems provide general representations of GS in CDADS and CCADS. Based on these theorems, one can design GS-driven CDADS and CCADS via appropriate (invertible) transformations. As applications, the results are applied to autonomous and nonautonomous coupled Chen cellular neural network (CNN) CDADS and CCADS, discrete bidirectional Lorenz CNN CDADS, nonautonomous bidirectional Chua CNN CCADS, and nonautonomous bidirectional Chen CNN CDADS and CCADS, respectively. Extensive numerical simulations show their complex dynamic behaviors. These theorems provide new means for understanding the GS phenomena of complex discrete and continuously differentiable networks.
Fang, Leyuan; Cunefare, David; Wang, Chong; Guymer, Robyn H.; Li, Shutao; Farsiu, Sina
2017-01-01
We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and train a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique.
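The graph-search stage can be approximated by a per-column dynamic program over the CNN's probability map, tracing the row path of maximal summed boundary probability; a simplified stand-in (the paper's actual graph construction is richer):

```python
import numpy as np

def trace_boundary(prob, max_jump=1):
    """Trace one layer boundary through a probability map of shape (rows, cols).

    Finds the left-to-right row path maximizing summed probability, with
    vertical moves limited to +/- max_jump between adjacent columns.
    """
    rows, cols = prob.shape
    cost = prob[:, 0].copy()                 # best path score ending at each row
    back = np.zeros((rows, cols), dtype=int)  # backpointers for reconstruction
    for c in range(1, cols):
        new = np.full(rows, -np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            j = lo + int(np.argmax(cost[lo:hi]))  # best reachable predecessor
            new[r] = cost[j] + prob[r, c]
            back[r, c] = j
        cost = new
    path = [int(np.argmax(cost))]            # best endpoint, then walk back
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```

CNN-GS would run one such search per layer boundary, each on its own probability map.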
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos, including CNNs with 3-D XYT convolutional filters, CNNs applying pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare several of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
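One of the representations compared above, pooling per-frame CNN descriptors into a single video descriptor, is essentially a reduction along the time axis; a minimal sketch with the per-frame features assumed given:

```python
import numpy as np

def temporal_pool(frame_feats, mode="max"):
    """Collapse per-frame CNN descriptors of shape (T, D) into one (D,) vector.

    mode="max" keeps the strongest response of each feature over time;
    any other value averages over time instead.
    """
    f = np.asarray(frame_feats, dtype=float)
    return f.max(axis=0) if mode == "max" else f.mean(axis=0)
```

The pooled vector then feeds a standard classifier; 3-D XYT filters and recurrent models are the alternatives that keep temporal ordering instead of discarding it.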
Parmaksızoğlu, Selami; Alçı, Mustafa
2011-01-01
Cellular Neural Networks (CNNs) have been widely used recently in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of desired CNN behavior such as edge detection can be achieved by correctly setting a cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing a cloning template is one of the most important research topics in this field. In this study, the edge detecting process that is used as a preliminary process for segmentation, identification and coding applications is conducted by using CNN structures. In order to design the cloning template of goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm which is inspired from the foraging behavior of honeybees is used and the performance analysis of ABC for this application is examined with multiple runs. The CNN template generated by the ABC algorithm is tested by using artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods, and other CNN based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than other methods.
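A minimal Artificial Bee Colony loop of the kind used above to tune a cloning template. In the paper, `fitness` would score a candidate template by the edge-detection error of the resulting CNN against ground-truth edge maps; here it is a toy quadratic, and the colony size, trial limit, and bounds are illustrative assumptions:

```python
import numpy as np

def abc_optimize(fitness, dim, n_food=10, limit=20, iters=200,
                 lo=-2.0, hi=2.0, seed=0):
    """Minimal Artificial Bee Colony minimizing `fitness` over R^dim."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(lo, hi, (n_food, dim))      # candidate templates
    fits = np.array([fitness(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k = k + (k >= i)                            # a different food source
        cand = foods[i].copy()
        j = rng.integers(dim)
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        f = fitness(cand)
        if f < fits[i]:                             # greedy replacement
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_neighbor(i)
        p = fits.max() - fits + 1e-12               # lower fitness, higher odds
        p /= p.sum()
        for i in rng.choice(n_food, n_food, p=p):   # onlooker bee phase
            try_neighbor(i)
        worn = trials >= limit                      # scout bee phase
        foods[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fits[worn] = [fitness(f) for f in foods[worn]]
        trials[worn] = 0

    best = int(np.argmin(fits))
    return foods[best], float(fits[best])
```

Swapping the toy fitness for an image-domain error measure recovers the paper's template-design setup without touching the search loop.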
Rajaraman, Sivaramakrishnan; Antani, Sameer K; Poostchi, Mahdieh; Silamut, Kamolrat; Hossain, Md A; Maude, Richard J; Jaeger, Stefan; Thoma, George R
2018-01-01
Malaria is a blood disease caused by Plasmodium parasites transmitted through the bite of the female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose disease and compute parasitemia. However, their accuracy depends on smear quality and expertise in classifying and counting parasitized and uninfected cells. Such an examination could be arduous for large-scale diagnoses, resulting in poor quality. State-of-the-art image-analysis-based computer-aided diagnosis (CADx) methods that apply machine learning (ML) techniques to microscopic images of the smears using hand-engineered features demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, convolutional neural networks (CNN), a class of deep learning (DL) models, promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could therefore serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN-based DL models as feature extractors toward classifying parasitized and uninfected cells to aid improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates the use of pre-trained CNNs as a promising tool for feature extraction for this purpose.
Zhai, Xiaolong; Jelfs, Beth; Chan, Rosa H. M.; Tin, Chung
2017-01-01
Hand movement classification based on surface electromyography (sEMG) pattern recognition is a promising approach for upper limb neuroprosthetic control. However, maintaining day-to-day performance is challenged by the non-stationary nature of sEMG in real-life operation. In this study, we propose a self-recalibrating classifier that can be automatically updated to maintain a stable performance over time without the need for user retraining. Our classifier is based on a convolutional neural network (CNN) using short-latency dimension-reduced sEMG spectrograms as inputs. The pretrained classifier is recalibrated routinely using a corrected version of the prediction results from recent testing sessions. Our proposed system was evaluated with the NinaPro database comprising hand movement data of 40 intact and 11 amputee subjects. Our system was able to achieve ~10.18% (intact, 50 movement types) and ~2.99% (amputee, 10 movement types) increase in classification accuracy averaged over five testing sessions with respect to the unrecalibrated classifier. When compared with a support vector machine (SVM) classifier, our CNN-based system consistently showed higher absolute performance and larger improvement as well as more efficient training. These results suggest that the proposed system can be a useful tool to facilitate long-term adoption of prosthetics for amputees in real-life applications.
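The self-recalibration idea, folding the model's own corrected session predictions back into its parameters, can be illustrated with a nearest-centroid stand-in for the CNN (the actual system updates CNN weights; the blending factor here is an assumption):

```python
import numpy as np

class SelfRecalibratingClassifier:
    """Nearest-centroid stand-in for the sEMG CNN: after each session,
    (corrected) predictions are blended back into the class centroids,
    tracking the non-stationary drift of the signal."""

    def __init__(self, X, y, n_classes):
        self.n = n_classes
        self.centroids = np.stack(
            [X[y == c].mean(axis=0) for c in range(n_classes)])

    def predict(self, X):
        d = ((X[:, None, :] - self.centroids[None]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def recalibrate(self, X_session, alpha=0.3):
        """Blend session data into the centroids using self-labels."""
        pred = self.predict(X_session)
        for c in range(self.n):
            if (pred == c).any():
                self.centroids[c] = ((1 - alpha) * self.centroids[c]
                                     + alpha * X_session[pred == c].mean(axis=0))
```

The same update schedule, applied to CNN fine-tuning with corrected labels, is what keeps accuracy stable across days without user retraining.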
NASA Astrophysics Data System (ADS)
Park, Eunsu; Moon, Yong-Jae
2017-08-01
A convolutional neural network (CNN) is one of the well-known deep-learning methods in the image processing and computer vision area. In this study, we apply CNNs to two kinds of flare forecasting models: flare classification and occurrence. For this, we consider several pre-trained models (e.g., AlexNet, GoogLeNet, and ResNet) and customize them by changing several options such as the number of layers, activation function, and optimizer. Our inputs are the same number of SOHO/MDI images for each flare class (None, C, M, and X) at 00:00 UT from Jan 1996 to Dec 2010 (1600 images in total). Outputs are the results of daily flare forecasting for flare class and occurrence. We build, train, and test the models on TensorFlow, a well-known machine learning software library developed by Google. Our major results are as follows. First, most of the models have accuracies of more than 0.7. Second, ResNet, developed by Microsoft, has the best accuracies: 0.86 for flare classification and 0.84 for flare occurrence. Third, the accuracies of these models vary greatly with changing parameters. We discuss several possibilities to improve the models.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
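The learned stopping rule described above replaces a fixed error threshold or iteration count with a quality-plateau test; a sketch where `score` stands in for the trained CNN observer and `step` for one reconstruction iteration (both are caller-supplied in this simplification):

```python
def run_until_quality_plateaus(step, score, max_iters=100, tol=1e-3):
    """Iterative reconstruction driven by a learned quality score.

    step(image) -> next image (step(None) produces the first iterate);
    score(image) -> scalar quality estimate (the CNN observer stand-in).
    Stops when the score gain falls below tol, i.e. the observer sees
    no further improvement, and returns (image, iterations_used).
    """
    image = step(None)                 # first iteration from scratch
    best = score(image)
    for it in range(1, max_iters):
        nxt = step(image)
        s = score(nxt)
        if s - best < tol:             # no meaningful gain: stop early
            return image, it
        image, best = nxt, s
    return image, max_iters
```

In the paper, the score comes from a CNN trained on ground-truth images, so the loop halts exactly when the reconstruction stops getting visibly better.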
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used a technique that encodes reduced-resolution video and restores the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method uses a convolutional neural network (CNN) in the encoder to produce low-resolution video and a CNN in the decoder to reconstruct the original resolution. The proposed method shows improved subjective performance over all the high-resolution videos that are dominantly consumed today. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric, and diverse bitrates were tested to assess general performance. Experimental results showed that the VMAF-based BD-rate improved by about 51% compared to conventional HEVC. VMAF values were especially improved at low bitrates. In subjective testing, the method also showed better visual quality at similar bit rates.
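The reduce-at-encode/restore-at-decode loop can be sketched with fixed operators standing in for the learned networks (average pooling down, nearest-neighbor up; in the paper both directions are CNNs trained end to end):

```python
import numpy as np

def downscale(frame, f=2):
    """Encoder side: reduce resolution before coding.
    Average pooling stands in for the learned downscaling CNN."""
    h, w = (frame.shape[0] // f) * f, (frame.shape[1] // f) * f
    return frame[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upscale(frame, f=2):
    """Decoder side: restore resolution after decoding.
    Nearest-neighbor repeat stands in for the restoration CNN."""
    return np.repeat(np.repeat(frame, f, axis=0), f, axis=1)
```

The coding gain comes from spending the codec's bits on the quarter-size frames; the restoration CNN then recovers detail that simple interpolation (as sketched here) cannot.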
CNN Newsroom Classroom Guides. September 1997.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
CNN Newsroom is a daily 15-minute news program specifically produced for classroom use and provided free to participating schools. These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for September 2-29, 1997, provide program rundowns, suggestions for class activities and discussion, student…
Utilization of CNN Newsroom in School Classrooms.
ERIC Educational Resources Information Center
Jordan, Sandra S.
This study was an educational assessment of the CNN (Cable News Network) Newsroom by enrolled users throughout the state of Georgia. CNN Newsroom is a 15-minute commercial-free newscast aimed at students in public school classrooms. Supplementing the newscasts are daily curriculum guides (available electronically) that outline questions and…
CNN Newsroom Classroom Guides. January 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free to participating schools. These Daily Classroom Guides support broadcasts of CNN Newsroom for January 1999. Each guide contains program rundowns for that day's broadcast, discussion activities, and links to external Web sites.…
Memristor-based cellular nonlinear/neural network: design, analysis, and applications.
Duan, Shukai; Hu, Xiaofang; Dong, Zhekang; Wang, Lidan; Mazumder, Pinaki
2015-06-01
Cellular nonlinear/neural network (CNN) has been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically predicted in the late seventies, but it garnered nascent research interest due to the recent much-acclaimed discovery of nanocrossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The proposed memristor-based CNN design operations for implementing several image processing functions are illustrated through simulation and contrasted with conventional CNNs. Monte-Carlo simulation has been used to demonstrate the behavior of the proposed CNN due to the variations in memristor synaptic weights.
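The signed synaptic weight provided by the memristor bridge circuit described above can be sketched numerically. A minimal model of the four-memristor Wheatstone-bridge form (the function name and resistance values are illustrative, not the paper's code):

```python
import numpy as np

def bridge_weight(m1, m2, m3, m4):
    """Normalized differential output of a 4-memristor Wheatstone bridge.

    The result can be positive, zero, or negative depending on the
    programmed memristances m1..m4, giving signed, programmable synaptic
    weights from purely resistive elements.
    """
    return m2 / (m1 + m2) - m4 / (m3 + m4)

# A balanced bridge gives zero weight; unbalancing one arm gives a signed weight.
w_zero = bridge_weight(1e3, 1e3, 1e3, 1e3)
w_pos = bridge_weight(1e3, 3e3, 1e3, 1e3)  # raising m2 -> positive weight
```

This illustrates why the bridge can substitute for the multiplication circuit of traditional CNN cells: the weight's sign and magnitude are set by programming memristances rather than by active circuitry.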
Reduction of metal artifacts in x-ray CT images using a convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Chu, Ying; Yu, Hengyong
2017-09-01
Patients often carry various metallic implants (e.g. dental fillings, prostheses), which cause severe artifacts in x-ray CT images. Although a large number of metal artifact reduction (MAR) methods have been proposed over the past four decades, MAR remains one of the major problems in clinical x-ray CT. In this work, we develop a convolutional neural network (CNN) based MAR framework that combines information from the original and corrected images to suppress artifacts. Before the MAR, we generate a group of data and train a CNN. First, we numerically simulate various metal artifact cases and build a dataset that includes metal-free images (used as references), metal-inserted images, and images corrected by various MAR methods. Then, ten thousand patches are extracted from the database to train the metal artifact reduction CNN. In the MAR stage, the original image and two corrected images are stacked as a three-channel input to the CNN, which generates an image with fewer artifacts. The water-equivalent regions in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections replace the metal-affected projections, followed by FBP reconstruction. Experimental results demonstrate the superior metal artifact reduction capability of the proposed method over its competitors.
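The three-channel stacking and CNN-prior steps described in the abstract can be sketched as follows; `make_cnn_input` and `build_prior` are hypothetical names, and the trained CNN itself is omitted:

```python
import numpy as np

def make_cnn_input(original, corrected_a, corrected_b):
    """Stack the original image and two MAR-corrected images into a
    three-channel array, the form in which the CNN consumes them."""
    return np.stack([original, corrected_a, corrected_b], axis=0)

def build_prior(cnn_output, water_mask, water_value=0.0):
    """Flatten water-equivalent regions to one uniform value to form the
    CNN prior whose forward projections replace the metal-affected ones."""
    prior = cnn_output.copy()
    prior[water_mask] = water_value
    return prior
```

The prior image, not the raw CNN output, drives the projection replacement: forcing water-equivalent voxels to a single value removes residual low-frequency artifacts before forward projection.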
Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.
Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo
2018-01-01
Automatic early detection of acromegaly from facial photographs is theoretically possible and could reduce the disease's prevalence and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding box, then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal-facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated on a separate dataset, half of which was diagnosed as acromegaly by growth hormone suppression test. The best of our proposed methods achieved a PPV of 96%, an NPV of 95%, a sensitivity of 96%, and a specificity of 96%. Artificial intelligence can thus automatically detect acromegaly early with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
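One of the compared classifiers (KNN) operating on extracted landmark feature vectors can be sketched with plain NumPy; this is a generic nearest-neighbour vote, not the study's implementation:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify a landmark feature vector by majority vote among its k
    nearest training examples (Euclidean distance)."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    # Majority vote among the k closest training faces.
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```

In this setting `train_x` would hold landmark coordinates per face (flattened) and `train_y` the acromegaly/normal labels; the study's stronger models (SVM, CNN) replace this vote with learned decision boundaries.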
Lv, Jun; Yang, Ming; Zhang, Jue; Wang, Xiaoying
2018-02-01
Free-breathing abdominal imaging requires non-rigid registration of unavoidable respiratory motion in three-dimensional undersampled data sets. In this work, we introduce an image registration method based on a convolutional neural network (CNN) to obtain motion-free abdominal images throughout the respiratory cycle. Abdominal data were acquired from 10 volunteers using a 1.5 T MRI system. The respiratory signal was extracted from the central k-space spokes, and the acquired data were reordered into three bins according to the corresponding breathing signal. Retrospective image reconstruction of the three near-motion-free respiratory phases was performed using non-Cartesian iterative SENSE reconstruction. We then trained a CNN to analyse the spatial transform among the different bins. This network generates the displacement vector field and can be applied to register unseen image pairs. To demonstrate the feasibility of this registration method, we compared the performance of three approaches for accurate image fusion of the three bins: non-motion-corrected (NMC), a local affine registration method (LREG), and the CNN. Visualization of coronal images indicated that LREG produced broken blood vessels, while vessels in the CNN results were sharper and more continuous. In the sagittal view, LREG caused distorted and blurred liver contours compared to NMC and the CNN, and zoomed-in axial images showed that vessels were delineated more clearly by the CNN than by LREG. The signal-to-noise ratio (SNR), visual score, vessel sharpness, and registration time over all volunteers were compared statistically among the NMC, LREG and CNN approaches. The SNR indicated that the CNN achieved the best image quality (207.42 ± 96.73), better than NMC (116.67 ± 44.70) and LREG (187.93 ± 96.68).
The image visual score agreed with the SNR, ranking the CNN (3.85 ± 0.12) best, followed by LREG (3.43 ± 0.13) and NMC (2.55 ± 0.09). A vessel sharpness assessment yielded similar values for the CNN (0.81 ± 0.03) and LREG (0.80 ± 0.04), distinguishing both from NMC (0.78 ± 0.06). Compared with the LREG-based reconstruction, the CNN-based reconstruction reduces the registration time from about 1 h to about 1 min. Our preliminary results demonstrate the feasibility of the CNN-based approach, which outperforms the NMC- and LREG-based methods. Advances in knowledge: This method reduces the registration time from ~1 h to ~1 min, which is promising for clinical use. To the best of our knowledge, this study presents the first convolutional neural network-based registration method applied to abdominal images.
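Applying a CNN-predicted displacement vector field to an image, as in the registration step above, can be sketched with nearest-neighbour resampling (a simplification; the paper's interpolation scheme is not specified here):

```python
import numpy as np

def warp_nearest(image, dvf):
    """Warp a 2D image by a displacement vector field dvf of shape
    (2, H, W), holding per-pixel (dy, dx) offsets, using nearest-neighbour
    sampling with edge clamping."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + dvf[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + dvf[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

Once the network has produced `dvf` for a bin pair, warping each bin into a common respiratory phase is a cheap lookup like this, which is why inference takes minutes rather than the hour of iterative affine registration.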
Channel One and CNN Newsroom: A Comparative Study of Seven Districts.
ERIC Educational Resources Information Center
Nasstrom, Roy; Gierok, Anne
Many American schools use the televised news programs Channel One and CNN Newsroom. Channel One has received considerable scrutiny, some of it highly unfavorable, while attention to CNN Newsroom has been less extensive and mostly benign. This study compares the two programs within seven school districts in Wisconsin. The study addresses three…
CNN-SVM for Microvascular Morphological Type Recognition with Data Augmentation.
Xue, Di-Xiu; Zhang, Rong; Feng, Hui; Wang, Ya-Lei
2016-01-01
This paper focuses on feature extraction and classification of microvascular morphological types to aid esophageal cancer detection. We present a patch-based system combining a CNN with a hybrid SVM model and data augmentation for intraepithelial papillary capillary loop recognition. A greedy patch-generating algorithm and a specialized CNN named NBI-Net are designed to extract hierarchical features from patches. We investigate a series of data augmentation techniques to progressively improve the model's invariance to image scaling and rotation. For classifier boosting, an SVM is used as an alternative to softmax to enhance generalization ability. The feature representation ability of the CNN is discussed for a set of widely used CNN models, including AlexNet, VGG-16, and GoogLeNet. Experiments are conducted on the NBI-ME dataset. The recognition rate reaches 92.74% at the patch level with data augmentation and classifier boosting. The results show that the combined CNN-SVM model beats both traditional features with an SVM and the original CNN with softmax. The synthesis results indicate that our system can assist clinical diagnosis to a certain extent.
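The classifier-boosting idea, training a linear SVM on frozen CNN feature vectors instead of a softmax head, can be sketched with a plain sub-gradient hinge-loss loop (illustrative hyperparameters; labels in {-1, +1}):

```python
import numpy as np

def train_linear_svm(feats, labels, lr=0.01, reg=0.01, epochs=200):
    """Sub-gradient descent on the regularized hinge loss over fixed
    feature vectors (e.g. activations from a trained CNN)."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            if y * (w @ x + b) < 1:      # inside the margin: hinge gradient
                w += lr * (y * x - reg * w)
                b += lr * y
            else:                        # outside the margin: shrink only
                w -= lr * reg * w
    return w, b
```

The design point is that the margin objective often generalizes better than softmax on small medical datasets, which is why the paper swaps the CNN's final layer for an SVM rather than retraining the whole network.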
Image-based corrosion recognition for ship steel structures
NASA Astrophysics Data System (ADS)
Ma, Yucong; Yang, Yang; Yao, Yuan; Li, Shengyuan; Zhao, Xuefeng
2018-03-01
Ship structures inevitably suffer corrosion in service. Existing image-based methods are influenced by noise in images because they recognize corrosion by extracting hand-crafted features. In this paper, a novel image-based corrosion recognition method for ship steel structures is proposed. The method uses convolutional neural networks (CNN) and is not affected by image noise. A CNN for corrosion recognition was designed by fine-tuning an existing CNN architecture and trained on datasets built from a large number of images. By combining the trained CNN classifier with a sliding-window technique, the corrosion zones in an image can be recognized.
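The sliding-window recognition step can be sketched as follows; `classify` stands in for the fine-tuned CNN and is an assumption of this sketch:

```python
import numpy as np

def sliding_window_detect(image, classify, win=32, stride=16):
    """Scan an image with a fixed-size window and collect the (top, left)
    corners of windows the classifier flags as corroded."""
    hits = []
    h, w = image.shape[:2]
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            patch = image[top:top + win, left:left + win]
            if classify(patch):
                hits.append((top, left))
    return hits
```

With a stride smaller than the window, neighbouring detections overlap, so the flagged windows can be merged into contiguous corrosion zones.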
Zhao, Yongjia; Zhou, Suiping
2017-02-28
The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, in either cooperative or non-cooperative circumstances. However, it remains challenging to reliably extract discriminative features for gait recognition from the noisy and complex data sequences collected by casually worn wearable devices such as smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using a Convolutional Neural Network (CNN), without the need to manually extract discriminative features. The CNN's input image, encoded straightforwardly from the inertial sensor data sequences, is called the Angle Embedded Gait Dynamic Image (AE-GDI). The AE-GDI is a new two-dimensional representation of gait dynamics that is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, collected under realistic conditions; and (2) the Osaka University dataset, with the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy compared with existing approaches and provides an effective parametric solution for identifying individuals among a large number of subjects by their gait patterns.
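The rotation-invariant ingredient of the AE-GDI, angles between inertial sample vectors, can be illustrated in a few lines (the full AE-GDI construction differs; this only shows why angle features survive a common rotation of the sensor frame):

```python
import numpy as np

def angle_sequence(acc):
    """Angles between consecutive 3-axis samples. A common rotation R
    applied to every sample leaves these angles unchanged, since dot
    products and norms are rotation-invariant."""
    a, b = acc[:-1], acc[1:]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

This is what lets the approach tolerate a casually worn phone: however the device sits in the pocket, the angle structure of the gait signal is preserved.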
NASA Astrophysics Data System (ADS)
Singh, Shiwangi; Bard, Deborah
2017-01-01
Weak gravitational lensing is an effective tool to map the structure of matter in the universe, and has been used for more than ten years as a probe of the nature of dark energy. Beyond the well-established two-point summary statistics, attention is now turning to methods that use the full statistical information available in the lensing observables, through analysis of the reconstructed shear field. This offers an opportunity to take advantage of powerful deep learning methods for image analysis. We present two early studies that demonstrate that deep learning can be used to characterise features in weak lensing convergence maps, and to identify the underlying cosmological model that produced them. We developed an unsupervised Denoising Convolutional Autoencoder model in order to learn an abstract representation directly from our data. This model uses a convolution-deconvolution architecture, which is fed with input data (corrupted with binomial noise to prevent over-fitting). Our model effectively trains itself to minimize the mean-squared error between the input and the output using gradient descent, resulting in a model which, theoretically, is broad enough to tackle other similarly structured problems. Using this model we were able to successfully reconstruct simulated convergence maps and identify the structures in them. We also determined which structures had the highest "importance", i.e. which structures were most typical of the data. We note that the structures that had the highest importance in our reconstruction were around high mass concentrations, but were highly non-Gaussian. We also developed a supervised Convolutional Neural Network (CNN) for classification of weak lensing convergence maps from two different simulated theoretical models. The CNN uses a softmax classifier which minimizes a binary cross-entropy loss between the estimated distribution and true distribution.
In other words, given an unseen convergence map the trained CNN determines probabilistically which theoretical model fits the data best. This preliminary work demonstrates that we can classify the cosmological model that produced the convergence maps with 80% accuracy.
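The CNN's classifier head described above, a softmax whose cross-entropy against the true model label is minimized, can be written out directly:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, onehot):
    """Mean cross-entropy between the estimated class distribution and the
    true (one-hot) distribution -- the loss the CNN minimizes."""
    return float(-np.sum(onehot * np.log(probs), axis=-1).mean())
```

For the two-model case in the paper, `softmax` over two logits is what turns the network output into the probability that a given convergence map came from one cosmology or the other.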
NASA Astrophysics Data System (ADS)
Wang, Dongyi; Vinson, Robert; Holmes, Maxwell; Seibel, Gary; Tao, Yang
2018-04-01
The Atlantic blue crab is among the highest-valued seafood found in the American Eastern Seaboard. Currently, the crab processing industry is highly dependent on manual labor. However, there is great potential for vision-guided intelligent machines to automate the meat picking process. Studies show that the back-fin knuckles are robust features containing information about a crab's size, orientation, and the position of the crab's meat compartments. Our studies also make it clear that detecting the knuckles reliably in images is challenging due to the knuckle's small size, anomalous shape, and similarity to joints in the legs and claws. An accurate and reliable computer vision algorithm was proposed to detect the crab's back-fin knuckles in digital images. Convolutional neural networks (CNNs) can localize rough knuckle positions with 97.67% accuracy, transforming a global detection problem into a local detection problem. Compared to the rough localization based on human experience or other machine learning classification methods, the CNN shows the best localization results. In the rough knuckle position, a k-means clustering method is able to further extract the exact knuckle positions based on the back-fin knuckle color features. The exact knuckle position can help us to generate a crab cutline in XY plane using a template matching method. This is a pioneering research project in crab image analysis and offers advanced machine intelligence for automated crab processing.
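The k-means refinement step, clustering pixels by colour inside the CNN's rough localization window to pin down the exact knuckle positions, can be sketched with a plain k-means loop (deterministic initialization added for reproducibility; not the authors' code):

```python
import numpy as np

def kmeans(points, k=2, iters=20):
    """Plain k-means over feature vectors (e.g. pixel colours). Centers
    are seeded deterministically from evenly spaced points."""
    points = np.asarray(points, dtype=float)
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center, then re-estimate centers.
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

Within the CNN-localized window, one cluster collects knuckle-coloured pixels and the other the background, so the knuckle centroid falls out of the cluster centers.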
NASA Astrophysics Data System (ADS)
Travis, B. J.; Sauer, J.; Dubey, M. K.
2017-12-01
Methane (CH4) leaks from oil and gas production fields are a potentially significant source of atmospheric methane. US DOE's ARPA-E office is supporting research to locate methane emissions on 10 m well pads to within 1 m. A team led by Aeris Technologies, including LANL, the Planetary Science Institute, and Rice University, has developed an autonomous leak detection system (LDS) employing a compact laser-absorption methane sensor, a sonic anemometer, and multiport sampling. The LDS analyzes monitoring data using a convolutional neural network (cNN) to locate and quantify CH4 emissions. The cNN was trained using three sources: (1) ultra-high-resolution simulations of methane transport provided by LANL's coupled atmospheric transport model HIGRAD, for numerous controlled methane release scenarios and methane sampling configurations under variable atmospheric conditions; (2) field tests at the METEC site in Ft. Collins, CO; and (3) field data from other sites where point-source surface methane releases were monitored downwind. A cNN learning algorithm is well suited to problems in which the training and observed data are noisy or correspond to complex sensor data, as is typical of meteorological and sensor data over a well pad. Recent studies with our cNN emphasize the importance of tracking wind speeds and directions at fine resolution (~1 second) and accounting for variations in background CH4 levels. A few cases illustrate the importance of sufficiently long monitoring; short monitoring may not provide enough information to determine a leak's location or strength accurately, mainly because of short-term unfavorable wind directions and the choice of sampling configuration. The length of the multiport duty-cycle sampling and sample-line flush time, as well as the number and placement of monitoring sensors, can significantly affect the ability to locate and quantify leaks. Keeping source-location error below 10% requires about 30 or more training cases.
Automated image quality evaluation of T2-weighted liver MRI utilizing deep learning architecture.
Esses, Steven J; Lu, Xiaoguang; Zhao, Tiejun; Shanbhogue, Krishna; Dane, Bari; Bruno, Mary; Chandarana, Hersh
2018-03-01
To develop and test a deep learning approach named Convolutional Neural Network (CNN) for automated screening of T2-weighted (T2WI) liver acquisitions for nondiagnostic images, and to compare this automated approach with evaluation by two radiologists. We evaluated 522 liver magnetic resonance imaging (MRI) exams performed at 1.5T and 3T at our institution between November 2014 and May 2016 for CNN training and validation. The CNN consisted of an input layer, a convolutional layer, a fully connected layer, and an output layer. 351 T2WI were anonymized for training. Each case was annotated as diagnostic or nondiagnostic for detecting lesions and assessing liver morphology. Another independently collected 171 cases were sequestered for a blind test. These 171 T2WI were assessed independently by two radiologists and annotated as diagnostic or nondiagnostic, then presented to the CNN algorithm, and the image quality (IQ) output of the algorithm was compared to that of the two radiologists. There was concordance in IQ label between Reader 1 and the CNN in 79% of cases and between Reader 2 and the CNN in 73%. The sensitivity and specificity of the CNN algorithm in identifying nondiagnostic IQ were 67% and 81% with respect to Reader 1, and 47% and 80% with respect to Reader 2. The negative predictive value of the algorithm for identifying nondiagnostic IQ was 94% and 86% (relative to Readers 1 and 2). We demonstrate a CNN algorithm that yields a high negative predictive value when screening for nondiagnostic T2WI of the liver. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:723-728. © 2017 International Society for Magnetic Resonance in Medicine.
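The screening metrics reported above (sensitivity, specificity, and NPV against a reader's labels) follow directly from confusion-matrix counts; a small helper makes the definitions explicit:

```python
import numpy as np

def screening_metrics(pred_nondiag, ref_nondiag):
    """Sensitivity, specificity, and NPV for flagging nondiagnostic images,
    computed against one reader's boolean labels."""
    tp = np.sum(pred_nondiag & ref_nondiag)
    tn = np.sum(~pred_nondiag & ~ref_nondiag)
    fp = np.sum(pred_nondiag & ~ref_nondiag)
    fn = np.sum(~pred_nondiag & ref_nondiag)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "npv": tn / (tn + fn)}
```

NPV is the natural headline metric for a screening tool like this one: a high NPV means images the CNN passes as diagnostic rarely turn out to be nondiagnostic.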
Exploiting the potential of unlabeled endoscopic video data with self-supervised learning.
Ross, Tobias; Zimmerer, David; Vemuri, Anant; Isensee, Fabian; Wiesenfarth, Manuel; Bodenstedt, Sebastian; Both, Fabian; Kessler, Philip; Wagner, Martin; Müller, Beat; Kenngott, Hannes; Speidel, Stefanie; Kopp-Schneider, Annette; Maier-Hein, Klaus; Maier-Hein, Lena
2018-06-01
Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue. Our approach is guided by the hypothesis that unlabeled video data can be used to learn a representation of the target domain that boosts the performance of state-of-the-art machine learning algorithms when used for pre-training. The core of the method is an auxiliary task, based on raw endoscopic video data of the target domain, that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a conditional generative adversarial network (cGAN)-based architecture as the auxiliary task. A variant of the method involves a second pre-training step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task. The proposed approach can radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method reduced the number of labeled images required by up to 75% without sacrificing performance. Our method also outperforms alternative methods for CNN pre-training, such as pre-training on publicly available non-medical (COCO) or medical data (MICCAI EndoVis2017 challenge) using the target task (in this instance: segmentation). As it makes efficient use of available (non-)public and (un-)labeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
Choi, Jin Woo; Ku, Yunseo; Yoo, Byeong Wook; Kim, Jung-Ah; Lee, Dong Soon; Chai, Young Jun; Kong, Hyoun-Joong; Kim, Hee Chan
2017-01-01
The white blood cell differential count of the bone marrow provides information concerning the distribution of immature and mature cells within maturation stages. The results of such examinations are important for the diagnosis of various diseases and for follow-up care after chemotherapy. However, manual, labor-intensive methods to determine the differential count lead to inter- and intra-variations among the results obtained by hematologists. Therefore, an automated system to conduct the white blood cell differential count is highly desirable, but several difficulties hinder progress. There are variations in the white blood cells of each maturation stage, small inter-class differences within each stage, and variations in images because of the different acquisition and staining processes. Moreover, a large number of classes need to be classified for bone marrow smear analysis, and the high density of touching cells in bone marrow smears renders difficult the segmentation of single cells, which is crucial to traditional image processing and machine learning. Few studies have attempted to discriminate bone marrow cells, and even these have either discriminated only a few classes or yielded insufficient performance. In this study, we propose an automated white blood cell differential counting system from bone marrow smear images using a dual-stage convolutional neural network (CNN). A total of 2,174 patch images were collected for training and testing. The dual-stage CNN classified images into 10 classes of the myeloid and erythroid maturation series, and achieved an accuracy of 97.06%, a precision of 97.13%, a recall of 97.06%, and an F-1 score of 97.1%. The proposed method not only showed high classification performance, but also successfully classified raw images without single cell segmentation and manual feature extraction by implementing CNN. Moreover, it demonstrated rotation and location invariance. 
These results highlight the promise of the proposed method as an automated white blood cell differential count system.
Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN
NASA Astrophysics Data System (ADS)
Hu, Y.; Ma, Y.; An, J.
2018-04-01
Red tide outbreaks have been reported with increasing frequency around the world. They are of great concern not only because of their adverse effects on human health and marine organisms, but also because of their impact on the economies of affected areas. This paper puts forward a high-accuracy detection method based on a fully-connected, 8-layer deep CNN model to monitor red tide in hyperspectral remote sensing images, and then discusses a glint-suppression method for improving detection accuracy. The results show that the proposed CNN hyperspectral detection model detects red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is higher by 7.52% and 2.25%. Compared with the SVM method on the original image, the red tide CNN detection accuracy based on the filtered image is higher by 8.62% and 6.37%. The results also indicate that image glint seriously affects red tide detection accuracy.
Vivanti, Refael; Joskowicz, Leo; Lev-Cohain, Naama; Ephrat, Ariel; Sosna, Jacob
2018-03-10
Radiological longitudinal follow-up of tumors in CT scans is essential for disease assessment and liver tumor therapy. Currently, most tumor size measurements follow the RECIST guidelines, which can be off by as much as 50%. True volumetric measurements are more accurate but require manual delineation, which is time-consuming and user-dependent. We present a convolutional neural networks (CNN) based method for robust automatic liver tumor delineation in longitudinal CT studies that uses both global and patient specific CNNs trained on a small database of delineated images. The inputs are the baseline scan and the tumor delineation, a follow-up scan, and a liver tumor global CNN voxel classifier built from radiologist-validated liver tumor delineations. The outputs are the tumor delineations in the follow-up CT scan. The baseline scan tumor delineation serves as a high-quality prior for the tumor characterization in the follow-up scans. It is used to evaluate the global CNN performance on the new case and to reliably predict failures of the global CNN on the follow-up scan. High-scoring cases are segmented with a global CNN; low-scoring cases, which are predicted to be failures of the global CNN, are segmented with a patient-specific CNN built from the baseline scan. Our experimental results on 222 tumors from 31 patients yield an average overlap error of 17% (std = 11.2) and surface distance of 2.1 mm (std = 1.8), far better than stand-alone segmentation. Importantly, the robustness of our method improved from 67% for stand-alone global CNN segmentation to 100%. Unlike other medical imaging deep learning approaches, which require large annotated training datasets, our method exploits the follow-up framework to yield accurate tumor tracking and failure detection and correction with a small training dataset. Graphical abstract Flow diagram of the proposed method. 
In the offline mode (orange), a global CNN is trained as a voxel classifier to segment liver tumors as in [31]. The online mode (blue) is used for each new case. The inputs are the baseline scan with its delineation and the follow-up CT scan to be segmented. The main novelty is the ability to predict failures by trying the system on the baseline scan, and to correct them using the patient-specific CNN.
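The failure-prediction logic in the flow diagram, dry-running the global CNN on the baseline where ground truth is known and falling back to a patient-specific CNN, reduces to a short decision function; all callables and the threshold here are illustrative stand-ins:

```python
def segment_followup(baseline, delineation, followup, global_cnn,
                     train_patient_cnn, score, threshold=0.8):
    """Segment a follow-up scan, predicting and correcting failures of the
    global model by testing it on the baseline first.

    score(pred, ref) measures agreement between a segmentation and the
    radiologist-validated baseline delineation (e.g. overlap).
    """
    # Dry-run the global CNN on the baseline, where ground truth is known.
    if score(global_cnn(baseline), delineation) >= threshold:
        return global_cnn(followup)        # predicted to succeed
    # Predicted failure: build a patient-specific model from the baseline.
    patient_cnn = train_patient_cnn(baseline, delineation)
    return patient_cnn(followup)
```

The baseline delineation thus serves double duty: it scores the global model's reliability on this patient, and it supplies the training data for the fallback model when that score is too low.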
Deep Learning to Classify Radiology Free-Text Reports.
Chen, Matthew C; Ball, Robyn L; Yang, Lingyao; Moradzadeh, Nathaniel; Chapman, Brian E; Larson, David B; Langlotz, Curtis P; Amrhein, Timothy J; Lungren, Matthew P
2018-03-01
Purpose To evaluate the performance of a deep learning convolutional neural network (CNN) model compared with a traditional natural language processing (NLP) model in extracting pulmonary embolism (PE) findings from thoracic computed tomography (CT) reports from two institutions. Materials and Methods Contrast material-enhanced CT examinations of the chest performed between January 1, 1998, and January 1, 2016, were selected. Annotations by two human radiologists were made for three categories: the presence, chronicity, and location of PE. Classification of performance of a CNN model with an unsupervised learning algorithm for obtaining vector representations of words was compared with the open-source application PeFinder. Sensitivity, specificity, accuracy, and F1 scores for both the CNN model and PeFinder in the internal and external validation sets were determined. Results The CNN model demonstrated an accuracy of 99% and an area under the curve value of 0.97. For internal validation report data, the CNN model had a statistically significant larger F1 score (0.938) than did PeFinder (0.867) when classifying findings as either PE positive or PE negative, but no significant difference in sensitivity, specificity, or accuracy was found. For external validation report data, no statistical difference between the performance of the CNN model and PeFinder was found. Conclusion A deep learning CNN model can classify radiology free-text reports with accuracy equivalent to or beyond that of an existing traditional NLP model. © RSNA, 2017 Online supplemental material is available for this article.
ERIC Educational Resources Information Center
Journell, Wayne
2014-01-01
This article describes a research study on the appropriateness for social studies classrooms of "CNN Student News," a free online news program specifically aimed at middle and high school students. The author conducted a content analysis of "CNN Student News" during October 2012 and evaluated the program's content for…
Nahid, Abdullah-Al; Mehrabi, Mohamad Ali; Kong, Yinan
2018-01-01
Breast cancer is a serious threat and one of the leading causes of death among women throughout the world. The identification of cancer largely depends on digital biomedical photography analysis, such as analysis of histopathological images by doctors and physicians. Analyzing histopathological images is a nontrivial task, and decisions from investigation of these kinds of images always require specialised knowledge. However, Computer Aided Diagnosis (CAD) techniques can help the doctor make more reliable decisions. State-of-the-art Deep Neural Networks (DNNs) have recently been introduced for biomedical image analysis. Normally each image contains structural and statistical information. This paper classifies a set of biomedical breast cancer images (BreakHis dataset) using novel DNN techniques guided by structural and statistical information derived from the images. Specifically, a Convolutional Neural Network (CNN), a Long-Short-Term-Memory (LSTM), and a combination of CNN and LSTM are proposed for breast cancer image classification. Softmax and Support Vector Machine (SVM) layers have been used for the decision-making stage after extracting features utilising the proposed novel DNN models. In this experiment the best Accuracy value of 91.00% is achieved on the 200x dataset, the best Precision value of 96.00% is achieved on the 40x dataset, and the best F-Measure value is achieved on both the 40x and 100x datasets.
NASA Astrophysics Data System (ADS)
Mahesh, A.; Mudigonda, M.; Kim, S. K.; Kashinath, K.; Kahou, S.; Michalski, V.; Williams, D. N.; Liu, Y.; Prabhat, M.; Loring, B.; O'Brien, T. A.; Collins, W. D.
2017-12-01
Atmospheric rivers (ARs) can make the difference between California facing drought or hurricane-level storms. ARs are a form of extreme weather defined as long, narrow columns of moisture which transport water vapor outside the tropics. When they make landfall, they release the vapor as rain or snow. Convolutional neural networks (CNNs), a machine learning technique that uses filters to recognize features, are the leading computer vision mechanism for classifying multichannel images. CNNs have been proven to be effective in identifying extreme weather events in climate simulation output (Liu et al. 2016, ABDA'16, http://bit.ly/2hlrFNV). Here, we compare four different CNN architectures, tuned with different hyperparameters and training schemes. We compare two-layer, three-layer, four-layer, and sixteen-layer CNNs' ability to recognize ARs in Community Atmospheric Model version 5 output, and we explore the ability of data augmentation and pre-trained models to increase the accuracy of the classifier. Because pre-training the model with regular images (e.g., benches, stoves, and dogs) yielded the highest accuracy rate, this strategy, also known as transfer learning, may be vital in future scientific CNNs, which likely will not have access to a large labelled training dataset. By choosing the most effective CNN architecture, climate scientists can build an accurate historical database of ARs, which can be used to develop a predictive understanding of these phenomena.
Deep convolutional neural network for prostate MR segmentation
NASA Astrophysics Data System (ADS)
Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei
2017-03-01
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
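The Dice similarity coefficient used to score the segmentation above is simple to compute from two binary masks: twice the overlap divided by the total foreground. A minimal pure-Python sketch; the masks here are illustrative, not from the study:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) over flattened binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

auto   = [1, 1, 1, 0, 0, 1, 0, 0]   # e.g. a CNN prediction, flattened
manual = [1, 1, 0, 0, 0, 1, 1, 0]   # e.g. a manual ground truth
print(dice_coefficient(auto, manual))  # → 0.75
```

A Dice of 1.0 means perfect overlap; the 85.3% reported above therefore indicates substantial but imperfect agreement with the manual segmentation.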
Small-size pedestrian detection in large scene based on fast R-CNN
NASA Astrophysics Data System (ADS)
Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu
2018-04-01
Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have had limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of the feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN: we employ the DPM detector to generate accurate proposals, and train a Fast R-CNN style network that jointly optimizes small-size pedestrian detection, with skip connections concatenating features from different layers to counter the coarseness of the feature maps. With this approach, the accuracy of small-size pedestrian detection in real large scenes is improved.
Interphase centrosome organization by the PLP-Cnn scaffold is required for centrosome function
Lerit, Dorothy A.; Jordan, Holly A.; Poulton, John S.; Fagerstrom, Carey J.; Galletta, Brian J.; Peifer, Mark
2015-01-01
Pericentriolar material (PCM) mediates the microtubule (MT) nucleation and anchoring activity of centrosomes. A scaffold organized by Centrosomin (Cnn) serves to ensure proper PCM architecture and functional changes in centrosome activity with each cell cycle. Here, we investigate the mechanisms that spatially restrict and temporally coordinate centrosome scaffold formation. Focusing on the mitotic-to-interphase transition in Drosophila melanogaster embryos, we show that the elaboration of the interphase Cnn scaffold defines a major structural rearrangement of the centrosome. We identify an unprecedented role for Pericentrin-like protein (PLP), which localizes to the tips of extended Cnn flares, to maintain robust interphase centrosome activity and promote the formation of interphase MT asters required for normal nuclear spacing, centrosome segregation, and compartmentalization of the syncytial embryo. Our data reveal that Cnn and PLP directly interact at two defined sites to coordinate the cell cycle–dependent rearrangement and scaffolding activity of the centrosome to permit normal centrosome organization, cell division, and embryonic viability. PMID:26150390
Interphase centrosome organization by the PLP-Cnn scaffold is required for centrosome function.
Lerit, Dorothy A; Jordan, Holly A; Poulton, John S; Fagerstrom, Carey J; Galletta, Brian J; Peifer, Mark; Rusan, Nasser M
2015-07-06
Pericentriolar material (PCM) mediates the microtubule (MT) nucleation and anchoring activity of centrosomes. A scaffold organized by Centrosomin (Cnn) serves to ensure proper PCM architecture and functional changes in centrosome activity with each cell cycle. Here, we investigate the mechanisms that spatially restrict and temporally coordinate centrosome scaffold formation. Focusing on the mitotic-to-interphase transition in Drosophila melanogaster embryos, we show that the elaboration of the interphase Cnn scaffold defines a major structural rearrangement of the centrosome. We identify an unprecedented role for Pericentrin-like protein (PLP), which localizes to the tips of extended Cnn flares, to maintain robust interphase centrosome activity and promote the formation of interphase MT asters required for normal nuclear spacing, centrosome segregation, and compartmentalization of the syncytial embryo. Our data reveal that Cnn and PLP directly interact at two defined sites to coordinate the cell cycle-dependent rearrangement and scaffolding activity of the centrosome to permit normal centrosome organization, cell division, and embryonic viability.
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. These results are promising for developing a method that displays artifact-free CNN-based images, in addition to displaying only the network-identified sources as previously proposed.
Bio-inspired nano-sensor-enhanced CNN visual computer.
Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I
2004-05-01
Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.
Classification of Land Cover and Land Use Based on Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Yang, Chun; Rottensteiner, Franz; Heipke, Christian
2018-04-01
Land cover describes the physical material of the earth's surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. Firstly, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder structure for obtaining dense class predictions. Secondly, we propose a new CNN-based methodology for the prediction of the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that an overall accuracy of up to 85.7 % and 77.4 % can be achieved for land cover and land use, respectively. The land cover classification makes a positive contribution to the land use classification.
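The patch-generation strategy above (identical-size image patches around each database object) can be sketched as a crop with clamping at the image border, so that every patch keeps the fixed size required by the CNN. A hedged pure-Python sketch; the object coordinates and patch size are hypothetical:

```python
def extract_patch(image, row, col, size):
    """Crop a size x size patch centred on (row, col), clamped to the image
    border so the patch always has the full fixed size."""
    h, w = len(image), len(image[0])
    r0 = min(max(row - size // 2, 0), h - size)
    c0 = min(max(col - size // 2, 0), w - size)
    return [r[c0:c0 + size] for r in image[r0:r0 + size]]

# Toy 6x6 "aerial image" with value 10*row + col in each pixel.
image = [[10 * r + c for c in range(6)] for r in range(6)]
patch = extract_patch(image, row=0, col=5, size=3)  # object near a corner
print(len(patch), len(patch[0]))  # → 3 3
```

Clamping (rather than zero-padding) keeps every patch filled with real image content, at the cost of the object not being exactly centred near the border.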
NASA Astrophysics Data System (ADS)
Gollas, F.; Tetzlaff, R.
2007-06-01
Partial differential equations of the reaction-diffusion type describe phenomena such as pattern formation, nonlinear wave propagation, and deterministic chaos, and are often used to study complex processes in biology, chemistry, and physics. Cellular Nonlinear Networks (CNN) are a spatial arrangement of comparatively simple dynamical systems with local coupling between them. By discretizing the spatial variables, reaction-diffusion equations can often be mapped onto CNN with nonlinear weight functions. The resulting reaction-diffusion CNN (RD-CNN) then exhibit dynamics that approximate the behavior of the underlying reaction-diffusion systems. When RD-CNN are used to identify neuronal structures from EEG signals, it is possible to determine whether the identified network exhibits local activity. The theory of local activity introduced by Chua (Chua, 1998; Dogaru and Chua, 1998) provides a necessary condition for the emergence of emergent behavior in cellular networks. Changes in the parameters of certain RD-CNN could indicate impending epileptic seizures. This contribution focuses on the identification of neuronal structures from EEG signals by means of reaction-diffusion networks. The discussion of results addresses, in particular, the question of a suitable network structure with minimal complexity.
NASA Astrophysics Data System (ADS)
Ravnik, Domen; Jerman, Tim; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2018-03-01
Performance of a convolutional neural network (CNN) based white-matter lesion segmentation in magnetic resonance (MR) brain images was evaluated under various conditions involving different levels of image preprocessing and augmentation applied and different compositions of the training dataset. On images of sixty multiple sclerosis patients, half acquired on one and half on another scanner of different vendor, we first created a highly accurate multi-rater consensus based lesion segmentations, which were used in several experiments to evaluate the CNN segmentation result. First, the CNN was trained and tested without preprocessing the images and by using various combinations of preprocessing techniques, namely histogram-based intensity standardization, normalization by whitening, and train dataset augmentation by flipping the images across the midsagittal plane. Then, the CNN was trained and tested on images of the same, different or interleaved scanner datasets using a cross-validation approach. The results indicate that image preprocessing has little impact on performance in a same-scanner situation, while between-scanner performance benefits most from intensity standardization and normalization, but also further by incorporating heterogeneous multi-scanner datasets in the training phase. Under such conditions the between-scanner performance of the CNN approaches that of the ideal situation, when the CNN is trained and tested on the same scanner dataset.
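Two of the preprocessing steps evaluated above — normalization by whitening (zero mean, unit variance intensities) and train-set augmentation by flipping across the midsagittal plane — have straightforward implementations. A pure-Python sketch on a toy 2-D "image" (real MR volumes are 3-D; this is a simplified illustration):

```python
from statistics import mean, pstdev

def whiten(image):
    """Whitening normalization: shift and scale intensities to zero mean
    and unit variance across the whole image."""
    values = [v for row in image for v in row]
    mu, sigma = mean(values), pstdev(values)
    return [[(v - mu) / sigma for v in row] for row in image]

def flip_midsagittal(image):
    """Left-right flip of each row, i.e. mirroring across the midsagittal
    plane, a cheap way to double the training set."""
    return [row[::-1] for row in image]

img = [[1.0, 2.0], [3.0, 4.0]]
norm = whiten(img)             # intensities now have mean 0, std 1
aug = flip_midsagittal(img)    # an extra, mirrored training sample
```

Whitening addresses scanner-to-scanner intensity differences (the between-scanner case above), while the flip only adds variability within a scanner's distribution, which is consistent with the study's finding that normalization matters most across scanners.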
Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias
2017-10-01
Our visual environment is full of texture-"stuff" like cloth, bark, or gravel as distinct from "things" like dresses, trees, or paths-and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
NASA Astrophysics Data System (ADS)
Aydogan, D.
2007-04-01
An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template. CNN can also be considered to be a nonlinear convolution of these matrices. This template describes the strength of the nearest neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used in optimization of cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation method (CC) to the same anomaly map as this latter approach can detect some features that are difficult to identify on the Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area, in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that the geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. 
This approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
Hoo-Chang, Shin; Roth, Holger R.; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel
2016-01-01
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet) and the revival of deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models (supervised) pre-trained from natural image dataset to medical image tasks (although domain transfer between two medical image datasets is also possible). In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. 
Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks. PMID:26886976
CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.
White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B
2017-12-28
The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the newly identified number of BL enzymes is increasing daily, it is imperative to develop a computational tool to classify the newly identified BL enzymes into one of its classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification and the performance of these existing methods is unsatisfactory. We addressed the unsatisfactory performance of the existing methods by implementing a Deep Learning approach called Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on the rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other architectures of CNN, Recurrent Neural Network, and Random Forest, the simple CNN architecture with only one convolutional layer performs the best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as Convolutional Neural Network (CNN) to develop a classifier for BL classification. 
Combined with feature selection on an exhaustive feature set and using balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.
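Of the balancing methods named above, Random Oversampling (ROS) is the simplest: minority-class samples are duplicated at random until every class matches the majority-class count. A hedged sketch in pure Python (the toy labels and seed are illustrative, not the paper's data or implementation):

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Random Oversampling (ROS): duplicate minority-class samples at
    random until every class reaches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_samples, out_labels = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for _ in range(target - n):        # add copies until balanced
            out_samples.append(rng.choice(pool))
            out_labels.append(cls)
    return out_samples, out_labels

X = ["seq1", "seq2", "seq3", "seq4", "seq5"]
y = ["A", "A", "A", "B", "B"]              # class B is the minority
Xb, yb = random_oversample(X, y)
print(Counter(yb))                         # both classes now have 3 samples
```

RUS works the same way in reverse (discarding majority samples), while SMOTE synthesizes new minority points by interpolation rather than duplicating existing ones.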
Baratta, Walter; Ballico, Maurizio; Esposito, Gennaro; Rigo, Pierluigi
2008-01-01
The reaction of [RuCl(CNN)(dppb)] (1; HCNN=6-(4-methylphenyl)-2-pyridylmethylamine) with NaOiPr in 2-propanol/C6D6 affords the alcohol adduct alkoxide [Ru(OiPr)(CNN)(dppb)].n iPrOH (5), containing the Ru-NH2 linkage. The alkoxide [Ru(OiPr)(CNN)(dppb)] (4) is formed by treatment of the hydride [Ru(H)(CNN)(dppb)] (2) with acetone in C6D6. Complex 5 in 2-propanol/C6D6 equilibrates quickly with hydride 2 and acetone with an exchange rate of (5.4+/-0.2) s(-1) at 25 degrees C, higher than that found between 4 and 2 ((2.9+/-0.4) s(-1)). This fast process, involving a beta-hydrogen elimination versus ketone insertion into the Ru-H bond, occurs within a hydrogen-bonding network favored by the Ru-NH2 motif. The cationic alcohol complex [Ru(CNN)(dppb)(iPrOH)](BAr(f)4) (6; Ar(f)=3,5-C6H3(CF3)2), obtained from 1, Na[BAr(f)4], and 2-propanol, reacts with NaOiPr to afford 5. Complex 5 reacts with either 4,4'-difluorobenzophenone through hydride 2 or with 4,4'-difluorobenzhydrol through protonation, affording the alkoxide [Ru(OCH(4-C6H4F)2)(CNN)(dppb)] (7) in 90% and 85% isolated yield, respectively. The chiral CNN-ruthenium compound [RuCl(CNN)((S,S)-Skewphos)] (8), obtained by the reaction of [RuCl2(PPh3)3] with (S,S)-Skewphos and orthometalation of HCNN in the presence of NEt3, is a highly active catalyst for the enantioselective transfer hydrogenation of methylaryl ketones (turnover frequencies (TOFs) of up to 1.4 x 10(6) h(-1) at reflux were obtained) with up to 89% ee. The ketone CF3CO(4-C6H4F), containing the strongly electron-withdrawing CF3 group, is also reduced to the R alcohol with 64% ee and a TOF of 1.5 x 10(4) h(-1). The chiral alkoxide [Ru(OiPr)(CNN)((S,S)-Skewphos)].n iPrOH (9), obtained from 8 and NaOiPr in the presence of 2-propanol, reacts with CF3CO(4-C6H4F) to afford a mixture of the diastereomer alkoxides [Ru(OCH(CF3)(4-C6H4F))(CNN)((S,S)-Skewphos)] (10/11; 74% yield) with 67% de. 
This value is very close to the enantiomeric excess of the alcohol (R)-CF3CH(OH)(4-C6H4F) formed in catalysis, thus suggesting that diastereoisomeric alkoxides with the Ru-NH2 linkage are key species in the catalytic asymmetric transfer hydrogenation reaction.
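The enantiomeric (ee) and diastereomeric (de) excesses quoted above follow the same formula: the fraction of the major isomer minus that of the minor one. A quick sketch of the arithmetic (the isomer ratios below are back-calculated illustrations, not measured compositions from the paper):

```python
def excess(major, minor):
    """Enantiomeric/diastereomeric excess in percent:
    100 * (major - minor) / (major + minor)."""
    return 100.0 * (major - minor) / (major + minor)

# 89% ee corresponds to a 94.5 : 5.5 ratio of enantiomers
print(round(excess(94.5, 5.5), 1))   # → 89.0
# 67% de corresponds to an 83.5 : 16.5 ratio of diastereomers
print(round(excess(83.5, 16.5), 1))  # → 67.0
```

Because ee and de share this definition, the closeness of the 67% de of the alkoxide mixture to the 64% ee of the catalytic product is a direct, like-for-like comparison of isomer ratios.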
Two Novel Glycoside Hydrolases Responsible for the Catabolism of Cyclobis-(1→6)-α-nigerosyl*
Tagami, Takayoshi; Miyano, Eri; Sadahiro, Juri; Okuyama, Masayuki; Iwasaki, Tomohito; Kimura, Atsuo
2016-01-01
The actinobacterium Kribbella flavida NBRC 14399T produces cyclobis-(1→6)-α-nigerosyl (CNN), a cyclic glucotetraose with alternate α-(1→6)- and α-(1→3)-glucosidic linkages, from starch in the culture medium. We identified gene clusters associated with the production and intracellular catabolism of CNN in the K. flavida genome. One cluster encodes 6-α-glucosyltransferase and 3-α-isomaltosyltransferase, which are known to coproduce CNN from starch. The other cluster contains four genes annotated as a transcriptional regulator, sugar transporter, glycoside hydrolase family (GH) 31 protein (Kfla1895), and GH15 protein (Kfla1896). Kfla1895 hydrolyzed the α-(1→3)-glucosidic linkages of CNN and produced isomaltose via a possible linear tetrasaccharide. The initial rate of hydrolysis of CNN (11.6 s−1) was much higher than that of panose (0.242 s−1), and hydrolysis of isomaltotriose and nigerose was extremely low. Because Kfla1895 has a strong preference for the α-(1→3)-isomaltosyl moiety and effectively hydrolyzes the α-(1→3)-glucosidic linkage, it should be termed 1,3-α-isomaltosidase. Kfla1896 effectively hydrolyzed isomaltose with liberation of β-glucose, but displayed low or no activity toward CNN and the general GH15 enzyme substrates such as maltose, soluble starch, or dextran. The kcat/Km for isomaltose (4.81 ± 0.18 s−1 mm−1) was 6.9- and 19-fold higher than those for panose and isomaltotriose, respectively. These results indicate that Kfla1896 is a new GH15 enzyme with high substrate specificity for isomaltose, suggesting the enzyme should be designated an isomaltose glucohydrolase. This is the first report to identify a starch-utilization pathway that proceeds via CNN. PMID:27302067
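The fold factors quoted above relate the specificity constants (kcat/Km) of the substrates directly; the implied values for panose and isomaltotriose can be back-calculated from the reported isomaltose value. A small arithmetic sketch (the derived values are inferences from the quoted fold factors, not independently reported numbers):

```python
def fold_difference(a, b):
    """How many times larger specificity constant a (kcat/Km) is than b."""
    return a / b

kcat_km_isomaltose = 4.81                      # s^-1 mM^-1, reported value
# Implied values back-calculated from the quoted 6.9- and 19-fold factors:
kcat_km_panose = kcat_km_isomaltose / 6.9      # ≈ 0.70 s^-1 mM^-1
kcat_km_isomaltotriose = kcat_km_isomaltose / 19  # ≈ 0.25 s^-1 mM^-1
print(round(fold_difference(kcat_km_isomaltose, kcat_km_panose), 1))
```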
Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M
2016-05-01
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
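The two-step feature extraction described above — computing the convolutional layers once over the whole frame and sharing the result across candidate windows — relies on a property of valid convolutions: the feature map of a cropped window is just a crop of the full feature map. A toy 1-D demonstration in pure Python (the signal and filter are made up; real trackers do this in 2-D over multi-channel maps):

```python
def conv_valid(signal, kernel):
    """1-D valid cross-correlation, a stand-in for a convolutional layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [3, 1, 4, 1, 5, 9, 2, 6]
kernel = [1, 0, -1]

full = conv_valid(signal, kernel)        # computed once for the whole frame
window = signal[2:7]                     # one candidate window
per_window = conv_valid(window, kernel)  # slow path: conv per window
shared = full[2:2 + len(per_window)]     # fast path: crop the shared map
print(per_window == shared)  # → True
```

Only the fully-connected layers, which mix all positions, must still be evaluated per window — hence the split into a shared convolutional pass and a per-window fully-connected pass.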
Li, Jin; Zhang, Min; Wang, Danshi; Wu, Shaojun; Zhan, Yueying
2018-04-16
A novel joint atmospheric turbulence (AT) detection and adaptive demodulation technique based on a convolutional neural network (CNN) is proposed for OAM-based free-space optical (FSO) communication. The AT detecting accuracy (ATDA) and the adaptive demodulating accuracy (ADA) of 4-OAM, 8-OAM and 16-OAM FSO communication systems over computer-simulated 1000-m turbulent channels with 4, 6 and 10 kinds of classic ATs are investigated, respectively. Compared to previous approaches using self-organizing maps (SOM), deep neural networks (DNN) and other CNNs, the proposed CNN achieves the highest ATDA and ADA due to advanced multi-layer representation learning without feature extractors carefully designed by numerous experts. For AT detection, the ATDA of the CNN is near 95.2% for 6 kinds of typical ATs, in cases of both weak and strong ATs. For the adaptive demodulation of optical vortices (OV) carrying OAM modes, the ADA of the CNN is about 99.8% for the 8-OAM system over the computer-simulated 1000-m free-space strong turbulent link. In addition, the effects of image resolution, iteration number, activation functions and the structure of the CNN are also studied comprehensively. The proposed technique has the potential to be embedded in charge-coupled device (CCD) cameras deployed at the receiver to improve the reliability and flexibility of OAM-FSO communication.
NASA Astrophysics Data System (ADS)
Gong, Maoguo; Yang, Hailun; Zhang, Puzhao
2017-07-01
Ternary change detection aims to detect changes and group them into positive change and negative change. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. First, a sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. The learned features are then clustered into three classes, which are taken as pseudo labels for training a CNN model as a change feature classifier. Reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder according to certain selection rules. Given the training samples and the corresponding pseudo labels, the CNN model can be trained by back propagation with stochastic gradient descent. During training, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of sparse autoencoders and CNNs to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
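The pseudo-labelling step above can be sketched with a tiny k-means on toy one-dimensional difference features: the three cluster indices play the role of the pseudo labels that would then train the CNN. This is a schematic stand-in, not the authors' code; the feature values and cluster count are illustrative.

```python
# Sketch of the pseudo-labelling step: cluster difference features into
# three classes (negative change / no change / positive change) and use
# the cluster indices as pseudo labels for training a classifier.
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D "difference features" drawn around three change levels.
feats = np.concatenate([rng.normal(-3, 0.3, 50),
                        rng.normal(0, 0.3, 50),
                        rng.normal(3, 0.3, 50)]).reshape(-1, 1)

def kmeans(x, k=3, iters=20):
    # Minimal k-means: assign to nearest center, then recompute centers.
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(x - centers.T), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    return labels, centers

pseudo_labels, centers = kmeans(feats)
```

In the paper's framework these labels would be filtered by selection rules before being used as CNN training targets.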
Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening †
Yoon, Sang Min
2018-01-01
Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose a one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during the prediction phase to further improve the activity recognition accuracy. While there have been numerous studies exploring the benefits of activity signal denoising for HAR, few have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state-of-the-art approaches. PMID:29614767
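The abstract does not spell out the sharpening operation, but one common reading is an unsharp-mask style transform: subtract a smoothed copy of the signal and add the residual back, amplifying high-frequency content before classification. A hedged sketch on a 1-D signal follows; the kernel size and alpha are assumed values, not necessarily the paper's exact method.

```python
# Unsharp-mask style sharpening of a 1-D activity signal (assumed form).
import numpy as np

def moving_average(x, k=5):
    # Edge-padded moving average so the output keeps the input length.
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    kernel = np.ones(k) / k
    return np.convolve(padded, kernel, mode="valid")

def sharpen(x, alpha=1.0):
    # x + alpha * (x - blur(x)): boosts the high-frequency residual.
    blurred = moving_average(x)
    return x + alpha * (x - blurred)

t = np.linspace(0, 1, 100)
signal = np.sin(2 * np.pi * 3 * t)
sharp = sharpen(signal, alpha=2.0)
```

Because the moving average attenuates the sinusoid, the sharpened signal has a larger amplitude than the input, which is the intended effect before feeding test windows to the classifier.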
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostko, Oleg; Zhou, Jia; Sun, Bian Jian
2010-06-10
Results from single photon vacuum ultraviolet photoionization of astrophysically relevant CnN clusters, n = 4 - 12, in the photon energy range of 8.0 eV to 12.8 eV are presented. The experimental photoionization efficiency curves, combined with electronic structure calculations, provide improved ionization energies of the CnN species. A search through numerous nitrogen-terminated CnN isomers for n = 4 - 9 indicates that the linear isomer has the lowest energy, and therefore should be the most abundant isomer in the molecular beam. Comparison with calculated results also sheds light on the energetics of the linear CnN clusters, particularly in the trends of the even-carbon and the odd-carbon series. These results can help guide the search for potential astronomical observations of these neutral molecules together with their cations in highly ionized regions or regions with a high UV/VUV photon flux (ranging from the visible to VUV with flux maxima in the Lyman-α region) in the interstellar medium.
Improving CNN Performance Accuracies With Min-Max Objective.
Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning
2017-06-09
We propose a novel method for improving the performance accuracy of a convolutional neural network (CNN) without increasing the network complexity. We accomplish this by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with an insignificant increase in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, with both image classification and face verification tasks, reveal that employing the proposed Min-Max objective in the training process can remarkably improve the performance accuracy of a CNN model in comparison with the same model trained without this objective.
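A toy restatement of the Min-Max idea, computed directly on a batch of feature vectors: penalize within-class spread and reward between-class separation. The paper applies this as a training objective inside the network; the sketch below only illustrates the quantity being traded off, using class centers as manifold proxies, and is not the paper's exact formulation.

```python
# Toy Min-Max style objective: within-class scatter minus between-class
# center separation. Smaller values are better (tight, well-separated
# classes). This is an illustrative proxy, not the paper's formulation.
import numpy as np

def min_max_objective(features, labels):
    classes = np.unique(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in classes}
    within = sum(np.sum((features[labels == c] - centers[c]) ** 2)
                 for c in classes)
    between = sum(np.sum((centers[a] - centers[b]) ** 2)
                  for i, a in enumerate(classes)
                  for b in classes[i + 1:])
    return within - between

X = np.array([[0.0, 0.1], [0.1, 0.0],      # class 0, tight cluster
              [5.0, 5.1], [5.1, 5.0]])     # class 1, tight cluster
y = np.array([0, 0, 1, 1])
tight = min_max_objective(X, y)

# Spreading the same classes out makes the objective worse (larger).
X_loose = X + np.array([[0, 0], [2, 2], [0, 0], [-2, -2]])
loose = min_max_objective(X_loose, y)
```

Driving such a term down during training encourages exactly the feature-map geometry the abstract describes.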
Continuous Chinese sign language recognition with CNN-LSTM
NASA Astrophysics Data System (ADS)
Yang, Su; Zhu, Qing
2017-07-01
The goal of sign language recognition (SLR) is to translate sign language into text, providing a convenient communication tool between deaf and hearing people. In this paper, we formulate an appropriate model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network, in order to accomplish continuous recognition. With the strong representational ability of the CNN, the information in frames captured from Chinese sign language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is employed to connect with the fully-connected layer of the CNN. As a recurrent neural network (RNN), it is suitable for sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset including 40 daily vocabularies. The experimental results show that the recognition method with CNN-LSTM can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
PSNet: prostate segmentation on MRI based on a convolutional neural network.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei
2018-04-01
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
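The Dice similarity coefficient used to score PSNet against the manual ground truth has a standard definition for binary masks, 2|A∩B| / (|A| + |B|), sketched here independently of the paper:

```python
# Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred, truth):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 predicted pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 truth pixels, 4 overlap
score = dice(a, b)   # 2*4 / (4+6) = 0.8
```

A Dice score of 1.0 means perfect overlap with the manually labeled ground truth; the elided value in the abstract is the mean of such per-case scores.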
Xiao Jia; Meng, Max Q-H
2017-07-01
Gastrointestinal (GI) bleeding detection plays an essential role in wireless capsule endoscopy (WCE) examination. In this paper, we present a new approach for WCE bleeding detection that combines handcrafted (HC) features and convolutional neural network (CNN) features. Compared with our previous work, a smaller-scale CNN architecture is constructed to lower the computational cost. In experiments, we show that the proposed strategy is highly capable when training data is limited, and yields comparable or better results than the latest methods.
A Simple Universal Turing Machine for the Game of Life Turing Machine
NASA Astrophysics Data System (ADS)
Rendell, Paul
In this chapter we present a simple universal Turing machine which is small enough to fit into the design limits of the Turing machine built in Conway's Game of Life by the author. That limit is 8 symbols and 16 states. By way of comparison we also describe one of the smallest known universal Turing machines, due to Rogozhin, which has 6 symbols and 4 states.
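To make the symbol/state budget concrete, here is a minimal Turing machine simulator running a tiny sample program (binary increment, least-significant-bit first, using 2 symbols plus a blank and a single non-halting state). The rule encoding is our own illustration, not Rendell's or Rogozhin's construction.

```python
# Minimal Turing machine simulator. Rules map (state, symbol) to
# (symbol to write, head move, next state).
def run_tm(tape, rules, state="carry", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape.get(i, blank)
                   for i in range(min(tape), max(tape) + 1))

# Increment: propagate the carry over 1s, write 1 into the first 0/blank.
rules = {
    ("carry", "1"): ("0", +1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

result = run_tm("111", rules)   # 7 + 1 = 8, LSB first -> "0001"
```

A universal machine is the same loop with a rule table rich enough (e.g. 8 symbols, 16 states) to interpret an encoded description of any other machine on its tape.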
Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug
2018-04-30
The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model, and to suggest an overall method for classifying emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; we therefore propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the Database for Emotion Analysis using Physiological Signals open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.
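The zero-crossing rate used to preprocess the GSR channel has a standard definition, sketched here for a 1-D signal: the fraction of adjacent samples whose signs differ. The handling of exact zeros and any windowing are assumptions; the paper's exact variant may differ.

```python
# Zero-crossing rate of a 1-D signal: fraction of adjacent sample pairs
# with differing signs. Exact zeros inherit the previous sign.
import numpy as np

def zero_crossing_rate(x):
    signs = np.sign(x)
    for i in range(1, len(signs)):
        if signs[i] == 0:
            signs[i] = signs[i - 1]
    crossings = np.sum(signs[1:] != signs[:-1])
    return crossings / (len(x) - 1)

# Illustrative sample: 3 sign changes over 6 adjacent pairs -> 0.5.
zcr = zero_crossing_rate(np.array([0.5, 1.0, -0.3, -0.8, 0.2, 0.0, -0.4]))
```

In practice this would be computed per analysis window over the GSR recording and fed to the classifier as a scalar feature per window.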
Zou, An-Min; Dev Kumar, Krishna; Hou, Zeng-Guang
2010-09-01
This paper investigates the problem of output feedback attitude control of an uncertain spacecraft. Two robust adaptive output feedback controllers based on Chebyshev neural networks (CNN) termed adaptive neural networks (NN) controller-I and adaptive NN controller-II are proposed for the attitude tracking control of spacecraft. The four-parameter representations (quaternion) are employed to describe the spacecraft attitude for global representation without singularities. The nonlinear reduced-order observer is used to estimate the derivative of the spacecraft output, and the CNN is introduced to further improve the control performance through approximating the spacecraft attitude motion. The implementation of the basis functions of the CNN used in the proposed controllers depends only on the desired signals, and the smooth robust compensator using the hyperbolic tangent function is employed to counteract the CNN approximation errors and external disturbances. The adaptive NN controller-II can efficiently avoid the over-estimation problem (i.e., the bound of the CNN's output is much larger than that of the approximated unknown function, and hence, the control input may be very large) existing in the adaptive NN controller-I. Both adaptive output feedback controllers using CNN can guarantee that all signals in the resulting closed-loop system are uniformly ultimately bounded. For performance comparisons, the standard adaptive controller using the linear parameterization of spacecraft attitude motion is also developed. Simulation studies are presented to show the advantages of the proposed CNN-based output feedback approach over the standard adaptive output feedback approach.
Probing the Boundaries of Orthology: The Unanticipated Rapid Evolution of Drosophila centrosomin
Eisman, Robert C.; Kaufman, Thomas C.
2013-01-01
The rapid evolution of essential developmental genes and their protein products is both intriguing and problematic. The rapid evolution of gene products with simple protein folds and a lack of well-characterized functional domains typically result in a low discovery rate of orthologous genes. Additionally, in the absence of orthologs it is difficult to study the processes and mechanisms underlying rapid evolution. In this study, we have investigated the rapid evolution of centrosomin (cnn), an essential gene encoding centrosomal protein isoforms required during syncytial development in Drosophila melanogaster. Until recently the rapid divergence of cnn made identification of orthologs difficult and questionable because Cnn violates many of the assumptions underlying models for protein evolution. To overcome these limitations, we have identified a group of insect orthologs and present conserved features likely to be required for the functions attributed to cnn in D. melanogaster. We also show that the rapid divergence of Cnn isoforms is apparently due to frequent coding sequence indels and an accelerated rate of intronic additions and eliminations. These changes appear to be buffered by multi-exon and multi-reading frame maximum potential ORFs, simple protein folds, and the splicing machinery. These buffering features also occur in other genes in Drosophila and may help prevent potentially deleterious mutations due to indels in genes with large coding exons and exon-dense regions separated by small introns. This work promises to be useful for future investigations of cnn and potentially other rapidly evolving genes and proteins. PMID:23749319
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin
2017-03-01
Abdominal obesity is strongly associated with a number of diseases, and accurate assessment of the subtypes of adipose tissue volume plays a significant role in predicting disease risk, diagnosis and prognosis. The objective of this study is to develop and evaluate a new computer-aided detection (CAD) scheme based on deep learning models to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on CT images. A dataset involving CT images from 40 patients was retrospectively collected and equally divided into two independent groups (i.e., training and testing groups). The new CAD scheme consisted of two sequential convolutional neural networks (CNNs), namely Selection-CNN and Segmentation-CNN. Selection-CNN was trained using 2,240 CT slices to automatically select CT slices belonging to abdomen areas, and Segmentation-CNN was trained using 84,000 fat-pixel patches to classify fat pixels as belonging to SFA or VFA. Then, data from the testing group was used to evaluate the performance of the optimized CAD scheme. Compared to manually labelled results, the classification accuracy of CT slice selection generated by Selection-CNN was 95.8%, while the accuracy of fat pixel segmentation using Segmentation-CNN was 96.8%. Therefore, this study demonstrated the feasibility of using a deep learning based CAD scheme to recognize the human abdominal section from CT scans and segment SFA and VFA from CT slices with high agreement compared with subjective segmentation results.
Jiang, Jiewei; Liu, Xiyang; Zhang, Kai; Long, Erping; Wang, Liming; Li, Wangting; Liu, Lin; Wang, Shuai; Zhu, Mingmin; Cui, Jiangtao; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Wang, Jinghui; Lin, Haotian
2017-11-21
Ocular images play an essential role in ophthalmological diagnoses. An imbalanced dataset is an inevitable issue in automated ocular disease diagnosis; the scarcity of positive samples tends to result in the misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological datasets is crucial. In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation, and the localized zones are fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impacts of cost factors on the CS-ResCNN are analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%) and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method. Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical application.
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia; Hou, Beibei; Zhang, Ningbo
2017-01-01
The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
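Sequence CNNs of this kind conventionally consume one-hot encoded DNA: a 4-channel matrix the convolutional kernels (motif detectors) slide over. A minimal encoder follows, assuming the common A/C/G/T row layout; the models in the paper may order channels differently.

```python
# One-hot encode a DNA sequence into the 4 x L matrix a sequence CNN
# convolves over. Assumed row order: A, C, G, T (a common convention).
import numpy as np

def one_hot_dna(seq):
    order = "ACGT"
    mat = np.zeros((4, len(seq)), dtype=np.float32)
    for j, base in enumerate(seq.upper()):
        if base in order:                 # unknown bases (e.g. N) stay zero
            mat[order.index(base), j] = 1.0
    return mat

x = one_hot_dna("ACGTN")
```

Each convolutional kernel applied to such a matrix is effectively a position weight matrix, which is why kernel count matters for motif-based tasks as the abstract notes.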
Hirasawa, Toshiaki; Aoyama, Kazuharu; Tanimoto, Tetsuya; Ishihara, Soichiro; Shichijo, Satoki; Ozawa, Tsuyoshi; Ohnishi, Tatsuya; Fujishiro, Mitsuhiro; Matsuo, Keigo; Fujisaki, Junko; Tada, Tomohiro
2018-07-01
Image recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images. A CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer. To evaluate the diagnostic accuracy, an independent test set of 2296 stomach images collected from 69 consecutive patients with 77 gastric cancer lesions was applied to the constructed CNN. The CNN required 47 s to analyze 2296 test images. The CNN correctly diagnosed 71 of 77 gastric cancer lesions with an overall sensitivity of 92.2%, and 161 non-cancerous lesions were detected as gastric cancer, resulting in a positive predictive value of 30.6%. Seventy of the 71 lesions (98.6%) with a diameter of 6 mm or more as well as all invasive cancers were correctly detected. All missed lesions were superficially depressed and differentiated-type intramucosal cancers that were difficult to distinguish from gastritis even for experienced endoscopists. Nearly half of the false-positive lesions were gastritis with changes in color tone or an irregular mucosal surface. The constructed CNN system for detecting gastric cancer could process numerous stored endoscopic images in a very short time with a clinically relevant diagnostic ability. It may be well applicable to daily clinical practice to reduce the burden of endoscopists.
NASA Astrophysics Data System (ADS)
Marchitto, T. M., Jr.; Mitra, R.; Zhong, B.; Ge, Q.; Kanakiya, B.; Lobaton, E.
2017-12-01
Identification and picking of foraminifera from sediment samples is often a laborious and repetitive task. Previous attempts to automate this process have met with limited success, but we show that recent advances in machine learning can be brought to bear on the problem. As a 'proof of concept' we have developed a system that is capable of recognizing six species of extant planktonic foraminifera that are commonly used in paleoceanographic studies. Our pipeline begins with digital photographs taken under 16 different illuminations using an LED ring, which are then fused into a single 3D image. Labeled image sets were used to train various types of image classification algorithms, and performance on unlabeled image sets was measured in terms of precision (whether IDs are correct) and recall (what fraction of the target species are found). We find that Convolutional Neural Network (CNN) approaches achieve precision and recall values between 80 and 90%, which is similar precision and better recall than human expert performance using the same type of photographs. We have also trained a CNN to segment the 3D images into individual chambers and apertures, which can not only improve identification performance but also automate the measurement of foraminifera for morphometric studies. Given that there are only 35 species of extant planktonic foraminifera larger than 150 μm, we suggest that a fully automated characterization of this assemblage is attainable. This is the first step toward the realization of a foram picking robot.
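Precision ("are the IDs correct?") and recall ("what fraction of the target species are found?") as defined above are computed from raw counts; the numbers below are made-up illustrative values, not the study's results.

```python
# Precision and recall from true positives, false positives and false
# negatives.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical example: 85 correct picks, 10 wrong picks, 15 missed.
p, r = precision_recall(tp=85, fp=10, fn=15)
```

A human expert with the same photographs might match the precision but miss more specimens, which would show up as lower recall under this metric.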
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palazzo, S.; Vagliasindi, G.; Arena, P.
2010-08-15
In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET's wide angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8-bit 496x560 images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN, unlike software CNN implementations.
Kainz, Philipp; Pfeiffer, Michael; Urschler, Martin
2017-01-01
Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
2D image classification for 3D anatomy localization: employing deep convolutional neural networks
NASA Astrophysics Data System (ADS)
de Vos, Bob D.; Wolterink, Jelmer M.; de Jong, Pim A.; Viergever, Max A.; Išgum, Ivana
2016-03-01
Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches face the challenge of hand-crafting features to describe differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D. In 100 low-dose non-contrast enhanced non-ECG synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs -- heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. Classification performance of each CNN, expressed as area under the receiver operating characteristic curve, was >=0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85 respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.
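The per-plane-to-3D combination described above can be sketched in plain Python (a toy illustration, not the authors' code; the slice-level 0/1 predictions stand in for the CNN outputs):

```python
def presence_run(preds):
    """Return the (first, last+1) index range of positive slice predictions."""
    pos = [i for i, p in enumerate(preds) if p]
    if not pos:
        return None
    return (pos[0], pos[-1] + 1)

def bounding_box_3d(axial, coronal, sagittal):
    """Combine per-slice ROI presence along three orthogonal planes
    into a 3D bounding box: one index range per axis."""
    runs = [presence_run(p) for p in (axial, coronal, sagittal)]
    if any(r is None for r in runs):
        return None  # ROI not seen in at least one plane
    return tuple(runs)

# Toy predictions: ROI visible in slices 2-4 axially, 1-3 coronally, 0-2 sagittally
box = bounding_box_3d([0, 0, 1, 1, 1, 0], [0, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])
# box == ((2, 5), (1, 4), (0, 3))
```

A Dice score between such a predicted box and a manually delineated one then quantifies localization quality, as reported in the abstract.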
NASA Astrophysics Data System (ADS)
Xu, Z.; Guan, K.; Peng, B.; Casler, N. P.; Wang, S. W.
2017-12-01
Landscapes have complex three-dimensional features that are difficult to extract using conventional methods. Small-footprint LiDAR provides an ideal way of capturing these features. Existing approaches, however, have been relegated to raster or metric-based (two-dimensional) feature extraction from the upper or bottom layer, and thus are not suitable for resolving morphological and intensity features that could be important to fine-scale land cover mapping. Therefore, this research combines airborne LiDAR and multi-temporal Landsat imagery to classify land cover types of Williamson County, Illinois, which has diverse and mixed landscape features. Specifically, we applied a 3D convolutional neural network (CNN) method to extract features from LiDAR point clouds by (1) creating occupancy and intensity grids at 1-meter resolution, and then (2) normalizing and feeding the data into a 3D CNN feature extractor for many epochs of learning. The learned features (e.g., morphological features, intensity features, etc.) were combined with multi-temporal spectral data to enhance the performance of land cover classification based on a Support Vector Machine classifier. We used photo interpretation for training and testing data generation. The classification results show that our approach outperforms traditional methods using LiDAR-derived feature maps, and promises to serve as an effective methodology for creating high-quality land cover maps through fusion of complementary types of remote sensing data.
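The occupancy/intensity gridding step can be illustrated with a minimal stdlib-only sketch (hypothetical helper names; sparse dicts stand in for the dense grids a CNN would consume):

```python
from collections import defaultdict

def voxelize(points, cell=1.0):
    """Bin LiDAR returns (x, y, z, intensity) into an occupancy grid
    (point counts per voxel) and a mean-intensity grid at `cell`-meter
    resolution, stored sparsely as dicts keyed by voxel index."""
    occupancy = defaultdict(int)
    intensity_sum = defaultdict(float)
    for x, y, z, intensity in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        occupancy[key] += 1
        intensity_sum[key] += intensity
    mean_intensity = {k: intensity_sum[k] / occupancy[k] for k in occupancy}
    return dict(occupancy), mean_intensity

pts = [(0.2, 0.7, 1.1, 10.0), (0.9, 0.1, 1.8, 20.0), (2.5, 0.5, 0.5, 5.0)]
occ, inten = voxelize(pts)
# occ[(0, 0, 1)] == 2 and inten[(0, 0, 1)] == 15.0
```

Normalizing these grids and stacking them as input channels is then a standard preprocessing step before 3D convolution.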
Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks.
Liu, Xiaoming; Guo, Shuxu; Yang, Bingtao; Ma, Shuzhi; Zhang, Huimao; Li, Jing; Sun, Changjian; Jin, Lanyi; Li, Xueyan; Yang, Qi; Fu, Yu
2018-04-20
Accurate segmentation of specific organs from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNN) to classify pixels. To reduce the redundant inputs, simple linear iterative clustering (SLIC) super-pixels and a support vector machine (SVM) classifier are introduced. To establish a precise organ boundary at the single-pixel level, the pixels are classified step by step. First, SLIC is used to cut an image into grids and extract the respective digital signatures. Next, each signature is classified by the SVM, and the rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which is based on patches around each pixel point. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver 07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. The method consumes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%. For lung segmentation, the Dice coefficient is 97.93%. These findings demonstrate that the proposed framework is a favorable method for lung segmentation of HRCT scans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, A. L.; Biedron, S. G.; Milton, S. V.
At present, a variety of image-based diagnostics are used in particle accelerator systems. Oftentimes, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.
Han, Seung Seog; Park, Gyeong Hun; Lim, Woohyung; Kim, Myoung Shin; Na, Jung Im; Park, Ilwoo; Chang, Sung Eun
2018-01-01
Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset). The AI (ensemble model; ResNet-152 + VGG-19 + feedforward neural networks) results showed test sensitivity/specificity/area under the curve values of (96.0 / 94.7 / 0.98), (82.7 / 96.7 / 0.95), (92.3 / 79.3 / 0.93), and (87.7 / 69.3 / 0.82) for the B1, B2, C, and D datasets, respectively. With a combination of the B1 and C datasets, the AI Youden index was significantly (p = 0.01) higher than that of 42 dermatologists doing the same assessment manually. For the B1+C and B2+D dataset combinations, almost none of the dermatologists performed as well as the AI. By training with a dataset comprising 49,567 images, we achieved a diagnostic accuracy for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study.
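The Youden index used to compare the AI with the dermatologists is simply sensitivity + specificity - 1; a small sketch (the confusion-matrix counts below are illustrative, not the study's data):

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic: sensitivity + specificity - 1.
    J ranges from 0 (no discrimination) to 1 (perfect)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

# Counts chosen to reproduce the reported B1 sensitivity/specificity of 96.0%/94.7%:
j = youden_index(tp=96, fn=4, tn=947, fp=53)
# j ≈ 0.907
```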
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu
Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using the CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors' ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using the CNN technique eliminates approximately half the FPs existing in the previous study.
These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.
Blanc-Durand, Paul; Van Der Gucht, Axel; Schaefer, Niklaus; Itti, Emmanuel; Prior, John O
2018-01-01
Amino-acid positron emission tomography (PET) is increasingly used in the diagnostic workup of patients with gliomas, including differential diagnosis, evaluation of tumor extension, treatment planning and follow-up. Recently, progress in computer vision and machine learning has been translated to medical imaging. The aim was to demonstrate the feasibility of automated 18F-fluoro-ethyl-tyrosine (18F-FET) PET lesion detection and segmentation relying on a full 3D U-Net Convolutional Neural Network (CNN). All dynamic 18F-FET PET brain image volumes were temporally realigned to the first dynamic acquisition, coregistered and spatially normalized onto the Montreal Neurological Institute template. Ground truth segmentations were obtained using manual delineation and thresholding (1.3 x background). The volumetric CNN was implemented based on a modified Keras implementation of a U-Net library with 3 layers for the encoding and decoding paths. The Dice similarity coefficient (DSC) was used as an accuracy measure of segmentation. Thirty-seven patients were included (26 [70%] in the training set and 11 [30%] in the validation set). All 11 lesions were accurately detected with no false positives, resulting in a sensitivity and specificity for detection at the tumor level of 100%. After 150 epochs, the DSC reached 0.7924 in the training set and 0.7911 in the validation set. After morphological dilatation and fixed thresholding of the predicted U-Net mask, a substantial improvement of the DSC to 0.8231 (+4.1%) was noted. At the voxel level, this segmentation led to a 0.88 sensitivity [95% CI, 87.1 to 88.2%], a 0.99 specificity [99.9 to 99.9%], a 0.78 positive predictive value [76.9 to 78.3%], and a 0.99 negative predictive value [99.9 to 99.9%]. With this relatively high performance, we propose the first fully automated 3D procedure for segmentation of 18F-FET PET brain images of patients with different gliomas using a U-Net CNN architecture.
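The Dice similarity coefficient (DSC) used as the accuracy measure above can be computed as follows (a generic sketch over flattened binary masks, not tied to the authors' pipeline):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    given as flat 0/1 sequences: 2*|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  #两 empty masks agree perfectly

pred  = [1, 1, 0, 1, 0]   # predicted segmentation
truth = [1, 0, 0, 1, 1]   # ground-truth delineation
score = dice_coefficient(pred, truth)
# 2 voxels overlap, 3 + 3 foreground voxels -> score == 2*2/6 ≈ 0.667
```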
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.
Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei
2017-07-01
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.
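The residual-learning idea, in which the network predicts the noise and the clean image is recovered by subtraction, can be sketched as follows (a toy 1D illustration; the lambda stands in for the trained CNN and is assumed, not taken from the paper):

```python
def denoise_residual(noisy, predict_residual):
    """Residual-learning denoising: the model predicts the noise
    v ≈ y - x, and the clean estimate is x_hat = y - R(y)."""
    residual = predict_residual(noisy)
    return [y - v for y, v in zip(noisy, residual)]

# Toy 'trained' residual predictor that happens to know the noise exactly,
# a stand-in for a DnCNN trained on noisy/clean pairs.
clean = [1.0, 2.0, 3.0]
noise = [0.1, -0.2, 0.05]
noisy = [c + n for c, n in zip(clean, noise)]
restored = denoise_residual(noisy, lambda y: noise)
# restored recovers `clean` up to floating-point error
```

Because the target is the residual rather than the image itself, one model can serve several degradations (Gaussian noise, JPEG blocking artifacts, downsampling residue), as the abstract notes.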
A CNN Regression Approach for Real-Time 2D/3D Registration.
Shun Miao; Wang, Z Jane; Rui Liao
2016-05-01
In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
3D Convolutional Neural Network for Automatic Detection of Lung Nodules in Chest CT.
Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2017-01-01
Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.
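The speed-up from converting a fixed-field-of-view CNN to an FCN comes from avoiding one forward pass per window position; a back-of-the-envelope sketch (the volume and field-of-view sizes below are illustrative, not the paper's):

```python
def sliding_window_count(vol, fov, stride=1):
    """Number of CNN evaluations needed to scan a volume with a fixed
    field of view, one forward pass per window position per axis."""
    n = 1
    for v, f in zip(vol, fov):
        n *= (v - f) // stride + 1
    return n

# A fixed-FoV 3D CNN scanned over a CT volume:
windows = sliding_window_count((128, 128, 64), (32, 32, 16))
# windows == 97 * 97 * 49 separate forward passes, whereas the FCN
# produces the same score map in a single pass by sharing computation
# between overlapping windows.
```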
3D convolutional neural network for automatic detection of lung nodules in chest CT
NASA Astrophysics Data System (ADS)
Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2017-03-01
Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. The direct use of RGB-CNN for the IR ATR problem fails to work because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved with a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted-ramp-function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. The experimental results on synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data.
Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei
2017-04-01
In this study we developed a graph-based semi-supervised learning (SSL) scheme using a deep convolutional neural network (CNN) for breast cancer diagnosis. A CNN usually needs a large amount of labeled data for training and fine-tuning its parameters, whereas our proposed scheme only requires a small portion of labeled data in the training set. Four modules were included in the diagnosis system: data weighing, feature selection, dividing co-training data labeling, and CNN. 3158 regions of interest (ROIs), each containing a mass, extracted from 1874 pairs of mammogram images were used for this study. Among them 100 ROIs were treated as labeled data while the rest were treated as unlabeled. The area under the curve (AUC) observed in our study was 0.8818, and the accuracy of the CNN was 0.8243 using the mixed labeled and unlabeled data. Copyright © 2016. Published by Elsevier Ltd.
Multiscale deep features learning for land-use scene recognition
NASA Astrophysics Data System (ADS)
Yuan, Baohua; Li, Shijin; Li, Ning
2018-01-01
The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most of the work directly adopts the deep features for the classification of remote sensing images, and does not encode the deep features for improving their discriminative power, which can affect the performance of deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains more discriminative deep features than directly extracted from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which can incorporate multiscale information and regions of arbitrary resolution and sizes. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods.
Using convolutional neural networks to explore the microbiome.
Reiman, Derek; Metwally, Ahmed; Yang Dai
2017-07-01
The microbiome has been shown to have an impact on the development of various diseases in the host. Being able to make an accurate prediction of the phenotype of a genomic sample based on its microbial taxonomic abundance profile is an important problem for personalized medicine. In this paper, we examine the potential of using a deep learning framework, a convolutional neural network (CNN), for such a prediction. To facilitate the CNN learning, we explore the structure of abundance profiles by creating the phylogenetic tree and by designing a scheme to embed the tree into a matrix that retains the spatial relationship of nodes in the tree and their quantitative characteristics. The proposed CNN framework is highly accurate, achieving 99.47% accuracy based on the evaluation on a dataset of 1967 samples of three phenotypes. Our results demonstrate the feasibility and promise of CNNs in the classification of sample phenotypes.
Deep learning classifier with optical coherence tomography images for early dental caries detection
NASA Astrophysics Data System (ADS)
Karimian, Nima; Salehi, Hassan S.; Mahdian, Mina; Alnajjar, Hisham; Tadinada, Aditya
2018-02-01
Dental caries is a microbial disease that results in localized dissolution of the mineral content of dental tissue. Despite a considerable decline in the incidence of dental caries, it remains a major health problem in many societies. Early detection of incipient lesions at initial stages of demineralization can result in the implementation of non-surgical preventive approaches to reverse the demineralization process. In this paper, we present a novel approach combining deep convolutional neural networks (CNN) and the optical coherence tomography (OCT) imaging modality for classification of human oral tissues to detect early dental caries. OCT images of oral tissues with various densities were input to a CNN classifier to determine variations in tissue densities resembling the demineralization process. The CNN automatically learns a hierarchy of increasingly complex features and a related classifier directly from training data sets. The initial CNN layer parameters were randomly selected. The training set is split into minibatches, with 10 OCT images per batch. Given a batch of training patches, the CNN employs two convolutional and pooling layers to extract features and then classifies each patch based on the probabilities from the SoftMax classification layer (output layer). Afterward, the CNN calculates the error between the classification result and the reference label, and then utilizes the backpropagation process to fine-tune all the layer parameters to minimize this error using a batch gradient descent algorithm. We validated our proposed technique on ex-vivo OCT images of human oral tissues (enamel, cortical bone, trabecular bone, muscular tissue, and fatty tissue), which attested to the effectiveness of our proposed method.
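The SoftMax output layer and the classification error it feeds into backpropagation can be sketched as follows (a generic formulation, not the authors' implementation; the logits are illustrative):

```python
import math

def softmax(logits):
    """SoftMax output layer: map raw logits to class probabilities."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label):
    """Classification error between the SoftMax output and the
    reference label, as minimized by backpropagation."""
    return -math.log(probs[label])

probs = softmax([2.0, 0.5, -1.0])
loss = cross_entropy(probs, 0)
# probs sum to 1; the gradient of this loss w.r.t. the logits is the
# convenient probs - one_hot(label), which is what backprop propagates.
```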
Park, Hanla; Papadaki, Angeliki
2016-01-01
Vending machine use has been associated with low dietary quality among children but there is limited evidence on its role in food habits of University students. We aimed to examine the nutritional value of foods sold in vending machines in a UK University and conduct formative research to investigate differences in food intake and body weight by vending machine use among 137 University students. The nutrient content of snacks and beverages available at nine campus vending machines was assessed by direct observation in May 2014. Participants (mean age 22.5 years; 54% males) subsequently completed a self-administered questionnaire to assess vending machine behaviours and food intake. Self-reported weight and height were collected. Vending machine snacks were generally high in sugar, fat and saturated fat, whereas most beverages were high in sugar. Seventy three participants (53.3%) used vending machines more than once per week and 82.2% (n 60) of vending machine users used them to snack between meals. Vending machine accessibility was positively correlated with vending machine use (r = 0.209, P = 0.015). Vending machine users, compared to non-users, reported a significantly higher weekly consumption of savoury snacks (5.2 vs. 2.8, P = 0.014), fruit juice (6.5 vs. 4.3, P = 0.035), soft drinks (5.1 vs. 1.9, P = 0.006), meat products (8.3 vs. 5.6, P = 0.029) and microwave meals (2.0 vs. 1.3, P = 0.020). No between-group differences were found in body weight. Most foods available from vending machines in this UK University were of low nutritional quality. In this sample of University students, vending machine users displayed several unfavourable dietary behaviours, compared to non-users. Findings can be used to inform the development of an environmental intervention that will focus on vending machines to improve dietary behaviours in University students in the UK. Copyright © 2015 Elsevier Ltd. All rights reserved.
Classification of CT brain images based on deep learning networks.
Gao, Xiaohong W; Hui, Rui; Tian, Zengmin
2017-01-01
While computerised tomography (CT) may have been the first imaging tool to study the human brain, it has not yet been implemented into the clinical decision making process for the diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact of applying the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising a convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, CT images (N = 285) are clustered into three groups: AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection, with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of the Softmax scores obtained from both networks, consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for the AD, lesion and normal classes respectively, with an average of 87.6%. Additionally, this improved CNN network appears to outperform the 2D-only version of the CNN network as well as a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates (in percent) of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60 and 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively.
The two major contributions of the paper are a new 3D approach applying deep learning techniques to extract signature information rooted in both 2D slices and 3D blocks of CT images, and an elaborate hand-crafted 3D KAZE approach. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
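The score-level fusion of the 2D and 3D networks described above, averaging the Softmax scores before taking the most probable class, can be sketched as (the score vectors are illustrative):

```python
def fuse_softmax(scores_2d, scores_3d):
    """Late fusion of the 2D-slice and 3D-block network outputs:
    average the per-class Softmax scores, then take the argmax."""
    avg = [(a + b) / 2.0 for a, b in zip(scores_2d, scores_3d)]
    label = max(range(len(avg)), key=avg.__getitem__)
    return label, avg

# Class order: AD, lesion, normal ageing (illustrative scores)
label, avg = fuse_softmax([0.7, 0.2, 0.1], [0.5, 0.3, 0.2])
# avg == [0.6, 0.25, 0.15] -> label == 0 (AD)
```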
NASA Astrophysics Data System (ADS)
Gollas, Frank; Tetzlaff, Ronald
2009-05-01
Epilepsy is the most common chronic disorder of the nervous system. Generally, epileptic seizures appear without foregoing sign or warning. The problem of detecting a possible pre-seizure state in epilepsy from EEG signals has been addressed by many authors over the past decades. Different approaches to time series analysis of brain electrical activity are already providing valuable insights into the underlying complex dynamics. But the main goal, the identification of an impending epileptic seizure with sufficient specificity and reliability, has not been achieved up to now. An algorithm for a reliable, automated prediction of epileptic seizures would enable the realization of implantable seizure warning devices, which could provide valuable information to the patient and enable time- or event-specific drug delivery or possibly direct electrical nerve stimulation. Cellular Nonlinear Networks (CNN) are promising candidates for future seizure warning devices. CNN are characterized by local couplings of comparatively simple dynamical systems. With this property these networks are well suited to be realized as highly parallel, analog computer chips. CNN hardware realizations available today exhibit a processing speed in the range of TeraOps combined with low power consumption. In this contribution new algorithms based on the spatio-temporal dynamics of CNN are considered in order to analyze intracranial EEG signals, thus taking into account mutual dependencies between neighboring regions of the brain. In an identification procedure, Reaction-Diffusion CNN (RD-CNN) are determined for short segments of brain electrical activity by means of a supervised parameter optimization. RD-CNN are deduced from Reaction-Diffusion Systems, which are usually applied to investigate complex phenomena like nonlinear wave propagation or pattern formation. The Local Activity Theory provides a necessary condition for emergent behavior in RD-CNN.
In comparison, linear spatio-temporal autoregressive filter models are considered for predicting EEG signal values. Signal feature values for successive short, quasi-stationary segments of brain electrical activity can thus be obtained, with the objective of detecting distinct changes prior to impending epileptic seizures. Furthermore, long-term recordings gained during presurgical diagnostics in temporal lobe epilepsy are analyzed and the predictive performance of the extracted features is evaluated statistically. For this, a Receiver Operating Characteristic (ROC) analysis is considered, assessing the distinguishability between the distributions of supposed preictal and interictal periods.
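For a single scalar feature, the ROC analysis described above reduces to the probability that a value drawn from the preictal distribution exceeds one drawn from the interictal distribution (the area under the ROC curve equals the normalized Mann-Whitney U statistic). A minimal sketch, not the authors' code, with illustrative inputs:

```python
def roc_auc(preictal, interictal):
    """AUC of a scalar feature separating two groups, computed via the
    Mann-Whitney U statistic (equivalent to the area under the ROC curve).
    Tied pairs contribute 0.5."""
    wins = 0.0
    for p in preictal:
        for i in interictal:
            if p > i:
                wins += 1.0
            elif p == i:
                wins += 0.5
    return wins / (len(preictal) * len(interictal))
```

An AUC near 0.5 means the two distributions are indistinguishable; values near 1.0 indicate a feature that reliably separates preictal from interictal segments.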
NASA Astrophysics Data System (ADS)
Le, Minh Hung; Chen, Jingyu; Wang, Liang; Wang, Zhiwei; Liu, Wenyu; Cheng, Kwang-Ting (Tim); Yang, Xin
2017-08-01
Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to ‘see’ the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. 
To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
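The similarity-loss idea above can be sketched as a classification loss per modality plus a penalty on the distance between the two modalities' feature vectors, so that both branches are pushed toward consistent features. A minimal illustration; the function names and the weighting `lam` are assumptions, not the paper's implementation:

```python
import math

def combined_loss(f_adc, f_t2, probs_adc, probs_t2, label, lam=0.5):
    """Per-modality cross-entropy classification loss plus a similarity
    term: the squared Euclidean distance between the ADC and T2WI feature
    vectors, weighted by `lam`. All names are illustrative."""
    ce = -math.log(probs_adc[label]) - math.log(probs_t2[label])
    sim = sum((a - b) ** 2 for a, b in zip(f_adc, f_t2))
    return ce + lam * sim
```

During training, the gradient of the similarity term flows into both branches, which is what lets the two feature extractors guide each other.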
Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.
Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D
2018-01-01
Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
SIFT Meets CNN: A Decade Survey of Instance Retrieval.
Zheng, Liang; Yang, Yi; Tian, Qi
2018-05-01
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
Automated EEG-based screening of depression using deep convolutional neural network.
Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adeli, Hojjat; Subha, D P
2018-07-01
In recent years, advanced neurocomputing and machine learning techniques have been used for Electroencephalogram (EEG)-based diagnosis of various neurological disorders. In this paper, a novel computer model is presented for EEG-based screening of depression using a deep neural network machine learning approach known as the Convolutional Neural Network (CNN). The proposed technique does not require a semi-manually selected set of features to be fed into a classifier; it learns automatically and adaptively from the input EEG signals to differentiate EEGs obtained from depressive and normal subjects. The model was tested using EEGs obtained from 15 normal and 15 depressed subjects. The algorithm attained accuracies of 93.5% and 96.0% using EEG signals from the left and right hemispheres, respectively. This research discovered that EEG signals from the right hemisphere are more distinctive in depression than those from the left hemisphere, a finding consistent with recent research indicating that depression is associated with a hyperactive right hemisphere. An exciting extension of this research would be the diagnosis of different stages and severities of depression and the development of a Depression Severity Index (DSI). Copyright © 2018 Elsevier B.V. All rights reserved.
Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung
2017-07-08
Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
CNNEDGEPOT: CNN based edge detection of 2D near surface potential field data
NASA Astrophysics Data System (ADS)
Aydogan, D.
2012-09-01
All anomalies are important in the interpretation of gravity and magnetic data because they indicate important structural features. One advantage of using gravity or magnetic data to search for contacts is that buried structures whose signatures cannot be seen at the surface can be detected. In this paper, a general view of the cellular neural network (CNN) method, a large-scale nonlinear circuit, is presented, focusing on its image processing applications. The proposed CNN model is applied consecutively in order to extract bodies and body edges. The algorithm is a stochastic image processing method based on the close neighborhood relationship of the cells and on optimization of the A, B and I matrices, known as cloning templates. Setting up a CNN (continuous-time cellular neural network (CTCNN) or discrete-time cellular neural network (DTCNN)) for a particular task requires a proper selection of the cloning templates, which determine the dynamics of the method. The proposed algorithm is used for image enhancement and edge detection. The method is applied to synthetic and field data generated for edge detection of near-surface geological bodies that mask each other at various depths and dimensions. The program, named CNNEDGEPOT, is a set of functions written in MATLAB. Its GUI helps the user easily change all the required CNN model parameters. A visual evaluation of the DTCNN and CTCNN outputs is carried out and the results are compared with each other. These examples demonstrate that the CNN model can be used for visual interpretation of near-surface gravity or magnetic anomaly maps when detecting geological features.
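A single discrete-time CNN step of the kind the A, B and I cloning templates define can be sketched as below. For brevity the feedback template A is omitted (a purely feed-forward cell), and the B template and bias shown are a standard edge-detection choice, not the templates optimized in the paper:

```python
def dtcnn_step(u, B, I):
    """One feed-forward DTCNN step: each cell applies the control template
    B to its 3x3 input neighborhood, adds the bias I, and outputs the sign.
    Pixels use the bipolar convention +1 = object, -1 = background; cells
    outside the image are treated as background."""
    h, w = len(u), len(u[0])

    def pix(r, c):
        return u[r][c] if 0 <= r < h and 0 <= c < w else -1

    out = []
    for r in range(h):
        row = []
        for c in range(w):
            s = I
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    s += B[dr + 1][dc + 1] * pix(r + dr, c + dc)
            row.append(1 if s > 0 else -1)
        out.append(row)
    return out

# A common edge-detection control template: strong center weight,
# inhibitory surround. A cell fires only where object meets background.
B_EDGE = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
```

Applied to a solid body, this leaves interior cells at -1 and marks only the boundary cells, which is the body-edge extraction role the abstract describes.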
Baratta, Walter; Baldino, Salvatore; Calhorda, Maria José; Costa, Paulo J; Esposito, Gennaro; Herdtweck, Eberhardt; Magnolia, Santo; Mealli, Carlo; Messaoudi, Abdelatif; Mason, Sax A; Veiros, Luis F
2014-10-13
Reaction of [RuCl(CNN)(dppb)] (1-Cl) (HCNN=2-aminomethyl-6-(4-methylphenyl)pyridine; dppb=Ph2P(CH2)4PPh2) with NaOCH2CF3 leads to the amine-alkoxide [Ru(CNN)(OCH2CF3)(dppb)] (1-OCH2CF3), whose neutron diffraction study reveals a short Ru-O⋅⋅⋅H-N bond length. Treatment of 1-Cl with NaOEt and EtOH affords the alkoxide [Ru(CNN)(OEt)(dppb)]·(EtOH)n (1-OEt·nEtOH), which equilibrates with the hydride [RuH(CNN)(dppb)] (1-H) and acetaldehyde. Compound 1-OEt·nEtOH reacts reversibly with H2, leading to 1-H and EtOH through dihydrogen splitting. NMR spectroscopic studies on 1-OEt·nEtOH and 1-H reveal hydrogen bond interactions and exchange processes. The chloride 1-Cl catalyzes the hydrogenation (5 atm of H2) of ketones to alcohols (turnover frequency (TOF) up to 6.5×10^4 h^-1, 40 °C). DFT calculations were performed on the reaction of [RuH(CNN')(dmpb)] (2-H) (HCNN'=2-aminomethyl-6-(phenyl)pyridine; dmpb=Me2P(CH2)4PMe2) with acetone and with one molecule of 2-propanol, in alcohol, with the alkoxide complex being the most stable species. In the first step, the Ru-hydride transfers one hydrogen atom to the carbon of the ketone, whereas the second hydrogen transfer from NH2 is mediated by the alcohol and leads to the key "amide" intermediate. Regeneration of the hydride complex may occur by reaction with 2-propanol or with H2; both pathways have low barriers and are alcohol assisted. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bychkov, Dmitrii; Turkki, Riku; Haglund, Caj; Linder, Nina; Lundin, Johan
2016-03-01
Recent advances in computer vision enable increasingly accurate automated pattern classification. In the current study we evaluate whether a convolutional neural network (CNN) can be trained to predict disease outcome in patients with colorectal cancer based on images of tumor tissue microarray samples. We compare the prognostic accuracy of CNN features extracted from the whole, unsegmented tissue microarray spot image, with that of CNN features extracted from the epithelial and non-epithelial compartments, respectively. The prognostic accuracy of visually assessed histologic grade is used as a reference. The image data set consists of digitized hematoxylin-eosin (H and E) stained tissue microarray samples obtained from 180 patients with colorectal cancer. The patient samples represent a variety of histological grades, have data available on a series of clinicopathological variables including long-term outcome and ground truth annotations performed by experts. The CNN features extracted from images of the epithelial tissue compartment significantly predicted outcome (hazard ratio (HR) 2.08; CI95% 1.04-4.16; area under the curve (AUC) 0.66) in a test set of 60 patients, as compared to the CNN features extracted from unsegmented images (HR 1.67; CI95% 0.84-3.31, AUC 0.57) and visually assessed histologic grade (HR 1.96; CI95% 0.99-3.88, AUC 0.61). As a conclusion, a deep-learning classifier can be trained to predict outcome of colorectal cancer based on images of H and E stained tissue microarray samples and the CNN features extracted from the epithelial compartment only resulted in a prognostic discrimination comparable to that of visually determined histologic grade.
Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks.
Ma, Jinlian; Wu, Fa; Jiang, Tian'an; Zhao, Qiyu; Kong, Dexing
2017-11-01
Delineation of thyroid nodule boundaries from ultrasound images plays an important role in the calculation of clinical indices and the diagnosis of thyroid diseases. However, accurate and automatic segmentation of thyroid nodules is challenging because of their heterogeneous appearance and components similar to the background. In this study, we employ a deep convolutional neural network (CNN) to automatically segment thyroid nodules from ultrasound images. Our CNN-based method formulates thyroid nodule segmentation as a patch classification task, in which the relationship among patches is ignored. Specifically, the CNN takes image patches from images of normal thyroids and thyroid nodules as inputs and generates segmentation probability maps as outputs. A multi-view strategy is used to improve the performance of the CNN-based model. Additionally, we compared the performance of our approach with that of commonly used segmentation methods on the same dataset. The experimental results suggest that our proposed method outperforms prior methods on thyroid nodule segmentation, and show that the CNN-based model is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. In detail, our CNN-based model achieves averages of the overlap metric, Dice ratio, true positive rate, false positive rate, and modified Hausdorff distance of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], respectively, over all folds. Our proposed method is fully automatic, without any user interaction. Quantitative results also indicate that our method is efficient and accurate enough to replace the time-consuming and tedious manual segmentation approach, demonstrating its potential clinical applications.
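Patch-classification segmentation of the kind described can be sketched as scoring overlapping patches and averaging the scores per pixel to form the probability map. The `score` callable stands in for the trained CNN, and all parameters here are illustrative, not the paper's:

```python
def patch_probability_map(image, patch, stride, score):
    """Assemble a per-pixel probability map from a patch classifier:
    every patch receives a nodule score from `score`, and overlapping
    scores are averaged per pixel. `image` is a 2D list of floats."""
    h, w = len(image), len(image[0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            p = score([row[c:c + patch] for row in image[r:r + patch]])
            for rr in range(r, r + patch):
                for cc in range(c, c + patch):
                    acc[rr][cc] += p
                    cnt[rr][cc] += 1
    return [[acc[r][c] / cnt[r][c] if cnt[r][c] else 0.0
             for c in range(w)] for r in range(h)]
```

The multi-view strategy mentioned in the abstract would correspond to running this with several patch sizes or orientations and averaging the resulting maps.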
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly from shallow models, such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL), to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of the decomposed networks remains an open question due to the lack of functional brain atlases, the lack of correspondence across decomposed or reconstructed networks of different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNN for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments based on fMRI data of 1099 brains from ABIDE-II showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
Lucas, Eliana P; Raff, Jordan W
2007-08-27
Centrosomes consist of two centrioles surrounded by an amorphous pericentriolar matrix (PCM), but it is unknown how centrioles and PCM are connected. We show that the centrioles in Drosophila embryos that lack the centrosomal protein Centrosomin (Cnn) can recruit PCM components but cannot maintain a proper attachment to the PCM. As a result, the centrioles "rocket" around in the embryo and often lose their connection to the nucleus in interphase and to the spindle poles in mitosis. This leads to severe mitotic defects in embryos and to errors in centriole segregation in somatic cells. The Cnn-related protein CDK5RAP2 is linked to microcephaly in humans, but cnn mutant brains are of normal size, and we observe only subtle defects in the asymmetric divisions of mutant neuroblasts. We conclude that Cnn maintains the proper connection between the centrioles and the PCM; this connection is required for accurate centriole segregation in somatic cells but is not essential for the asymmetric division of neuroblasts.
A molecular mechanism of mitotic centrosome assembly in Drosophila
Conduit, Paul T; Richens, Jennifer H; Wainman, Alan; Holder, James; Vicente, Catarina C; Pratt, Metta B; Dix, Carly I; Novak, Zsofia A; Dobbie, Ian M; Schermelleh, Lothar; Raff, Jordan W
2014-01-01
Centrosomes comprise a pair of centrioles surrounded by pericentriolar material (PCM). The PCM expands dramatically as cells enter mitosis, but it is unclear how this occurs. In this study, we show that the centriole protein Asl initiates the recruitment of DSpd-2 and Cnn to mother centrioles; both proteins then assemble into co-dependent scaffold-like structures that spread outwards from the mother centriole and recruit most, if not all, other PCM components. In the absence of either DSpd-2 or Cnn, mitotic PCM assembly is diminished; in the absence of both proteins, it appears to be abolished. We show that DSpd-2 helps incorporate Cnn into the PCM and that Cnn then helps maintain DSpd-2 within the PCM, creating a positive feedback loop that promotes robust PCM expansion around the mother centriole during mitosis. These observations suggest a surprisingly simple mechanism of mitotic PCM assembly in flies. DOI: http://dx.doi.org/10.7554/eLife.03399.001 PMID:25149451
Classification of time-series images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Hatami, Nima; Gavet, Yann; Debayle, Johan
2018-04-01
Convolutional Neural Networks (CNN) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time series into 2D texture images and then takes advantage of a deep CNN classifier. The image representation of a time series introduces feature types that are not available for 1D signals, so TSC can be treated as a texture image recognition task. The CNN model also allows different levels of representation to be learned together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to existing deep architectures but also to state-of-the-art TSC algorithms.
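The recurrence-plot transformation, in its simplest thresholded form (without the phase-space embedding often used in practice), can be sketched as:

```python
def recurrence_plot(series, eps):
    """Recurrence plot of a 1D series: RP[i][j] = 1 when samples i and j
    are within eps of each other. The binary 2D matrix can be rendered as
    a texture image and fed to a CNN image classifier."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0
             for j in range(n)] for i in range(n)]
```

Periodic signals produce diagonal line textures in the plot, drifting signals fade away from the main diagonal; it is these texture differences that the CNN learns to discriminate.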
Vision-based posture recognition using an ensemble classifier and a vote filter
NASA Astrophysics Data System (ADS)
Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun
2016-10-01
Posture recognition is a very important Human-Robot Interaction (HRI) modality. To segment an effective posture from an image, we propose an improved region-grow algorithm combined with a single-Gaussian color model. Experiments show that the improved region-grow algorithm extracts a more complete and accurate posture than the traditional single-Gaussian model and region-grow algorithm, while also eliminating similar regions from the background. For the posture recognition part, we propose a CNN ensemble classifier to improve the recognition rate and, to reduce misjudgments during continuous gesture control, a vote filter applied to the sequence of recognition results. The proposed CNN ensemble classifier yields a 96.27% recognition rate, better than that of a single CNN classifier, and the proposed vote filter improves the recognition results and reduces misjudgments during consecutive gesture switches.
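A vote filter of the kind described can be sketched as a sliding-window majority vote over the per-frame recognition labels; the window length here is an illustrative choice, not the paper's:

```python
from collections import Counter, deque

def vote_filter(labels, window=5):
    """Smooth a sequence of per-frame recognition results with a
    sliding-window majority vote, suppressing isolated misjudgments
    during a continuous gesture."""
    buf = deque(maxlen=window)
    out = []
    for lab in labels:
        buf.append(lab)
        out.append(Counter(buf).most_common(1)[0][0])
    return out
```

A single misclassified frame inside a run of correct ones is outvoted by its neighbors, at the cost of a short lag when the true gesture actually switches.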
Detection of concealed cars in complex cargo X-ray imagery using Deep Learning.
Jaccard, Nicolas; Rogers, Thomas W; Morton, Edward J; Griffin, Lewis D
2017-01-01
Non-intrusive inspection systems based on X-ray radiography techniques are routinely used at transport hubs to ensure the conformity of cargo content with the supplied shipping manifest. As trade volumes increase and regulations become more stringent, manual inspection by trained operators is less and less viable due to low throughput. Machine vision techniques can assist operators in their task by automating parts of the inspection workflow. Since cars are routinely involved in trafficking, export fraud, and tax evasion schemes, they represent an attractive target for automated detection and flagging for subsequent inspection by operators. We develop and evaluate a novel method for the automated detection of cars in complex X-ray cargo imagery. X-ray cargo images from a stream-of-commerce dataset were classified using a window-based scheme. The limited number of car images was addressed by an oversampling scheme. Different Convolutional Neural Network (CNN) architectures were compared with well-established bag-of-words approaches. In addition, robustness to concealment was evaluated by projecting objects into car images. CNN approaches outperformed all other methods evaluated, achieving a 100% car image classification rate at a false positive rate of 1-in-454. Cars that were partially or completely obscured by other goods, a modus operandi frequently adopted by criminals, were correctly detected. We believe that this level of performance suggests that the method is suitable for deployment in the field. It is expected that the generic object detection workflow described can be extended to other object classes given the availability of suitable training data.
Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM
NASA Astrophysics Data System (ADS)
Shima, Yoshihiro
2018-04-01
Neural networks are a powerful means of classifying object images. The proposed image-category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, Alex-Net, is used as a pattern-feature extractor; instead of being trained, Alex-Net comes pre-trained on the large-scale object-image dataset ImageNet. An SVM is used as the trainable classifier, fed with the feature vectors from Alex-Net. The object images come from the STL-10 dataset, which has ten classes and clearly split training and test samples. The STL-10 object images are trained by the SVM with data augmentation. We use a pattern-transformation method based on the cosine function, and also apply other augmentation methods such as rotation, skewing, and elastic distortion. Using the cosine function, the original patterns are left-justified, right-justified, top-justified, or bottom-justified; patterns are also center-justified and enlarged. Augmentation with the cosine transformation decreases the test error rate by 0.435 percentage points from 16.055%, whereas the other augmentation methods (rotation, skewing, and elastic distortion) increase the error rates compared with no augmentation. The number of augmented samples is 30 times that of the original STL-10 5K training samples. The experimental test error rate for the 8K STL-10 test object images was 15.620%, which shows that image augmentation is effective for image-category classification.
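The justification transformations can be sketched as shifting the non-zero pattern flush against one side of its frame. This is a plain shift on a 2D array, assumed for illustration; the paper's cosine-based warping is not reproduced here:

```python
def justify(img, mode):
    """Shift the non-zero pattern in a 2D array flush against one side
    ('top', 'bottom', 'left', or 'right') of the frame, one of the
    justification augmentations described above."""
    h, w = len(img), len(img[0])
    rows = [r for r in range(h) if any(img[r])]
    cols = [c for c in range(w) if any(img[r][c] for r in range(h))]
    if not rows:  # empty pattern: nothing to shift
        return [row[:] for row in img]
    dr = -min(rows) if mode == 'top' else (h - 1 - max(rows) if mode == 'bottom' else 0)
    dc = -min(cols) if mode == 'left' else (w - 1 - max(cols) if mode == 'right' else 0)
    out = [[0] * w for _ in range(h)]
    for r in rows:
        for c in range(w):
            if img[r][c]:
                out[r + dr][c + dc] = img[r][c]
    return out
```

Each justified copy presents the same pattern at a different position, which multiplies the training set while preserving class labels.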
Learning Deep Representations for Ground to Aerial Geolocalization (Open Access)
2015-10-15
The proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches
Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports
Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...
2017-05-03
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
Three-dimensional fingerprint recognition by using convolution neural network
NASA Astrophysics Data System (ADS)
Tian, Qianyu; Gao, Nan; Zhang, Zonghua
2018-01-01
With the development of science and technology and the growth of social information, fingerprint recognition has become a hot research direction and has been widely applied in many fields because of its feasibility and reliability. The traditional two-dimensional (2D) fingerprint recognition method relies on matching feature points; it is not only time-consuming but also loses the three-dimensional (3D) information of the fingerprint, and its robustness declines seriously under fingerprint rotation, scaling, damage, and other issues. To solve these problems, 3D fingerprints have been used to recognize human beings. Because this is a new research field, there are still many challenging problems in 3D fingerprint recognition. This paper presents a new 3D fingerprint recognition method using a convolutional neural network (CNN). The 2D fingerprint and the fingerprint depth map are each fed into a CNN, their features are fused by another CNN, and the fused features are classified to complete 3D fingerprint recognition. This method preserves the 3D information of the fingerprint and also solves the problem of the CNN input; moreover, the recognition process is simpler than traditional feature-point matching algorithms. The 3D fingerprint recognition rate using the CNN is compared with that of other fingerprint recognition algorithms. The experimental results show that the proposed 3D fingerprint recognition method has a good recognition rate and robustness.
Visual Cortex Inspired CNN Model for Feature Construction in Text Analysis
Fu, Hongping; Niu, Zhendong; Zhang, Chunxia; Ma, Jing; Chen, Jie
2016-01-01
Recently, biologically inspired models have gradually been proposed to solve problems in text analysis. Convolutional neural networks (CNN) are hierarchical artificial neural networks that include a variety of multilayer perceptrons. According to biological research, CNN can be improved by bringing in the attention modulation and memory processing of the primate visual cortex. In this paper, we employ these properties of the primate visual cortex to improve CNN and propose a biological-mechanism-driven-feature-construction based answer recommendation method (BMFC-ARM), which recommends the best answer for a given question in community question answering. BMFC-ARM is an improved CNN with four channels respectively representing questions, answers, asker information and answerer information, and mainly contains two stages: biological mechanism driven feature construction (BMFC) and answer ranking. BMFC imitates the attention modulation property by introducing the asker and answerer information of given questions and the similarity between them, and imitates the memory processing property by bringing in the reputation information of answerers. The feature vector for answer ranking is then constructed by fusing the asker-answerer similarities, the answerer's reputation and the corresponding vectors of question, answer, asker, and answerer. Finally, Softmax is used at the answer ranking stage to obtain the best answers from the feature vector. The experimental results of answer recommendation on the Stackexchange dataset show that BMFC-ARM exhibits better performance. PMID:27471460
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
Ren, Shaoqing; He, Kaiming; Girshick, Ross; Sun, Jian
2017-06-01
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
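The RPN described above scores a dense grid of reference boxes ("anchors") at every feature-map position. A minimal sketch of anchor enumeration under the common conventions (stride, scales, and the ratio-as-height/width convention are assumptions; this is not the released Faster R-CNN code):

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Enumerate RPN-style anchors: one box per (scale, ratio) pair at
    each feature-map cell, centered on the corresponding image pixel.
    Returns (feat_h * feat_w * len(scales) * len(ratios), 4) boxes as
    (x1, y1, x2, y2). Each anchor keeps area scale**2 with h/w = ratio."""
    boxes = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s / np.sqrt(r), s * np.sqrt(r)
                    boxes.append([cx - w / 2, cy - h / 2,
                                  cx + w / 2, cy + h / 2])
    return np.array(boxes)

anchors = generate_anchors(2, 3)  # tiny 2x3 feature map for illustration
print(anchors.shape)  # (54, 4): 2 * 3 cells * 3 scales * 3 ratios
```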
Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against the term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
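The conventional baseline named above represents each free-text report as a term frequency vector. A minimal sketch of that representation, assuming a simple lowercase whitespace tokenizer (the tokenizer and example reports are illustrative assumptions):

```python
from collections import Counter

def term_frequency_vectors(reports, vocab=None):
    """Build term-frequency vectors for a list of free-text reports:
    one integer count per vocabulary term, in a fixed term order."""
    tokenized = [r.lower().split() for r in reports]
    if vocab is None:
        # Vocabulary = sorted set of all tokens seen in the corpus.
        vocab = sorted({t for doc in tokenized for t in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts.get(t, 0) for t in vocab])
    return vocab, vectors

vocab, vecs = term_frequency_vectors(
    ["invasive ductal carcinoma left breast",
     "adenocarcinoma right upper lobe lung"])
print(len(vocab), len(vecs[0]))  # 10 10
```

In practice such vectors feed a linear classifier over the topography codes; the CNN instead learns its features directly from the token sequence.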
A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.
Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong
2018-01-01
The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks of computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the malignancy risk level among the nodules detected in lung cancer screening. However, the existing CADx schemes tend to detect and analyze characteristics of pulmonary nodules from a statistical perspective according to local features only. Inspired by the currently prevailing learning ability of convolutional neural networks (CNN), which simulate human neural networks for target recognition, and by our previous research on texture features, we present a hybrid model that takes both global and local features into consideration for pulmonary nodule differentiation, using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which were newly proposed by us, we observed that the multi-channel CNN model yielded the best capacity for differentiating the malignancy risk of the nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme using the 3D texture feature analysis method, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
Tiled architecture of a CNN-mostly IP system
NASA Astrophysics Data System (ADS)
Spaanenburg, Lambert; Malki, Suleyman
2009-05-01
Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain, the problems in scheduling multi-cores have already existed in tiled architectures such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such an architecture, as historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack together with a 'rotating wheel' internal communication mechanism has been proposed. Though the basic CNN allows for a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system will have to disturb this regularity through the additional need for arithmetic and logical operations. Arithmetic operations are needed, for instance, to accommodate low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still handles such operations without the need for global control. Overall, the CNN system provides for a practical network size as implemented on an FPGA, can easily be used as embedded IP, and provides a clear benchmark for a multi-core compiler.
NASA Astrophysics Data System (ADS)
Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo
2017-03-01
Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer-aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance for various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation by averaging multiple slice images of lung nodule candidates. Moreover, to emphasize central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to the baseline 2D CNN with patches from a single slice image.
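The WAIP step above collapses a stack of slices into one 2D patch with Gaussian weights centered on the middle slice. A minimal sketch under that reading (the `sigma` value and the random volume are assumptions, not the paper's settings):

```python
import numpy as np

def gaussian_waip(volume, sigma=1.0):
    """Weighted average image patch (WAIP): collapse a (slices, H, W)
    stack into one 2D (H, W) patch, weighting each slice by a Gaussian
    of its distance from the central slice, then normalizing the
    weights to sum to 1."""
    n = volume.shape[0]
    center = (n - 1) / 2.0
    z = np.arange(n)
    w = np.exp(-0.5 * ((z - center) / sigma) ** 2)
    w /= w.sum()
    # Contract the slice axis of `volume` against the weight vector.
    return np.tensordot(w, volume, axes=(0, 0))

np.random.seed(0)  # placeholder data in lieu of real CT slices
vol = np.random.rand(7, 32, 32)
waip = gaussian_waip(vol)
print(waip.shape)  # (32, 32)
```

Because the weights sum to 1, the WAIP stays within the intensity range of the input slices, so it can be fed to a 2D CNN like any single-slice patch.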
Cephalometric landmark detection in dental x-ray images using convolutional neural networks
NASA Astrophysics Data System (ADS)
Lee, Hansang; Park, Minseok; Kim, Junmo
2017-03-01
In dental X-ray images, accurate detection of cephalometric landmarks plays an important role in clinical diagnosis, treatment, and surgical decisions for dental problems. In this work, we propose an end-to-end deep learning system for cephalometric landmark detection in dental X-ray images using convolutional neural networks (CNN). For detecting 19 cephalometric landmarks, we develop a detection system using CNN-based coordinate-wise regression. By viewing the x- and y-coordinates of all landmarks as 38 independent variables, multiple CNN-based regression systems are constructed to predict the coordinate variables from input X-ray images. First, each coordinate variable is normalized by either the height or the width of the image. For each normalized coordinate variable, a CNN-based regression system is trained on the training images and the corresponding coordinate variable to be regressed. We train 38 regression systems, one per coordinate variable, all with the same CNN structure. Finally, we compute the 38 coordinate variables with these trained systems from unseen images and extract the 19 landmarks by pairing the regressed coordinates. In experiments, the public database from the Grand Challenges in Dental X-ray Image Analysis in ISBI 2015 was used, and the proposed system showed promising performance, successfully locating the cephalometric landmarks within considerable margins from the ground truths.
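The normalization and pairing steps above can be sketched as follows: x-coordinates are scaled by image width and y-coordinates by image height to form 38 regression targets, and the regressed values are paired back into 19 pixel-space landmarks (function names and the sample landmark are illustrative assumptions):

```python
def normalize_landmarks(landmarks, width, height):
    """Flatten (x, y) landmarks into interleaved regression targets
    in [0, 1]: x normalized by image width, y by image height."""
    targets = []
    for x, y in landmarks:
        targets.extend([x / width, y / height])
    return targets

def denormalize_landmarks(targets, width, height):
    """Pair consecutive regressed values back into pixel coordinates."""
    return [(targets[i] * width, targets[i + 1] * height)
            for i in range(0, len(targets), 2)]

t = normalize_landmarks([(120.0, 80.0)], 1600, 800)
back = denormalize_landmarks(t, 1600, 800)
print(t, back)
```

With all 19 landmarks this yields the 38 independent targets the abstract describes, one per trained regression CNN.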
CNN Newsroom Classroom Guides, November 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of November 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: economic stimulus and U.S. steps up the bombing campaign in…
CNN Newsroom Classroom Guides, September 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of September 2001 provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: shark attacks ignite controversy in some Florida communities,…
CNN Newsroom Classroom Guides, August 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of August 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: the GOP opens its 37th national convention in Philadelphia, outraged…
CNN Newsroom Classroom Guides. June 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free to participating schools. These daily classroom guides present top stories, headlines, environmental news, and other current events, along with suggested class discussion topics and activities to accompany the broadcasts for one…
CNN Newsroom Classroom Guides, March 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of March 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Seattle earthquake and U.S. economy working class communities fear a…
CNN Newsroom Classroom Guides. March 1-31, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of March, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: negative campaign ads, the end of the Sarajevo siege, alternative medicine in…
CNN Newsroom Classroom Guides, April 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of April 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: failure of settlement talks between Microsoft and the U.S. government,…
CNN Newsroom Classroom Guides. May 2-31, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of May provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) the Palestinian Liberation Organization (PLO) and Palestine, Hawaiian…
CNN Newsroom Guides: April 3-28, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily Cable News Network (CNN) Newsroom broadcasts for the month of April provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guide include: (1) reckless driving, hearing impairment, ancient to modern cities,…
CNN Newsroom Classroom Guides, June 2001.
ERIC Educational Resources Information Center
Turner Learning, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of June 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Indonesian President Wahid faces impeachment (June 1); suicide bombing…
CNN Newsroom Classroom Guides. January 1-31, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of January, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: U.S. House of Representatives prepares for ethics battle, diplomatic immunity,…
CNN Newsroom Classroom Guides, April 2001.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of April 2001 provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: former Yugoslav President Slobodan Milosevic is arrested, a Chinese…
CNN Newsroom Guides. March 1-31, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily Cable News Network (CNN) Newsroom broadcasts for the month of March provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guide include: (1) investment terminology, Republican presidential nominations, the shuttle…
CNN Newsroom Classroom Guides. March 1-31, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of March, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: monkeys cloned in Oregon, Iran suffers massive earthquake, tornados affect…
Sun, Yunqiang; Li, Xiaoyan; Sun, Hongjian
2014-07-07
Three novel [CNN]-pincer nickel(II) complexes with NHC-amine arms were synthesized in three steps. The complex was proven to be an efficient catalyst for the Kumada coupling of aryl chlorides or aryl dichlorides under mild conditions.
CNN Newsroom Classroom Guides. September 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for September 1-30, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: Venezuela constitutional crisis, Panama's first female…
CNN Newsroom Classroom Guides, November 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of November 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: independent U.S. oil companies struggle to survive, U.S.…
Perceptions and Use of News Media by College Students.
ERIC Educational Resources Information Center
Henke, Lucy L.
1985-01-01
This study investigated college students' use of and attitudes toward traditional and nontraditional news media, and the role of cable news network (CNN) and its integration into evolving news consumption patterns. Results indicate later college years are associated with heavier consumption. CNN viewers are heavier users of traditional media.…
CNN Newsroom Classroom Guides, August 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of August 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: special series on the teenage brain, and MTV celebrates its 20th…
CNN Newsroom Classroom Guides. December 1-31, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of December, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: Japan hosts the Climate Change Conference, space shuttle is unable to deploy…
CNN Newsroom Classroom Guides. June, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of June, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: France gets a new government and Prime Minister as the Socialist Party defeats the…
CNN Newsroom Classroom Guides. August 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for August 2-31, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: the drought and heatwave in the northeastern United…
CNN Newsroom Classroom Guides, April 2002.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of April 2002, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Israeli soldiers attack Yasser Arafat's headquarters in Ramallah,…
CNN Newsroom Classroom Guides, January 2001.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of January 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: George W. Bush nominates the last three vacant Cabinet posts,…
CNN Newsroom Classroom Guides, May 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of May 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: President Bush will announce his plans for a missile defense system,…
CNN Newsroom Classroom Guides, November 1-30, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of November, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: the Bosnia peace talks, hot-air balloons, salt…
CNN Newsroom Classroom Guides. October, 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of October, 1998, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: scientists find trace fossil evidence of billion-year old worms, the…
CNN Newsroom Classroom Guides, June 2002.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of June 2002, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Major topics covered include: the Kashmir conflict; the Pakistan and the Kazahkstan Summit;…
CNN Newsroom Classroom Guides, May 1-31, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for the month of May, provide program rundowns, suggestions for class activities and discussion, student handouts, and lists of related news terms. Topics covered include: United States-Israel anti-terrorism accord, the comeback of baseball…
CNN Newsroom Classroom Guides. September 1998.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of September, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: the reaction of world markets to Russia's Duma rejection of Viktor…
CNN Newsroom Classroom Guides. August 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These Classroom Guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of August, provide program rundowns, suggestions for class activities and discussion, links to pertinent World Wide Web sites, and lists of related news terms. Topics include: meetings over weapons inspections in Iraq could either…
CNN Newsroom Classroom Guides. August 1-31, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of August provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) Haiti, exploration of Mars, Rwandan refugees, Goodwill Games, Paris…
CNN Newsroom Classroom Guides, October 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of October 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Chinese authorities detain Falun Gong protesters on Tiananmen Square…
CNN Newsroom Classroom Guides, January 2002.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of January 2002, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: tensions escalate between Pakistan and India, (January 3-4); the…
CNN Newsroom Classroom Guides, October 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of October 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Stories include: Taliban update/tribal troubles, U.S. officials report progress in the…
CNN Newsroom Classroom Guides. November, 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for the month of November, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: Iraq refuses to cooperate with United Nations weapons inspectors, expansion of…
CNN Newsroom Classroom Guides. March 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free to participating schools. These daily classroom guides present top stories, headlines, environmental news, and other current events, along with suggested class discussion topics and activities to accompany the broadcasts for one…
CNN Newsroom Classroom Guides. February 1-28, 1998.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of February, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: United States lobbies for support for possible air strike against Iraq,…
CNN Newsroom Classroom Guides. March 14-31, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of March provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) Bophuthatswana, Best Quest, language immersion, Bosnia diaries, Nepal,…
CNN Newsroom Classroom Guides, May 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of May 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: U.S. Government files a proposal to split up Microsoft, terrorism source…
CNN Newsroom Classroom Guides, February 2001.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of February 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Libyan intelligence agent is convicted of the Lockerbie bombing, and…
CNN Newsroom Classroom Guides. May 1999.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of May, provide program rundowns, suggestions for class activities and discussion, links to related World Wide Web sites, and lists of related news terms. Top stories include: Reverend Jesse Jackson secures release of U.S. soldiers…
CNN Newsroom Classroom Guides. September 1-30, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of September provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) truce in Northern Ireland, school censorship, scientific method, burial…
CNN Newsroom Classroom Guides. February 1-29, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) broadcasts for the month of February, 1996 provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Each daily guide includes a Black History Month biographical profile. Other topics covered…
CNN Newsroom Classroom Guides. April 1-30, 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free to participating schools. These daily Classroom Guides are designed to accompany the broadcast, and contain activities for discussing top stories, headlines, and other current events topics; each guide also includes World Wide…
CNN Newsroom Classroom Guides, June 2000.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of June 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: President Clinton prepares to visit Germany, and federal court of…
CNN Newsroom Classroom Guides, February 2002.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of February 2002, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Afghanistan's interim leader is making a global impression (February…
CNN Newsroom Classroom Guides. October 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of October, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: bedroom community business, freedom of expression and…
CNN Newsroom Classroom Guides. April 1-30, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of April, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Headlines include: Arab League boycott, Zaire peace talks, Russia and Belarus sign agreement,…
CNN Newsroom Classroom Guides, December 1-31, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the first half of the month of December, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered include: President Clinton's visit to Northern Ireland,…
CNN Newsroom Classroom Guides, December 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of December 2001, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: President Bush responds to the recent acts of terrorism in Israel,…
CNN Newsroom Classroom Guides, October 1-31, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of October, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: the Middle East peace summit in Washington, DC, Israel's Netanyahu and…
CNN Newsroom Classroom Guides. November 1-30, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of November, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: presidential candidates travel the United States searching for votes, FBI…
CNN Newsroom Classroom Guides. February 1999.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of February, provide program rundowns, suggestions for class activities and discussion, links to related World Wide Web sites, and lists of related news terms. Topics include: Monica Lewinsky scheduled to be deposed for the Senate,…
CNN Newsroom Classroom Guides, December 2000.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of December 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: the United States Supreme Court hears the presidential candidates'…
CNN Newsroom Classroom Guides. December 1-31, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for December 1-20, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: eighth annual World AIDS Day, protests in Belgrade, Mother Teresa's condition…
Accurate lithography simulation model based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique in today's semiconductor manufacturing process. To compute an entire chip in realistic time, a compact resist model is commonly used, since it is built for faster calculation. An accurate compact resist model, however, requires fitting a complicated non-linear model function, and it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function automatically and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
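The contrast between a hand-tuned compact resist model and a learned one can be sketched in a few lines. This is purely illustrative (a 1-D aerial-image profile, made-up kernel and weights, not the paper's model): the conventional model applies a fixed threshold, while the CNN-style model passes the profile through a small convolution and a learned nonlinearity before measuring the critical dimension (CD).

```python
# Illustrative sketch only: hand-tuned threshold resist model vs. a tiny
# learned convolution + nonlinearity. All kernels/weights are placeholders.

def threshold_cd(profile, threshold=0.5, pixel_nm=2.0):
    """Conventional compact model: CD = width of the region above a fixed threshold."""
    return sum(1 for v in profile if v > threshold) * pixel_nm

def conv1d(profile, kernel):
    """'Same'-length 1-D convolution with zero padding."""
    k, half = len(kernel), len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j in range(k):
            idx = i + j - half
            if 0 <= idx < len(profile):
                acc += profile[idx] * kernel[j]
        out.append(acc)
    return out

def cnn_cd(profile, kernel, w, b, pixel_nm=2.0):
    """CNN-style model: a learned kernel and nonlinearity replace the
    manually chosen model function."""
    feat = [max(0.0, v) for v in conv1d(profile, kernel)]   # ReLU feature map
    resist = [1.0 if w * v + b > 0 else 0.0 for v in feat]  # learned "development"
    return sum(resist) * pixel_nm

aerial = [0.1, 0.2, 0.6, 0.9, 1.0, 0.9, 0.6, 0.2, 0.1]
print(threshold_cd(aerial))                                 # fixed-function estimate
print(cnn_cd(aerial, [0.25, 0.5, 0.25], w=1.0, b=-0.55))    # learned-function estimate
```

In the real model, the kernel, `w`, and `b` would be fitted to measured CD data rather than chosen by hand.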
Static facial expression recognition with convolution neural networks
NASA Astrophysics Data System (ADS)
Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei
2018-03-01
Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we have developed a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by the train, validation and test sets, and fine-tune on the extended Cohn-Kanade database. In order to reduce overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
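Two of the regularization techniques named in this abstract are simple enough to sketch directly (this is illustrative pure Python, not the paper's network): horizontal-flip data augmentation, and inverted dropout, which zeroes units with probability p during training and rescales the survivors so the expected activation is unchanged.

```python
# Illustrative sketch: flip augmentation and inverted dropout.
import random

def hflip(image):
    """Horizontal flip of a 2-D image given as a list of rows."""
    return [list(reversed(row)) for row in image]

def dropout(activations, p=0.5, rng=random.Random(0), train=True):
    """Inverted dropout: zero units with probability p, rescale survivors by
    1/(1-p) so the expected activation is unchanged; identity at test time."""
    if not train:
        return list(activations)
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))                        # augmented copy doubles the training data
print(dropout([1.0, 1.0, 1.0, 1.0]))     # randomly thinned, rescaled activations
```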
CNN Newsroom Classroom Guides. October 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for October 1-29, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: nuclear accident in Japan (October 1); debate over the nuclear…
CNN Newsroom Classroom Guides, July 2001.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of July 2001 provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Slobodan Milosevic prepares to go before the U.N. war crimes tribunal,…
CNN Newsroom Classroom Guides. January 1-31, 1996.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of January, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: teen obesity, the Yangtze River Dam and its hydroelectric…
CNN Newsroom Classroom Guides. June 1-30, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of June provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) Congressman Dan Rostenkowski, D-Day, cars and Singapore, Rodney King civil…
CNN Newsroom Classroom Guides. December 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for December 1-17, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: World AIDS Day, World Trade Organization protests in Seattle,…
CNN Newsroom Classroom Guides. November 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for November 1-30, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: EgyptAir Flight 990 crash, Oslo summit, India cyclone,…
National Television News in Seven Rural Districts. Report 96-2.
ERIC Educational Resources Information Center
Nasstrom, Roy; Gierok, Anne
The implementation, delivery, and impact on students of news programs delivered to schools by Channel One and CNN-Newsroom were examined in seven rural districts in Wisconsin. Investigation covered three districts using CNN and four districts using Channel One within a three-county area. Involved administrators, teachers, and students responded to…
CNN Newsroom Classroom Guides. January 2000.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for January 3-28, 2000, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: issues of the Millennium, 100 hours of the Millennium, Mideast…
CNN Newsroom Classroom Guides. July 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for July 1-30, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: Kosovo after the strikes, and saving the Everglades (July 1-2);…
CNN Newsroom Classroom Guides. April 1-29, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of April provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) peace in the Middle East, Tom Bradley, and minority superheroes (April 1);…
CNN Newsroom Classroom Guides. August, 1997.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These guides are designed to accompany CNN Newsroom, a daily 15-minute news program produced for classroom use and provided free to participating schools. Top stories include: peace talks stalled due to a suicide bombing in a Jerusalem market; inauguration of Iran's new president; UPS strike; budget agreement signed into law; news on teenage drug…
CNN Newsroom Classroom Guides. September 1-30, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of September, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: the women's conference in China, "No Man Is an…
Human Factors Evaluation of the Hidalgo Equivital EQ-02 Physiological Status Monitoring System
2013-10-11
Destruction – Civil Support Team (WMD-CST) responding (11), and ricin letters that were intercepted en route to a member of Congress and the President...positive for ricin at Washington mail facility. CNN U.S., April 17, 2013. (http://www.cnn.com/2013/04/16/us/tainted-letter-intercepted accessed
CNN Newsroom Classroom Guides. February 2000.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for February 1-29, 2000, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: significance of the New Hampshire Primary, victors in the New…
CNN Newsroom Classroom Guides, March 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of March, provide program rundowns, suggestions for class activities and discussion, Web links, and a list of related news terms. Top stories include: primary victories in the Bush campaign and preparations by Gore and Bradley for the…
CNN Newsroom Classroom Guides. June 1999.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These guides, designed to accompany the daily Cable News Network (CNN) Newsroom broadcasts for June 1-30, 1999, provide program rundowns, suggestions for class activities and discussion, links to relevant World Wide Web sites, and a list of related news terms. Top stories include: NATO bombings in Belgrade amid peace negotiations, people of South…
CNN Newsroom Classroom Guides. April 1999.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Newtown, PA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of April, provide program rundowns, suggestions for class activities and discussion, links to related World Wide Web sites, and lists of related news terms. Top stories include: NATO includes Belgrade in its targets, three U.S.…
CNN Newsroom Classroom Guides. March 1-31, 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of March, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: United Nations (UN) and Iraq interpret their recent deal in different ways,…
CNN Newsroom Classroom Guides. May 1-31, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of May provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guide include: (1) security systems and security at the Olympics, drawing to scale, civil war in…
TV 101: Good Broadcast Journalism for the Classroom?
ERIC Educational Resources Information Center
Haney, James M.
A study was conducted to assess the arguments that have been made in favor of and opposed to the electronic curricular supplements, Channel One and CNN Newsroom. Four types of information were analyzed: (1) corporate news releases and research results were reviewed; (2) Channel One and CNN Newsroom programs were studied for format, style, and…
CNN Newsroom Classroom Guides, July 2002.
ERIC Educational Resources Information Center
Turner Learning, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of July 2002, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Lead stories include: authorities arrest a man accused of starting the Rodeo fire in Arizona,…
CNN Newsroom Classroom Guides. October 1-31, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of October, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: immigrants illegally in the United States try to gain legal status before being…
CNN Newsroom Classroom Guides, March 2002.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of March 2002, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Lead stories include: the U.S. expands the War on Terrorism into the Republic of Georgia and…
CNN Newsroom Classroom Guides. July, 1997.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free of charge to participating schools. This guide is designed to accompany the program for July 1997. Top stories include the following: Britain's hand over of Hong Kong to the People's Republic of China; regulating business on the…
CNN Newsroom Classroom Guides. February 1-28, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of February, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: elections in Pakistan for a new prime minister, U.S. President Clinton unveils…
CNN Newsroom Classroom Guides, November 1-30, 1997.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of November 1997, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics include: U.S. leaders call for the use of force as Iraq refuses to permit access…
CNN Newsroom Classroom Guides, September 2000.
ERIC Educational Resources Information Center
Turner Educational Services, Inc., Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of September 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: FBI arrests a suspect in the Emulex hoax case (September 1); U.S.…
CNN Newsroom Classroom Guides. June 1-30, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of June provide program rundowns, suggestions for class activities and discussions, student handouts, and a list of related news terms. Topics covered by the guides include: (1) amusement park physics, media resources and literacy, and the war in Bosnia…
CNN Newsroom Classroom Guides, July 2000.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides, designed to accompany the daily CNN (Cable News Network) Newsroom broadcasts for the month of July 2000, provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Top stories include: Mexican voters go to polls in a landmark election (July 3); Mexico's…
CNN Newsroom Classroom Guides. July 1-31, 1995.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of July provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) British Prime Minister John Major, trade and Tijuana, sports physics, and…
CNN Newsroom Classroom Guides. July 1-29, 1994.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
These classroom guides for the daily CNN (Cable News Network) Newsroom broadcasts for the month of July provide program rundowns, suggestions for class activities and discussion, student handouts, and a list of related news terms. Topics covered by the guides include: (1) Yasser Arafat and online projects (July 1); (2) Yasser Arafat, athletes as…
NASA Astrophysics Data System (ADS)
Chen, Jingbo; Wang, Chengyi; Yue, Anzhi; Chen, Jiansheng; He, Dongxu; Zhang, Xiuyan
2017-10-01
The tremendous success of deep learning models such as convolutional neural networks (CNNs) in computer vision provides a method for similar problems in the field of remote sensing. Although research on repurposing pretrained CNN to remote sensing tasks is emerging, the scarcity of labeled samples and the complexity of remote sensing imagery still pose challenges. We developed a knowledge-guided golf course detection approach using a CNN fine-tuned on temporally augmented data. The proposed approach is a combination of knowledge-driven region proposal, data-driven detection based on CNN, and knowledge-driven postprocessing. To confront data complexity, knowledge-derived cooccurrence, composition, and area-based rules are applied sequentially to propose candidate golf regions. To confront sample scarcity, we employed data augmentation in the temporal domain, which extracts samples from multitemporal images. The augmented samples were then used to fine-tune a pretrained CNN for golf detection. Finally, commission error was further suppressed by postprocessing. Experiments conducted on GF-1 imagery prove the effectiveness of the proposed approach.
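The knowledge-driven region-proposal step described above can be sketched as a rule filter that runs before any CNN (illustrative only: the attribute names and thresholds below are made-up placeholders, not the paper's rules or values):

```python
# Illustrative sketch of knowledge-driven region proposal: candidate regions
# are filtered by area, composition, and co-occurrence rules before the CNN.
# All thresholds and attribute names are hypothetical placeholders.

def propose_candidates(regions, min_area_ha=20.0, max_area_ha=120.0,
                       min_grass_frac=0.5, water_required=True):
    """Keep regions whose area and land-cover composition are plausible for a
    golf course; each region is a dict of precomputed attributes."""
    keep = []
    for r in regions:
        if not (min_area_ha <= r["area_ha"] <= max_area_ha):
            continue                      # area-based rule
        if r["grass_frac"] < min_grass_frac:
            continue                      # composition rule
        if water_required and not r["has_water"]:
            continue                      # co-occurrence rule (water hazards)
        keep.append(r["id"])
    return keep

regions = [
    {"id": "A", "area_ha": 60, "grass_frac": 0.8, "has_water": True},
    {"id": "B", "area_ha": 5,  "grass_frac": 0.9, "has_water": True},   # too small
    {"id": "C", "area_ha": 70, "grass_frac": 0.2, "has_water": True},   # too little grass
]
print(propose_candidates(regions))  # only region "A" survives the rules
```

Only the surviving candidates would then be classified by the fine-tuned CNN, which is what keeps the detection tractable over large scenes.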
Low-Grade Glioma Segmentation Based on CNN with Fully Connected CRF
Li, Zeju; Wang, Yuanyuan; Yu, Jinhua; Shi, Zhifeng; Guo, Yi; Chen, Liang; Mao, Ying
2017-01-01
This work proposed a novel automatic three-dimensional (3D) magnetic resonance imaging (MRI) segmentation method that could be widely used in the clinical diagnosis of the most common and aggressive brain tumor, namely glioma. The method combined a multi-pathway convolutional neural network (CNN) and a fully connected conditional random field (CRF). Firstly, 3D information was introduced into the CNN, which enables more accurate recognition of gliomas with low contrast. Then, the fully connected CRF was added as a post-processing step to produce a more delicate delineation of the glioma boundary. The method was applied to T2-FLAIR MRI images of 160 low-grade glioma patients. With 59 cases of training data and manual segmentation as the ground truth, the Dice similarity coefficient (DSC) of our method was 0.85 for the test set of 101 MRI images. The results of our method were better than those of another state-of-the-art CNN method, which achieved a DSC of 0.76 for the same dataset. This proved that our method could produce better results for the segmentation of low-grade gliomas. PMID:29065666
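The DSC figures quoted in the abstract come from the standard Dice formula, DSC = 2|A∩B| / (|A| + |B|), computed between the predicted and ground-truth masks. A minimal sketch for binary masks stored as flat 0/1 lists:

```python
# Dice similarity coefficient between two equal-length binary masks.
def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))  # |A ∩ B|
    size = sum(pred) + sum(truth)                    # |A| + |B|
    return 2.0 * inter / size if size else 1.0       # empty masks agree trivially

truth = [0, 1, 1, 1, 0, 0]
pred  = [0, 1, 1, 0, 1, 0]
print(dice(pred, truth))   # 2*2 / (3+3)
print(dice(truth, truth))  # perfect overlap gives 1.0
```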
NASA Astrophysics Data System (ADS)
Guo, Dongwei; Wang, Zhe
2018-05-01
Convolutional neural networks (CNN) achieve great success in computer vision: they can learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easy to fool: it is possible to produce images totally unrecognizable to human eyes that CNNs believe with near certainty are familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by CNN, t-SNE and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted from the CNN and the digital features extracted from t-SNE are associated by the EBAM, so recognition robustness is ensured by a comprehensive assessment of the two features. With our model, we obtain only an 8% error rate on fraudulent data. In systems that require a high safety factor, or in certain key areas, strong robustness is extremely important: if the robustness of image recognition can be ensured, network security will be greatly improved and social production efficiency greatly enhanced.
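The core associative-memory mechanism can be sketched with a plain Hebbian bidirectional associative memory (a simpler relative of the exponential EBAM variant used in the paper, not the paper's model): pairs of bipolar feature vectors are stored in one weight matrix, and presenting either member of a pair recalls the other.

```python
# Toy Hebbian BAM sketch (not the exponential EBAM): store (x, y) pairs in
# W[i][j] = sum over pairs of x[i]*y[j]; recall y_j = sign(sum_i x_i W_ij).

def sign(v):
    return 1 if v >= 0 else -1

def train(pairs, n, m):
    """Hebbian outer-product weight matrix for bipolar (+1/-1) vectors."""
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def recall_forward(W, x):
    """x -> y direction of the bidirectional recall."""
    return [sign(sum(x[i] * W[i][j] for i in range(len(x))))
            for j in range(len(W[0]))]

# hypothetical "geometric" (x) and "digital" (y) feature vectors
x1, y1 = [1, -1, 1, -1], [1, 1, -1]
x2, y2 = [-1, -1, 1, 1], [-1, 1, 1]
W = train([(x1, y1), (x2, y2)], 4, 3)
print(recall_forward(W, x1))  # recovers y1 = [1, 1, -1]
print(recall_forward(W, x2))  # recovers y2 = [-1, 1, 1]
```

Associating two independent feature views this way is what lets the model reject a fraudulent input that fools only one of them.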
NASA Astrophysics Data System (ADS)
Yan, Yue
2018-03-01
A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on a convolutional neural network (CNN) trained with augmented training samples is proposed. To enhance the robustness of the CNN to various extended operating conditions (EOCs), the original training images are used to generate noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. The generated images, together with the original ones, are then used to train a designed CNN for target recognition. The augmented training samples correspondingly improve the robustness of the trained CNN to the covered EOCs, i.e., noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
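The noise-augmentation step can be sketched as follows (illustrative assumptions, not taken from the paper: additive Gaussian noise, SNR defined as signal power over noise power in dB, images flattened to pixel lists):

```python
# Sketch: corrupt a training image with Gaussian noise scaled to a target SNR.
import math, random

def noisy_copy(pixels, snr_db, rng=random.Random(0)):
    """Return pixels plus zero-mean Gaussian noise whose power is
    signal_power / 10**(snr_db/10), i.e. the requested SNR in dB."""
    signal_power = sum(p * p for p in pixels) / len(pixels)
    sigma = math.sqrt(signal_power / 10 ** (snr_db / 10.0))
    return [p + rng.gauss(0.0, sigma) for p in pixels]

clean = [0.2, 0.8, 1.0, 0.5, 0.1, 0.9]
# one corrupted copy of the image per target SNR enlarges the training set
augmented = [noisy_copy(clean, snr) for snr in (10, 5, 0)]
```

Lower `snr_db` gives heavier corruption; the multiresolution and occlusion augmentations in the abstract would be generated by analogous per-image transforms.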
Multi-focus image fusion with the all convolutional neural network
NASA Astrophysics Data System (ADS)
Du, Chao-ben; Gao, She-sheng
2018-01-01
A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion problems, especially multi-focus image fusion. Obtaining a good decision map is necessary for a satisfactory fusion result, but usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to produce a state-of-the-art decision map. The main idea is that the max-pooling of the CNN is replaced by a convolution layer, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
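Once a decision map exists, the spatial-domain fusion step itself is a per-pixel selection, which a minimal sketch makes concrete (flat pixel lists and a binary map for illustration; in the paper the map comes from the ACNN):

```python
# Per-pixel multi-focus fusion driven by a binary decision map.
def fuse(img_a, img_b, decision):
    """decision[i] == 1 -> take pixel i from img_a, else from img_b."""
    return [a if d == 1 else b for a, b, d in zip(img_a, img_b, decision)]

near_focus = [9, 8, 1, 2]   # sharp on the left, blurred on the right
far_focus  = [3, 2, 7, 6]   # blurred on the left, sharp on the right
decision   = [1, 1, 0, 0]   # which source is in focus at each pixel
print(fuse(near_focus, far_focus, decision))  # [9, 8, 7, 6]
```

This is why the quality of the decision map dominates the quality of the fused image: the fusion rule itself contributes nothing beyond the selection.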
Niioka, Hirohiko; Asatani, Satoshi; Yoshimura, Aina; Ohigashi, Hironori; Tagawa, Seiichi; Miyake, Jun
2018-01-01
In the field of regenerative medicine, tremendous numbers of cells are necessary for tissue/organ regeneration. Automatic cell-culturing systems have now been developed; the next step is constructing a non-invasive method to monitor the condition of the cells automatically. As an image analysis method, the convolutional neural network (CNN), one of the deep learning methods, is approaching human recognition levels. We constructed and applied a CNN algorithm for automatic recognition of cellular differentiation in the myogenic C2C12 cell line. Phase-contrast images of cultured C2C12 were prepared as the input dataset. In the differentiation process from myoblasts to myotubes, cellular morphology changes from a round shape to an elongated tubular shape due to fusion of the cells. The CNN abstracts the features of the cell shapes and classifies the cells by the number of culturing days since differentiation was induced. Changes in cellular shape depending on the number of days of culture (Day 0, Day 3, Day 6) are classified with 91.3% accuracy. Image analysis with CNN has the potential to help realize the regenerative medicine industry.
Cellular Neural Network for Real Time Image Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vagliasindi, G.; Arena, P.; Fortuna, L.
2008-03-12
Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among which is nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).
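The CNN in this record is Chua's cellular (not convolutional) network: each cell holds a state that evolves under local coupling templates, and all cells update in parallel, which is what gives the hardware its speed. A minimal 1-D sketch of the dynamics, dx_i/dt = -x_i + Σ_k A[k]·y_{i+k} + Σ_k B[k]·u_{i+k} + z with the saturated output y = 0.5(|x+1| − |x−1|); the templates below are illustrative, not a specific published template:

```python
# 1-D cellular (nonlinear) network dynamics, forward-Euler integration.
def out(x):
    return 0.5 * (abs(x + 1) - abs(x - 1))   # piecewise-linear saturation

def step(x, u, A, B, z, dt=0.05):
    """One Euler step; every cell updates in parallel (replicated edge cells)."""
    n, h = len(x), len(A) // 2
    y = [out(v) for v in x]
    def nb(vec, i, T):                       # template applied to a neighborhood
        return sum(T[k + h] * vec[min(max(i + k, 0), n - 1)]
                   for k in range(-h, h + 1))
    return [x[i] + dt * (-x[i] + nb(y, i, A) + nb(u, i, B) + z)
            for i in range(n)]

u = [0.0, 1.0, 1.0, 0.0]        # input "image" (one row of pixels)
x = list(u)                      # initial state = input
for _ in range(200):             # integrate toward steady state
    x = step(x, u, A=[0.0, 2.0, 0.0], B=[0.0, 1.0, 0.0], z=-1.0)
print([round(out(v)) for v in x])  # thresholded output: [-1, 1, 1, -1]
```

With these (placeholder) templates the network settles into a binary thresholding of the input, which hints at how template choice programs the image operation.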
Catalytic transfer hydrogenation with terdentate CNN ruthenium complexes: the influence of the base.
Baratta, Walter; Siega, Katia; Rigo, Pierluigi
2007-01-01
The catalytic activity of the terdentate complex [RuCl(CNN)(dppb)] (A) [dppb = Ph₂P(CH₂)₄PPh₂; HCNN = 6-(4′-methylphenyl)-2-pyridylmethylamine] in the transfer hydrogenation of acetophenone (S) with 2-propanol has been found to depend on the base concentration. The limiting rate is observed when NaOiPr is used in high excess (A/base molar ratio > 10). The amino isopropoxide species [Ru(OiPr)(CNN)(dppb)] (B), which forms by reaction of A with sodium isopropoxide via displacement of the chloride, is catalytically active. The rate of conversion of acetophenone obeys second-order kinetics, v = k[S][B], with rate constants in the range 218 ± 8 M⁻¹ s⁻¹ (40 °C) to 3000 ± 70 M⁻¹ s⁻¹ (80 °C). The activation parameters, evaluated from the Eyring equation, are ΔH‡ = 14.0 ± 0.2 kcal mol⁻¹ and ΔS‡ = −3.2 ± 0.5 eu. In a pre-equilibrium reaction with 2-propanol, complex B gives the cationic species [Ru(CNN)(dppb)(HOiPr)]⁺[OiPr]⁻ (C) with K ≈ 2×10⁻⁵ M. The hydride species [RuH(CNN)(dppb)] (H), which forms from B via a β-hydrogen elimination process, catalyzes the reduction of S and, importantly, its activity increases on addition of base. The catalytic behavior of the hydride H has been compared with that of the system A/NaOiPr (1:1 molar ratio) and indicates that the two systems are equivalent.
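For reference, the Eyring analysis used to extract the activation parameters above relates the rate constant to ΔH‡ and ΔS‡ through the standard transition-state expression:

```latex
k \;=\; \frac{k_{\mathrm{B}} T}{h}\,
        \exp\!\left(\frac{\Delta S^{\ddagger}}{R}\right)
        \exp\!\left(-\frac{\Delta H^{\ddagger}}{R T}\right)
```

so that a plot of $\ln(k/T)$ against $1/T$ is linear, with slope $-\Delta H^{\ddagger}/R$ and intercept $\ln(k_{\mathrm{B}}/h) + \Delta S^{\ddagger}/R$.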
Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho
2018-04-01
The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
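The diagnostic accuracy, sensitivity, specificity, PPV, NPV, and 95% CIs reported above are standard functions of a 2×2 confusion matrix. A minimal pure-Python sketch, using a normal-approximation CI and hypothetical counts (not the study's data):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    total = tp + fp + tn + fn
    acc = (tp + tn) / total
    metrics = {
        "accuracy": acc,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
    # Normal-approximation 95% CI for accuracy (z = 1.96)
    se = math.sqrt(acc * (1 - acc) / total)
    metrics["accuracy_ci95"] = (acc - 1.96 * se, acc + 1.96 * se)
    return metrics

# Hypothetical counts for illustration only
m = binary_metrics(tp=140, fp=30, tn=142, fn=36)
```

The study's reported CIs may come from a different interval method (e.g., Wilson or exact), so this is an illustrative stand-in rather than a reproduction of their computation.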
Deletion of calponin 2 in macrophages attenuates the severity of inflammatory arthritis in mice.
Huang, Qi-Quan; Hossain, M Moazzem; Sun, Wen; Xing, Lianping; Pope, Richard M; Jin, J-P
2016-10-01
Calponin is an actin cytoskeleton-associated protein that regulates motility-based cellular functions. Three isoforms of calponin are present in vertebrates, among which calponin 2, encoded by the Cnn2 gene, is expressed in multiple cell types, including blood cells of the myeloid lineage. Our previous studies demonstrated that macrophages from Cnn2 knockout (KO) mice exhibit increased migration and phagocytosis. Intrigued by an observation that monocytes and macrophages from patients with rheumatoid arthritis had increased calponin 2, we investigated anti-glucose-6-phosphate isomerase serum-induced arthritis in Cnn2-KO mice for the effect of calponin 2 deletion on the pathogenesis and pathology of inflammatory arthritis. The results showed that the development of arthritis was attenuated in systemic Cnn2-KO mice, with significantly less inflammation and bone erosion than in age- and strain background-matched C57BL/6 wild-type mice. In vitro differentiation of calponin 2-null mouse bone marrow cells produced fewer osteoclasts with decreased bone resorption. The attenuation of inflammatory arthritis was confirmed in conditional myeloid cell-specific Cnn2-KO mice. The increased phagocytic activity of calponin 2-null macrophages may facilitate the clearance of autoimmune complexes and the resolution of inflammation, whereas the decreased substrate adhesion may reduce osteoclastogenesis and bone resorption. The data suggest that calponin 2 regulation of cytoskeleton function plays a novel role in the pathogenesis of inflammatory arthritis, implicating a potential therapeutic target. Copyright © 2016 the American Physiological Society.
Kim, Hyun-Jung; Kim, Jin-Hee; Song, Yeo-Ju; Seo, Young-Kwon; Park, Jung-Keug; Kim, Chan-Wha
2015-09-01
In this study, we used proteomics to investigate the effects of sonic vibration (SV) on mesenchymal stem cells derived from human umbilical cords (hUC-MSCs) during neural differentiation, to understand how SV enhances neural differentiation of hUC-MSCs. We investigated the levels of genes and proteins related to neural differentiation after 3 or 5 days in a group treated with 40-Hz SV. In addition, protein expression patterns were compared between the control and 40-Hz SV-treated hUC-MSC groups via a proteomic approach. Among these proteins, calponin 3 (CNN3) was confirmed by Western blotting to have 299% higher expression in the 40-Hz SV-stimulated hUC-MSC group than in the control. Notably, overexpression of CNN3-GFP in Chinese hamster ovary (CHO)-K1 cells had positive effects on the stability and reorganization of F-actin compared with GFP-transfected cells. Moreover, CNN3 changed the morphology of the cells, producing a neurite-like form. After being subjected to SV, messenger RNA (mRNA) levels of glutamate receptors such as PSD95, GluR1, and NR1, as well as intracellular calcium levels, were upregulated. These results suggest that the activity of glutamate receptors increased because of CNN3 characteristics. Taken together, these results demonstrate that CNN3 overexpressed during SV increases expression of glutamate receptors and promotes functional neural differentiation of hUC-MSCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yinan; Shi Handuo; Xiong Zhaoxi
We present a unified universal quantum cloning machine, which combines several different existing universal cloning machines, including the asymmetric case. In this unified framework, the identical pure states are projected equally into each copy, initially constituted by the input and one half of the maximally entangled states. We show explicitly that the output states of those universal cloning machines are the same. One important feature of this unified cloning machine is that the cloning process is always the symmetric projection, which dramatically reduces the difficulty of implementation. It is also found that this unified cloning machine can be directly modified to the general asymmetric case. Besides the global fidelity and the single-copy fidelity, we also present all possible arbitrary-copy fidelities.
González, Rodrigo M; Ricardi, Martiniano M; Iusem, Norberto D
2011-05-20
Eukaryotic DNA methylation is one of the most studied epigenetic processes, as it results in a direct and heritable covalent modification triggered by external stimuli. In contrast to mammals, plant DNA methylation, which is stimulated by external cues exemplified by various abiotic types of stress, is often found not only at CG sites but also at CNG (N denoting A, C or T) and CNN (asymmetric) sites. A genome-wide analysis of DNA methylation in Arabidopsis has shown that CNN methylation is preferentially concentrated in transposon genes and non-coding repetitive elements. We are particularly interested in investigating the epigenetics of plant species with larger and more complex genomes than Arabidopsis, particularly with regard to the associated alterations elicited by abiotic stress. We describe the existence of CNN-methylated epialleles that span Asr1, a non-transposon, protein-coding gene from tomato plants that lacks an orthologous counterpart in Arabidopsis. In addition, to test the hypothesis of a link between epigenetic modifications and the adaptation of crop plants to abiotic stress, we exhaustively explored the cytosine methylation status in leaf Asr1 DNA, a model gene in our system, resulting from water-deficit stress conditions imposed on tomato plants. We found that drought conditions brought about removal of methyl marks at approximately 75 of the 110 asymmetric (CNN) sites analysed, concomitantly with a decrease of the repressive H3K27me3 epigenetic mark and a large induction of expression at the RNA level. When pinpointing those sites, we observed that demethylation occurred mostly in the intronic region. These results demonstrate a novel genomic distribution of CNN methylation, namely in the transcribed region of a protein-coding, non-repetitive gene, and the changes in those epigenetic marks that are caused by water stress.
These findings may represent a general mechanism for the acquisition of new epialleles in somatic cells, which are pivotal for regulating gene expression in plants.
Umarov, Ramzan Kh; Solovyev, Victor V
2017-01-01
Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build their predictive models. We trained a similar CNN architecture on promoters of five distant organisms: human, mouse, plant (Arabidopsis), and two bacteria (Escherichia coli and Bacillus subtilis). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoter and non-promoter sequences (Sn = 0.90, Sp = 0.96, CC = 0.84). The Bacillus subtilis promoter identification CNN model achieves Sn = 0.91, Sp = 0.95, and CC = 0.86. For human, mouse and Arabidopsis promoters we employed CNNs for identification of two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, Sn/Sp/CC accuracy of prediction reached 0.95/0.98/0.90 for TATA and 0.90/0.98/0.89 for non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models, implemented in the CNNProm program, demonstrated the ability of the deep learning approach to grasp complex promoter sequence characteristics and achieve significantly higher accuracy compared with previously developed promoter prediction programs. We also propose a random substitution procedure to discover positionally conserved promoter functional elements. As the suggested approach does not require knowledge of any specific promoter features, it can be easily extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes.
The CNNProm program is available to run at web server http://www.softberry.com.
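The Sn/Sp/CC figures quoted above are all derived from a 2×2 confusion matrix; CC here is the Matthews correlation coefficient. A minimal pure-Python sketch with hypothetical counts (not the paper's data):

```python
import math

def sn_sp_cc(tp, fp, tn, fn):
    """Sensitivity, specificity, and Matthews correlation coefficient."""
    sn = tp / (tp + fn)                       # sensitivity (recall)
    sp = tn / (tn + fp)                       # specificity
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    cc = (tp * tn - fp * fn) / denom          # MCC, ranges over [-1, 1]
    return sn, sp, cc

# Hypothetical counts, for illustration only
sn, sp, cc = sn_sp_cc(tp=90, fp=4, tn=96, fn=10)
```

Unlike raw accuracy, the MCC stays informative when promoter and non-promoter sets are imbalanced, which is presumably why it is reported alongside Sn and Sp.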
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class, and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
Bladder cancer treatment response assessment using deep learning in CT with transfer learning
NASA Astrophysics Data System (ADS)
Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Samala, Ravi K.; Cohan, Richard H.; Caoili, Elaine M.; Paramagul, Chintana; Alva, Ajjai; Weizer, Alon Z.
2017-03-01
We are developing a CAD system for bladder cancer treatment response assessment in CT. We compared the performance of the deep-learning convolutional neural network (DL-CNN) using different network sizes, and with and without transfer learning using natural scene images or regions of interest (ROIs) inside and outside the bladder. The DL-CNN was trained to identify responders (T0 disease) and non-responders to chemotherapy. ROIs were extracted from segmented lesions in pre- and post-treatment scans of a patient and paired to generate hybrid pre-post-treatment paired ROIs. The 87 lesions from 82 patients generated 104 temporal lesion pairs and 6,700 pre-post-treatment paired ROIs. Two-fold cross-validation and receiver operating characteristic analysis were performed, and the area under the curve (AUC) was calculated for the DL-CNN estimates. The AUCs for prediction of T0 disease after treatment were 0.77 ± 0.08 and 0.75 ± 0.08, respectively, for the two partitions using the DL-CNN without transfer learning and a small network, and 0.74 ± 0.07 and 0.74 ± 0.08 with a large network. The AUCs were 0.73 ± 0.08 and 0.62 ± 0.08 with transfer learning using a small network pre-trained with bladder ROIs, and 0.77 ± 0.08 and 0.73 ± 0.07 using the large network pre-trained with the same bladder ROIs. With transfer learning using the large network pre-trained on the Canadian Institute for Advanced Research (CIFAR-10) dataset, the AUCs were 0.72 ± 0.06 and 0.64 ± 0.09, respectively, for the two partitions. None of the differences between the methods reached statistical significance. Our study demonstrated the feasibility of using a DL-CNN for the estimation of treatment response in CT. Transfer learning did not improve the treatment response estimation. The DL-CNN performed better when transfer learning with bladder images was used instead of natural scene images.
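The AUC values compared above have a rank-based interpretation: the AUC equals the probability that a randomly chosen responder ROI receives a higher score than a randomly chosen non-responder ROI. A minimal pure-Python sketch of this Mann-Whitney formulation, with hypothetical classifier scores:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg); ties count as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical DL-CNN output scores, for illustration only
pos = [0.9, 0.8, 0.7, 0.6]   # responders (T0 disease)
neg = [0.5, 0.65, 0.3, 0.2]  # non-responders
auc = auc_mann_whitney(pos, neg)  # 0.9375: one of 16 pairs is misordered
```

Production code would use an O(n log n) rank-sum implementation (as in scikit-learn's `roc_auc_score`); the quadratic loop above is just the definition made executable.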
Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset.
Ha, Richard; Chang, Peter; Karcich, Jenika; Mutasa, Simukayi; Fardanesh, Reza; Wynn, Ralph T; Liu, Michael Z; Jambawalikar, Sachin
2018-04-25
The aim of this study is to evaluate the role of a convolutional neural network (CNN) in predicting axillary lymph node metastasis, using a breast MRI dataset. An institutional review board (IRB)-approved retrospective review of our database from 1/2013 to 6/2016 identified 275 axillary lymph nodes for this study. Biopsy-proven 133 metastatic axillary lymph nodes and 142 negative control lymph nodes were identified based on benign biopsies (100) and from healthy MRI screening patients (42) with at least 3 years of negative follow-up. For each breast MRI, the axillary lymph node was identified on the first post-contrast dynamic T1 images and underwent 3D segmentation using the open-source software platform 3D Slicer. A 32 × 32 patch was then extracted from the center slice of the segmented tumor data. A CNN was designed for lymph node prediction based on each of these cropped images. The CNN consisted of seven convolutional layers and max-pooling layers, with 50% dropout applied in the linear layer. In addition, data augmentation and L2 regularization were performed to limit overfitting. Training was implemented using the Adam optimizer, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. Code for this study was written in Python using the TensorFlow module (1.0.0). Experiments and CNN training were done on a Linux workstation with an NVIDIA GTX 1070 Pascal GPU. Two-class axillary lymph node metastasis prediction models were evaluated. For each lymph node, a final softmax score threshold of 0.5 was used for classification. Based on this, the CNN achieved a mean five-fold cross-validation accuracy of 84.3%. It is feasible for current deep CNN architectures to be trained to predict the likelihood of axillary lymph node metastasis. A larger dataset will likely improve our prediction model and can potentially offer a non-invasive alternative to core needle biopsy and even sentinel lymph node evaluation.
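The final decision step described above - thresholding the softmax score of the positive class at 0.5 - can be sketched in a few lines of pure Python (the logits below are hypothetical, not the study's outputs):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)                       # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits, threshold=0.5):
    """Two-class decision: positive iff softmax score of class 1 >= threshold."""
    return softmax(logits)[1] >= threshold

# Hypothetical logits for one lymph-node patch: [negative, positive]
decision = classify([0.2, 1.4])  # softmax([0.2, 1.4])[1] is about 0.77
```

For a two-class softmax, a 0.5 threshold is equivalent to simply taking the argmax; a different threshold would trade sensitivity against specificity along the ROC curve.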
Automated Detection of Fronts using a Deep Learning Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Biard, J. C.; Kunkel, K.; Racah, E.
2017-12-01
A deeper understanding of climate model simulations and the future effects of global warming on extreme weather can be attained through direct analyses of the phenomena that produce weather. Such analyses require these phenomena to be identified in automatic, unbiased, and comprehensive ways. Atmospheric fronts are centrally important weather phenomena because of the variety of significant weather events, such as thunderstorms, directly associated with them. In current operational meteorology, fronts are identified and drawn visually based on the approximate spatial coincidence of a number of quasi-linear localized features - a trough (relative minimum) in air pressure in combination with gradients in air temperature and/or humidity and a shift in wind - and are categorized as cold, warm, stationary, or occluded, with each type exhibiting somewhat different characteristics. Fronts are extended in space, with one dimension much larger than the other (often represented by complex curved lines), which poses a significant challenge for automated approaches. We addressed this challenge by using a Deep Learning Convolutional Neural Network (CNN) to automatically identify and classify fronts. The CNN was trained using a "truth" dataset of front locations identified by National Weather Service meteorologists as part of operational 3-hourly surface analyses. The input to the CNN is a set of 5 gridded fields of surface atmospheric variables - 2 m temperature, 2 m specific humidity, surface pressure, and the two components of the 10 m horizontal wind velocity vector - at 3-hr resolution. The output is a set of feature maps containing the per-grid-cell probabilities for the presence of the 4 front types. The CNN was trained on a subset of the data and then used to produce front probabilities for each 3-hr time snapshot over a 14-year period covering the continental United States and some adjacent areas.
The total frequencies of fronts derived from the CNN outputs match the truth dataset very well. There is a slight underestimate in total numbers in the CNN results, but the spatial pattern is a close match. The categorization of front types by the CNN is best for cold and occluded fronts and worst for warm fronts. These initial results from our ongoing development highlight the great promise of this technology.
Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection
NASA Astrophysics Data System (ADS)
Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant
2014-03-01
Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). 
By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by leveraging the disconnected feature sets. Evaluation on the public ICPR12 mitosis dataset, which has 226 mitoses annotated on 35 High Power Fields (HPF, ×400 magnification) by several pathologists and 15 testing HPFs, yielded an F-measure of 0.7345. Apart from this being the second best performance ever recorded for this MITOS dataset, our approach is faster and requires fewer computing resources compared to extant methods, making this feasible for clinical use.
Eighth Grade Reading Improvement with CNN Newsroom and "USA Today."
ERIC Educational Resources Information Center
Zamorano, Wanda Jean
A practicum was designed to improve the reading growth and achievement of 60 eighth-grade students who were one or more years behind grade level by utilizing CNN Newsroom and the "USA Today" newspaper as an integral part of the reading program. Pre- and posttests were administered to measure outcomes. The six areas measured were: (1)…
CNN Newsroom Classroom Guides. May 1-29, 1998.
ERIC Educational Resources Information Center
Cable News Network, Atlanta, GA.
CNN Newsroom is a daily 15-minute commercial-free news program specifically produced for classroom use and provided free to participating schools. These guides are designed to accompany the program broadcasts for May 1-29, 1998. Top stories include: effects of a labor strike on Denmark's economy (May 1); the new currency of the European Union, the…
de Beer, D A H; Nesbitt, F D; Bell, G T; Rapuleng, A
2017-04-01
The Universal Anaesthesia Machine has been developed as a complete anaesthesia workstation for use in low- and middle-income countries, where the provision of safe general anaesthesia is often compromised by unreliable supply of electricity and anaesthetic gases. We performed a functional and clinical assessment of this anaesthetic machine, with particular reference to novel features and functioning in the intended environment. The Universal Anaesthesia Machine was found to be reliable, safe and consistent across a range of tests during targeted functional testing. © 2016 The Association of Anaesthetists of Great Britain and Ireland.
NASA Astrophysics Data System (ADS)
Deng, Botao; Abidin, Anas Z.; D'Souza, Adora M.; Nagarajan, Mahesh B.; Coan, Paola; Wismüller, Axel
2017-03-01
The effectiveness of phase contrast X-ray computed tomography (PCI-CT) in visualizing the human patellar cartilage matrix has been demonstrated due to its ability to capture soft tissue contrast on a micrometer resolution scale. Recent studies have shown that off-the-shelf Convolutional Neural Network (CNN) features learned from a nonmedical dataset can be used for medical image classification. In this paper, we investigate the ability of features extracted from two different CNNs to characterize chondrocyte patterns in the cartilage matrix. We obtained features from 842 regions of interest annotated on PCI-CT images of human patellar cartilage using CaffeNet and the Inception-v3 network, which were then used in a machine learning task involving support vector machines with a radial basis function kernel to classify the ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area under the Receiver Operating Characteristic (ROC) curve (AUC). The best classification performance was observed with features from the Inception-v3 network (AUC = 0.95), which outperforms features extracted from CaffeNet (AUC = 0.91). These results suggest that such characterization of chondrocyte patterns using features from internal layers of CNNs can be used to distinguish between healthy and osteoarthritic tissue with high accuracy.
Comparing deep learning models for population screening using chest radiography
NASA Astrophysics Data System (ADS)
Sivaramakrishnan, R.; Antani, Sameer; Candemir, Sema; Xue, Zhiyun; Abuya, Joseph; Kohli, Marc; Alderson, Philip; Thoma, George
2018-02-01
According to the World Health Organization (WHO), tuberculosis (TB) remains the most deadly infectious disease in the world. In a 2015 global annual TB report, 1.5 million TB related deaths were reported. The conditions worsened in 2016 with 1.7 million reported deaths and more than 10 million people infected with the disease. Analysis of frontal chest X-rays (CXR) is one of the most popular methods for initial TB screening, however, the method is impacted by the lack of experts for screening chest radiographs. Computer-aided diagnosis (CADx) tools have gained significance because they reduce the human burden in screening and diagnosis, particularly in countries that lack substantial radiology services. State-of-the-art CADx software typically is based on machine learning (ML) approaches that use hand-engineered features, demanding expertise in analyzing the input variances and accounting for the changes in size, background, angle, and position of the region of interest (ROI) on the underlying medical imagery. More automatic Deep Learning (DL) tools have demonstrated promising results in a wide range of ML applications. Convolutional Neural Networks (CNN), a class of DL models, have gained research prominence in image classification, detection, and localization tasks because they are highly scalable and deliver superior results with end-to-end feature extraction and classification. In this study, we evaluated the performance of CNN based DL models for population screening using frontal CXRs. The results demonstrate that pre-trained CNNs are a promising feature extracting tool for medical imagery including the automated diagnosis of TB from chest radiographs but emphasize the importance of large data sets for the most accurate classification.
Using CNN Features to Better Understand What Makes Visual Artworks Special.
Brachmann, Anselm; Barth, Erhardt; Redies, Christoph
2017-01-01
One of the goals of computational aesthetics is to understand what is special about visual artworks. By analyzing image statistics, contemporary methods in computer vision enable researchers to identify properties that distinguish artworks from other (non-art) types of images. Such knowledge will eventually allow inferences with regard to the possible neural mechanisms that underlie aesthetic perception in the human visual system. In the present study, we define measures that capture variances of features of a well-established Convolutional Neural Network (CNN), which was trained on millions of images to recognize objects. Using an image dataset that represents traditional Western, Islamic and Chinese art, as well as various types of non-art images, we show that we need only two variance measures to distinguish between the artworks and non-art images with a high classification accuracy of 93.0%. Results for the first variance measure imply that, in the artworks, the subregions of an image tend to be filled with pictorial elements to which many diverse CNN features respond (richness of feature responses). Results for the second measure imply that this diversity is tied to a relatively large variability of the responses of individual CNN features across the subregions of an image. We hypothesize that this combination of richness and variability of CNN feature responses is one of the properties that makes traditional visual artworks special. We discuss the possible neural underpinnings of this perceptual quality of artworks and propose to study the same quality in other types of aesthetic stimuli as well, such as music and literature.
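The two variance measures can be thought of as statistics over a (features × subregions) matrix of CNN responses. The following pure-Python sketch uses illustrative stand-in definitions (mean response for "richness", mean per-feature variance across subregions for "variability"); the paper's exact formulas may differ:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def richness_and_variability(responses):
    """responses[f][r]: response of CNN feature f in image subregion r.

    richness    ~ mean response over all features and subregions
    variability ~ mean per-feature variance across subregions
    (Illustrative stand-ins, not the paper's exact definitions.)
    """
    n_feat, n_reg = len(responses), len(responses[0])
    richness = sum(sum(row) for row in responses) / (n_feat * n_reg)
    variability = sum(variance(row) for row in responses) / n_feat
    return richness, variability

# Toy 3-feature x 4-subregion response matrix
resp = [[1.0, 0.8, 0.9, 1.1],
        [0.2, 0.9, 0.1, 1.0],
        [0.5, 0.5, 0.5, 0.5]]
r, v = richness_and_variability(resp)
```

In this toy matrix the third feature responds uniformly everywhere and so contributes nothing to variability, while the second feature's responses swing strongly between subregions.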
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei
2017-03-01
Deep learning is a trending and promising method in the medical image analysis area, but how to efficiently prepare the input images for deep learning algorithms remains a challenge. In this paper, we introduce a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNNs). From the LIDC database, we collected 54,880 benign nodule samples and 59,848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, which highlights the nodule shape, and the other contains the gradient of the original ROI, which highlights the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). With the 5-fold cross-validation evaluation method, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823 ± 0.0177, compared with an AUC of 0.8484 ± 0.0204 generated by the original ROIs. By averaging the ROI scores from one nodule, the lesion-based AUC using the multichannel ROI was 0.8793 ± 0.0210. Comparing the convolved feature maps from the CNN using different types of ROIs, it can be noted that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
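The multichannel construction amounts to stacking three single-channel ROIs (original, segmented-nodule, gradient) along a color axis so a standard three-channel CNN can consume them. A minimal pure-Python sketch with toy 2×2 arrays standing in for the real image patches:

```python
def stack_channels(original, segmented, gradient):
    """Combine three single-channel ROIs (2-D lists of equal shape)
    into one pseudo-color ROI of shape (H, W, 3)."""
    h, w = len(original), len(original[0])
    assert all(len(c) == h and len(c[0]) == w
               for c in (segmented, gradient)), "channels must match shape"
    return [[[original[i][j], segmented[i][j], gradient[i][j]]
             for j in range(w)] for i in range(h)]

# Toy 2x2 ROIs, for illustration only
orig = [[10, 20], [30, 40]]   # raw intensities
seg  = [[0, 1], [1, 0]]       # nodule mask channel
grad = [[5, 5], [5, 5]]       # gradient-magnitude channel
roi = stack_channels(orig, seg, grad)  # roi[0][1] == [20, 1, 5]
```

In practice this would be a single `numpy.stack([...], axis=-1)` call on real arrays; the point is that each pixel carries shape and texture information alongside raw intensity.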
Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?
Tajbakhsh, Nima; Shin, Jae Y; Gurudu, Suryakanth R; Hurst, R Todd; Kendall, Christopher B; Gotway, Michael B; Jianming Liang
2016-05-01
Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
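The layer-wise fine-tuning scheme compared above can be sketched abstractly: "shallow" tuning updates only the last few layers, while "deep" tuning updates them all. A minimal sketch, with placeholder layer names (the actual CNN architectures are not given here):

```python
# Hedged sketch of layer-wise fine-tuning: unfreeze the last `depth`
# layers and freeze the rest. Layer names are illustrative placeholders.

def set_finetune_depth(layers, depth):
    """Mark the last `depth` layers trainable; freeze the earlier ones."""
    n = len(layers)
    return [{"name": name, "trainable": i >= n - depth}
            for i, name in enumerate(layers)]

layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]
shallow = set_finetune_depth(layers, 2)  # tune only fc1, fc2
deep = set_finetune_depth(layers, 5)     # tune everything
```

Sweeping `depth` from 1 to the number of layers is one way to realize the paper's idea of picking the tuning depth that best matches the amount of available data.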
A deep convolutional neural network model to classify heartbeats.
Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adam, Muhammad; Gertych, Arkadiusz; Tan, Ru San
2017-10-01
The electrocardiogram (ECG) is a standard test used to monitor the activity of the heart. Many cardiac abnormalities are manifested in the ECG, including arrhythmia, a general term that refers to an abnormal heart rhythm. The basis of arrhythmia diagnosis is the identification of normal versus abnormal individual heart beats, and their correct classification into different diagnoses, based on ECG morphology. Heartbeats can be sub-divided into five categories, namely non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. It is challenging and time-consuming to distinguish these heartbeats on the ECG as these signals are typically corrupted by noise. We developed a 9-layer deep convolutional neural network (CNN) to automatically identify 5 different categories of heartbeats in ECG signals. Our experiment was conducted on original and noise-attenuated sets of ECG signals derived from a publicly available database. This set was artificially augmented to even out the number of instances of the 5 classes of heartbeats and filtered to remove high-frequency noise. The CNN was trained using the augmented data and achieved an accuracy of 94.03% and 93.47% in the diagnostic classification of heartbeats in original and noise-free ECGs, respectively. When the CNN was trained with highly imbalanced data (original dataset), its accuracy was reduced to 89.07% and 89.3% in noisy and noise-free ECGs. When properly trained, the proposed CNN model can serve as a tool for screening ECGs to quickly identify different types and frequencies of arrhythmic heartbeats. Copyright © 2017 Elsevier Ltd. All rights reserved.
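The augmentation step above evens out the class counts before training. One simple way to implement that (a hedged sketch; the paper's exact augmentation scheme is not detailed in the abstract) is to oversample minority-class beats until every class matches the largest one:

```python
# Hedged sketch: balance class counts by cyclically duplicating
# minority-class samples up to the size of the largest class.
from collections import Counter

def oversample_to_balance(samples, labels):
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, c in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for i in range(target - c):
            out_s.append(pool[i % len(pool)])  # cyclic duplication
            out_l.append(cls)
    return out_s, out_l

# toy example: two "N" beats, one "V" beat -> balanced to 2 and 2
s, l = oversample_to_balance([1, 2, 3], ["N", "N", "V"])
```

In practice, duplication would be combined with small perturbations (shifts, scaling) so the copies are not identical.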
Generative Adversarial Networks for Noise Reduction in Low-Dose CT.
Wolterink, Jelmer M; Leiner, Tim; Viergever, Max A; Isgum, Ivana
2017-12-01
Noise is inherent to low-dose CT acquisition. We propose to train a convolutional neural network (CNN) jointly with an adversarial CNN to estimate routine-dose CT images from low-dose CT images and hence reduce noise. A generator CNN was trained to transform low-dose CT images into routine-dose CT images using voxelwise loss minimization. An adversarial discriminator CNN was simultaneously trained to distinguish the output of the generator from routine-dose CT images. The performance of this discriminator was used as an adversarial loss for the generator. Experiments were performed using CT images of an anthropomorphic phantom containing calcium inserts, as well as patient non-contrast-enhanced cardiac CT images. The phantom and patients were scanned at 20% and 100% routine clinical dose. Three training strategies were compared: the first used only voxelwise loss, the second combined voxelwise loss and adversarial loss, and the third used only adversarial loss. The results showed that training with only voxelwise loss resulted in the highest peak signal-to-noise ratio with respect to reference routine-dose images. However, CNNs trained with adversarial loss captured image statistics of routine-dose images better. Noise reduction improved quantification of low-density calcified inserts in phantom CT images and allowed coronary calcium scoring in low-dose patient CT images with high noise levels. Testing took less than 10 s per CT volume. CNN-based low-dose CT noise reduction in the image domain is feasible. Training with an adversarial network improves the CNN's ability to generate images with an appearance similar to that of reference routine-dose CT images.
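The three training strategies compared above differ only in which terms enter the generator's loss. A hedged toy sketch (scalar voxels; the weight `lam` and the `-log D` adversarial surrogate are illustrative assumptions, not the paper's exact formulation):

```python
# Hedged sketch: generator loss = voxelwise MSE + adversarial term.
# disc_score is the discriminator's probability that the generator
# output looks routine-dose; the adversarial term pushes it toward 1.
import math

def voxelwise_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def generator_loss(pred, target, disc_score, lam=1.0,
                   use_voxel=True, use_adv=True):
    loss = 0.0
    if use_voxel:
        loss += voxelwise_loss(pred, target)
    if use_adv:
        loss += lam * -math.log(disc_score)
    return loss

# strategy 1: voxelwise only; strategy 2: both; strategy 3: adversarial only
l1 = generator_loss([1.0, 3.0], [1.0, 1.0], 0.5, use_adv=False)
l2 = generator_loss([1.0], [1.0], 1.0)
```

Strategy 1 maximizes voxel fidelity (hence the highest PSNR), while the adversarial term trades some fidelity for realistic routine-dose image statistics.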
Using CNN Features to Better Understand What Makes Visual Artworks Special
Brachmann, Anselm; Barth, Erhardt; Redies, Christoph
2017-01-01
One of the goals of computational aesthetics is to understand what is special about visual artworks. By analyzing image statistics, contemporary methods in computer vision enable researchers to identify properties that distinguish artworks from other (non-art) types of images. Such knowledge will eventually allow inferences with regard to the possible neural mechanisms that underlie aesthetic perception in the human visual system. In the present study, we define measures that capture variances of features of a well-established Convolutional Neural Network (CNN), which was trained on millions of images to recognize objects. Using an image dataset that represents traditional Western, Islamic and Chinese art, as well as various types of non-art images, we show that we need only two variance measures to distinguish between the artworks and non-art images with a high classification accuracy of 93.0%. Results for the first variance measure imply that, in the artworks, the subregions of an image tend to be filled with pictorial elements, to which many diverse CNN features respond (richness of feature responses). Results for the second measure imply that this diversity is tied to a relatively large variability of the responses of individual CNN features across the subregions of an image. We hypothesize that this combination of richness and variability of CNN feature responses is one of the properties that makes traditional visual artworks special. We discuss the possible neural underpinnings of this perceptual quality of artworks and propose to study the same quality also in other types of aesthetic stimuli, such as music and literature. PMID:28588537
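The two measures can be illustrated with a toy sketch. This is a hedged approximation, not the authors' exact definitions: the response threshold and the normalization by feature count are assumptions.

```python
# Hedged sketch of the two variance measures: "richness" (how many
# CNN features respond somewhere in the image) and "variability"
# (how much each feature's response varies across subregions).
# responses[f][r] = response of feature f in subregion r.

def richness(responses, thresh=0.1):
    """Fraction of features responding above `thresh` in some subregion."""
    active = sum(1 for feat in responses if max(feat) > thresh)
    return active / len(responses)

def mean_variability(responses):
    """Average per-feature variance of responses across subregions."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return sum(var(feat) for feat in responses) / len(responses)

# toy: 3 features, 2 subregions
resp = [[0.0, 1.0], [0.5, 0.5], [0.0, 0.0]]
```

Per the study, artworks would score high on both measures: many features active, and individual features varying strongly across subregions.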
ERIC Educational Resources Information Center
Tuggle, C. A.
1997-01-01
Examines the amount of coverage given to women's athletics by ESPN SportsCenter and CNN Sports Tonight. Results indicated: both programs devoted only about 5% of their air time to women's sports; story placement and on-camera comments indicated an emphasis on men's athletics; and stories about women involved individual competition, with almost no…
ERIC Educational Resources Information Center
Edwards, B. T.
This program examines the current acceleration of the decision-making cycle in the conduct of foreign policy due to the instantaneous reporting of events, called "The CNN Effect." The sometimes paradoxical consequences of global media coverage are noted, along with the examination of the medium of television itself, and its shortcomings…
NASA Astrophysics Data System (ADS)
He, Huijuan; Huang, Langhuan; Zhong, Zijun; Tan, Shaozao
2018-05-01
Photocatalysis has been widely considered an effective way to address worldwide environmental pollution issues. Herein, a new type of three-dimensional (3D) ternary graphene-carbon quantum dots/g-C3N4 nanosheet (GA-CQDs/CNN) aerogel visible-light-driven photocatalyst was synthesized via a two-step hydrothermal method. In this unique ternary photocatalyst, both carbon quantum dots (CQDs) and reduced graphene oxide (rGO) could improve the visible light absorption and promote charge separation. Furthermore, rGO could act as a support for the 3D framework. Such a ternary system overcame the drawbacks of bulk g-C3N4 (BCN) and achieved enhanced photocatalytic activity and long-term stability. As a result, the methyl orange (MO) removal ratio of GA-CQDs/CNN-24% was up to 91.1%, about 7.6 times higher than that of BCN under identical conditions. Moreover, GA-CQDs/CNN-24% exhibited negligible loss of photocatalytic activity after four degradation cycles. Finally, the photocatalytic mechanism of GA-CQDs/CNN-24% was interpreted both theoretically and experimentally.
MKID digital readout tuning with deep learning
NASA Astrophysics Data System (ADS)
Dodkins, R.; Mahashabde, S.; O'Brien, K.; Thatte, N.; Fruitwala, N.; Walter, A. B.; Meeker, S. R.; Szypryt, P.; Mazin, B. A.
2018-04-01
Microwave Kinetic Inductance Detector (MKID) devices offer inherent spectral resolution, simultaneous readout of thousands of pixels, and photon-limited sensitivity at optical wavelengths. Before taking observations, the readout power and frequency of each pixel must be individually tuned, and if the equilibrium state of the pixels changes, then the readout must be retuned. This process has previously been performed through manual inspection, and typically takes one hour per 500 resonators (20 h for a ten-kilo-pixel array). We present an algorithm based on a deep convolutional neural network (CNN) architecture to determine the optimal bias power for each resonator. The bias point classifications from this CNN model, and those from alternative automated methods, are compared to those from human decisions, and the accuracy of each method is assessed. On a test feed-line dataset, the CNN achieves an accuracy of 90% within 1 dB of the designated optimal value, which is equivalent in accuracy to a randomly selected human operator, and superior to the highest-scoring alternative automated method by 10%. On a full ten-kilo-pixel array, the CNN performs the characterization in a matter of minutes, paving the way for future mega-pixel MKID arrays.
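The "accuracy within 1 dB" evaluation above is a tolerance-based hit rate: a predicted bias power counts as correct when it falls within a tolerance of the human-designated optimum. A minimal sketch (the power values are invented for illustration):

```python
# Hedged sketch of the tolerance-based accuracy metric: a prediction is
# a hit when it lies within `tol` dB of the designated optimal power.

def accuracy_within(predicted_db, optimal_db, tol=1.0):
    hits = sum(1 for p, o in zip(predicted_db, optimal_db)
               if abs(p - o) <= tol)
    return hits / len(predicted_db)

# toy resonator bias powers in dB (illustrative values)
acc = accuracy_within([-60.2, -55.0, -48.9], [-60.0, -53.5, -49.5])
```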
Computer assisted optical biopsy for colorectal polyps
NASA Astrophysics Data System (ADS)
Navarro-Avila, Fernando J.; Saint-Hill-Febles, Yadira; Renner, Janis; Klare, Peter; von Delius, Stefan; Navab, Nassir; Mateus, Diana
2017-03-01
We propose a method for computer-assisted optical biopsy for colorectal polyps, with the final goal of assisting the medical expert during the colonoscopy. In particular, we target the problem of automatic classification of polyp images into two classes: adenomatous vs non-adenoma. Our approach is based on recent advancements in convolutional neural networks (CNN) for image representation. In the paper, we describe and compare four different methodologies to address the binary classification task: a baseline with classical features and a Random Forest classifier, two methods based on features obtained from a pre-trained network, and finally, the end-to-end training of a CNN. With the pre-trained network, we show the feasibility of transferring a feature extraction mechanism trained on millions of natural images to the task of classifying adenomatous polyps. We then demonstrate further performance improvements when training the CNN for our specific classification task. In our study, 776 polyp images were acquired and histologically analyzed after polyp resection. We report a performance increase of the CNN-based approaches with respect to both the conventional engineered features and a state-of-the-art method based on videos and 3D shape features.
Low-dose x-ray tomography through a deep convolutional neural network
Yang, Xiaogang; De Andrade, Vincent; Scullin, William; ...
2018-02-07
Synchrotron-based X-ray tomography offers the potential of rapid large-scale reconstructions of the interiors of materials and biological tissue at fine resolution. However, for radiation-sensitive samples, there remain fundamental trade-offs between damaging samples during longer acquisition times and reducing signals with shorter acquisition times. We present a deep convolutional neural network (CNN) method that increases the acquired X-ray tomographic signal by at least a factor of 10 during low-dose fast acquisition by improving the quality of recorded projections. Short-exposure-time projections enhanced with the CNN show similar signal-to-noise ratios as long-exposure-time projections, and much lower noise and more structural information than low-dose fast acquisition without the CNN. We optimized this approach using simulated samples and further validated it on experimental nano-computed tomography data of radiation-sensitive mouse brains acquired with a transmission X-ray microscope. We demonstrate that automated algorithms can reliably trace brain structures in low-dose datasets enhanced with the CNN. As a result, this method can be applied to other tomographic or scanning-based X-ray imaging techniques and has great potential for studying faster dynamics in specimens.
Urtnasan, Erdenebayar; Park, Jong-Uk; Lee, Kyoung-Joung
2018-05-24
In this paper, we propose a convolutional neural network (CNN)-based deep learning architecture for multiclass classification of obstructive sleep apnea and hypopnea (OSAH) using single-lead electrocardiogram (ECG) recordings. OSAH is the most common sleep-related breathing disorder. Many subjects who suffer from OSAH remain undiagnosed; thus, early detection of OSAH is important. In this study, automatic classification of three classes, normal, hypopnea, and apnea, based on a CNN is performed. An optimal six-layer CNN model is trained on a training dataset (45,096 events) and evaluated on a test dataset (11,274 events). The training set (69 subjects) and test set (17 subjects) were collected from 86 subjects, with recording lengths of approximately 6 h segmented into 10-s durations. The proposed CNN model reaches a mean F1-score of 93.0 for the training dataset and 87.0 for the test dataset. Thus, the proposed deep learning architecture achieves high performance for multiclass classification of OSAH using single-lead ECG recordings. The proposed method can be employed in screening of patients suspected of having OSAH. © 2018 Institute of Physics and Engineering in Medicine.
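The mean score reported above is assumed here to be a macro-averaged F1-score over the three classes (normal, hypopnea, apnea). A hedged sketch of how such a score is computed:

```python
# Hedged sketch: macro-averaged F1-score over three classes, reported
# on a 0-100 scale. Class labels are illustrative abbreviations.

def macro_f1(y_true, y_pred, classes):
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return 100.0 * sum(f1s) / len(f1s)

# toy: perfect predictions over N(ormal), H(ypopnea), A(pnea)
score = macro_f1(["N", "H", "A", "N"], ["N", "H", "A", "N"], ["N", "H", "A"])
```

Macro averaging weighs each class equally, which matters here because apnea and hypopnea events are far rarer than normal segments.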
Classification of volcanic ash particles using a convolutional neural network and probability.
Shoji, Daigo; Noguchi, Rina; Otsuki, Shizuka; Hino, Hideitsu
2018-05-25
Analyses of volcanic ash are typically performed either by qualitatively classifying ash particles by eye or by quantitatively parameterizing their shape and texture. While complex shapes can be classified through qualitative analyses, the results are subjective due to the difficulty of categorizing complex shapes into a single class. Although quantitative analyses are objective, selection of shape parameters is required. Here, we applied a convolutional neural network (CNN) to the classification of volcanic ash. First, we defined four basal particle shapes (blocky, vesicular, elongated, rounded) generated by different eruption mechanisms (e.g., brittle fragmentation), and then trained the CNN using particles composed of only one basal shape. The CNN could recognize the basal shapes with over 90% accuracy. Using the trained network, we classified ash particles composed of multiple basal shapes based on the output of the network, which can be interpreted as a mixing ratio of the four basal shapes. Clustering of samples by the averaged probabilities and the intensity is consistent with the eruption type. The mixing ratio output by the CNN can be used to quantitatively classify complex shapes in nature without forcing particles into a single category and without the need for shape parameters, which may lead to a new taxonomy.
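The "mixing ratio" interpretation above amounts to averaging the network's per-particle probability vectors over a sample. A hedged sketch (the probability values are invented; the paper's clustering step is not reproduced):

```python
# Hedged sketch: average per-particle softmax outputs into a sample-level
# "mixing ratio" of the four basal shapes, then take the dominant shape.

SHAPES = ["blocky", "vesicular", "elongated", "rounded"]

def mixing_ratio(particle_probs):
    """particle_probs: list of per-particle probability 4-vectors."""
    n = len(particle_probs)
    return [sum(p[i] for p in particle_probs) / n for i in range(4)]

def dominant_shape(particle_probs):
    ratio = mixing_ratio(particle_probs)
    return SHAPES[ratio.index(max(ratio))]

# toy sample of two particles (illustrative probabilities)
probs = [[0.7, 0.1, 0.1, 0.1], [0.5, 0.3, 0.1, 0.1]]
```

Keeping the full ratio, rather than only the argmax, is what lets intermediate or mixed shapes be described quantitatively.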
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hard and important problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. A convolutional neural network (CNN), as one of the methods of deep learning, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we bring forth a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, and extract the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages in biological plausibility. Experimental results demonstrate that our method significantly improves classification efficiency.
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
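The core mechanism above, a recurrent connection that lets a layer's representation accumulate over time, can be shown with a scalar toy cell, h_t = tanh(w*x_t + u*h_{t-1}). This is a hedged illustration with invented weights, not the paper's trained RNN:

```python
# Hedged sketch: a scalar recurrent unit whose state carries past
# inputs forward, so the representation persists after the stimulus ends.
import math

def run_recurrent(xs, w=0.5, u=0.9, h0=0.0):
    h = h0
    states = []
    for x in xs:
        h = math.tanh(w * x + u * h)  # mixes new input with remembered state
        states.append(h)
    return states

# a single input "frame" followed by silence: the state decays gradually
states = run_recurrent([1.0, 0.0, 0.0])
```

The decay rate set by `u` plays the role of the temporal receptive window: larger `u` means the layer "remembers" further into the past.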
Hand pose estimation in depth image using CNN and random forest
NASA Astrophysics Data System (ADS)
Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen
2018-03-01
Thanks to the availability of low-cost depth cameras, like the Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variations in the hand's viewpoint and the high dimension of hand motion, 3D hand pose estimation is still challenging. In this paper we propose a two-stage framework which combines a CNN with a random forest to boost the performance of hand pose estimation. First, we use a standard Convolutional Neural Network (CNN) to regress the hand joints' locations. Second, we use a random forest to refine the joints from the first stage. In the second stage, we propose a pyramid feature which merges the information flow of the CNN. Specifically, we get the rough joints' locations from the first stage, then rotate the convolutional feature maps (and image). After this, for each joint, we first map its location to each feature map (and image), then crop features from each feature map (and image) around its location, and finally feed the extracted features to the random forest for refinement. Experimentally, we evaluate our proposed method on the ICVL dataset and obtain a mean error of about 11 mm; our method also runs in real time on a desktop.
Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.
Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan
2018-06-01
Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted feature based classifiers built on popularly used classifiers like K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.
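The Jaccard's Coefficient used for evaluation above is the intersection-over-union of predicted and ground-truth pixel sets. A minimal sketch on flattened binary masks:

```python
# Hedged sketch of Jaccard's Coefficient (JC) for one class:
# |pred AND true| / |pred OR true| over flattened binary masks.

def jaccard(pred_mask, true_mask):
    inter = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    union = sum(1 for p, t in zip(pred_mask, true_mask) if p or t)
    # convention: two empty masks agree perfectly
    return inter / union if union else 1.0

# toy 4-pixel masks: one pixel agrees, two disagree, one is background
jc = jaccard([1, 1, 0, 0], [1, 0, 1, 0])
```

For multi-class segmentation (e.g., the skin layers here), JC is typically computed per class and averaged.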
Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks
Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi
2017-01-01
In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
Robotic Technology: An Assessment and Forecast,
1984-07-01
[OCR-garbled abstract fragment. Recoverable content: contributors include Dr. Roger Nagel (Lehigh University), Dr. Charles Rosen (Machine Intelligence Corporation), and Mr. Jack Thornton; topics include systems for assembly and inspection (subcontractors: Adept Technology, Stanford University, SRI), AFSC MANTECH, McDonnell Douglas, supervisory control, man-machine interaction and system integration; the report notes that the U.S. faces a strong technological challenge in robotics from abroad.]
Convolutional neural networks and face recognition task
NASA Astrophysics Data System (ADS)
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
2017-09-01
Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solve this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. Firstly, we extract the area containing the face; then we apply the Canny edge detector. At the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set for comparison with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and FIR camera images into the CNN. This, however, takes longer to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
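The candidate-selection idea can be sketched with a minimal fuzzy inference step. The membership shapes and image statistics below are assumptions for illustration (the paper's actual FIS rules are not specified here): each camera's candidate is scored from simple image statistics, and the camera whose fuzzy "good visibility" degree is higher wins.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def visibility_degree(mean_brightness, contrast):
    """Fuzzy AND (min) of 'adequate brightness' and 'adequate contrast'."""
    bright_ok = tri(mean_brightness, 30, 128, 230)   # 0-255 grayscale
    contrast_ok = tri(contrast, 10, 60, 255)
    return min(bright_ok, contrast_ok)

def select_candidate(visible_stats, fir_stats):
    """Return 'visible' or 'fir', whichever candidate looks more reliable."""
    v = visibility_degree(*visible_stats)
    f = visibility_degree(*fir_stats)
    return 'visible' if v >= f else 'fir'

# Nighttime-like case: the visible image is dark and flat, while the FIR
# image shows a clear thermal contrast, so the FIR candidate is selected.
choice = select_candidate(visible_stats=(12, 5), fir_stats=(120, 55))
```

Only the selected candidate is then passed to the CNN for verification, which is what keeps the processing time below the dual-input design.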
Ulrich, Nils H; Ahmadli, Uzeyir; Woernle, Christoph M; Alzarhani, Yahea A; Bertalanffy, Helmut; Kollias, Spyros S
2014-11-01
With continuous refinement of neurosurgical techniques and higher resolution in neuroimaging, the management of pontine lesions is constantly improving. Among pontine structures with vital functions that are at risk of being damaged by surgical manipulation, cranial nerves (CN) and cranial nerve nuclei (CNN) such as CN V, VI, and VII are critical. Pre-operative localization of the intrapontine course of CN and CNN should be beneficial for surgical outcomes. Our objective was to accurately localize CN and CNN in patients with intra-axial lesions in the pons using diffusion tensor imaging (DTI) and estimate its input in surgical planning for avoiding unintended loss of their function during surgery. DTI of the pons obtained pre-operatively on a 3Tesla MR scanner was analyzed prospectively for the accurate localization of CN and CNN V, VI and VII in seven patients with intra-axial lesions in the pons. Anatomical sections in the pons were used to estimate abnormalities on color-coded fractional anisotropy maps. Imaging abnormalities were correlated with CN symptoms before and after surgery. The course of CN and the area of CNN were identified using DTI pre- and post-operatively. Clinical associations between post-operative improvements and the corresponding CN area of the pons were demonstrated. Our results suggest that pre- and post-operative DTI allows identification of key anatomical structures in the pons and enables estimation of their involvement by pathology. It may predict clinical outcome and help us to better understand the involvement of the intrinsic anatomy by pathological processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Spatio-temporal coupling of EEG signals in epilepsy
NASA Astrophysics Data System (ADS)
Senger, Vanessa; Müller, Jens; Tetzlaff, Ronald
2011-05-01
Approximately 1% of the world's population suffer from epileptic seizures throughout their lives, which mostly come without sign or warning; epilepsy is thus the most common chronic disorder of the neurological system. The problem of detecting a pre-seizure state in epilepsy using EEG signals has been addressed in many contributions by various authors over the past two decades. Up to now, the goal of identifying an impending epileptic seizure with sufficient specificity and reliability has not been achieved. Cellular Nonlinear Networks (CNN) are characterized by local couplings of dynamical systems of comparably low complexity. Thus, they are well suited for implementation as highly parallel analogue processors. Programmable sensor-processor realizations of CNN combine high computational power, comparable to tera-ops digital processors, with low power consumption. An algorithm allowing an automated and reliable detection of epileptic seizure precursors would be a "huge step" towards the vision of an implantable seizure warning device that could provide information to patients and for a time/event-specific treatment directly in the brain. Recent contributions have shown that modeling of brain electrical activity by solutions of Reaction-Diffusion CNN, as well as the application of a CNN predictor taking into account values of neighboring electrodes, may contribute to the realization of a seizure warning device. In this paper, a CNN-based predictor corresponding to a spatio-temporal filter is applied to multi-channel EEG data in order to identify mutual couplings between different channels that lead to an enhanced prediction quality. Long-term EEG recordings of different patients are considered. Results calculated for these recordings, with inter-ictal phases as well as phases with seizures, are discussed in detail.
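The spatio-temporal predictor idea can be sketched with a linear stand-in for the trained CNN filter: the next sample of a target EEG channel is predicted from recent samples of the channel itself and its neighbouring electrodes, and the prediction error is the quantity whose behaviour is examined. Channel names, taps, and values below are hypothetical.

```python
def predict_next(history, weights):
    """history: dict channel -> past samples (most recent last);
    weights: dict channel -> filter taps applied to the most recent samples."""
    return sum(w * x
               for ch, taps in weights.items()
               for w, x in zip(taps, history[ch][-len(taps):]))

def prediction_error(history, weights, actual):
    """Absolute error between the predicted and the observed next sample."""
    return abs(actual - predict_next(history, weights))

# Hypothetical two-electrode example: channel A's next sample is modeled
# as the mean of its own last sample and neighbour B's last sample.
hist = {'A': [1.0, 2.0, 3.0], 'B': [0.0, 1.0, 5.0]}
w = {'A': [0.5], 'B': [0.5]}
err = prediction_error(hist, w, actual=4.0)   # prediction = 0.5*3 + 0.5*5 = 4.0
```

In the CNN setting the taps become learned, nonlinear, locally coupled dynamics, but the coupling structure across neighbouring electrodes is the same.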
Lee, Young Han
2018-04-04
The purposes of this study are to evaluate the feasibility of protocol determination with a convolutional neural networks (CNN) classifier based on short-text classification and to evaluate the agreements by comparing protocols determined by CNN with those determined by musculoskeletal radiologists. Following institutional review board approval, the database of a hospital information system (HIS) was queried for lists of MRI examinations, referring department, patient age, and patient gender. These were exported to a local workstation for analyses: 5258 and 1018 consecutive musculoskeletal MRI examinations were used for the training and test datasets, respectively. The subjects for pre-processing were routine or tumor protocols and the contents were word combinations of the referring department, region, contrast media (or not), gender, and age. The CNN embedded-vector classifier was used with Word2Vec Google news vectors. The test set was tested with each classification model and results were output as routine or tumor protocols. The CNN determinations were evaluated using receiver operating characteristic (ROC) curves. The accuracies were evaluated against radiologist-confirmed protocols as the reference. The optimal cut-off value for protocol determination between routine protocols and tumor protocols was 0.5067, with a sensitivity of 92.10%, a specificity of 95.76%, and an area under the curve (AUC) of 0.977. The overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correct in the pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were clinically utilized to determine musculoskeletal MRI protocols. CNN-based text learning and applications could be extended to other radiologic tasks besides image interpretation, improving the work performance of the radiologist.
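The reported operating point can be made concrete: given per-exam scores from the classifier and the cut-off (0.5067 above), sensitivity and specificity follow directly from the confusion counts. The scores and labels below are made up for illustration.

```python
def sens_spec(scores, labels, cutoff):
    """labels: 1 = tumor protocol, 0 = routine; score >= cutoff -> tumor."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Six hypothetical exams and their classifier scores.
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
sens, spec = sens_spec(scores, labels, cutoff=0.5067)
```

Sweeping the cut-off over all score values and plotting sensitivity against (1 − specificity) yields the ROC curve whose AUC is quoted.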
Does the Fast Patrol Boat Have a Future in the Navy?
2002-05-31
Deep learning with non-medical training used for chest pathology identification
NASA Astrophysics Data System (ADS)
Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit
2015-03-01
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural networks (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid and high level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large scale nonmedical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under curve (AUC) of 0.93 for Right Pleural Effusion detection, 0.89 for Enlarged heart detection and 0.79 for classification between healthy and abnormal chest x-ray, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large scale non-medical image databases may be sufficient for general medical image recognition tasks.
DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.
2017-12-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 mins per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The key to the success of most existing algorithms is spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of proper thresholds is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster, and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multi-spectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event predictions.
Highly productive CNN pincer ruthenium catalysts for the asymmetric reduction of alkyl aryl ketones.
Baratta, Walter; Chelucci, Giorgio; Magnolia, Santo; Siega, Katia; Rigo, Pierluigi
2009-01-01
Chiral pincer ruthenium complexes of formula [RuCl(CNN)(Josiphos)] (2-7; Josiphos = 1-[1-(dicyclohexylphosphano)ethyl]-2-(diarylphosphano)ferrocene) have been prepared by treating [RuCl(2)(PPh(3))(3)] with (S,R)-Josiphos diphosphanes and 1-substituted-1-(6-arylpyridin-2-yl)methanamines (HCNN; substituent = H (1 a), Me (1 b), and tBu (1 c)) with NEt(3). By using 1 b and 1 c as a racemic mixture, complexes 4-7 were obtained through a diastereoselective synthesis promoted by acetic acid. These pincer complexes, which display correctly matched chiral PP and CNN ligands, are remarkably active catalysts for the asymmetric reduction of alkyl aryl ketones in basic alcohol media by both transfer hydrogenation (TH) and hydrogenation (HY), achieving enantioselectivities of up to 99 %. In 2-propanol, the enantioselective TH of ketones was accomplished by using a catalyst loading as low as 0.002 mol % and afforded a turnover frequency (TOF) of 10(5)-10(6) h(-1) (60 and 82 degrees C). In methanol/ethanol mixtures, the CNN pincer complexes catalyzed the asymmetric HY of ketones with H(2) (5 atm) at 0.01 mol % relative to the complex with a TOF of approximately 10(4) h(-1) at 40 degrees C.
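The quoted turnover frequencies can be sanity-checked with back-of-the-envelope arithmetic (the conversion and time below are illustrative, not from the paper): a 0.002 mol% catalyst loading means a substrate/catalyst ratio of 50,000, so near-complete conversion within a fraction of an hour implies a TOF in the 10^5-10^6 h^-1 range quoted above.

```python
def turnover_frequency(loading_mol_percent, conversion, hours):
    """TOF = (moles of substrate converted per mole of catalyst) per hour."""
    substrate_per_catalyst = 100.0 / loading_mol_percent
    return substrate_per_catalyst * conversion / hours

# Hypothetical run: 98% conversion in 15 minutes at 0.002 mol% loading.
tof = turnover_frequency(loading_mol_percent=0.002, conversion=0.98, hours=0.25)
# 50,000 * 0.98 / 0.25 ~ 2 x 10^5 h^-1
```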
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
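The SVD-based local feature step can be sketched in pure Python. As a simplification, only the largest singular value of each submatrix is computed (via power iteration on A^T A rather than a full SVD): a mode matrix is partitioned into row-block submatrices, and one dominant singular value per block forms the feature vector. The matrix and block size below are toy values.

```python
def largest_singular_value(A, iters=100):
    """Dominant singular value of a small matrix via power iteration on A^T A."""
    n = len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        Av = [sum(row[j] * v[j] for j in range(n)) for row in A]
        w = [sum(A[i][j] * Av[i] for i in range(len(A)))  # w = A^T A v
             for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = [sum(row[j] * v[j] for j in range(n)) for row in A]
    return sum(x * x for x in Av) ** 0.5

def submatrix_features(M, block):
    """Split M into row-blocks of `block` rows; one singular value each."""
    return [largest_singular_value(M[i:i + block])
            for i in range(0, len(M), block)]

# A 4x2 mode matrix partitioned into two 2x2 submatrices.
M = [[3.0, 0.0], [0.0, 4.0], [1.0, 1.0], [1.0, 1.0]]
feats = submatrix_features(M, 2)
```

In the paper the full singular value vector of each submatrix is retained and the vectors are arranged into a matrix that the CNN classifies; this sketch shows only the local-SVD feature idea.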
Single-image-based Rain Detection and Removal via CNN
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Fu, Chengzhou
2018-04-01
The quality of images is degraded by rain streaks, which negatively impact the extraction of image features for many visual tasks, such as classification and recognition, tracking, surveillance, and autonomous navigation. Hence, it is necessary to detect and remove rain streaks from single images, which is a challenging problem since, compared to a dynamic video stream, no spatio-temporal information about the streaks is available. We are inspired by the prior that rain streaks share nearly the same features, such as direction and thickness, across different types of real-world images. This paper aims at proposing an effective convolutional neural network (CNN) to detect and remove rain streaks from a single image. Two models of synthesized rainy images, the linear additive composite model (LACM) and the screen blend model (SCM), are considered in this paper. The main idea is that it is easier for our CNN to find the mapping between a rainy image and its rain streaks than between the rainy image and the clean image, because rain streaks have fixed features while clean images have various features. The experiments show that the designed CNN outperforms state-of-the-art approaches on both synthesized and real-world images, which indicates the effectiveness of our proposed framework.
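The linear additive composite (LACM) model above can be written down directly: a rainy observation O is background B plus a streak layer R, so once a network estimates R, the clean image is recovered as O − R. The toy images below use a hypothetical perfect streak estimate.

```python
def compose_lacm(background, streaks):
    """O = B + R, clipped to the valid grayscale range."""
    return [[min(255, b + r) for b, r in zip(brow, rrow)]
            for brow, rrow in zip(background, streaks)]

def remove_streaks(observed, estimated_streaks):
    """Recover the clean image as O - R_hat, clipped at zero."""
    return [[max(0, o - r) for o, r in zip(orow, rrow)]
            for orow, rrow in zip(observed, estimated_streaks)]

B = [[100, 100], [100, 100]]   # clean background
R = [[0, 50], [50, 0]]         # streak layer
O = compose_lacm(B, R)
clean = remove_streaks(O, R)   # with a perfect estimate, clean == B
```

The CNN's job in this framing is exactly the middle step: producing the estimate of R from O.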
NASA Astrophysics Data System (ADS)
Kim, Taehwan; Kim, Sungho
2017-02-01
This paper presents a novel method to detect remote pedestrians. After producing a human-temperature-based brightness enhancement image from the input temperature data, we generate the regions of interest (ROIs) with a multiscale contrast filtering approach that includes a biased hysteresis threshold and clustering, together with the remote pedestrian's height, pixel area, and central position information. Afterwards, we perform local vertical and horizontal projection based ROI refinement and weak aspect ratio based ROI limitation to solve the problem of region expansion in the contrast filtering stage. Finally, we detect remote pedestrians by validating the final ROIs using transfer learning with convolutional neural network (CNN) features, followed by non-maximal suppression (NMS) with a strong aspect ratio limitation to improve detection performance. In the experimental results, we confirmed that the proposed contrast filtering and locally projected region based CNN (CFLP-CNN) outperforms the baseline method by 8% in terms of log-averaged miss rate. The proposed method also produces regions better adjusted to the shape and appearance of remote pedestrians, which allows it to detect pedestrians missed by the baseline approach and helps separate a group of people into individual detections.
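The final non-maximal suppression step can be sketched as the standard greedy NMS procedure: keep the highest-scoring ROI, drop any ROI overlapping it beyond an IoU threshold, and repeat. The boxes and scores below are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: indices of kept boxes, highest scores first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 20), (1, 0, 11, 20), (30, 0, 40, 20)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores, thresh=0.5)   # the two overlapping ROIs collapse to one
```

The paper additionally applies a strong aspect-ratio limit before NMS, which this sketch omits.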
TELNET under Single-Connection TCP Specification
1976-02-02
NASA Astrophysics Data System (ADS)
de Garidel-Thoron, T.; Marchant, R.; Soto, E.; Gally, Y.; Beaufort, L.; Bolton, C. T.; Bouslama, M.; Licari, L.; Mazur, J. C.; Brutti, J. M.; Norsa, F.
2017-12-01
Foraminifera tests are the main proxy carriers for paleoceanographic reconstructions. Both geochemical and taxonomical studies require large numbers of tests to achieve statistical relevance. To date, the extraction of foraminifera from the sediment coarse fraction is still done by hand and is thus time-consuming. Moreover, the recognition of ecologically relevant morphotypes requires taxonomical skills that are not easily taught. The automatic recognition and extraction of foraminifera would largely help paleoceanographers to overcome these issues. Recent advances in automatic image classification using machine learning open the way to automatic extraction of foraminifera. Here we detail progress on the design of an automatic picking machine as part of the FIRST project. The machine handles 30 pre-sieved samples (100-1000 µm), separating them into individual particles (including foraminifera) and imaging each in pseudo-3D. The particles are classified and specimens of interest are sorted either for Individual Foraminifera Analyses (44 per slide) and/or for classical multiple analyses (8 morphological classes per slide, up to 1000 individuals per hole). The classification is based on machine learning using Convolutional Neural Networks (CNNs), similar to the approach used in the coccolithophorid imaging system SYRACO. To prove its feasibility, we built two training image datasets of modern planktonic foraminifera containing approximately 2000 and 5000 images, corresponding to 15 and 25 morphological classes respectively. Using a CNN with a residual topology (ResNet), we achieve over 95% correct classification for each dataset. We tested the network on 160,000 images from 45 depths of a sediment core from the Pacific Ocean, for which we have human counts. The current algorithm is able to reproduce the downcore variability in both Globigerinoides ruber and the fragmentation index (r² = 0.58 and 0.88, respectively).
The FIRST prototype yields some promising results for high-resolution paleoceanographic studies and evolutionary studies.
Use of machine learning methods to classify Universities based on the income structure
NASA Astrophysics Data System (ADS)
Terlyga, Alexandra; Balk, Igor
2017-10-01
In this paper we discuss the use of machine learning methods such as self-organizing maps, k-means, and Ward's clustering to classify universities based on their income. This classification allows us to quantitatively characterize universities as teaching, research, or entrepreneurial institutions, which is an important tool for governments, corporations, and the general public alike in setting expectations and selecting universities to achieve different goals.
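The clustering idea can be sketched with a minimal k-means implementation (stdlib only): each university is represented as a vector of income shares, and the resulting clusters read as profiles such as "teaching" vs. "research" vs. "entrepreneurial". The income categories and numbers below are made up for illustration.

```python
def kmeans(points, centers, iters=20):
    """Plain k-means: assign each point to the nearest center, recompute."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            groups[i].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical income shares (tuition, research grants, commercial) for six universities.
unis = [(0.8, 0.1, 0.1), (0.7, 0.2, 0.1),   # teaching-heavy
        (0.2, 0.7, 0.1), (0.3, 0.6, 0.1),   # research-heavy
        (0.3, 0.2, 0.5), (0.2, 0.2, 0.6)]   # entrepreneurial
centers, groups = kmeans(unis, centers=[unis[0], unis[2], unis[4]])
```

Self-organizing maps and Ward's clustering differ in mechanics but would consume the same income-share vectors.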
A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format.
Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís
2017-05-01
Clinical data sharing between healthcare institutions, and between practitioners, is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes somewhat more complex than simple de-identification of textual information. Usually, before sharing, there is a need for manual removal of specific areas containing sensitive information in the images. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and a Software-as-a-Service to streamline automatic de-identification of medical images, which is freely available for end-users. The proposed approach applies image processing functions and machine-learning models to bring about an automatic system to anonymize medical images. To perform character recognition, we evaluated several machine-learning models, with Convolutional Neural Networks (CNN) selected as the best approach. To assess the system's quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and it is available with the most recent versions of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available for the community.
Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer
1997-01-01
A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy
2017-03-01
In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected; morphological contrasting and the Hough Transform are the only additional computational expense of the introduced input feature expansion. The proposed approach was demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
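A minimal Hough line transform of the kind used to build the extra input features can be sketched as follows: every edge pixel votes in a (theta, rho) accumulator, and peaks correspond to lines in the image. The resolution and test pixels below are illustrative.

```python
import math

def hough_accumulator(edge_pixels, n_theta=180, rho_max=64):
    """Vote each (x, y) edge pixel into (theta-index, rho) bins."""
    acc = {}
    for x, y in edge_pixels:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -rho_max <= rho <= rho_max:
                acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Five collinear pixels on the vertical line x = 3: all five votes pile
# up in the bin with theta index 0 and rho = 3.
pixels = [(3, y) for y in range(5)]
acc = hough_accumulator(pixels)
```

In the proposed scheme such an accumulator (computed after morphological contrasting) is fed to a subset of the first-layer filters alongside the raw image.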
Stability Training for Convolutional Neural Nets in LArTPC
NASA Astrophysics Data System (ADS)
Lindsay, Matt; Wongjirad, Taritree
2017-01-01
Convolutional Neural Nets (CNNs) are the state of the art for many problems in computer vision and are a promising method for classifying interactions in Liquid Argon Time Projection Chambers (LArTPCs) used in neutrino oscillation experiments. Despite their good performance, CNNs are not without drawbacks, chief among them vulnerability to noise and small perturbations of the input. One solution to this problem is a modification of the learning process called Stability Training, developed by Zheng et al. We verify existing work and demonstrate the volatility caused by simple Gaussian noise, and also that this volatility can be nearly eliminated with Stability Training. We then go further and show that a traditional CNN is also vulnerable to realistic experimental noise, whereas a stability-trained CNN remains accurate despite the noise. This further adds to the optimism for CNNs in LArTPCs and other applications.
Boosting CNN performance for lung texture classification using connected filtering
NASA Astrophysics Data System (ADS)
Tarando, Sebastián. Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves
2018-02-01
Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improve the learning process and boosts the performance of CNN for lung texture classification. By removing the dense vascular network from images used by the CNN for lung classification, locally connected filters provide a better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a 10-patient database of various lung pathologies, showing an increase of 10% in true positive rate (on average over all cases) with respect to the state-of-the-art cascade of CNNs for this task.
Detection of masses in mammogram images using CNN, geostatistic functions and SVM.
Sampaio, Wener Borges; Diniz, Edgar Moraes; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo
2011-08-01
Breast cancer occurs with high frequency among the world's population and its effects impact the patients' perception of their own sexuality and their very personal image. This work presents a computational methodology that helps specialists detect breast masses in mammogram images. The first stage of the methodology aims to improve the mammogram image. This stage consists in removing objects outside the breast, reducing noise and highlighting the internal structures of the breast. Next, cellular neural networks are used to segment the regions that might contain masses. These regions have their shapes analyzed through shape descriptors (eccentricity, circularity, density, circular disproportion and circular density) and their textures analyzed through geostatistic functions (Ripley's K function and Moran's and Geary's indexes). Support vector machines are used to classify the candidate regions as masses or non-masses, with sensitivity of 80%, rates of 0.84 false positives per image and 0.2 false negatives per image, and an area under the ROC curve of 0.87. Copyright © 2011 Elsevier Ltd. All rights reserved.
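One of the shape descriptors listed above, circularity, can be sketched as the classic ratio 4·pi·area / perimeter², which equals 1 for a perfect disc and drops toward 0 for elongated or ragged candidate regions (the thin-region numbers below are illustrative).

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P**2: 1.0 for a disc, smaller for elongated shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

# A disc of radius 10 versus a thin strip-like region.
disc = circularity(area=math.pi * 10 ** 2, perimeter=2 * math.pi * 10)
thin = circularity(area=20.0, perimeter=42.0)
```

Descriptors like this, together with the geostatistic texture measures, form the feature vector classified by the SVM.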
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
The manual feature extraction of traditional methods for vehicle license plates lacks robustness to diverse changes, and the high dimension of features extracted with Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with omitting the compressive sensing step, the proposed method has a lower feature dimension, increasing efficiency.
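The compressive-sensing dimension reduction can be sketched as a projection y = Φx with a very sparse random measurement matrix Φ. As an assumption for illustration, Φ uses Achlioptas-style entries in {−s, 0, +s}; the paper's exact matrix construction is not specified here. The reduced vectors would then feed the SVM.

```python
import random

def sparse_measurement_matrix(m, n, density=1/3, seed=0):
    """m x n matrix, mostly zeros; nonzeros are +/- 1/sqrt(density)."""
    rng = random.Random(seed)
    scale = (1.0 / density) ** 0.5
    return [[rng.choice([-scale, scale]) if rng.random() < density else 0.0
             for _ in range(n)] for _ in range(m)]

def project(phi, x):
    """y = Phi x: reduce an n-dimensional feature vector to m dimensions."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

phi = sparse_measurement_matrix(m=4, n=16)   # 16-D features -> 4-D
x = [float(i) for i in range(16)]
y = project(phi, x)
```

Because Φ is mostly zeros, each measurement touches only a fraction of the feature entries, which is what makes the reduction cheap.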
Convolutional neural network with transfer learning for rice type classification
NASA Astrophysics Data System (ADS)
Patel, Vaibhav Amit; Joshi, Manjunath V.
2018-04-01
Presently, rice type is identified manually by humans, which is time-consuming and error-prone. Therefore, there is a need to do this by machine, which makes it faster with greater accuracy. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning, in which the weights of a pretrained network are reused to achieve better accuracy. Our approach can also be used for classification of rice grains as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that despite having distinct rice images, our architecture, pretrained on ImageNet data, boosts classification accuracy significantly.
Carpenter, Kristy A; Huang, Xudong
2018-06-07
Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds, comprised of known actives and inactives. After training the model, it is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with desired drug target binding activity. The study aims to review ML-based methods used for VS and applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), which are a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential therapeutics for AD, a complex neurodegenerative disease with no known cure or means of prevention. This serves both as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. The different ML techniques are powerful tools for VS, each with its own advantages and disadvantages. ML-based VS can be applied to AD drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
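The generic ML-based VS workflow described above (train on known actives and inactives, validate, then screen an unseen library and rank the hits) can be sketched as follows. The binary "fingerprints" and the signature bits shared by actives are synthetic stand-ins for real compound descriptors, not data from any screening campaign.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_bits = 256  # toy stand-in for a molecular fingerprint length

def make_compounds(n, active):
    """Random sparse binary fingerprints; actives share a few signature bits."""
    X = (rng.random((n, n_bits)) < 0.05).astype(int)
    if active:
        X[:, :5] = 1                      # hypothetical "pharmacophore" bits
    return X

# Assemble a labeled training set of known actives and inactives.
X_train = np.vstack([make_compounds(100, True), make_compounds(100, False)])
y_train = np.array([1] * 100 + [0] * 100)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Screen an unseen "library" and rank compounds by predicted activity.
library = np.vstack([make_compounds(5, True), make_compounds(95, False)])
scores = model.predict_proba(library)[:, 1]
top_hits = np.argsort(scores)[::-1][:5]   # indices of the 5 best-scoring
```

The ranked `scores` play the role of the prioritized hit list that would be passed on for experimental validation.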
McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne
2018-04-01
Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from the food image datasets Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an SVM with RBF kernel can detect food items with 99.4% accuracy on the Food-5K validation dataset, and with 98.8% accuracy on the Food-5K evaluation dataset using the ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, the ANN achieves 91.34% and 99.28% accuracy when applied to the Food-11 and RawFooT-DB food image datasets respectively, and the SVM with RBF kernel achieves 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Keshavamurthy, Krishna N.; Leary, Owen P.; Merck, Lisa H.; Kimia, Benjamin; Collins, Scott; Wright, David W.; Allen, Jason W.; Brock, Jeffrey F.; Merck, Derek
2017-03-01
Traumatic brain injury (TBI) is a major cause of death and disability in the United States. Time to treatment is often related to patient outcome. Access to cerebral imaging data in a timely manner is a vital component of patient care. Current methods of detecting and quantifying intracranial pathology can be time-consuming and require careful review of 2D/3D patient images by a radiologist. Additional time is needed for image protocoling, acquisition, and processing. These steps often occur in series, adding more time to the process and potentially delaying time-dependent management decisions for patients with traumatic brain injury. Our team adapted machine learning and computer vision methods to develop a technique that rapidly and automatically detects CT-identifiable lesions. Specifically, we use scale invariant feature transform (SIFT) and deep convolutional neural networks (CNN) to identify important image features that can distinguish TBI lesions from background data. Our learning algorithm is a linear support vector machine (SVM). Further, we also employ tools from topological data analysis (TDA) for gleaning insights into the correlation patterns between healthy and pathological data. The technique was validated using 409 CT scans of the brain, acquired via the Progesterone for the Treatment of Traumatic Brain Injury phase III clinical trial (ProTECT_III), which studied patients with moderate to severe TBI. CT data were annotated by a central radiologist and included patients with positive and negative scans. Additionally, the largest lesion on each positive scan was manually segmented. We reserved 80% of the data for training the SVM and used the remaining 20% for testing. Preliminary results are promising with 92.55% prediction accuracy (sensitivity = 91.15%, specificity = 93.45%), indicating the potential usefulness of this technique in clinical scenarios.
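The sensitivity and specificity figures reported above follow directly from the confusion-matrix definitions; a small self-contained sketch on toy labels (not the trial data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 10 positive (lesion) and 10 negative scans.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]   # one missed lesion, one false alarm
sens, spec = sensitivity_specificity(y_true, y_pred)
```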
Two-qubit quantum cloning machine and quantum correlation broadcasting
NASA Astrophysics Data System (ADS)
Kheirollahi, Azam; Mohammadi, Hamidreza; Akhtarshenas, Seyed Javad
2016-11-01
Due to the axioms of quantum mechanics, perfect cloning of an unknown quantum state is impossible. But since imperfect cloning is still possible, a question arises: "Is there an optimal quantum cloning machine?" Buzek and Hillery answered this question and constructed their famous B-H quantum cloning machine. The B-H machine clones the state of an arbitrary single qubit in an optimal manner and hence it is universal. Generalizing this machine for a two-qubit system is straightforward, but during this procedure, except for product states, this machine loses its universality and becomes a state-dependent cloning machine. In this paper, we propose some classes of optimal universal local quantum state cloners for a particular class of two-qubit systems, more precisely, for a class of states with known Schmidt basis. We then extend our machine to the case that the Schmidt basis of the input state is deviated from the local computational basis of the machine. We show that more local quantum coherence existing in the input state corresponds to less fidelity between the input and output states. Also we present two classes of a state-dependent local quantum copying machine. Furthermore, we investigate local broadcasting of two aspects of quantum correlations, i.e., quantum entanglement and quantum discord, defined, respectively, within the entanglement-separability paradigm and from an information-theoretic perspective. The results show that although quantum correlation is, in general, very fragile during the broadcasting procedure, quantum discord is broadcasted more robustly than quantum entanglement.
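For reference, the figure of merit behind the optimality claim for the single-qubit Buzek-Hillery machine is its input-independent clone fidelity:

```latex
% Buzek-Hillery universal symmetric 1 -> 2 qubit cloning machine:
% each output clone reproduces the input state |\psi> with fidelity
F \;=\; \langle \psi \,|\, \rho_{\mathrm{out}} \,|\, \psi \rangle \;=\; \frac{5}{6},
% the optimal value for a universal cloner, while perfect cloning
% (F = 1) is excluded by the no-cloning theorem.
```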
Wells, Christopher J.; Alano, Abraham
2013-01-01
Introduction Risky sexual behavior among Ethiopian university students, especially females, is a major contributor to young adult morbidity and mortality. Ambaw et al. found that female university students in Ethiopia may fear the humiliation associated with procuring condoms. A study in Thailand suggests condom machines may provide comfortable condom procurement, but the relevance to a high-risk African context is unknown. The objective of this study was to examine if the installation of condom machines in Ethiopia predicts changes in student condom uptake and use, as well as changes in procurement-related stigma. Methods Students at a large urban university in Southern Ethiopia completed self-reported surveys in 2010 (N = 2,155) and again in 2011 (N = 2,000), six months after the installation of condom machines. Mann-Whitney and Chi-square tests were conducted to evaluate significant changes in student sexual behavior, as well as condom procurement and associated stigma, over the subsequent one-year period. Results After installing condom machines, the average number of trips made to procure condoms on-campus significantly increased by 101% for sexually active females and significantly decreased by 36% for sexually active males. Additionally, reports of condom use during last sexual intercourse showed a non-significant 4.3% increase for females and a significant 9.0% increase for males. During this time, comfort procuring condoms and ability to convince sexual partners to use condoms were significantly higher for sexually active male students. There was no evidence that the condom machines led to an increase in promiscuity. Conclusions The results suggest that condom machines may be associated with more condom procurement among vulnerable female students in Ethiopia and could be an important component of a comprehensive university health policy. PMID:23565272
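The two tests named in the Methods can be run with SciPy as follows; all numbers below are invented toy data, not the study's survey results:

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(3)

# Mann-Whitney U: trips to procure condoms, before vs. after installation
# (toy counts; the true distributions are unknown here).
trips_2010 = rng.poisson(1.0, size=100)
trips_2011 = rng.poisson(2.0, size=100)
u_stat, u_p = mannwhitneyu(trips_2010, trips_2011, alternative='two-sided')

# Chi-square: condom use at last intercourse (used / not used) by year,
# as a 2x2 contingency table of hypothetical counts.
table = np.array([[55, 45],    # 2010: used, not used
                  [64, 36]])   # 2011: used, not used
chi2, chi_p, dof, expected = chi2_contingency(table)
```

The Mann-Whitney test compares the two count distributions without assuming normality, which suits skewed trip counts; the chi-square test checks whether the use/non-use proportion changed between survey years.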
2011-06-30
things. - Gerald M. Weinberg Author's Note: This paper is a theoretical exercise that attempts to deliver one possible Army...military and the overarching web of government agencies and international actors could approach Mexico's current issues- however, this is a purely...interact." Raj Kumar, Why Mexico's Violence is America's Problem (CNN Opinion, April 11, 2011, http://www.cnn.com/2011/OPINION/04/11
Mission Driven Scene Understanding: Candidate Model Training and Validation
2016-09-01
driven scene understanding. One of the candidate engines that we are evaluating is a convolutional neural network (CNN) program installed on a Windows 10...Theano-AlexNet) installed on a Windows 10 notebook computer. To the best of our knowledge, an implementation of the open-source, Python-based...AlexNet CNN on a Windows notebook computer has not been previously reported. In this report, we present progress toward the proof-of-principle testing
A novel deep learning approach for classification of EEG motor imagery signals.
Tabar, Yousef Rezaei; Halici, Ugur
2017-02-01
Signal classification is an important issue in brain computer interface (BCI) systems. Deep learning approaches have been used successfully in many recent studies to learn features and classify different types of data. However, the number of studies that employ these approaches on BCI applications is very limited. In this study we aim to use deep learning methods to improve classification performance of EEG motor imagery signals. We investigate convolutional neural networks (CNN) and stacked autoencoders (SAE) to classify EEG motor imagery signals. A new form of input is introduced to combine time, frequency and location information extracted from the EEG signal, and it is used in a CNN having one 1D convolutional layer and one max-pooling layer. We also propose a new deep network combining CNN and SAE; in this network, the features extracted by the CNN are classified through the deep SAE network. The classification performance obtained by the proposed method on BCI competition IV dataset 2b in terms of kappa value is 0.547, a 9% improvement over the winning algorithm of the competition. Our results show that deep learning methods provide better classification performance compared to other state-of-the-art approaches. These methods can be applied successfully to BCI systems where the amount of data is large due to daily recording.
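The two layer types used in the CNN above, 1D convolution and max pooling, can be sketched in plain NumPy; the input sequence and kernel below are toy values, not the paper's learned filters:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1D 'valid' convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def max_pool1d(x, size):
    """Non-overlapping max pooling; any trailing remainder is dropped."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

x = np.array([0., 1., 0., 2., 1., 0., 1., 3.])    # toy "EEG feature" sequence
feat = conv1d_valid(x, np.array([1., -1.]))       # simple difference kernel
pooled = max_pool1d(feat, 2)                      # downsample, keep strongest
```

The convolution produces a local feature map and the pooling keeps the strongest response in each window, which is exactly the role these layers play in the 1D CNN described above.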
Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification
Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.
2016-01-01
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661
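One simple instance of decision fusion over patch-level predictions (not the authors' learned fusion model) is confidence-weighted averaging, where low-entropy, i.e. discriminative, patches get more weight, in the spirit of down-weighting non-discriminative patches:

```python
import numpy as np

def fuse_patch_predictions(patch_probs):
    """Confidence-weighted averaging of per-patch class probabilities."""
    patch_probs = np.asarray(patch_probs)
    eps = 1e-12
    # Discriminative patches have low-entropy (peaked) distributions.
    entropy = -np.sum(patch_probs * np.log(patch_probs + eps), axis=1)
    weights = np.exp(-entropy)
    weights /= weights.sum()
    return weights @ patch_probs          # image-level class distribution

# Toy per-patch outputs of a hypothetical patch CNN (2 classes):
patch_probs = [[0.95, 0.05],   # discriminative patch, votes class 0
               [0.50, 0.50],   # uninformative background patch
               [0.90, 0.10]]
image_probs = fuse_patch_predictions(patch_probs)
label = int(np.argmax(image_probs))
```

The uninformative patch contributes least to `image_probs`, mimicking at a toy scale what the paper's trained fusion model and EM-based patch selection achieve.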
Tang, Tianyu; Zhou, Shilin; Deng, Zhipeng; Zou, Huanxin; Lei, Lin
2017-02-10
Detecting vehicles in aerial imagery plays an important role in a wide range of applications. The current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, which have limited description capability and heavy computational costs. Recently, owing to their powerful feature representations, region convolutional neural network (CNN) based detection methods have achieved state-of-the-art performance in computer vision, especially Faster R-CNN. However, directly using it for vehicle detection in aerial images has many limitations: (1) the region proposal network (RPN) in Faster R-CNN has poor performance for accurately locating small-sized vehicles, due to the relatively coarse feature maps; and (2) the classifier after the RPN cannot distinguish vehicles and complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed to address the two challenges mentioned above. Firstly, to improve the recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets with a combination of hierarchical feature maps. Then, we replace the classifier after the RPN by a cascade of boosted classifiers to verify the candidate regions, aiming at reducing false detections by negative example mining. We evaluate our method on the Munich vehicle dataset and our collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.
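The negative example mining idea behind the boosted-classifier cascade can be sketched as an iterative retraining loop: train, find the background windows the current model misclassifies as vehicles, add them as hard negatives, and retrain. The Gaussian "window features" below are synthetic stand-ins, and a linear SVM stands in for the boosted cascade:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
pos = rng.normal(loc=2.0, size=(100, 16))         # toy "vehicle" windows
neg_pool = rng.normal(loc=0.0, size=(2000, 16))   # large background pool

neg = neg_pool[:100]                              # initial random negatives
for _ in range(3):
    X = np.vstack([pos, neg])
    y = np.array([1] * len(pos) + [0] * len(neg))
    clf = LinearSVC(dual=False).fit(X, y)
    hard = neg_pool[clf.predict(neg_pool) == 1]   # current false positives
    if len(hard) == 0:
        break                                     # no hard negatives left
    neg = np.vstack([neg, hard])                  # mine them into training

false_positive_rate = np.mean(clf.predict(neg_pool) == 1)
```

Each round focuses the classifier on exactly the backgrounds it confuses with vehicles, which is the mechanism the paper uses to reduce false detections.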
NASA Technical Reports Server (NTRS)
Kalia, Subodh; Ganguly, Sangram; Li, Shuang; Nemani, Ramakrishna R.
2017-01-01
Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. With the challenges associated with data acquired at very high frequency (10-15 min per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The key to success for most of the existing algorithms is spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We trained CloudCNN on a multi-GPU Nvidia Devbox cluster, and deployed the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 full-disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event prediction.
50th Anniversary of the Civil Rights Act of 1964
2014-06-23
Members of the audience listen as U.S. Representative Eddie Bernice Johnson, of Texas; Dr. Harriet Jenkins, Former Assistant Administrator for Equal Opportunity Programs at NASA; Dr. Roger Launius, Associate Director of Collections and Curatorial Affairs at the Smithsonian National Air and Space Museum; and Dr. Michael Eric Dyson, a professor of sociology at Georgetown University; speak on a panel moderated by Suzanne Malveaux, of CNN, at an event celebrating the 50th Anniversary of the Civil Rights Act of 1964 on Monday, June 23, 2014 in the James E. Webb Auditorium at NASA Headquarters in Washington, DC. The event highlighted the influence of the Civil Rights Act on NASA. Photo Credit: (NASA/Joel Kowsky)
An obstacle to building a time machine
NASA Astrophysics Data System (ADS)
Carroll, Sean M.; Farhi, Edward; Guth, Alan H.
1992-01-01
Gott (1991) has shown that a spacetime with two infinite parallel cosmic strings passing each other with sufficient velocity contains closed timelike curves. An attempt to build such a time machine is discussed. Using the energy-momentum conservation laws in the equivalent (2 + 1)-dimensional theory, the spacetime representing the decay of one gravitating particle into two is explicitly constructed; there is never enough mass in an open universe to build the time machine from the products of decays of stationary particles. More generally, the Gott time machine cannot exist in any open (2 + 1)-dimensional universe for which the total momentum is timelike.
Pinzon-Morales, Ruben-Dario; Hirata, Yutaka
2014-01-01
To acquire and maintain precise movement controls over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real-time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous firing rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single hemisphere model or a classical non-adaptive proportional and derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model. 
Thus, we conclude that a bi-hemispheric structure and adequate spontaneous activity of cf inputs are critical for cerebellar asymmetrical motor learning.
Wang, Tao; Hao, Xin-Qi; Zhang, Xiao-Xue; Gong, Jun-Fang; Song, Mao-Ping
2011-09-21
N-substituted-2-aminomethyl-6-phenylpyridines 2a-c have been easily prepared from commercially available 6-bromo-2-picolinaldehyde in two steps. Reaction of 2a-c with PdCl(2) in toluene in the presence of triethylamine gave the CNN pincer Pd(II) complexes 3a-c in 18-28% yields. The CNN pincer Ru(II) complex 5 containing a Ru-NHR functionality could be obtained in 71% yield by treatment of 2c with a Ru(II) precursor instead of PdCl(2). Additionally, the related CNN pincer Ru(II) complex 7 containing a Ru-NH(2) functionality has been synthesized in 68% yield by the reaction of 2-aminomethyl-6-phenylpyridine with the same Ru(II) precursor. All the new compounds were characterized by elemental analysis (MS for ligands), (1)H, (13)C NMR, (31)P{(1)H} NMR (for Ru complexes) and IR spectra. Molecular structures of Pd complex 3c as well as Ru complexes 5 and 7 have been determined by X-ray single-crystal diffraction. The obtained Pd complexes 3a-c were effective catalysts for the allylation of aldehydes as well as for the three-component allylation of aldehydes, arylamines and allyltributyltin, and their activity was found to be much higher than that of a related NCN pincer Pd(II) complex in the allylation of aldehydes. On the other hand, the two new CNN pincer Ru(II) complexes 5 and 7 displayed excellent catalytic activity in the transfer hydrogenation of ketones in refluxing 2-propanol, with the latter being much more active. The final TOF values were up to 4510 h(-1) with 0.01 mol% of 5 and 220,800 h(-1) with 0.005 mol% of 7, respectively. This journal is © The Royal Society of Chemistry 2011
Automatic bladder segmentation from CT images using deep CNN and 3D fully connected CRF-RNN.
Xu, Xuanang; Zhou, Fugen; Liu, Bo
2018-03-19
Automatic approaches for bladder segmentation from computed tomography (CT) images are highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random fields recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as follows: first, we apply our proposed preprocessing method to the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produces the final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps produced by our approach present sharper boundaries and more accurate localizations compared with those of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel preprocessing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.
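The Dice Similarity Coefficient used to evaluate these segmentations is DSC = 2|A ∩ B| / (|A| + |B|); a minimal implementation on toy binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                      # both masks empty: define as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[0, 1, 1],
                 [0, 1, 1]])            # toy predicted mask
gt   = np.array([[0, 1, 1],
                 [0, 1, 0]])            # toy ground-truth mask
score = dice(pred, gt)                  # 2*3 / (4 + 3)
```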
Yasaka, Koichiro; Akai, Hiroyuki; Abe, Osamu; Kiryu, Shigeru
2018-03-01
Purpose To investigate diagnostic performance by using a deep learning method with a convolutional neural network (CNN) for the differentiation of liver masses at dynamic contrast agent-enhanced computed tomography (CT). Materials and Methods This clinical retrospective study used CT image sets of liver masses over three phases (noncontrast-agent enhanced, arterial, and delayed). Masses were diagnosed according to five categories (category A, classic hepatocellular carcinomas [HCCs]; category B, malignant liver tumors other than classic and early HCCs; category C, indeterminate masses or mass-like lesions [including early HCCs and dysplastic nodules] and rare benign liver masses other than hemangiomas and cysts; category D, hemangiomas; and category E, cysts). Supervised training was performed by using 55 536 image sets obtained in 2013 (1068 sets obtained from 460 patients, augmented by a factor of 52 [rotated, parallel-shifted, strongly enlarged, and noise-added images were generated from the original images]). The CNN was composed of six convolutional, three maximum pooling, and three fully connected layers. The CNN was tested with 100 liver mass image sets obtained in 2016 (74 men and 26 women; mean age, 66.4 years ± 10.6 [standard deviation]; mean mass size, 26.9 mm ± 25.9; 21, nine, 35, 20, and 15 liver masses for categories A, B, C, D, and E, respectively). Training and testing were performed five times. Accuracy for categorizing liver masses with the CNN model and the area under the receiver operating characteristic curve for differentiating categories A-B versus categories C-E were calculated. Results Median accuracy of differential diagnosis of liver masses for test data was 0.84. Median area under the receiver operating characteristic curve for differentiating categories A-B from C-E was 0.92. Conclusion Deep learning with CNN showed high diagnostic performance in differentiation of liver masses at dynamic CT.
© RSNA, 2017 Online supplemental material is available for this article.
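The augmentation types listed above (rotation, parallel shift, added noise; strong enlargement is omitted here for brevity) can be sketched for a toy grayscale patch; the resulting factor of 7 is illustrative only, not the paper's factor of 52:

```python
import numpy as np

rng = np.random.default_rng(5)

def augment(img):
    """Return the original image plus simple rotated/shifted/noisy variants."""
    out = [img]
    out += [np.rot90(img, k) for k in (1, 2, 3)]          # rotations
    out += [np.roll(img, s, axis=1) for s in (-1, 1)]     # parallel shifts
    out += [img + rng.normal(0, 0.05, img.shape)]         # additive noise
    return out

patch = rng.random((8, 8))              # toy stand-in for a CT image set
augmented = augment(patch)
factor = len(augmented)                 # each original yields `factor` images
```

Applying many such label-preserving transforms is what lets 1068 original image sets expand into the 55 536 used for training.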
Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E
2018-05-15
Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods that have originally been developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
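A statistical shape model of the kind used here can be sketched as PCA over flattened keypoint coordinates, with new plausible shapes for augmentation sampled as the mean shape plus small moves along the principal modes of variation. The keypoint data below are synthetic, not prostate surfaces:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)

# 50 toy training shapes, each with 20 2D surface keypoints:
base = rng.random((20, 2))
shapes = base[None] + 0.05 * rng.normal(size=(50, 20, 2))
X = shapes.reshape(50, -1)               # flatten to 40-d shape vectors

ssm = PCA(n_components=5).fit(X)         # the statistical shape model

def sample_shape(b):
    """Mean shape deformed along the leading modes by coefficients b."""
    vec = ssm.mean_ + b @ ssm.components_[:len(b)]
    return vec.reshape(20, 2)

new_shape = sample_shape(np.array([0.1, -0.05]))   # a synthesized shape
```

Deforming training images consistently with such sampled keypoint displacements yields additional anatomically plausible training examples, which is the augmentation idea the paper exploits.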
NASA Astrophysics Data System (ADS)
Qin, Wenjian; Wu, Jia; Han, Fei; Yuan, Yixuan; Zhao, Wei; Ibragimov, Bulat; Gu, Jia; Xing, Lei
2018-05-01
Segmentation of the liver in abdominal computed tomography (CT) is an important step for radiation therapy planning of hepatocellular carcinoma. In practice, fully automatic segmentation of the liver remains challenging because of low soft-tissue contrast between the liver and its surrounding organs, and its highly deformable shape. The purpose of this work is to develop a novel superpixel-based and boundary sensitive convolutional neural network (SBBS-CNN) pipeline for automated liver segmentation. The entire CT images were first partitioned into superpixel regions, where nearby pixels with similar CT number were aggregated. Secondly, we converted the conventional binary segmentation into a multinomial classification by labeling the superpixels into three classes: interior liver, liver boundary, and non-liver background. By doing this, the boundary region of the liver was explicitly identified and highlighted for the subsequent classification. Thirdly, we computed an entropy-based saliency map for each CT volume, and leveraged this map to guide the sampling of image patches over the superpixels. In this way, more patches were extracted from informative regions (e.g. the liver boundary with irregular changes) and fewer patches were extracted from homogeneous regions. Finally, a deep CNN pipeline was built and trained to predict the probability map of the liver boundary. We tested the proposed algorithm in a cohort of 100 patients. With 10-fold cross validation, the SBBS-CNN achieved mean Dice similarity coefficients of 97.31 ± 0.36% and average symmetric surface distance of 1.77 ± 0.49 mm. Moreover, it showed superior performance in comparison with state-of-the-art methods, including U-Net, pixel-based CNN, active contour, level-sets and graph-cut algorithms. SBBS-CNN provides an accurate and effective tool for automated liver segmentation. It is also envisioned that the proposed framework is directly applicable in other medical image segmentation scenarios.
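The entropy-based saliency idea, sampling more patches from informative regions than from homogeneous ones, can be illustrated with a per-patch histogram entropy; the patches and the 8 gray-level bins below are toy assumptions:

```python
import numpy as np

def patch_entropy(patch, bins=8):
    """Shannon entropy of the patch's gray-level histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(7)
flat = np.full((8, 8), 0.5)                 # homogeneous region (liver interior)
textured = rng.random((8, 8))               # irregular, boundary-like region

e_flat, e_tex = patch_entropy(flat), patch_entropy(textured)

# Sampling weights proportional to entropy: more patches come from the
# textured region, fewer from the homogeneous one.
w = np.array([e_flat, e_tex])
w = w / w.sum()
```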
Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen
2018-04-26
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, the convolutional neural network (CNN) is designed to teach the MF-TCAI how to reconstruct the low SNR target better. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
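A minimal matched-filter sketch (not the paper's TCAI model): correlating the noisy received echo with the known transmitted pulse concentrates the target energy into a peak at the true delay, which is how MF raises the effective SNR. The pulse shape, delay, and noise level below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

pulse = np.sin(2 * np.pi * 0.1 * np.arange(32))   # known transmitted pulse
echo = np.zeros(256)
delay = 100
echo[delay:delay + 32] = pulse                     # target return at `delay`
echo += 0.1 * rng.normal(size=256)                 # receiver noise

# Matched filtering of a real signal = cross-correlation with the pulse.
mf_out = np.correlate(echo, pulse, mode='valid')
est_delay = int(np.argmax(mf_out))                 # peak marks the target
```

The correlation peak at `est_delay` localizes the target even though the raw echo samples are comparable to the noise floor.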
Pinzon-Morales, Ruben-Dario; Hirata, Yutaka
2014-01-01
To acquire and maintain precise movement control over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by an error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single-hemisphere model or a classical non-adaptive proportional-derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model.
Thus, we conclude that a bi-hemispheric structure and adequate spontaneous activity of cf inputs are critical for cerebellar asymmetrical motor learning. PMID:25414644
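The classical non-adaptive proportional-derivative baseline that the biCNN model is compared against can be sketched as a generic PD loop. The toy integrator plant, gains, and step count below are illustrative assumptions, not the robot model from the paper.

```python
# Illustrative PD control loop on a toy integrator plant (dx/dt = u).
# A fixed-gain controller like this cannot adapt when the plant changes,
# which is the limitation the adaptive biCNN model addresses.
def pd_control(setpoint, kp=2.0, kd=0.5, dt=0.01, steps=2000):
    x, prev_error = 0.0, setpoint   # plant state; derivative term starts at 0
    for _ in range(steps):
        error = setpoint - x
        u = kp * error + kd * (error - prev_error) / dt  # PD control law
        x += dt * u                  # integrate the plant one step
        prev_error = error
    return x

final_state = pd_control(1.0)        # drive the state toward 1.0
```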
Energy Harvesting for Soft-Matter Machines and Electronics
2016-06-09
AFRL-AFOSR-VA-TR-2016-0353: Energy Harvesting for Soft-Matter Machines and Electronics. Carmel Majidi, Carnegie Mellon University, Mechanical Engineering, 5000 Forbes Avenue, Pittsburgh, PA 15213-3815, US. Final Report, 06/09/2016. Distribution A: approved for public release.
A CNN based neurobiology inspired approach for retinal image quality assessment.
Mahapatra, Dwarikanath; Roy, Pallab K; Sedai, Suman; Garnavi, Rahil
2016-08-01
Retinal image quality assessment (IQA) algorithms use various hand-crafted features to train classifiers without considering the working of the human visual system (HVS), which plays an important role in IQA. We propose a convolutional neural network (CNN) based approach that determines image quality using the underlying principles of the HVS. CNNs provide a principled approach to feature learning and hence higher accuracy in decision making. Experimental results demonstrate the superior performance of our proposed algorithm over competing methods.
Strategy and Structure for Online News Production - Case Studies of CNN and NRK
NASA Astrophysics Data System (ADS)
Krumsvik, Arne H.
This cross-national comparative case study of online news production analyzes the strategies of Cable News Network (CNN) and the Norwegian Broadcasting Corporation (NRK). It aims to understand the implications of organizational strategy for the role of journalists, and explains why traditional media organizations tend to develop a multi-platform approach (distributing content on several platforms, such as television, online, and mobile) rather than the cross-media approach (with interplay between media types) or the multimedia approach anticipated by both scholars and practitioners.
NASA Technical Reports Server (NTRS)
Lundquist, Eugene E; Schwartz, Edward B
1942-01-01
The results of a theoretical and experimental investigation to determine the critical compression load for a universal testing machine are presented for specimens loaded through knife edges. The critical load for the testing machine is the load at which one of the loading heads becomes laterally unstable in relation to the other. For very short specimens the critical load was found to be less than the rated capacity given by the manufacturer for the machine. A load-length diagram is proposed for defining the safe limits of the test region for the machine. Although this report is particularly concerned with a universal testing machine of a certain type, the basic theory, which led to the derivation of the general equation for the critical load, P_cr = alpha * L, can be applied to any testing machine operated in compression where the specimen is loaded through knife edges. In this equation, L is the length of the specimen between knife edges and alpha is the force necessary to displace the upper end of the specimen a unit horizontal distance relative to the lower end of the specimen in a direction normal to the knife edges through which the specimen is loaded.
Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.
2016-01-01
Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. 
Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
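The thresholding and hole-filling applied to the DL-CNN likelihood map before level-set refinement can be illustrated on a toy example. This is a minimal sketch, assuming a simple border flood fill for hole filling; the 5x5 map and 0.5 threshold are invented, and it is not the authors' implementation.

```python
# Illustrative post-processing of a likelihood map: threshold, then fill
# interior holes by flood-filling the background from the image border;
# any zero pixel the flood fill cannot reach is a hole inside the organ.
from collections import deque

def threshold_and_fill(likelihood, thr=0.5):
    h, w = len(likelihood), len(likelihood[0])
    mask = [[1 if likelihood[y][x] >= thr else 0 for x in range(w)]
            for y in range(h)]
    outside = [[False] * w for _ in range(h)]
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if (y in (0, h - 1) or x in (0, w - 1)) and mask[y][x] == 0)
    for y, x in queue:
        outside[y][x] = True
    while queue:                     # breadth-first background flood fill
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 0 \
                    and not outside[ny][nx]:
                outside[ny][nx] = True
                queue.append((ny, nx))
    return [[0 if outside[y][x] else 1 for x in range(w)] for y in range(h)]

# A 5x5 ring of high likelihood with a low-likelihood hole in the middle.
toy = [[0.9 if 1 <= y <= 3 and 1 <= x <= 3 else 0.1 for x in range(5)]
       for y in range(5)]
toy[2][2] = 0.1                      # the interior hole
filled = threshold_and_fill(toy)     # hole becomes foreground
```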
ERIC Educational Resources Information Center
Krejsler, John Benedicto
2013-01-01
"The modernizing machine" codes individual bodies, things, and symbols with images from New Public Management, neo-liberal, and Knowledge Economy discourses. Drawing on Deleuze and Guattari's concept of machines, this article explores how "the modernizing machine" produces neo-liberal modernization of the public sector. Taking…
Border-oriented post-processing refinement on detected vehicle bounding box for ADAS
NASA Astrophysics Data System (ADS)
Chen, Xinyuan; Zhang, Zhaoning; Li, Minne; Li, Dongsheng
2018-04-01
We investigate a new approach for improving the localization accuracy of detected vehicles in advanced driver assistance systems (ADAS). Specifically, we implement bounding box refinement as a post-processing step for state-of-the-art object detectors (Faster R-CNN, YOLOv2, etc.). The refinement is achieved by individually adjusting each border of the detected bounding box to its target location using a regression method. We use HOG features, which perform well on vehicle edge detection, to train the regressor, and the regressor is independent of the CNN-based object detectors. Experimental results on the KITTI 2012 benchmark show up to 6% improvement over the YOLOv2 and Faster R-CNN object detectors at an IoU threshold of 0.8. The proposed refinement framework is also computationally light, processing one bounding box within a few milliseconds on a CPU. Further, this refinement method can be added to any object detector, especially fast but less accurate ones.
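The intersection-over-union (IoU) criterion used in the KITTI evaluation at the 0.8 threshold can be computed as below. Boxes are assumed to be (x1, y1, x2, y2) corner tuples; the example coordinates are illustrative.

```python
# Illustrative IoU computation for axis-aligned bounding boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = ix * iy
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# A detection shifted 10 px along x against a 100x100 ground-truth box.
score = iou((0, 0, 100, 100), (10, 0, 110, 100))
```

A 10% border shift already drops IoU to about 0.82, which shows how strict the 0.8 threshold is and why per-border refinement helps.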
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. In contrast, the high-level CNN features can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes achieve superior retrieval performance compared to state-of-the-art methods using either low- or high-level features. Thus, the method is a strong candidate for various image retrieval applications.
A Cloud Boundary Detection Scheme Combining ASLIC and CNN Using ZY-3, GF-1/2 Satellite Imagery
NASA Astrophysics Data System (ADS)
Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.
2018-04-01
Cloud detection in remote sensing optical imagery is one of the most important problems in remote sensing data processing. To address the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. First, a deep CNN is used to learn a multi-level feature-generation model of clouds from the training samples. Second, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability that each superpixel belongs to a cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected for cloud detection tests and compared with the traditional SLIC method. The experimental results show that the average cloud detection accuracy increases by more than 5%, and that the method detects both thin and thick clouds and whole cloud boundaries well on different imaging platforms.
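The final aggregation step, predicting a cloud probability per superpixel, can be sketched by averaging per-pixel CNN scores within each superpixel label. The 4x4 label map, scores, and 0.5 decision threshold below are toy values, not from the paper.

```python
# Illustrative superpixel aggregation: average the per-pixel cloud scores
# inside each superpixel to get one cloud probability per superpixel.
def superpixel_probabilities(labels, scores):
    total, count = {}, {}
    for row_l, row_s in zip(labels, scores):
        for lab, s in zip(row_l, row_s):
            total[lab] = total.get(lab, 0.0) + s
            count[lab] = count.get(lab, 0) + 1
    return {lab: total[lab] / count[lab] for lab in total}

labels = [[0, 0, 1, 1],              # four 2x2 superpixels
          [0, 0, 1, 1],
          [2, 2, 3, 3],
          [2, 2, 3, 3]]
scores = [[0.9, 0.8, 0.1, 0.2],      # per-pixel CNN cloud scores
          [1.0, 0.9, 0.0, 0.1],
          [0.4, 0.6, 0.9, 0.9],
          [0.5, 0.5, 1.0, 0.8]]
probs = superpixel_probabilities(labels, scores)
cloud_mask = {lab: p >= 0.5 for lab, p in probs.items()}
```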
A deep convolutional neural network for recognizing foods
NASA Astrophysics Data System (ADS)
Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec
2015-12-01
Controlling food intake is an efficient way for individuals to tackle the obesity problem seen in countries worldwide. This is achievable by developing a smartphone application that can recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representational power than hand-crafted features. The main challenge with CNNs is finding the appropriate architecture for each problem. In this paper, we propose a deep CNN with 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained independently, we are able to improve the classification performance by 21.5%.
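The two-network ensemble in the abstract above amounts to averaging class probabilities before taking the argmax. A minimal sketch, with invented probability vectors standing in for the two models' outputs:

```python
# Illustrative two-model ensemble: average the class probabilities of two
# independently trained classifiers, then take the argmax.
def ensemble_predict(probs_a, probs_b):
    avg = [(a + b) / 2 for a, b in zip(probs_a, probs_b)]
    return max(range(len(avg)), key=avg.__getitem__), avg

# Model A slightly prefers class 2, model B strongly prefers class 1;
# the averaged distribution settles on class 1.
pred, avg = ensemble_predict([0.2, 0.35, 0.45], [0.05, 0.8, 0.15])
```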
Seismic waveform classification using deep learning
NASA Astrophysics Data System (ADS)
Kong, Q.; Allen, R. M.
2017-12-01
MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It runs an Artificial Neural Network (ANN) algorithm on the phone to distinguish earthquake motion from human activities recorded by the onboard accelerometer. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The collected time-series data contain both earthquake data and human-activity data that the ANN misclassified. In this presentation, we will show the Convolutional Neural Network (CNN) we built, under the umbrella of supervised learning, to identify earthquake waveforms. The recorded waveforms can easily be treated as images, and by taking advantage of the CNN's strength in image processing, we achieved a very high success rate in selecting earthquake waveforms. Since there are many more non-earthquake waveforms than earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both methods can easily be extended to other waveform classification problems.
Nirschl, Jeffrey J.; Janowczyk, Andrew; Peyster, Eliot G.; Frank, Renee; Margulies, Kenneth B.; Feldman, Michael D.; Madabhushi, Anant
2018-01-01
Over 26 million people worldwide suffer from heart failure annually. When the cause of heart failure cannot be identified, endomyocardial biopsy (EMB) represents the gold standard for the evaluation of disease. However, manual EMB interpretation has high inter-rater variability. Deep convolutional neural networks (CNNs) have been successfully applied to detect cancer, diabetic retinopathy, and dermatologic lesions from images. In this study, we develop a CNN classifier to detect clinical heart failure from H&E-stained whole-slide images from a total of 209 patients; 104 patients were used for training and the remaining 105 for independent testing. The CNN was able to identify patients with heart failure or severe pathology with 99% sensitivity and 94% specificity on the test set, outperforming conventional feature-engineering approaches. Importantly, the CNN outperformed two expert pathologists by nearly 20%. Our results suggest that deep learning analytics of EMB can be used to predict cardiac outcome. PMID:29614076
Cross-Domain Shoe Retrieval with a Semantic Hierarchy of Attribute Classification Network.
Zhan, Huijing; Shi, Boxin; Kot, Alex C
2017-08-04
Cross-domain shoe image retrieval is a challenging problem because the query photo from the street domain (daily-life scenario) and the reference photo in the online domain (online shop images) have significant visual differences due to viewpoint and scale variation, self-occlusion, and cluttered backgrounds. This paper proposes the Semantic Hierarchy Of attributE Convolutional Neural Network (SHOE-CNN) with a three-level feature representation for discriminative shoe feature expression and efficient retrieval. The SHOE-CNN, with its newly designed loss function, systematically merges semantic attributes of closer visual appearance to prevent shoe images with obvious visual differences from being confused with each other; the features extracted at the image, region, and part levels effectively match shoe images across different domains. We collect a large-scale shoe dataset composed of 14,341 street-domain and 12,652 corresponding online-domain images with fine-grained attributes to train our network and evaluate our system. The top-20 retrieval accuracy improves significantly over a baseline using pre-trained CNN features.
Ali, Habiba I; Jarrar, Amjad H; Abo-El-Enen, Mostafa; Al Shamsi, Mariam; Al Ashqar, Huda
2015-05-28
Increasing the healthfulness of campus food environments is an important step in promoting healthful food choices among college students. This study explored university students' suggestions on promoting healthful food choices from campus vending machines. It also examined factors influencing students' food choices from vending machines. Peer-led semi-structured individual interviews were conducted with 43 undergraduate students (33 females and 10 males) recruited from students enrolled in an introductory nutrition course in a large national university in the United Arab Emirates. Interviews were audiotaped, transcribed, and coded to generate themes using N-Vivo software. Accessibility, peer influence, and busy schedules were the main factors influencing students' food choices from campus vending machines. Participants expressed the need to improve the nutritional quality of the food items sold in the campus vending machines. Recommendations for students' nutrition educational activities included placing nutrition tips on or beside the vending machines and using active learning methods, such as competitions on nutrition knowledge. The results of this study have useful applications in improving the campus food environment and nutrition education opportunities at the university to assist students in making healthful food choices.
Structured illumination of the interface between centriole and peri-centriolar material
Fu, Jingyan; Glover, David M.
2012-01-01
The increase in centrosome size in mitosis was described over a century ago, and yet it is poorly understood how centrioles, which lie at the core of centrosomes, organize the pericentriolar material (PCM) in this process. Now, structured illumination microscopy reveals in Drosophila that, before clouds of PCM appear, its proteins are closely associated with interphase centrioles in two tube-like layers: an inner layer occupied by centriolar microtubules, Sas-4, Spd-2 and Polo kinase; and an outer layer comprising Pericentrin-like protein (Dplp), Asterless (Asl) and Plk4 kinase. Centrosomin (Cnn) and γ-tubulin associate with this outer tube in G2 cells and, upon mitotic entry, Polo activity is required to recruit them together with Spd-2 into PCM clouds. Cnn is required for Spd-2 to expand into the PCM during this maturation process but can itself contribute to PCM independently of Spd-2. By contrast, the centrioles of spermatocytes elongate from a pre-existing proximal unit during the G2 preceding meiosis. Sas-4 is restricted to the microtubule-associated, inner cylinder and Dplp and Cnn to the outer cylinder of this proximal part. γ-Tubulin and Asl associate with the outer cylinder and Spd-2 with the inner cylinder throughout the entire G2 centriole. Although they occupy different spatial compartments on the G2 centriole, Cnn, Spd-2 and γ-tubulin become diminished at the centriole upon entry into meiosis to become part of PCM clouds. PMID:22977736
Deep 3D convolution neural network for CT brain hemorrhage classification
NASA Astrophysics Data System (ADS)
Jnawali, Kamal; Arbabshirani, Mohammad R.; Rao, Navalgund; Patel, Alpen A.
2018-02-01
Intracranial hemorrhage is a critical condition with a high mortality rate that is typically diagnosed from head computed tomography (CT) images. Deep learning algorithms, in particular convolutional neural networks (CNNs), are becoming the methodology of choice in medical image analysis for a variety of applications such as computer-aided diagnosis and segmentation. In this study, we propose a fully automated deep learning framework that learns to detect brain hemorrhage from cross-sectional CT images. The dataset for this work consists of 40,367 3D head CT studies (over 1.5 million 2D images) acquired retrospectively over a decade from multiple radiology facilities at Geisinger Health System. The proposed algorithm first extracts features using a 3D CNN and then detects brain hemorrhage using a logistic function as the last layer of the network. Finally, we created an ensemble of three different 3D CNN architectures to improve the classification accuracy. The area under the receiver operating characteristic (ROC) curve (AUC) for the ensemble of three architectures was 0.87. These results are very promising considering that the head CT studies were not controlled for slice thickness, scanner type, study protocol, or any other settings. Moreover, the proposed algorithm reliably detected various types of hemorrhage within the skull. This work is one of the first applications of 3D CNNs trained on a large dataset of cross-sectional medical images for the detection of a critical radiological condition.
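The AUC figure reported for the ensemble can be computed from scores and labels with the rank-based (Mann-Whitney) identity: AUC is the probability that a randomly chosen positive case scores above a randomly chosen negative one. The sketch below uses toy values, not the Geisinger data.

```python
# Illustrative AUC via the Mann-Whitney identity: the fraction of
# (positive, negative) pairs in which the positive case scores higher,
# counting ties as half a win.
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]          # 1 = hemorrhage, 0 = normal (toy)
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
value = auc(labels, scores)          # 8 of 9 pairs correctly ordered
```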
NASA Astrophysics Data System (ADS)
Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen
2017-03-01
Intuitive segmentation-based CADx/radiomic features, calculated from lesion segmentations of dynamic contrast-enhanced magnetic resonance images (DCE-MRIs), have been utilized in the task of distinguishing between malignant and benign lesions. Additionally, transfer learning with pre-trained deep convolutional neural networks (CNNs) allows for an alternative method of radiomics extraction, where the features are derived directly from the image data. However, a comparison of computer-extracted segmentation-based and CNN features in MRI breast lesion characterization has not yet been conducted. In our study, we used a DCE-MRI database of 640 breast cases: 191 benign and 449 malignant. Thirty-eight segmentation-based features were extracted automatically using our quantitative radiomics workstation. Also, 2D ROIs were selected around each lesion on the DCE-MRIs and directly input into the pre-trained CNN AlexNet, yielding CNN features. Each method was investigated separately and in combination in terms of performance in the task of distinguishing between benign and malignant lesions. Area under the ROC curve (AUC) served as the figure of merit. Both methods yielded promising classification performance with round-robin cross-validated AUC values of 0.88 (se = 0.01) and 0.76 (se = 0.02) for the segmentation-based and deep learning methods, respectively. Combining the two methods enhanced the performance in malignancy assessment, resulting in an AUC value of 0.91 (se = 0.01), a statistically significant improvement over the performance of the CNN method alone.
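One simple way to realize a combination of the two feature types is early fusion: scale each feature vector and concatenate them into a single input for the downstream classifier. The abstract does not specify this exact scheme, so the function, scaling choice, and values below are illustrative assumptions.

```python
# Illustrative early fusion: per-vector max-abs scaling, then
# concatenation of radiomic and CNN-derived feature vectors.
def fuse_features(radiomic, cnn, normalize=True):
    def scale(v):
        m = max(abs(x) for x in v) or 1.0   # guard against all-zero vectors
        return [x / m for x in v]
    a = scale(radiomic) if normalize else list(radiomic)
    b = scale(cnn) if normalize else list(cnn)
    return a + b                            # concatenated feature vector

# Toy vectors: 3 radiomic features on one scale, 2 CNN features on another.
fused = fuse_features([1.2, 0.3, 4.8], [0.02, 0.91])
```

Scaling each block before concatenation keeps one feature family from dominating the classifier purely because of its numeric range.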
SAR image classification based on CNN in real and simulation datasets
NASA Astrophysics Data System (ADS)
Peng, Lijiang; Liu, Ming; Liu, Xiaohua; Dong, Liquan; Hui, Mei; Zhao, Yuejin
2018-04-01
Convolutional neural networks (CNNs) have achieved great success in image classification tasks. Even in the field of synthetic aperture radar automatic target recognition (SAR-ATR), state-of-the-art results have been obtained by learning deep feature representations on the MSTAR benchmark. However, the raw MSTAR data have shortcomings for training a SAR-ATR model because of the high similarity in background among the SAR images of each class. This indicates that a CNN would learn feature hierarchies of the backgrounds as well as of the targets. To validate the influence of the background, additional SAR image datasets were created containing simulated SAR images of 10 manufactured targets, such as tanks and fighter aircraft, with backgrounds sampled from the original MSTAR data. The simulated datasets include one in which the backgrounds of each image class correspond to one class of MSTAR target or clutter backgrounds, and one in which each image has a random background drawn from all MSTAR targets or clutter. In addition, mixed datasets of MSTAR and simulated data were created for the experiments. The CNN architecture proposed in this paper is trained on all of the datasets mentioned above. The experimental results show that the architecture achieves high performance on all datasets even when the image backgrounds are miscellaneous, which indicates that it learns a good representation of the targets despite drastic background changes.