Use of Unlabeled Samples for Mitigating the Hughes Phenomenon
NASA Technical Reports Server (NTRS)
Landgrebe, David A.; Shahshahani, Behzad M.
1993-01-01
The use of unlabeled samples in improving the performance of classifiers is studied. When the number of training samples is fixed and small, additional feature measurements may reduce the performance of a statistical classifier. It is shown that by using unlabeled samples, estimates of the parameters can be improved and therefore this phenomenon may be mitigated. Various methods for using unlabeled samples are reviewed and experimental results are provided.
Cross-Domain Semi-Supervised Learning Using Feature Formulation.
Xingquan Zhu
2011-12-01
Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them in the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and an inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from a set of hidden concepts, with labeling information partially observable for some samples. The key to fSSL is to recover the hidden concepts and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, rather than being explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of the traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.
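The hidden-concept idea above can be sketched in a few lines: learn a shared latent basis from labeled plus unlabeled samples, then feed only the projected labeled samples to the classifier. This is a hedged, minimal stand-in (truncated SVD replaces the paper's actual concept-recovery step; all data and the choice of k are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled and unlabeled samples, assumed drawn from shared hidden concepts.
X_labeled = rng.normal(size=(20, 50))
X_unlabeled = rng.normal(size=(200, 50))

# Recover "hidden concepts" from ALL samples via truncated SVD
# (a stand-in for the paper's concept-recovery step).
X_all = np.vstack([X_labeled, X_unlabeled])
U, s, Vt = np.linalg.svd(X_all - X_all.mean(axis=0), full_matrices=False)
k = 5                      # number of hidden concepts (hypothetical choice)
concepts = Vt[:k]          # each row is one concept direction

# New features: project labeled samples onto the concepts. Only these
# projected features feed the supervised classifier, so the unlabeled
# data shape the representation without ever entering the training set.
Z_labeled = X_labeled @ concepts.T
```

The point of the sketch is the data flow: unlabeled samples influence `concepts` but never appear in the training set, which is what distinguishes fSSL from pseudo-labeling pSSL.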
NASA Astrophysics Data System (ADS)
Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo
2014-10-01
This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved on the basis of two consecutive steps. The first step injects additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that assigns higher weights to samples located in high-density regions of the feature space and reduced weights to those that fall into low-density regions. Then, in order to exploit different weights for training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step jointly exploits labeled and informative unlabeled samples to further improve the definition of the WSVR learning function. To this end, the most informative unlabeled samples, i.e., those expected to have accurate target values, are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated in the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in its learning phase and tunes their importance by different values of regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
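The first step, density-based sample weighting, can be sketched as follows (a minimal numpy sketch with a simple Gaussian kernel density estimate; the data, bandwidth, and function name are illustrative, not from the paper):

```python
import numpy as np

def density_weights(X_train, X_unlabeled, bandwidth=1.0):
    """Weight each training sample by the unlabeled-data density around it.

    Samples in high-density regions of the feature space get larger
    weights; samples in low-density regions get smaller ones (the first
    step of the WSVR scheme, sketched with a Gaussian kernel estimate).
    """
    # Pairwise squared distances between training and unlabeled samples
    d2 = ((X_train[:, None, :] - X_unlabeled[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return density / density.sum()    # normalize to sum to 1

# Toy example: the unlabeled samples cluster near the origin
X_unl = np.vstack([np.zeros((50, 2)), np.full((5, 2), 5.0)])
X_tr = np.array([[0.1, 0.0],     # inside the dense region
                 [5.0, 5.0]])    # in the sparse region
w = density_weights(X_tr, X_unl)
```

These weights could then be passed to a weighted regressor, e.g. scikit-learn's `SVR().fit(X, y, sample_weight=w)`, as a rough analogue of the WSVR learning phase.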
Torii, Manabu; Yin, Lanlan; Nguyen, Thang; Mazumdar, Chand T.; Liu, Hongfang; Hartley, David M.; Nelson, Noele P.
2014-01-01
Purpose: Early detection of infectious disease outbreaks is crucial to protecting the public health of a society. Online news articles provide timely information on disease outbreaks worldwide. In this study, we investigated automated detection of articles relevant to disease outbreaks using machine learning classifiers. In a real-life setting, it is expensive to prepare a training data set for classifiers, which usually consists of manually labeled relevant and irrelevant articles. To mitigate this challenge, we examined the use of randomly sampled unlabeled articles as well as labeled relevant articles. Methods: Naïve Bayes and Support Vector Machine (SVM) classifiers were trained on 149 relevant and 149 or more randomly sampled unlabeled articles. Diverse classifiers were trained by varying the number of sampled unlabeled articles and the number of word features. The trained classifiers were applied to 15,000 articles published over 15 days. Top-ranked articles from each classifier were pooled, and the resulting set of 1337 articles was reviewed by an expert analyst to evaluate the classifiers. Results: Daily averages of areas under ROC curves (AUCs) over the 15-day evaluation period were 0.841 and 0.836 for the naïve Bayes and SVM classifiers, respectively. We referenced a database of disease outbreak reports to confirm that the evaluation data set resulting from the pooling method indeed covered incidents recorded in the database during the evaluation period. Conclusions: The proposed text classification framework utilizing randomly sampled unlabeled articles can facilitate a cost-effective approach to training machine learning classifiers in a real-life Internet-based biosurveillance project. We plan to examine this framework further using larger data sets and articles in non-English languages. PMID:21134784
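The core trick, training against a randomly sampled unlabeled pool instead of manually labeled irrelevant articles, can be sketched with a tiny Bernoulli naive Bayes over binary word features (all data and vocabulary are hypothetical; this is not the paper's feature pipeline):

```python
import numpy as np

def train_nb(X_pos, X_unl, alpha=1.0):
    """Bernoulli naive Bayes over binary word features, trained on
    labeled-relevant articles vs. randomly sampled unlabeled articles
    (the unlabeled pool stands in for the irrelevant class)."""
    p_pos = (X_pos.sum(0) + alpha) / (len(X_pos) + 2 * alpha)
    p_unl = (X_unl.sum(0) + alpha) / (len(X_unl) + 2 * alpha)
    return p_pos, p_unl

def score(X, p_pos, p_unl):
    """Log-likelihood ratio; higher means more outbreak-relevant."""
    def ll(X, p):
        return X @ np.log(p) + (1 - X) @ np.log(1 - p)
    return ll(X, p_pos) - ll(X, p_unl)

# Hypothetical binary word features: column 0 = "outbreak", column 1 = "sports"
X_pos = np.array([[1, 0], [1, 0], [1, 1]])           # relevant articles
X_unl = np.array([[0, 1], [0, 0], [1, 1], [0, 1]])   # random unlabeled pool
p_pos, p_unl = train_nb(X_pos, X_unl)
s = score(np.array([[1, 0], [0, 1]]), p_pos, p_unl)
```

Ranking incoming articles by `s` and pooling the top-ranked ones mirrors the evaluation setup described in the abstract.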
Active learning based segmentation of Crohn's disease from abdominal MRI.
Mahapatra, Dwarikanath; Vos, Franciscus M; Buhmann, Joachim M
2016-05-01
This paper proposes a novel active learning (AL) framework and combines it with semi-supervised learning (SSL) for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data covering different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information from many unlabeled samples to train an accurate classifier. AL queries labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
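A query strategy combining uncertainty with feature information can be sketched as below. This is a simplified stand-in (entropy times distance-to-labeled-set novelty) for the paper's combination of context information, classification uncertainty, and feature similarity; names and data are illustrative:

```python
import numpy as np

def query_scores(probs, X_unlabeled, X_labeled):
    """Rank unlabeled samples for active-learning queries.

    Combines classification uncertainty (entropy of the classifier's
    class probabilities) with feature novelty (distance to the nearest
    already-labeled sample)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    d = np.sqrt(((X_unlabeled[:, None] - X_labeled[None]) ** 2).sum(-1))
    novelty = d.min(axis=1)          # distance to nearest labeled sample
    return entropy * novelty         # query the highest-scoring samples

X_lab = np.array([[0.0, 0.0]])
X_unl = np.array([[0.1, 0.0],       # near the labeled data
                  [3.0, 3.0]])      # far from the labeled data
probs = np.array([[0.5, 0.5],       # both samples maximally uncertain
                  [0.5, 0.5]])
scores = query_scores(probs, X_unl, X_lab)
best = int(np.argmax(scores))
```

With equal uncertainty, the sample far from the labeled set wins the query, which is the intended behavior of a strategy that avoids redundant labeling effort.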
Optimizing area under the ROC curve using semi-supervised learning
Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M.
2014-01-01
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results. PMID:25395692
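The pairwise ranking relationships that SSLROC turns into optimization constraints are exactly the pairs counted by the AUC itself, which a short sketch makes concrete (data values are illustrative):

```python
import numpy as np

def pairwise_auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) score pairs ranked
    correctly; these are the same pairwise ranking relationships that
    SSLROC imposes as constraints when unlabeled samples are added."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

# Two positive scores, two negative scores: 3 of the 4 pairs are ordered
# correctly, so the AUC is 0.75.
auc = pairwise_auc(np.array([0.9, 0.8]), np.array([0.3, 0.85]))
```

In the semi-supervised setting, each unlabeled sample contributes additional ranking pairs against the labeled positives and negatives, which is what pulls the decision boundary toward the test distribution.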
Semi-Supervised Projective Non-Negative Matrix Factorization for Cancer Classification.
Zhang, Xiang; Guan, Naiyang; Jia, Zhilong; Qiu, Xiaogang; Luo, Zhigang
2015-01-01
Advances in DNA microarray technologies have made gene expression profiles a significant candidate in identifying different types of cancers. Traditional learning-based cancer identification methods utilize labeled samples to train a classifier, but they are inconvenient for practical application because labels are quite expensive in the clinical cancer research community. This paper proposes a semi-supervised projective non-negative matrix factorization method (Semi-PNMF) to learn an effective classifier from both labeled and unlabeled samples, thus boosting subsequent cancer classification performance. In particular, Semi-PNMF jointly learns a non-negative subspace from concatenated labeled and unlabeled samples and indicates classes by the positions of the maximum entries of their coefficients. Because Semi-PNMF incorporates statistical information from the large volume of unlabeled samples in the learned subspace, it can learn more representative subspaces and boost classification performance. We developed a multiplicative update rule (MUR) to optimize Semi-PNMF and proved its convergence. The experimental results of cancer classification for two multiclass cancer gene expression profile datasets show that Semi-PNMF outperforms the representative methods.
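The multiplicative update rule (MUR) machinery that Semi-PNMF builds on can be sketched for plain NMF. This is a hedged sketch using the standard Frobenius-norm multiplicative updates, not the paper's exact Semi-PNMF update; data and dimensions are hypothetical:

```python
import numpy as np

def nmf_mur(X, k, iters=200, eps=1e-9, seed=0):
    """Non-negative matrix factorization X ~ W @ H with standard
    multiplicative update rules; each update is guaranteed not to
    increase the reconstruction error, which is the convergence
    property the paper proves for its Semi-PNMF variant."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + eps
    H = rng.random((k, X.shape[1])) + eps
    errs = []
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis
        errs.append(np.linalg.norm(X - W @ H))
    return W, H, errs

# Toy "expression profile": rows would be concatenated labeled +
# unlabeled samples in the Semi-PNMF setting.
rng = np.random.default_rng(1)
X = rng.random((20, 10))
W, H, errs = nmf_mur(X, k=3)
```

In the Semi-PNMF setting, class labels would then be read off from the positions of the maximum entries of the coefficients, e.g. `W.argmax(axis=1)`.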
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Kan, Jianquan
2018-04-01
This paper proposes a new terahertz-spectrum-based method for identifying genetically modified material with a support vector machine (SVM) built on affinity propagation clustering. The algorithm uses affinity propagation to cluster and label the unlabeled training samples, and the SVM training data are continuously updated during the iterative process. Because the identification model requires no manual labeling of training samples, errors introduced by human labeling are reduced and the identification accuracy of the model is greatly improved.
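The cluster-then-label loop can be sketched as below. This is a heavily simplified stand-in: a nearest-centroid step replaces both the affinity propagation clustering and the SVM retraining, and all data are hypothetical:

```python
import numpy as np

def pseudo_label_loop(X_lab, y_lab, X_unl, rounds=3):
    """Iteratively cluster-label unlabeled samples and refresh the
    training set (nearest-centroid stands in here for the paper's
    affinity propagation + SVM combination)."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        # "Cluster analysis and labeling": assign each unlabeled sample
        # to the nearest current class centroid.
        centroids = np.array([X_train[y_train == c].mean(axis=0)
                              for c in np.unique(y_train)])
        d = ((X_unl[:, None] - centroids[None]) ** 2).sum(-1)
        y_pseudo = d.argmin(axis=1)
        # Continuously update the training data with the pseudo-labels.
        X_train = np.vstack([X_lab, X_unl])
        y_train = np.concatenate([y_lab, y_pseudo])
    return X_train, y_train

X_lab = np.array([[0.0], [10.0]])       # two hand-labeled spectra (toy 1-D)
y_lab = np.array([0, 1])
X_unl = np.array([[1.0], [9.0], [0.5], [9.5]])
X_train, y_train = pseudo_label_loop(X_lab, y_lab, X_unl)
```

The design point is that the training set grows without any further manual labeling, which is the property the abstract emphasizes.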
Improved semi-supervised online boosting for object tracking
NASA Astrophysics Data System (ADS)
Li, Yicui; Qi, Lin; Tan, Shukun
2016-10-01
Online semi-supervised boosting treats object tracking as a classification problem and trains a binary classifier from labeled and unlabeled examples, selecting appropriate object features based on real-time changes in the object. However, the method faces one key problem: traditional self-training uses the classification results to update the classifier itself, which often leads to drifting or tracking failure due to the error accumulated during each update of the tracker. To overcome these disadvantages, the contribution of this paper is an improved online semi-supervised boosting method in which the learning process is guided by positive (P) and negative (N) constraints, termed P-N constraints, which restrict the labeling of the unlabeled samples. First, we train the classifier by online semi-supervised boosting. Then, this classifier is used to process the next frame. Finally, its output is analyzed by the P-N constraints, which verify whether the labels assigned to unlabeled data by the classifier are in line with the assumptions made about positive and negative samples. The proposed algorithm can effectively improve the discriminative ability of the classifier and significantly alleviate the drifting problem in tracking applications. In the experiments, we demonstrate real-time tracking on several challenging test sequences where our tracker outperforms other related online tracking methods and achieves promising tracking performance.
Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition
Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen
2018-01-01
Underwater acoustic target recognition based on ship-radiated noise is a small-sample-size recognition problem. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) A standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve a classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than other methods. PMID:29570642
Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling
NASA Astrophysics Data System (ADS)
Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji
We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
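The maximum- and minimum-based ensemble decision over multiple baseline models can be sketched as below. Each baseline is reduced to a (mean, std) pair for the sketch, and all traffic numbers are hypothetical:

```python
import numpy as np

def ensemble_anomaly(counts, baselines, z=3.0, mode="max"):
    """Flag traffic counts that deviate from an ensemble of baseline
    models, each trained on an independently packet-sampled trace.

    mode="max" alarms if ANY baseline flags the count (sensitive);
    mode="min" alarms only if ALL baselines flag it (conservative),
    mirroring the maximum-/minimum-based detection in the paper.
    """
    flags = np.array([np.abs(counts - mu) > z * sd for mu, sd in baselines])
    return flags.any(axis=0) if mode == "max" else flags.all(axis=0)

# Three hypothetical baselines built from different time-periodic samplings
baselines = [(100.0, 5.0), (102.0, 6.0), (98.0, 5.5)]
counts = np.array([101.0, 160.0])   # a normal count, then a SYN-flood-like spike
alarm_max = ensemble_anomaly(counts, baselines, mode="max")
alarm_min = ensemble_anomaly(counts, baselines, mode="min")
```

Switching between `mode="max"` and `mode="min"` is how alarm sensitivity would be adjusted for the intended use, exploiting the performance variation among the baselines.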
Active learning in the presence of unlabelable examples
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri
2004-01-01
We propose a new active learning framework where the expert labeler is allowed to decline to label any example. This may be necessary because the true label is unknown or because the example belongs to a class that is not part of the real training problem. We show that within this framework, popular active learning algorithms (such as Simple) may perform worse than random selection because they make so many queries to the unlabelable class. We present a method by which any active learning algorithm can be modified to avoid unlabelable examples by training a second classifier to distinguish between the labelable and unlabelable classes. We also demonstrate the effectiveness of the method on two benchmark data sets and a real-world problem.
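The fix described above, screening candidates with a second classifier for labelability, can be sketched with a 1-nearest-neighbor stand-in for that second classifier (data and names are illustrative):

```python
import numpy as np

def filter_queries(scores, X_cand, X_known, y_labelable):
    """Screen active-learning candidates with a 'labelability'
    classifier (a 1-nearest-neighbor stand-in here): a candidate is
    queried only if its nearest previously seen example was labelable."""
    d = ((X_cand[:, None] - X_known[None]) ** 2).sum(-1)
    labelable = y_labelable[d.argmin(axis=1)].astype(bool)
    masked = np.where(labelable, scores, -np.inf)  # skip unlabelable ones
    return int(masked.argmax())

# Examples the expert already responded to: 1 = labeled, 0 = declined
X_known = np.array([[0.0, 0.0], [5.0, 5.0]])
y_labelable = np.array([1, 0])
# Two candidates; the higher-scored one sits in the unlabelable region
X_cand = np.array([[0.2, 0.1], [5.1, 4.9]])
scores = np.array([0.4, 0.9])
pick = filter_queries(scores, X_cand, X_known, y_labelable)
```

Without the filter, an uncertainty-driven strategy like Simple would keep querying the high-scoring candidate in the unlabelable region and waste the expert's time.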
Cheng, Zhanzhan; Zhou, Shuigeng; Wang, Yang; Liu, Hui; Guan, Jihong; Chen, Yi-Ping Phoebe
2016-05-18
Prediction of compound-protein interactions (CPIs) aims to find new compound-protein pairs in which a protein is targeted by at least one compound, a crucial step in new drug design. A number of machine learning based methods have been developed to predict new CPIs in the literature. However, as there is not yet any publicly available set of validated negative CPIs, most existing machine learning based approaches use randomly selected unknown interactions (not validated CPIs) as negative examples to train classifiers for predicting new CPIs. Obviously, this is not quite reasonable and unavoidably impacts the CPI prediction performance. In this paper, we simply take the unknown CPIs as unlabeled examples and propose a new method called PUCPI (the abbreviation of PU learning for Compound-Protein Interaction identification) that employs biased-SVM (Support Vector Machine) to predict CPIs using only positive and unlabeled examples. PU learning is a class of methods that learn from positive and unlabeled (PU) samples. To the best of our knowledge, this is the first work that identifies CPIs using only positive and unlabeled examples. We first collect known CPIs as positive examples and then randomly select compound-protein pairs not in the positive set as unlabeled examples. For each CPI/compound-protein pair, we extract protein domains as protein features and compound substructures as chemical features, then take the tensor product of the corresponding compound and protein features as the feature vector of the pair. After that, biased-SVM is employed to train classifiers on different datasets of CPIs and compound-protein pairs. Experiments over various datasets show that our method outperforms six typical classifiers, including random forest, L1- and L2-regularized logistic regression, naive Bayes, SVM and k-nearest neighbor (kNN), and three types of existing CPI prediction models.
Source code, datasets and related documents of PUCPI are available at: http://admis.fudan.edu.cn/projects/pucpi.html.
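The biased-SVM idea, penalizing errors on known positives much more than errors on unlabeled examples treated as negatives, can be sketched with a cost-weighted logistic regression as a stand-in (the paper uses an actual biased-SVM; the asymmetric-cost mechanism is the shared idea, and all data and cost values here are hypothetical):

```python
import numpy as np

def biased_logreg(X, y, c_pos=10.0, c_unl=1.0, lr=0.1, iters=500):
    """Logistic regression with asymmetric misclassification costs, a
    simple stand-in for biased-SVM in PU learning: positives (y=1)
    carry a much larger cost than unlabeled examples (y=0), reflecting
    that some 'negatives' are actually unobserved positives."""
    w = np.zeros(X.shape[1])
    b = 0.0
    cost = np.where(y == 1, c_pos, c_unl)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = cost * (p - y)                # cost-weighted log-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Known positive pairs plus randomly drawn unlabeled pairs (labels 0)
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
y = np.array([1, 1, 0, 0])
w, b = biased_logreg(X, y)
pred = (X @ w + b) > 0
```

A real biased-SVM achieves the same asymmetry by giving the positive and unlabeled classes different regularization constants, e.g. via per-class weights in an SVM solver.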
Engineering Considerations for Hydroxide Treatment of Training Ranges
2007-06-01
solutions were compared to the untreated controls. [14C] labeled samples were counted on a Packard Instruments liquid scintillation counter (Model...and the soil was removed to a scintillation vial. Unlabeled flasks had the soil and liquid analyzed for TOC and the liquid analyzed for anion content
Target discrimination method for SAR images based on semisupervised co-training
NASA Astrophysics Data System (ADS)
Wang, Yan; Du, Lan; Dai, Hui
2018-01-01
Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need many labeled training samples, whose acquisition is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the extracted two feature sets based on the co-training algorithm. Finally, the trained classifiers are exploited to classify the test data. The experimental results on real SAR image data not only validate the effectiveness of the proposed method compared with the traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which only uses one feature set.
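The co-training loop over two feature views can be sketched as below. Nearest-centroid classifiers stand in for the paper's two SVMs, and the two one-dimensional "views" are purely illustrative:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_with_margin(X, centroids):
    d = ((X[:, None] - centroids[None]) ** 2).sum(-1)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])  # label, confidence

def co_train(X1, X2, y, n_lab, rounds=2, per_round=1):
    """Co-training sketch: one classifier per feature set; each adds
    its most confident unlabeled prediction to the shared label pool
    for the other to learn from."""
    labeled = np.zeros(len(y), bool)
    labeled[:n_lab] = True
    y_work = y.copy()
    for _ in range(rounds):
        for X in (X1, X2):
            c = nearest_centroid_fit(X[labeled], y_work[labeled])
            pred, conf = predict_with_margin(X, c)
            conf[labeled] = -np.inf                       # only unlabeled
            pick = conf.argsort(kind="stable")[-per_round:]
            y_work[pick], labeled[pick] = pred[pick], True
    return y_work, labeled

# Two feature views of six samples; only the first two are labeled
X1 = np.array([[0.0], [10.0], [0.5], [9.5], [1.0], [9.0]])
X2 = np.array([[0.1], [9.9], [0.4], [9.6], [0.9], [9.1]])
y = np.array([0, 1, -1, -1, -1, -1])   # -1 = unknown
y_out, labeled = co_train(X1, X2, y, n_lab=2, rounds=2)
```

The contrast with self-training is that each view's pseudo-labels come from the other view's evidence, which is what makes the two-feature-set partition matter.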
Amis, Gregory P; Carpenter, Gail A
2010-03-01
Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.eu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.
Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun
2011-07-01
Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and obtain the ground truth. The frugal selection of unlabeled data for labeling, to quickly reach high classification performance with minimal labeling effort, is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to increasingly improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set, for age categorization problems show that the proposed approach can achieve results comparable to, or even better than, a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It can also achieve results comparable with active SVM but is much faster in terms of training because kernel methods are not needed.
The results on the face recognition database and palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed the IB2DLDA-FNN, the FNN being our novel idea, as a generic online or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
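The FNN criterion itself is simple enough to sketch directly: query the unlabeled samples whose nearest labeled neighbor is furthest away. The data below are illustrative:

```python
import numpy as np

def furthest_nearest_neighbor(X_unlabeled, X_labeled, batch=1):
    """Furthest nearest-neighbor (FNN) selection: rank unlabeled
    samples by distance to their nearest labeled neighbor and query
    the furthest ones; cheap to compute and independent of the
    number of classes."""
    d = np.sqrt(((X_unlabeled[:, None] - X_labeled[None]) ** 2).sum(-1))
    nn_dist = d.min(axis=1)          # distance to nearest labeled sample
    return nn_dist.argsort()[::-1][:batch]

X_lab = np.array([[0.0, 0.0], [1.0, 0.0]])
X_unl = np.array([[0.5, 0.1],        # well covered by labeled data
                  [4.0, 4.0],        # far from every labeled sample
                  [1.1, 0.2]])
picked = furthest_nearest_neighbor(X_unl, X_lab, batch=1)
```

Because the criterion needs only distances, not per-class posterior estimates, it scales to many classes, which is the efficiency argument made in the abstract.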
A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L
2011-01-01
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 per dimension), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
Using partially labeled data for normal mixture identification with application to class definition
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
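The combined supervised/unsupervised EM iteration can be sketched for the simplest case of one Gaussian component per class with unit covariance (the paper's formulation allows several components per class and full covariance estimation; all data here are synthetic):

```python
import numpy as np

def em_labeled_unlabeled(X_lab, y_lab, X_unl, iters=20):
    """EM for a two-class, one-component-per-class normal mixture
    using labeled and unlabeled samples together.

    Labeled samples keep fixed, hard responsibilities; unlabeled
    samples get soft responsibilities in the E-step; both enter the
    M-step re-estimates of priors and means (covariance updates are
    omitted to keep the sketch short)."""
    classes = np.unique(y_lab)
    R_lab = (y_lab[:, None] == classes[None]).astype(float)
    mu = np.array([X_lab[y_lab == c].mean(axis=0) for c in classes])
    pi = np.full(len(classes), 1.0 / len(classes))
    for _ in range(iters):
        # E-step: soft class responsibilities for the unlabeled samples
        d2 = ((X_unl[:, None] - mu[None]) ** 2).sum(-1)
        lik = pi * np.exp(-0.5 * d2)          # unit covariance assumed
        R_unl = lik / lik.sum(axis=1, keepdims=True)
        # M-step over labeled (hard) + unlabeled (soft) responsibilities
        R = np.vstack([R_lab, R_unl])
        X = np.vstack([X_lab, X_unl])
        pi = R.mean(axis=0)
        mu = (R.T @ X) / R.sum(axis=0)[:, None]
    return pi, mu, R_unl

rng = np.random.default_rng(0)
X_unl = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
X_lab = np.array([[0.5, -0.5], [5.5, 6.0]])   # one labeled sample per class
y_lab = np.array([0, 1])
pi, mu, R_unl = em_labeled_unlabeled(X_lab, y_lab, X_unl)
```

With only one labeled sample per class, the mean estimates are pulled toward the true cluster centers by the 200 unlabeled samples, which is exactly the mechanism the companion Hughes-phenomenon paper relies on.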
Local Rademacher Complexity: sharper risk bounds with and without unlabeled samples.
Oneto, Luca; Ghio, Alessandro; Ridella, Sandro; Anguita, Davide
2015-05-01
We derive in this paper a new Local Rademacher Complexity risk bound on the generalization ability of a model, which is able to take advantage of the availability of unlabeled samples. Moreover, this new bound improves state-of-the-art results even when no unlabeled samples are available. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bayes estimation on parameters of the single-class classifier. [for remotely sensed crop data
NASA Technical Reports Server (NTRS)
Lin, G. C.; Minter, T. C.
1976-01-01
Normal procedures used for designing a Bayes classifier to classify wheat as the major crop of interest require not only training samples of wheat but also those of nonwheat. Therefore, ground truth must be available for the class of interest plus all confusion classes. The single-class Bayes classifier classifies data into the class of interest or the class 'other' but requires training samples only from the class of interest. This paper will present a procedure for Bayes estimation on the mean vector, covariance matrix, and a priori probability of the single-class classifier using labeled samples from the class of interest and unlabeled samples drawn from the mixture density function.
Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data.
Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei
2017-04-01
In this study we developed a graph-based semi-supervised learning (SSL) scheme using a deep convolutional neural network (CNN) for breast cancer diagnosis. CNNs usually need a large amount of labeled data for training and fine-tuning the parameters, and our proposed scheme requires only a small portion of labeled data in the training set. Four modules were included in the diagnosis system: data weighing, feature selection, dividing co-training data labeling, and CNN. 3158 regions of interest (ROIs), each containing a mass, were extracted from 1874 pairs of mammogram images and used for this study. Among them, 100 ROIs were treated as labeled data while the rest were treated as unlabeled. The area under the curve (AUC) observed in our study was 0.8818, and the accuracy of the CNN was 0.8243 using the mixed labeled and unlabeled data. Copyright © 2016. Published by Elsevier Ltd.
Joint Sparse Recovery With Semisupervised MUSIC
NASA Astrophysics Data System (ADS)
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2017-05-01
Discrete multiple signal classification (MUSIC), with its low computational cost and mild condition requirements, has become a significant noniterative algorithm for joint sparse recovery (JSR). However, it fails in the rank-defective problem caused by coherent or a limited number of multiple measurement vectors (MMVs). In this letter, we provide a novel perspective on this problem by interpreting JSR as a binary classification problem with respect to atoms. In this view, MUSIC essentially constructs a supervised classifier based on the labeled MMVs, so its performance depends heavily on the quality and quantity of these training samples. From this viewpoint, we develop a semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which holds that the insufficient supervised information in the training samples can be compensated by those unlabeled atoms. Instead of constructing a classifier in a fully supervised manner, we iteratively refine a semisupervised classifier by exploiting the labeled MMVs and some reliable unlabeled atoms simultaneously. In this way, the required conditions and iterations can be greatly relaxed and reduced. Numerical experimental results demonstrate that SS-MUSIC achieves much better recovery performance than other MUSIC-extended algorithms, as well as some typical greedy algorithms for JSR, in terms of iterations and recovery probability.
SemiBoost: boosting for semi-supervised learning.
Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi
2009-11-01
Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm given a multitude of unlabeled data, 2) efficient computation via the iterative boosting algorithm, and 3) exploitation of both the manifold and the cluster assumption in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to state-of-the-art semi-supervised learning algorithms.
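As a rough illustration of the "wrapper" idea (not SemiBoost's actual pairwise-similarity objective), the following sketch repeatedly pseudo-labels the most confident unlabeled points and feeds them back to a simple supervised base learner. The nearest-centroid classifier, synthetic blobs, and round/batch sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2-D blobs; only 3 labeled points per class.
X_pos = rng.normal([2.0, 2.0], 0.5, size=(100, 2))
X_neg = rng.normal([-2.0, -2.0], 0.5, size=(100, 2))
X_l = np.vstack([X_pos[:3], X_neg[:3]])
y_l = np.array([1, 1, 1, -1, -1, -1])
pool = np.vstack([X_pos[3:], X_neg[3:]])      # unlabeled examples

def fit_centroid(X, y):
    """Base supervised learner: nearest-centroid classifier with a margin score."""
    c_pos, c_neg = X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)
    def predict(Z):
        d_pos = np.linalg.norm(Z - c_pos, axis=1)
        d_neg = np.linalg.norm(Z - c_neg, axis=1)
        return np.where(d_pos < d_neg, 1, -1), np.abs(d_neg - d_pos)
    return predict

X_train, y_train = X_l.copy(), y_l.copy()
for _ in range(10):                           # boosting-style rounds
    predict = fit_centroid(X_train, y_train)
    labels, margin = predict(pool)
    top = np.argsort(-margin)[:10]            # most confident unlabeled points
    X_train = np.vstack([X_train, pool[top]])
    y_train = np.concatenate([y_train, labels[top]])
    pool = np.delete(pool, top, axis=0)

predict = fit_centroid(X_train, y_train)
final, _ = predict(np.vstack([X_pos, X_neg]))
accuracy = (final == np.array([1] * 100 + [-1] * 100)).mean()
```

The wrapper never inspects the base learner's internals, which is the point of the "semi-supervised improvement" framing: any supervised algorithm could be dropped in place of the centroid classifier.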
Sample Complexity Bounds for Differentially Private Learning
Chaudhuri, Kamalika; Hsu, Daniel
2013-01-01
This work studies the problem of privacy-preserving classification – namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In particular, the learning algorithm is required in this problem to guarantee differential privacy, a very strong notion of privacy that has gained significant attention in recent years. A natural question to ask is: what is the sample requirement of a learning algorithm that guarantees a certain level of privacy and accuracy? We address this question in the context of learning with infinite hypothesis classes when the data is drawn from a continuous distribution. We first show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy must fail to return an accurate classifier for at least some unlabeled data distributions. This result is unlike the case with either finite hypothesis classes or discrete data domains, in which distribution-free private learning is possible, as previously shown by Kasiviswanathan et al. (2008). We then consider two approaches to differentially private learning that get around this lower bound. The first approach is to use prior knowledge about the unlabeled data distribution in the form of a reference distribution chosen independently of the sensitive data. Given such a reference distribution, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between the reference distribution and the unlabeled data distribution. Our upper bound applies to the non-realizable as well as the realizable case. The second approach is to relax the privacy requirement, by requiring only label privacy – namely, that only the labels (and not the unlabeled parts of the examples) be considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this work, we show a lower bound. PMID:25285183
Unlabeled probes for the detection and typing of herpes simplex virus.
Dames, Shale; Pattison, David C; Bromley, L Kathryn; Wittwer, Carl T; Voelkerding, Karl V
2007-10-01
Unlabeled probe detection with a double-stranded DNA (dsDNA) binding dye is one method to detect and confirm target amplification after PCR. Unlabeled probes and amplicon melting have been used to detect small deletions and single-nucleotide polymorphisms in assays where template is in abundance. Unlabeled probes have not been applied to low-level target detection, however. Herpes simplex virus (HSV) was chosen as a model to compare the unlabeled probe method to an in-house reference assay using dual-labeled, minor groove binding probes. A saturating dsDNA dye (LCGreen Plus) was used for real-time PCR. HSV-1, HSV-2, and an internal control were differentiated by PCR amplicon and unlabeled probe melting analysis after PCR. The unlabeled probe technique displayed 98% concordance with the reference assay for the detection of HSV from a variety of archived clinical samples (n = 182). HSV typing using unlabeled probes was 99% concordant (n = 104) to sequenced clinical samples and allowed for the detection of sequence polymorphisms in the amplicon and under the probe. Unlabeled probes and amplicon melting can be used to detect and genotype as few as 10 copies of target per reaction, restricted only by stochastic limitations. The use of unlabeled probes provides an attractive alternative to conventional fluorescence-labeled, probe-based assays for genotyping and detection of HSV and might be useful for other low-copy targets where typing is informative.
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
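A minimal sketch of a combined uncertainty-plus-representativeness sampling criterion, in the spirit of the abstract but not the actual IALDL formulation: the stand-in probabilistic classifier, RBF bandwidth, and equal 0.5/0.5 weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unlabeled pool in 2-D and a stand-in probabilistic classifier:
# P(class 1) rises with the first coordinate (a toy decision boundary at x = 0).
pool = rng.normal(0.0, 1.5, size=(200, 2))
p = 1.0 / (1.0 + np.exp(-pool[:, 0]))

# Uncertainty: 1 at the decision boundary, 0 where the classifier is confident.
uncertainty = 1.0 - 2.0 * np.abs(p - 0.5)

# Representativeness: mean RBF similarity to the rest of the pool,
# so points in dense regions outrank isolated outliers.
d2 = ((pool[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
similarity = np.exp(-d2 / 2.0)
representativeness = similarity.mean(axis=1)

# Combined criterion; the top-scoring points are queried for human labels.
score = 0.5 * uncertainty + 0.5 * representativeness / representativeness.max()
query = np.argsort(-score)[:5]
```

Weighting the two terms keeps the sampler from wasting label queries on ambiguous outliers, which pure uncertainty sampling tends to select.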
Gerritsen, Roald; Faddegon, Hans; Dijkers, Fred; van Grootheest, Kees; van Puijenbroek, Eugène
2011-09-01
Spontaneous reporting is a cornerstone of pharmacovigilance. Unfamiliarity with the reporting of suspected adverse drug reactions (ADRs) is a major factor leading to not reporting these events. Medical education may promote more effective reporting. Numerous changes have been implemented in medical education over the last decade, with a shift in training methods from those aimed predominantly at the transfer of knowledge towards those that are more practice based and skill oriented. It is conceivable that these changes have an impact on pharmacovigilance training in vocational training programmes. Therefore, this study compares the effectiveness of a skill-oriented, practice-based pharmacovigilance training method, with a traditional, lecture-based pharmacovigilance training method in the vocational training of general practitioners (GPs). The traditional, lecture-based method is common practice in the Netherlands. The purpose of this study was to establish whether the use of a practice-based, skill-oriented method in pharmacovigilance training during GP traineeship leads to an increase of reported ADRs after completion of this traineeship, compared with a lecture-based method. We also investigated whether the applied training method has an impact on the documentation level of the reports and on the number of unlabelled events reported. A retrospective cohort study. The number of ADR reports submitted to the Netherlands Pharmacovigilance Centre Lareb (between January 2006 and October 2010) after completion of GP vocational training was compared between the two groups. Documentation level of the reports and the number of labelled/unlabelled events reported were also compared. The practice-based cohort reported 32 times after completion of training (124 subjects, 6.8 reports per 1000 months of follow-up; total follow-up of 4704 months). 
The lecture-based cohort reported 12 times after training (135 subjects, 2.1 reports per 1000 months of follow-up; total follow-up of 5824 months) [odds ratio 2.9; 95% CI 1.4, 6.1]. Reports from GPs with practice-based training had a better documentation grade than those from GPs with lecture-based training, and more often concerned unlabelled events. The practice-based method thus resulted in significantly more and better-documented reports, which more often concerned unlabelled events, than the lecture-based method. This effect persisted and did not appear to diminish over time.
Multimodal manifold-regularized transfer learning for MCI conversion prediction.
Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang
2015-12-01
As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and for evaluating AD risk pre-symptomatically. Unlike most previous methods that use only samples from a target domain to train a classifier, in this paper we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, in which the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used to train our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL can select the informative samples needed to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which achieves a classification accuracy of 80.1% for MCI conversion prediction and outperforms the state-of-the-art methods.
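The kernel-based maximum mean discrepancy (MMD) criterion mentioned above can be sketched as follows. The RBF kernel, bandwidth, and synthetic "domains" are assumptions; the real M2TL method embeds this criterion inside a larger joint objective rather than computing it in isolation.

```python
import numpy as np

rng = np.random.default_rng(3)

def mmd2(X, Z, gamma=0.5):
    """Squared maximum mean discrepancy between samples X and Z, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Z, Z).mean() - 2.0 * k(X, Z).mean()

# An "auxiliary domain" and two candidate "target domains": one distributed
# nearby, one shifted far away. A small MMD suggests transfer is safer.
aux = rng.normal(0.0, 1.0, size=(150, 4))
target_close = rng.normal(0.2, 1.0, size=(150, 4))
target_far = rng.normal(3.0, 1.0, size=(150, 4))

gap_close = mmd2(aux, target_close)
gap_far = mmd2(aux, target_far)
```

MMD is zero when the two sample sets come from the same distribution (up to sampling noise), which is exactly the property transfer learning methods exploit to penalize distributional mismatch.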
Efficient use of unlabeled data for protein sequence classification: a comparative study.
Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir
2009-04-29
Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if this data is supplemented with protein sequences that lack any class tags - the unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by using only the sequence regions that are more likely to be biologically relevant for better prediction accuracy. As overly represented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve the performance of the resulting classifiers. Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits a significant reduction in running time. The unlabeled sequences used in the semi-supervised setting resemble unpolished gemstones: used as-is, they may carry unnecessary features and hence compromise classification accuracy, but once cut and polished, they improve the accuracy of the classifiers considerably.
Joint learning of labels and distance metric.
Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng
2010-06-01
Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.
Positive-unlabeled learning for disease gene identification
Yang, Peng; Li, Xiao-Li; Mei, Jian-Ping; Kwoh, Chee-Keong; Ng, See-Kiong
2012-01-01
Background: Identifying disease genes from the human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (a non-disease gene set does not exist) to build classifiers that identify new disease genes from the unknown genes. However, such classifiers are actually built from a noisy negative set N, as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could. Result: Instead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm, PUDI (PU learning for disease gene identification), to build a classifier using P and U. We first partition U into four sets, namely, a reliable negative set RN, a likely positive set LP, a likely negative set LN, and a weak negative set WN. Weighted support vector machines are then used to build a multi-level classifier based on the four training sets and the positive training set P to identify disease genes. Our experimental results demonstrate that the proposed PUDI algorithm significantly outperforms the existing methods. Conclusion: The proposed PUDI algorithm identifies disease genes more accurately by treating the unknown data more appropriately, as an unlabeled set U instead of a negative set N. Given that many machine learning problems in biomedical research involve positive and unlabeled data rather than negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification.
Availability and implementation: The executable program and data are available at http://www1.i2r.a-star.edu.sg/∼xlli/PUDI/PUDI.html. Contact: xlli@i2r.a-star.edu.sg or yang0293@e.ntu.edu.sg Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:22923290
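A toy sketch of the partition-then-classify idea behind PU learning: score unlabeled points against the positives, split off reliable negatives and likely positives, then train on the enlarged sets. Centroid-distance scoring and a nearest-centroid classifier stand in here for PUDI's actual partitioning heuristics and weighted SVMs; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Known positives P cluster near +2; the unlabeled set U hides more positives
# among a majority of true negatives (ground truth kept only for evaluation).
P = rng.normal(2.0, 0.7, size=(40, 3))
U = np.vstack([rng.normal(2.0, 0.7, size=(60, 3)),     # hidden positives
               rng.normal(-2.0, 0.7, size=(140, 3))])  # true negatives
hidden = np.array([1] * 60 + [0] * 140)

# Step 1: score unlabeled points by similarity to the positive centroid.
c_pos = P.mean(axis=0)
dist = np.linalg.norm(U - c_pos, axis=1)

# Step 2: partition U -- nearest third as likely positives (LP),
# farthest third as reliable negatives (RN); the middle stays ambiguous.
order = np.argsort(dist)
third = len(U) // 3
lp, rn = order[:third], order[-third:]

# Step 3: train a classifier on P + LP vs. RN (nearest centroid as a stand-in).
c1 = np.vstack([P, U[lp]]).mean(axis=0)
c0 = U[rn].mean(axis=0)
pred = (np.linalg.norm(U - c1, axis=1) < np.linalg.norm(U - c0, axis=1)).astype(int)
accuracy = (pred == hidden).mean()
```

The key contrast with a naive scheme is that RN is extracted rather than assumed: no unlabeled point is ever forced to be negative just because it is unlabeled.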
Quantification of isotope-labelled and unlabelled folates in plasma, ileostomy and food samples.
Büttner, Barbara E; Öhrvik, Veronica E; Witthöft, Cornelia M; Rychlik, Michael
2011-01-01
New stable isotope dilution assays were developed for the simultaneous quantitation of [(13)C(5)]-labelled and unlabelled 5-methyltetrahydrofolic acid, 5-formyltetrahydrofolic acid, folic acid along with unlabelled tetrahydrofolic acid and 10-formylfolic acid in clinical samples deriving from human bioavailability studies, i.e. plasma, ileostomy samples, and food. The methods were based on clean-up by strong anion exchange followed by LC-MS/MS detection. Deuterated analogues of the folates were applied as the internal standards in the stable isotope dilution assays. Assay sensitivity was sufficient to detect all relevant folates in the respective samples as their limits of detection were below 0.62 nmol/L in plasma and below 0.73 μg/100 g in food or ileostomy samples. Quantification of the [(13)C(5)]-label in clinical samples offers the possibility to differentiate between folate from endogenous body pools and the administered dose when executing bioavailability trials.
An Oracle-based co-training framework for writer identification in offline handwriting
NASA Astrophysics Data System (ADS)
Porwal, Utkarsh; Rajan, Sreeranga; Govindaraju, Venu
2012-01-01
State-of-the-art techniques for writer identification have centered primarily on enhancing the performance of systems for writer identification. Machine learning algorithms have been used extensively to improve the accuracy of such systems, assuming a sufficient amount of data is available for training. Little attention has been paid to the prospect of harnessing the information contained in a large amount of un-annotated data. This paper focuses on a co-training-based framework that can be used for iterative labeling of the unlabeled data set, exploiting the independence between the multiple views (features) of the data. This paradigm relaxes the assumption of data sufficiency and tries to generate labeled data from the unlabeled data set while improving the accuracy of the system. However, the performance of a co-training-based framework depends on the effectiveness of the algorithm used to select the data points to be added to the labeled set. We propose an Oracle-based approach for data selection that learns the patterns in the score distribution of classes for labeled data points and then predicts the labels (writers) of unlabeled data points. This selection method statistically learns the class distribution and predicts the most probable class, unlike traditional selection algorithms based on heuristic approaches. We conducted experiments on the publicly available IAM dataset and illustrate the efficacy of the proposed approach.
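A minimal two-view co-training loop in the spirit of the framework described above. The nearest-centroid base learner, the synthetic views, and the simple confidence-based selection (rather than the paper's Oracle-based selector) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two conditionally independent 2-D "views" of the same binary concept.
n = 400
y = rng.integers(0, 2, size=n)
view1 = (2 * y[:, None] - 1) + rng.normal(0.0, 0.8, size=(n, 2))
view2 = (2 * y[:, None] - 1) + rng.normal(0.0, 0.8, size=(n, 2))

# Tiny labeled seed with both classes guaranteed present.
seed = np.concatenate([np.where(y == 1)[0][:3], np.where(y == 0)[0][:3]])
known = {int(i): int(y[i]) for i in seed}
unlabeled = [i for i in range(n) if i not in known]

def centroid_clf(X, labels):
    """Base learner: nearest centroid, returning labels and confidence margins."""
    c1 = X[[i for i in labels if labels[i] == 1]].mean(axis=0)
    c0 = X[[i for i in labels if labels[i] == 0]].mean(axis=0)
    def predict(Z):
        d1 = np.linalg.norm(Z - c1, axis=1)
        d0 = np.linalg.norm(Z - c0, axis=1)
        return (d1 < d0).astype(int), np.abs(d0 - d1)
    return predict

for _ in range(20):                         # co-training rounds
    for view in (view1, view2):
        # Each view labels its single most confident point for the shared pool.
        preds, conf = centroid_clf(view, known)(view[unlabeled])
        best = int(np.argmax(conf))
        known[unlabeled.pop(best)] = int(preds[best])

pred, _ = centroid_clf(view1, known)(view1)
accuracy = (pred == y).mean()
```

Because each view only trusts its own most confident predictions, mistakes propagate slowly; a smarter selector (such as the Oracle the paper proposes) targets exactly this selection step.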
Computerized breast cancer analysis system using three stage semi-supervised learning method.
Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei
2016-10-01
A large amount of labeled medical image data is usually required to train a well-performing computer-aided detection (CAD) system. But the process of data labeling is time consuming, and potential ethical and logistical problems may also present complications. As a result, incorporating unlabeled data into a CAD system can be a feasible way to combat these obstacles. In this study we developed a three-stage semi-supervised learning (SSL) scheme that combines a small amount of labeled data and a larger amount of unlabeled data. The scheme modified our existing CAD system with the following three stages: data weighing, feature selection, and a newly proposed dividing co-training data labeling algorithm. Global density asymmetry features were incorporated into the feature pool to reduce the false positive rate. Area under the curve (AUC) and accuracy were computed using 10-fold cross validation to evaluate the performance of our CAD system. The image dataset includes mammograms from 400 women who underwent routine screening examinations, and each pair contains either two cranio-caudal (CC) or two mediolateral-oblique (MLO) view mammograms from the right and the left breasts. From these mammograms 512 regions were extracted and used in this study; among them, 90 regions were treated as labeled while the rest were treated as unlabeled. Using our proposed scheme, the highest AUC observed in our research was 0.841, obtained with the 90 labeled data and all the unlabeled data. It was 7.4% higher than using labeled data only. With an increasing amount of labeled data, the AUC difference between using mixed data and using labeled data only reached its peak when the amount of labeled data was around 60. This study demonstrated that our proposed three-stage semi-supervised learning scheme can improve CAD performance by incorporating unlabeled data.
Using unlabeled data is promising in computerized cancer research and may have a significant impact for future CAD system applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Domain Regeneration for Cross-Database Micro-Expression Recognition
NASA Astrophysics Data System (ADS)
Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying
2018-05-01
In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions with the original source samples. We can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
Zhao, Mingbo; Zhang, Zhao; Chow, Tommy W S; Li, Bing
2014-07-01
Dealing with high-dimensional data has always been a major problem in pattern recognition and machine learning research, and Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it uses only labeled samples while neglecting unlabeled samples, which are abundant and easily obtained in the real world. In this paper, we propose a new dimension reduction method, called "SL-LDA", which uses unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, yielding predicted labels of the unlabeled samples, called "soft labels". It then incorporates the soft labels into the construction of the scatter matrices to find a transformation matrix for dimension reduction. In this way, the proposed method preserves more discriminative information, which is preferable when solving the classification problem. We further propose an efficient approach for solving SL-LDA under a least squares framework, and a flexible method of SL-LDA (FSL-LDA) to better cope with datasets sampled from a nonlinear manifold. Extensive simulations are carried out on several datasets, and the results show the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
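The label propagation step that produces the "soft labels" can be sketched with the standard iterative scheme F <- alpha*S*F + (1 - alpha)*Y on an RBF affinity graph. The graph construction and parameter values below are assumptions, and SL-LDA itself goes on to plug the resulting soft labels into LDA's scatter matrices, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two clusters; only two samples per class carry labels.
X = np.vstack([rng.normal([0.0, 0.0], 0.4, size=(50, 2)),
               rng.normal([3.0, 0.0], 0.4, size=(50, 2))])
y_true = np.array([0] * 50 + [1] * 50)
labeled = [0, 1, 50, 51]

# RBF affinity matrix, symmetrically normalized: S = D^(-1/2) W D^(-1/2).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2.0 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
dinv = 1.0 / np.sqrt(W.sum(axis=1))
S = dinv[:, None] * W * dinv[None, :]

# One-hot seed labels; zero rows for unlabeled samples.
Y = np.zeros((100, 2))
for i in labeled:
    Y[i, y_true[i]] = 1.0

# Iterative propagation F <- alpha*S*F + (1 - alpha)*Y until (near) convergence.
alpha, F = 0.9, Y.copy()
for _ in range(200):
    F = alpha * (S @ F) + (1.0 - alpha) * Y

soft_labels = F / F.sum(axis=1, keepdims=True)   # rows sum to 1: the "soft labels"
accuracy = (soft_labels.argmax(axis=1) == y_true).mean()
```

The normalized rows of F are exactly the kind of soft labels a downstream method can weight by, rather than committing to hard 0/1 assignments for the unlabeled samples.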
NASA Astrophysics Data System (ADS)
An, Le; Adeli, Ehsan; Liu, Mingxia; Zhang, Jun; Lee, Seong-Whan; Shen, Dinggang
2017-03-01
Classification is one of the most important tasks in machine learning. Due to feature redundancy or outliers in samples, using all available data to train a classifier may be suboptimal. For example, Alzheimer's disease (AD) is correlated with certain brain regions and single nucleotide polymorphisms (SNPs), and identification of relevant features is critical for computer-aided diagnosis. Many existing methods first select features from structural magnetic resonance imaging (MRI) or SNPs and then use those features to build the classifier. However, in the presence of many redundant features, the most discriminative features are difficult to identify in a single step. Thus, we formulate a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilize both labeled and unlabeled data during training, making our method semi-supervised. For validation, we conduct experiments on AD diagnosis by selecting mutually informative features from both MRI and SNP data, and using the most discriminative samples for training. The superior classification results demonstrate the effectiveness of our approach compared with rival methods.
NASA Astrophysics Data System (ADS)
Eneva, Elena; Petrushin, Valery A.
2002-03-01
Taxonomies are valuable tools for structuring and representing our knowledge about the world. They are widely used in many domains where information about species, products, customers, publications, etc. needs to be organized. In the absence of standards, many taxonomies of the same entities can co-exist. A problem arises when data categorized in one taxonomy needs to be used by a procedure (methodology or algorithm) that uses a different taxonomy. Usually, a labor-intensive manual approach is used to solve this problem. This paper describes a machine learning approach that aids domain experts in changing taxonomies. It allows learning relationships between two taxonomies and mapping the data from one taxonomy into another. The proposed approach uses decision trees and bootstrapping to learn mappings of instances from the source to the target taxonomy. A C4.5 decision tree classifier is trained on a small manually labeled training set and applied to a randomly selected sample from the unlabeled data. The classification results are analyzed, the misclassified items are corrected, and all items are added to the training set. This procedure is iterated until no unlabeled data remains or an acceptable error rate is reached. In the latter case the last classifier is used to label all the remaining data. We test our approach on a database of products obtained from a grocery store chain and find that it performs well, reaching 92.6% accuracy while requiring the human expert to explicitly label only 18% of the entire data.
In-situ trainable intrusion detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Symons, Christopher T.; Beaver, Justin M.; Gillen, Rob
A computer-implemented method detects intrusions by analyzing network traffic. The method includes a semi-supervised learning module connected to a network node. The learning module uses labeled and unlabeled data to train a semi-supervised machine learning sensor. The method records events that include a feature set made up of unauthorized intrusions and benign computer requests. The method identifies at least some of the benign computer requests that occur during the recording of the events while treating the remainder of the data as unlabeled. The method trains the semi-supervised learning module at the network node in-situ, such that the semi-supervised learning module may identify malicious traffic without relying on specific rules, signatures, or anomaly detection.
Cross-domain question classification in community question answering via kernel mapping
NASA Astrophysics Data System (ADS)
Su, Lei; Hu, Zuoliang; Yang, Bin; Li, Yiyang; Chen, Jun
2015-10-01
An increasingly popular way to retrieve information is via community question answering (CQA) systems such as Yahoo! Answers and Baidu Knows. In CQA, question classification plays an important role in finding answers. However, labeled training examples for a statistical question classifier are fairly expensive to obtain, as they require experienced human effort, while unlabeled data are readily available. This paper employs domain adaptation via kernel mapping to solve this problem. In detail, the kernel approach is utilized to map the target-domain data and the source-domain data into a common space, where the question classifiers are trained under closer conditional probabilities. The kernel mapping function is constructed from domain knowledge, so domain knowledge can be transferred from the labeled examples in the source domain to the unlabeled ones in the target domain. The statistical training model can thus be improved by using a large number of unlabeled data. Meanwhile, the Hadoop platform is used to construct the mapping mechanism and reduce the time complexity: Map/Reduce enables kernel mapping for domain adaptation to run in parallel. Experimental results show that the accuracy of question classification is improved by kernel mapping, and that the parallel method on the Hadoop platform can effectively schedule computing resources to reduce running time.
Exploiting the potential of unlabeled endoscopic video data with self-supervised learning.
Ross, Tobias; Zimmerer, David; Vemuri, Anant; Isensee, Fabian; Wiesenfarth, Manuel; Bodenstedt, Sebastian; Both, Fabian; Kessler, Philip; Wagner, Martin; Müller, Beat; Kenngott, Hannes; Speidel, Stefanie; Kopp-Schneider, Annette; Maier-Hein, Klaus; Maier-Hein, Lena
2018-06-01
Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue. Our approach is guided by the hypothesis that unlabeled video data can be used to learn a representation of the target domain that boosts the performance of state-of-the-art machine learning algorithms when used for pre-training. The core of the method is an auxiliary task based on raw endoscopic video data of the target domain that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a conditional generative adversarial network (cGAN)-based architecture as the auxiliary task. A variant of the method involves a second pre-training step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task. The proposed approach can be used to radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method reduces the number of labeled images required by up to 75% in exploratory experiments without sacrificing performance. Our method also outperforms alternative methods for CNN pre-training, such as pre-training on publicly available non-medical (COCO) or medical data (MICCAI EndoVis2017 challenge) using the target task (in this instance: segmentation). As it makes efficient use of available (non-)public and (un-)labeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Huang, H.; Liu, J.; Pan, Y.
2012-07-01
The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis, however, is a supervised method that cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes, which uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function incorporating unlabeled samples is designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.
Multiclass Continuous Correspondence Learning
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.
2011-01-01
We extend the Structural Correspondence Learning (SCL) domain adaptation algorithm of Blitzer et al. to the realm of continuous signals. Given a set of labeled examples belonging to a 'source' domain, we select a set of unlabeled examples in a related 'target' domain that play similar roles in both domains. Using these 'pivot' samples, we map both domains into a common feature space, allowing us to adapt a classifier trained on source examples to classify target examples. We show that when between-class distances are relatively preserved across domains, we can automatically select target pivots to bring the domains into correspondence.
A method for named entity normalization in biomedical articles: application to diseases and plants.
Cho, Hyejin; Choi, Wonjun; Lee, Hyunju
2017-10-13
In biomedical articles, a named entity recognition (NER) technique that identifies entity names from texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (i.e., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than the use of only the training corpus or only the unlabeled data and showed that the normalization accuracy was improved by using our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and manually constructed plant corpus, respectively. We further evaluated our approach using a data set in the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. 
The proposed approach shows robust performance for normalizing biological entities. The manually constructed plant corpus and the proposed model are available at http://gcancer.org/plant and http://gcancer.org/normalization , respectively.
NASA Astrophysics Data System (ADS)
Gao, Yuan; Ma, Jiayi; Yuille, Alan L.
2017-05-01
This paper addresses the problem of face recognition when there are only a few, or even a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S$^3$RC). This is based on recent work on sparsity where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We report experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.
Convex formulation of multiple instance learning from positive and unlabeled bags.
Bao, Han; Sakai, Tomoya; Sato, Issei; Sugiyama, Masashi
2018-05-24
Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available. MIL has a variety of applications such as content-based image retrieval, text categorization, and medical diagnosis. Most previous work on MIL assumes that training bags are fully labeled. However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available. A learning framework called PU classification (positive and unlabeled classification) can address this problem. In this paper, we propose a convex PU classification method to solve an MIL problem. We experimentally show that the proposed method achieves better performance with significantly lower computation costs than an existing method for PU-MIL. Copyright © 2018 Elsevier Ltd. All rights reserved.
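A minimal sketch of the PU idea at the instance level (the paper's contribution is a convex bag-level formulation; the linear model, logistic loss, gradient descent, and known class prior `pi` below are our simplifying assumptions, not the paper's method): the classification risk is estimated from positives and unlabeled data alone via the unbiased PU risk estimator.

```python
import numpy as np

def pu_logistic(X_p, X_u, pi, lr=0.1, n_iter=500):
    """Train a linear classifier from positive (X_p) and unlabeled (X_u)
    data only, using the unbiased PU risk estimator with logistic loss:
    R(f) = pi*E_p[l(f,+1)] + E_u[l(f,-1)] - pi*E_p[l(f,-1)]."""
    w, b = np.zeros(X_p.shape[1]), 0.0

    def grad(X, y, w, b):
        # Gradient of the mean logistic loss l(z) = log(1 + exp(-y*z)).
        s = 1.0 / (1.0 + np.exp(y * (X @ w + b)))  # = sigmoid(-y*z)
        return -(y * s) @ X / len(X), -(y * s).mean()

    for _ in range(n_iter):
        gw_p_pos, gb_p_pos = grad(X_p, +1.0, w, b)  # positives labeled +1
        gw_p_neg, gb_p_neg = grad(X_p, -1.0, w, b)  # correction term
        gw_u_neg, gb_u_neg = grad(X_u, -1.0, w, b)  # unlabeled treated as -1
        w -= lr * (pi * gw_p_pos + gw_u_neg - pi * gw_p_neg)
        b -= lr * (pi * gb_p_pos + gb_u_neg - pi * gb_p_neg)
    return w, b
```

Note that the unlabeled set contributes as if negative, and the subtracted positive term removes the bias this would otherwise introduce.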
Semi-supervised Learning for Phenotyping Tasks.
Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K
2015-01-01
Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
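The downweighting idea can be sketched with a toy two-class, one-dimensional Gaussian mixture (our illustration, not the paper's phenotyping models): labeled points enter the M-step with hard responsibilities and weight 1, while unlabeled points enter with soft EM responsibilities scaled by a factor `lam`; `lam = 0` recovers the purely supervised estimates and `lam = 1` recovers basic EM.

```python
import numpy as np

def weighted_em(X_lab, y_lab, X_unlab, lam=0.1, n_iter=50):
    """Augmented EM for a two-class 1-D Gaussian mixture: unlabeled
    responsibilities are downweighted by lam in the M-step."""
    classes = np.unique(y_lab)
    mu = np.array([X_lab[y_lab == c].mean() for c in classes])
    sigma = np.array([X_lab[y_lab == c].std() + 1e-6 for c in classes])
    pi = np.array([np.mean(y_lab == c) for c in classes])
    for _ in range(n_iter):
        # E-step: soft responsibilities for the unlabeled data only.
        dens = np.array([pi[k] / sigma[k]
                         * np.exp(-0.5 * ((X_unlab - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)])
        r_u = dens / dens.sum(axis=0)
        # M-step: labeled points get weight 1, unlabeled points weight lam.
        X_all = np.concatenate([X_lab, X_unlab])
        for k in range(2):
            w = np.concatenate([(y_lab == classes[k]).astype(float),
                                lam * r_u[k]])
            tot = w.sum()
            mu[k] = (w * X_all).sum() / tot
            sigma[k] = np.sqrt((w * (X_all - mu[k]) ** 2).sum() / tot) + 1e-6
            pi[k] = tot
        pi /= pi.sum()
    return mu, sigma, pi
```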
Ensemble positive unlabeled learning for disease gene identification.
Yang, Peng; Li, Xiaoli; Chua, Hon-Nian; Kwoh, Chee-Keong; Ng, See-Kiong
2014-01-01
An increasing number of genes have been experimentally confirmed in recent years as causative genes for various human diseases. This newly available knowledge can be exploited by machine learning methods to discover additional unknown genes that are likely to be associated with diseases. In particular, positive unlabeled learning (PU learning) methods, which require only a positive training set P (confirmed disease genes) and an unlabeled set U (the unknown candidate genes) instead of a negative training set N, have been shown to be effective in uncovering new disease genes in this scenario. However, using only a single source of data for prediction is susceptible to bias due to incompleteness and noise in the genomic data, and a single machine learning predictor is prone to bias caused by the inherent limitations of individual methods. In this paper, we propose an effective PU learning framework that integrates multiple biological data sources and an ensemble of powerful machine learning classifiers for disease gene identification. Our proposed method integrates data from multiple biological sources for training PU learning classifiers. A novel ensemble-based PU learning method, EPU, is then used to integrate multiple PU learning classifiers to achieve accurate and robust disease gene predictions. Our evaluation experiments across six disease groups showed that EPU achieved significantly better results than various state-of-the-art prediction methods as well as ensemble learning classifiers. By integrating multiple biological data sources for training and the outputs of an ensemble of PU learning classifiers for prediction, we are able to minimize the potential bias and errors in individual data sources and machine learning algorithms, achieving more accurate and robust disease gene predictions. Our EPU method thus provides an effective framework for integrating additional biological and computational resources for better disease gene predictions.
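Only the final aggregation step lends itself to a compact sketch; the rank-averaging and uniform default weights below are illustrative assumptions, not the exact EPU weighting scheme.

```python
import numpy as np

def epu_rank(scores, weights=None):
    """Combine the score vectors of several PU classifiers (rows =
    classifiers, columns = candidate genes) into one robust ranking:
    rank-transform each classifier's scores, then weight-average."""
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort(axis=1).argsort(axis=1)  # per-classifier ranks
    if weights is None:
        weights = np.ones(len(scores))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return weights @ ranks  # higher value = more disease-like
```

Rank-transforming before averaging keeps one miscalibrated classifier from dominating the combined score.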
The UXO Classification Demonstration at San Luis Obispo, CA
2010-09-01
SIG applied the optimized algorithm to only the unlabeled data in the test set. SIG also used active learning [12], an alternative approach for constructing a training set that is used in conjunction with either supervised or semi-supervised learning (report section 2.17.2, Active Learning Training and Test Set).
A Deep Learning Approach to LIBS Spectroscopy for Planetary Applications
NASA Astrophysics Data System (ADS)
Mullen, T. H.; Parente, M.; Gemp, I.; Dyar, M. D.
2017-12-01
The ChemCam instrument on the Curiosity rover has collected >440,000 laser-induced breakdown spectroscopy (LIBS) spectra from 1500 different geological targets since 2012. The team is using a pipeline of preprocessing and partial least squares (PLS) techniques to predict compositions of surface materials [1]. Unfortunately, such multivariate techniques are plagued by hard-to-meet assumptions involving constant hyperparameter tuning to specific elements and the amount of training data available; if the whole distribution of data is not seen, the method will overfit to the training data and generalizability will suffer. The rover has only 10 calibration targets on board, which represent a small subset of the geochemical samples the rover is expected to investigate. Deep neural networks have been used to bypass these issues in other fields. Semi-supervised techniques allow researchers to utilize small labeled datasets and vast amounts of unlabeled data. One example is the variational autoencoder model, a semi-supervised generative model in the form of a deep neural network. The autoencoder assumes that LIBS spectra are generated from a distribution conditioned on the elemental compositions in the sample and some nuisance. The system is broken into two models: one that predicts elemental composition from the spectra and one that generates spectra from compositions that may or may not be seen in the training set. The synthesized spectra show strong agreement with geochemical conventions for expressing specific compositions, and the composition predictions show improved generalizability compared to PLS. Deep neural networks have also been used to transfer knowledge from one dataset to another to solve unlabeled data problems. Given that vast amounts of laboratory LIBS spectra have been obtained in the past few years, it is now feasible to train a deep net to predict elemental composition from lab spectra.
Transfer learning (manifold alignment or calibration transfer) [2] is then used to fine-tune the model from terrestrial lab data to Martian field data. Neural networks and generative models provide the flexibility needed for elemental composition prediction and unseen spectra synthesis. [1] Clegg S. et al. (2016) Spectrochim. Acta B, 129, 64-85. [2] Boucher T. et al. (2017) J. Chemom., 31, e2877.
Sun, Junying; Bingga, Gali; Liu, Zhicheng; Zhang, Chunhong; Shen, Haiyan; Guo, Pengju; Zhang, Jianfeng
2018-06-01
Differentiation of classical strains and highly pathogenic strains of porcine reproductive and respiratory syndrome virus (PRRSV) is crucial for effective vaccination programs and epidemiological studies. We used nested PCR and high resolution melting curve analysis with an unlabeled probe to distinguish between the classical and the highly pathogenic strains of this virus. Two sets of primers and a 20 bp unlabeled probe were designed from the NSP3 gene. The unlabeled probe included two mutations specific for the classical and highly pathogenic strains of the virus. An additional primer set from the NSP2 gene of the highly pathogenic vaccine strain JXA1-R was used to detect its exclusive single nucleotide polymorphism. We tested 107 clinical samples; 21 were positive for PRRSV (consistent with a conventional PCR assay), of which four were positive for the classical strain and the remaining 17 for the highly pathogenic strain. A difference of around 10 °C between probe melting temperatures demonstrated the high discriminatory power of this method. Among the highly pathogenic positive samples, three were determined to be positive for the JXA1-R vaccine-related strain with a 95% genotype confidence percentage. All genotyping results obtained using the high resolution melting curve assay were confirmed by DNA sequencing. This unlabeled probe method provides an alternative means to differentiate the classical strains from the highly pathogenic porcine reproductive and respiratory syndrome virus strains rapidly and accurately. Copyright © 2018. Published by Elsevier Ltd.
Bidirectional Active Learning: A Two-Way Exploration Into Unlabeled and Labeled Data Set.
Zhang, Xiao-Yu; Wang, Shupeng; Yun, Xiaochun
2015-12-01
In practical machine learning applications, human instruction is indispensable for model construction. To utilize the precious labeling effort effectively, active learning queries the user with selective sampling in an interactive way. Traditional active learning techniques merely focus on the unlabeled data set under a unidirectional exploration framework and suffer from model deterioration in the presence of noise. To address this problem, this paper proposes a novel bidirectional active learning algorithm that explores into both unlabeled and labeled data sets simultaneously in a two-way process. For the acquisition of new knowledge, forward learning queries the most informative instances from unlabeled data set. For the introspection of learned knowledge, backward learning detects the most suspiciously unreliable instances within the labeled data set. Under the two-way exploration framework, the generalization ability of the learning model can be greatly improved, which is demonstrated by the encouraging experimental results.
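Given any probabilistic classifier's outputs, the two-way query step can be reduced to a scoring function (an illustrative simplification; the paper's actual selection criteria are model-based): forward learning picks the most uncertain unlabeled points, while backward learning flags the labeled points whose given labels the model most strongly contradicts.

```python
import numpy as np

def bidirectional_queries(p_unlab, p_lab, y_lab, n_fwd=5, n_bwd=5):
    """Score-based sketch of the two-way exploration step.
    p_unlab/p_lab are model probabilities of class 1; y_lab holds the
    given labels (0/1) of the labeled set."""
    # Forward: most uncertain unlabeled instances (probability near 0.5).
    fwd = np.argsort(np.abs(p_unlab - 0.5))[:n_fwd]
    # Backward: labeled instances whose label the model most disagrees with.
    disagreement = np.where(y_lab == 1, 1 - p_lab, p_lab)
    bwd = np.argsort(-disagreement)[:n_bwd]
    return fwd, bwd
```

The forward indices would be sent to the annotator for labeling; the backward indices would be re-examined as possibly noisy labels.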
SELF-BLM: Prediction of drug-target interactions via self-training SVM.
Keum, Jongsoo; Nam, Hojung
2017-01-01
Predicting drug-target interactions is important for the development of novel drugs and the repositioning of existing drugs. To predict such interactions, there are a number of methods based on drug and target protein similarity. Although these methods, such as the bipartite local model (BLM), show promise, they often categorize unknown interactions as negative interactions. Therefore, these methods are not ideal for finding potential drug-target interactions that have not yet been validated as positive interactions. Thus, here we propose a method that integrates machine learning techniques, namely self-training support vector machines (SVM) and BLM, to develop a self-training bipartite local model (SELF-BLM) that facilitates the identification of potential interactions. The method first categorizes the unknown interactions into unlabeled interactions and negative interactions using a clustering method. Then, using the BLM method and self-training SVM, the unlabeled interactions are self-trained and final local classification models are constructed. When applied to four classes of proteins that include enzymes, G-protein coupled receptors (GPCRs), ion channels, and nuclear receptors, SELF-BLM showed the best performance for predicting not only known interactions but also potential interactions in three of the protein classes compared with other related studies. The implemented software and supporting data are available at https://github.com/GIST-CSBL/SELF-BLM.
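The self-training loop at the heart of SELF-BLM can be sketched generically; here a nearest-centroid classifier stands in for the per-target SVMs (an assumption for brevity), and in each round the most confidently predicted unlabeled points are moved into the training set with their predicted labels.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, conf=0.9, max_rounds=10):
    """Generic self-training loop for binary labels {0, 1}.
    Each round: fit centroids, predict the pool, absorb the points whose
    relative margin is within a fraction `conf` of the best margin."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        cents = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(pool[:, None, :] - cents[None], axis=2)
        pred = d.argmin(axis=1)
        # Confidence = relative gap between the two centroid distances.
        margin = np.abs(d[:, 0] - d[:, 1]) / (d.sum(axis=1) + 1e-12)
        keep = margin >= conf * margin.max()
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pred[keep]])
        pool = pool[~keep]
    return X, y
```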
Semi-supervised prediction of gene regulatory networks using machine learning algorithms.
Patel, Nihir; Wang, Jason T L
2015-10-01
Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
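One common way to bootstrap the iterative procedure for obtaining reliable negatives (a simplification for illustration, not the exact inductive or transductive procedure of the article) is to take the unlabelled examples farthest from the positive centroid as the initial negative set.

```python
import numpy as np

def reliable_negatives(X_pos, X_unlab, n_neg):
    """Return the indices of the n_neg unlabelled examples farthest
    from the centroid of the positive (known-interaction) examples."""
    centroid = X_pos.mean(axis=0)
    dist = np.linalg.norm(X_unlab - centroid, axis=1)
    return np.argsort(-dist)[:n_neg]
```

Later iterations would refit the classifier with these negatives and re-select, gradually refining the negative set.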
Self-Taught Learning Based on Sparse Autoencoder for E-Nose in Wound Infection Detection
He, Peilin; Jia, Pengfei; Qiao, Siqi; Duan, Shukai
2017-01-01
For an electronic nose (E-nose) used to distinguish wound infection, traditional learning methods have always needed large quantities of labeled wound infection samples, which are both limited and expensive; thus, we introduce self-taught learning combined with a sparse autoencoder and radial basis function (RBF) networks into the field. Self-taught learning is a kind of transfer learning that can transfer knowledge from other fields to target fields; it can handle problems in which the labeled data (target field) and unlabeled data (other fields) do not share the same class labels, even if they come from entirely different distributions. In our paper, we obtain numerous cheap unlabeled pollutant gas samples (benzene, formaldehyde, acetone and ethyl alcohol); however, labeled wound infection samples are hard to gain. Thus, we apply self-taught learning to utilize these gas samples, obtaining a basis vector θ. Then, using the basis vector θ, we reconstruct the new representation of the wound infection samples under a sparsity constraint, which is the input to the classifiers. We compare RBF with partial least squares discriminant analysis (PLSDA) and conclude that the performance of RBF is superior. We also vary the dimension of our data set and the quantity of unlabeled data to search for the input matrix that produces the highest accuracy. PMID:28991154
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
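The estimator itself admits a compact sketch: the ratio p_test(x)/p_train(x) at a training point x is approximated by comparing how many training points fall inside the ball containing x's k nearest test-set neighbors. The brute-force distance computation and fixed k below are simplifications; the paper selects k by cross-validation with a criterion that is unbiased under covariate shift.

```python
import numpy as np

def knn_density_ratio(X_tr, X_te, k=5):
    """k-NN estimate of w(x) = p_te(x)/p_tr(x) at each training point:
    w(x) ≈ (k / m) * (n_tr / n_te), where m is the number of training
    points inside the ball reaching x's k-th nearest test neighbor."""
    n_tr, n_te = len(X_tr), len(X_te)
    w = np.empty(n_tr)
    for i, x in enumerate(X_tr):
        d_te = np.sort(np.linalg.norm(X_te - x, axis=1))
        r = d_te[k - 1]  # radius of the k-th test neighbor
        m = np.sum(np.linalg.norm(X_tr - x, axis=1) <= r)  # train pts in ball
        w[i] = (k / max(m, 1)) * (n_tr / n_te)
    return w
```

The resulting weights can then be used to re-weight the labeled training loss so that it matches the unlabeled target distribution.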
Learning viewpoint invariant object representations using a temporal coherence principle.
Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter
2005-07-01
Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge of the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles can explain many aspects of neuronal response properties. How can such schemes be transferred to system-level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is observed. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This ability to facilitate viewpoint-invariant classification is maintained even if training and classification take place in the presence of a distractor object, which is also unlabeled. In summary, we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and undergo complex, non-isomorphic transformations.
Maximum margin semi-supervised learning with irrelevant data.
Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R
2015-10-01
Semi-supervised learning (SSL) is a learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: one that relies heavily on labeled data, is confident on the relevant data lying far away from the decision hyperplane, and maximally ignores the irrelevant data, which are hardly distinguished. Second, theoretical analysis is provided to establish under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer programming problem into a semi-definite programming relaxation, and finally into a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.
Copyright © 2015 Elsevier Ltd. All rights reserved.
Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.
Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan
2018-04-01
The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from the unlabeled data. Our algorithm degenerates into a special case of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we develop a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility.
However, our algorithm and other algorithms have similar levels of performance in the remaining aspects.
Leijten, Patty; Thomaes, Sander; Orobio de Castro, Bram; Dishion, Thomas J; Matthys, Walter
2016-12-01
There is a need to identify the "effective ingredients" of evidence-based behavior therapies. We tested the effects of one of the most common ingredients in parenting interventions for preventing disruptive child behavior: labeled praise (e.g., "well done picking up your toys"), which is typically recommended in preference to unlabeled praise (e.g., "well done"). We compared the effects of labeled praise, unlabeled praise, and no praise on child compliance in two experiments. Experiment 1 included 161 community-sample children aged 4 to 8 years and tested the immediate effects of praise. Experiment 2 included 132 children aged 3 to 9 years with varying levels of disruptive behavior and tested the immediate and two-week effects of praise. In Experiment 1, teaching parents to use labeled praise did not increase immediate child compliance, whereas teaching them to use unlabeled praise did. In Experiment 2, teaching parents to use labeled praise for two weeks reduced disruptive child behavior, but this effect was of a similar magnitude to that for unlabeled praise. Parents preferred the use of unlabeled over labeled praise. These findings suggest that parental praise promotes child compliance, but that labeling the specific positive behavior may not add incremental value. Copyright © 2016 Elsevier Ltd. All rights reserved.
Boxcar detection for high-frequency modulation in stimulated Raman scattering microscopy
NASA Astrophysics Data System (ADS)
Fimpel, P.; Riek, C.; Ebner, L.; Leitenstorfer, A.; Brida, D.; Zumbusch, A.
2018-04-01
Stimulated Raman scattering (SRS) microscopy is an important non-linear optical technique for the investigation of unlabeled samples. The SRS signal manifests itself as a small intensity exchange between the laser pulses involved in the coherent excitation of Raman modes. Usually, high-frequency modulation is applied to one pulse train, and the signal is then detected on the other pulse train via lock-in amplification. While allowing shot-noise-limited detection sensitivity, lock-in detection, which corresponds to filtering the signal in the frequency domain, is not the most efficient way of using the excitation light. In this manuscript, we show that boxcar averaging, which is equivalent to temporal filtering, is better suited for the detection of low-duty-cycle signals such as those encountered in SRS microscopy. We demonstrate that by employing suitable gating windows, the signal-to-noise ratios achievable with lock-in detection can be realized in a shorter time with boxcar averaging. High-quality images can therefore be recorded at a faster rate and lower irradiance, which is an important factor, e.g., for minimizing degradation of biological samples.
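The advantage of gated temporal filtering can be illustrated with a toy numerical experiment; all numbers here are illustrative assumptions, not values from the paper. A small pulsed signal with a 5% duty cycle is buried in unit-variance noise, and averaging only the samples inside the gate recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detector trace: a low-duty-cycle pulse train (standing in for the
# small SRS gain/loss) buried in noise.
n_periods, samples_per_period, duty = 200, 100, 5
signal = 0.1
trace = rng.normal(0.0, 1.0, n_periods * samples_per_period)

# Boolean gate marking the samples where the pulse is present.
gate = np.zeros(n_periods * samples_per_period, dtype=bool)
for p in range(n_periods):
    start = p * samples_per_period
    gate[start:start + duty] = True
trace[gate] += signal

# Boxcar detection: average only the gated samples (temporal filtering),
# ignoring the 95% of samples that carry noise but no signal.
boxcar_estimate = trace[gate].mean()
```

With 1000 gated samples the estimate lands within a few hundredths of the true amplitude, while an ungated average would dilute the signal twenty-fold.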
Named Entity Recognition in Chinese Clinical Text Using Deep Neural Network.
Wu, Yonghui; Jiang, Min; Lei, Jianbo; Xu, Hua
2015-01-01
Rapid growth in the use of electronic health records (EHRs) has led to an unprecedented expansion of available clinical data in electronic formats. However, much of the important healthcare information is locked in narrative documents. Natural Language Processing (NLP) technologies, e.g., Named Entity Recognition (NER), which identifies the boundaries and types of entities, have therefore been extensively studied to unlock important clinical information in free text. In this study, we investigated a novel deep learning method to recognize clinical entities in Chinese clinical documents using a minimal feature engineering approach. We developed a deep neural network (DNN) to generate word embeddings from a large unlabeled corpus through unsupervised learning and another DNN for the NER task. The experimental results showed that the DNN with word embeddings trained from the large unlabeled corpus outperformed the state-of-the-art CRF model in the minimal feature engineering setting, achieving the highest F1-score of 0.9280. Further analysis showed that word embeddings derived through unsupervised learning from the large unlabeled corpus remarkably improved over the DNN with randomized embeddings, indicating the usefulness of unsupervised feature learning.
Application of Machine Learning in Urban Greenery Land Cover Extraction
NASA Astrophysics Data System (ADS)
Qiao, X.; Li, L. L.; Li, D.; Gan, Y. L.; Hou, A. Y.
2018-04-01
Urban greenery is a critical part of the modern city, and greenery coverage information is essential for land resource management, environmental monitoring and urban planning. Extracting urban greenery information from remote sensing images is challenging because trees and grassland are mixed with city built-ups. In this paper, we propose a new automatic pixel-based greenery extraction method using multispectral remote sensing images. The method includes three main steps. First, a small part of the images is manually interpreted to provide prior knowledge. Secondly, a five-layer neural network is trained and optimised with the manual extraction results, which are divided into training, validation and testing samples. Lastly, the well-trained neural network is applied to the unlabelled data to perform the greenery extraction. The GF-2 and GJ-1 high-resolution multispectral remote sensing images were used to extract greenery coverage information in the built-up areas of city X. The method shows favourable performance over the 619-square-kilometre area. Also, when compared with the traditional NDVI method, the proposed method gives a more accurate delineation of the greenery region. Due to its low computational load and high accuracy, it has great potential for large-area automatic greenery extraction, saving considerable manpower and resources.
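The NDVI baseline mentioned above is the standard normalised band ratio between the near-infrared and red channels; a minimal sketch follows. The 0.3 threshold is an illustrative assumption and is scene-dependent, which is part of why the learned classifier can outperform this baseline.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Vegetation reflects strongly in NIR and absorbs red, so green pixels
    score high; eps guards against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def greenery_mask(nir, red, threshold=0.3):
    """Pixels whose NDVI exceeds a (scene-dependent) threshold."""
    return ndvi(nir, red) > threshold
```

For a vegetated pixel with NIR reflectance 0.8 and red reflectance 0.1, NDVI is about 0.78; a built-up pixel with equal bands scores 0.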
Unsupervised classification of variable stars
NASA Astrophysics Data System (ADS)
Valenzuela, Lucas; Pichara, Karim
2018-03-01
During the past 10 years, a considerable amount of effort has been made to develop algorithms for the automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating training sets that are insufficient compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an unconventional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific to light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
Perceptron ensemble of graph-based positive-unlabeled learning for disease gene identification.
Jowkar, Gholam-Hossein; Mansoori, Eghbal G
2016-10-01
Identification of disease genes using computational methods is an important issue in biomedical and bioinformatics research. Based on the observation that diseases with the same or similar phenotypes share the same biological characteristics, researchers have tried to identify such genes using machine learning tools. In recent attempts, a class of semi-supervised learning methods, called positive-unlabeled learning, has been used for disease gene identification. In this paper, we present a Perceptron ensemble of graph-based positive-unlabeled learning (PEGPUL) on three types of biological attributes: gene ontologies, protein domains and protein-protein interaction networks. In our method, a reliable set of positive and negative genes is extracted using a co-training scheme. Then, the similarity graph of genes is built using metric learning, and the multi-rank-walk method is used to perform inference from the labeled genes. Finally, a Perceptron ensemble is learned from three weighted classifiers: a multilevel support vector machine, k-nearest neighbor and a decision tree. The main contributions of this paper are: (i) incorporating the statistical properties of gene data by choosing proper metrics, (ii) statistical evaluation of biological features, and (iii) the noise robustness of PEGPUL via a multilevel schema. To assess PEGPUL, we applied it to 12950 disease genes, with 949 positive genes from six classes of diseases and 12001 unlabeled genes. Compared with some popular disease gene identification methods, the experimental results show that PEGPUL has reasonable performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
Image classification of unlabeled malaria parasites in red blood cells.
Zheng Zhang; Ong, L L Sharon; Kong Fang; Matthew, Athul; Dauwels, Justin; Ming Dao; Asada, Harry
2016-08-01
This paper presents a method to detect unlabeled malaria parasites in red blood cells. The current "gold standard" for malaria diagnosis is microscopic examination of thick blood smears, a time-consuming process requiring extensive training. Our goal is to develop an automated process to identify malaria-infected red blood cells. Major issues in the automated analysis of microscopy images of unstained blood smears include overlapping cells and oddly shaped cells. Our approach creates robust templates to detect infected and uninfected red cells. Histogram of Oriented Gradients (HOG) features are extracted from the templates and used to train a classifier offline. Next, the Viola-Jones object detection framework is applied to detect infected and uninfected red cells and the image background. Results show our approach outperforms classification approaches with PCA features by 50% and cell detection algorithms applying Hough transforms by 24%. The majority of related work is designed to automatically detect stained parasites in blood smears where the cells are fixed. Although it is more challenging to design algorithms for unstained parasites, our methods will allow analysis of parasite progression in live cells under different drug treatments.
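A much-simplified version of the HOG descriptor used here computes a single orientation histogram weighted by gradient magnitude; real HOG additionally divides the patch into cells, groups them into overlapping blocks, and normalises each block, none of which is shown in this sketch.

```python
import numpy as np

def hog_features(img, n_bins=9):
    """Simplified HOG: one unsigned-orientation histogram over the whole
    patch, weighted by gradient magnitude and normalised to sum to 1."""
    gy, gx = np.gradient(img.astype(float))          # per-pixel gradients
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned, [0, 180)
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0.0, 180.0), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

A patch containing only a vertical edge has all of its gradient energy at orientation 0°, so the entire mass lands in the first bin.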
An Exemplar-Based Multi-View Domain Generalization Framework for Visual Recognition.
Niu, Li; Li, Wen; Xu, Dong; Cai, Jianfei
2018-02-01
In this paper, we propose a new exemplar-based multi-view domain generalization (EMVDG) framework for visual recognition by learning robust classifiers that are able to generalize well to an arbitrary target domain based on training samples with multiple types of features (i.e., multi-view features). In this framework, we aim to address two issues simultaneously. First, the distribution of training samples (i.e., the source domain) is often considerably different from that of testing samples (i.e., the target domain), so the performance of classifiers learnt on the source domain may drop significantly on the target domain. Moreover, the testing data are often unseen during the training procedure. Second, when the training data are associated with multi-view features, the recognition performance can be further improved by exploiting the relation among the multiple types of features. To address the first issue, considering that fusing multiple SVM classifiers has been shown to enhance domain generalization ability, we build our EMVDG framework upon exemplar SVMs (ESVMs), in which a set of ESVM classifiers is learnt, each trained on one positive training sample and all the negative training samples. When the source domain contains multiple latent domains, the learnt ESVM classifiers are expected to be grouped into multiple clusters. To address the second issue, we propose two approaches under the EMVDG framework based on the consensus principle and the complementary principle, respectively. Specifically, we propose an EMVDG_CO method that adds a co-regularizer to enforce consistent cluster structures of the ESVM classifiers on different views, based on the consensus principle. Inspired by multiple kernel learning, we also propose an EMVDG_MK method that fuses the ESVM classifiers from different views based on the complementary principle.
In addition, we extend our EMVDG framework to an exemplar-based multi-view domain adaptation (EMVDA) framework for the case when unlabeled target domain data are available during the training procedure. The effectiveness of our EMVDG and EMVDA frameworks for visual recognition is clearly demonstrated by comprehensive experiments on three benchmark data sets.
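The exemplar-classifier idea above (one classifier per positive sample, trained against all negatives and then fused) can be sketched with a toy linear scorer. This stand-in replaces the actual ESVM optimization with a nearest-mean-style direction from the negative centroid to each exemplar, and max-fusion replaces the paper's clustering and co-regularization; it is not the authors' method.

```python
import numpy as np

def exemplar_classifiers(positives, negatives):
    """One linear scorer per positive exemplar. Each real ESVM is an SVM
    trained with one positive vs. all negatives; here we cheaply mimic it
    with the direction from the negative mean to the exemplar and a
    boundary halfway between them."""
    neg_mean = negatives.mean(axis=0)
    clfs = []
    for x in positives:
        w = x - neg_mean                  # direction toward the exemplar
        b = -w @ (x + neg_mean) / 2.0     # boundary midway between them
        clfs.append((w, b))
    return clfs

def ensemble_score(clfs, sample):
    """Fuse the per-exemplar scorers by taking the maximum response."""
    return max(float(w @ sample + b) for w, b in clfs)
```

Samples near any positive exemplar then score positive, while samples near the negative cloud score negative.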
Wu, Zhiyuan; Yuan, Hong; Zhang, Xinju; Liu, Weiwei; Xu, Jinhua; Zhang, Wei; Guan, Ming
2011-01-01
JAK2 V617F, a somatic point mutation that leads to constitutive JAK2 phosphorylation and kinase activation, has been incorporated into the WHO classification and diagnostic criteria of myeloid neoplasms. Although various approaches such as restriction fragment length polymorphism, the amplification refractory mutation system and real-time PCR have been developed for its detection, a generic, rapid, closed-tube method that can be utilized on routine genetic testing instruments with stability and cost-efficiency has not been described. To establish the methodology, asymmetric PCR for detection of JAK2 V617F with a 3'-blocked unlabeled probe, a saturating dye and subsequent melting curve analysis was performed on a Rotor-Gene® Q real-time cycler. We compared this method to the existing amplification refractory mutation systems and direct sequencing. The broad applicability of this unlabeled-probe melting method was then validated on three diverse real-time systems (Roche LightCycler® 480, Applied Biosystems ABI® 7500 and Eppendorf Mastercycler® ep realplex) in two different laboratories. The unlabeled-probe melting analysis could genotype the JAK2 V617F mutation explicitly, with a detection sensitivity of 3% mutation load. At a level of 5% mutation load, the intra- and inter-assay CVs of the probe-DNA heteroduplex (mutation/wild type) were 3.14%/3.55% and 1.72%/1.29%, respectively. The method could equally discriminate mutant from wild-type samples on the other three real-time instruments. With its high detection sensitivity, unlabeled-probe melting curve analysis is better suited to disclosing the JAK2 V617F mutation than conventional methodologies. Verified by favorable inter- and intra-assay reproducibility, unlabeled-probe melting analysis provides a generic mutation detection alternative for real-time instruments.
Zhao, Xiaowei; Ning, Qiao; Chai, Haiting; Ma, Zhiqiang
2015-06-07
As a widespread type of protein post-translational modification (PTM), succinylation plays an important role in regulating protein conformation, function and physicochemical properties. Compared with labor-intensive and time-consuming experimental approaches, computational prediction of succinylation sites is much more desirable due to its convenience and speed. Currently, numerous computational models have been developed to identify PTM sites through various types of two-class machine learning algorithms. These methods require both positive and negative samples for training. However, designating the negative samples of PTMs is difficult, and if not done properly it can dramatically affect the performance of computational models. In this work, we therefore implemented the first application of the positive-samples-only learning (PSoL) algorithm to the succinylation site prediction problem; PSoL is a special class of semi-supervised machine learning that uses positive samples and unlabeled samples to train the model. Meanwhile, we propose a novel computational predictor of succinylation sites called SucPred (succinylation site predictor) that uses multiple feature encoding schemes. The SucPred predictor obtained promising results, with an accuracy of 88.65% using 5-fold cross-validation on the training dataset and an accuracy of 84.40% on the independent testing dataset, demonstrating that the positive-samples-only learning algorithm presented here is particularly useful for the identification of protein succinylation sites. Moreover, the algorithm can easily be applied to build predictors for other types of PTM sites. A web server for predicting succinylation sites was developed and is freely accessible at http://59.73.198.144:8088/SucPred/. Copyright © 2015 Elsevier Ltd. All rights reserved.
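A common first step of positive-samples-only learning is to mine "reliable negatives" from the unlabeled pool before training a two-class model. The sketch below uses a simple distance-to-centroid heuristic; this heuristic and the function name are assumptions for illustration, not necessarily the PSoL variant used by SucPred.

```python
import numpy as np

def initial_negatives(positives, unlabeled, k):
    """PSoL-style bootstrap (simplified): treat the k unlabeled samples
    farthest from the positive centroid as reliable negatives. Later
    iterations would retrain a classifier and expand this negative set."""
    centroid = positives.mean(axis=0)
    dists = np.linalg.norm(unlabeled - centroid, axis=1)
    return np.argsort(dists)[::-1][:k]
```

An unlabeled sample sitting far from every positive example is the safest candidate negative, which is exactly what the heuristic returns first.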
Quasi-Supervised Scoring of Human Sleep in Polysomnograms Using Augmented Input Variables
Yaghouby, Farid; Sunderam, Sridhar
2015-01-01
The limitations of manual sleep scoring make computerized methods highly desirable. Scoring errors can arise from human rater uncertainty or inter-rater variability. Sleep scoring algorithms either come as supervised classifiers that need scored samples of each state to be trained, or as unsupervised classifiers that use heuristics or structural clues in unscored data to define states. We propose a quasi-supervised classifier that models observations in an unsupervised manner but mimics a human rater wherever training scores are available. EEG, EMG, and EOG features were extracted in 30s epochs from human-scored polysomnograms recorded from 42 healthy human subjects (18 to 79 years) and archived in an anonymized, publicly accessible database. Hypnograms were modified so that: 1. Some states are scored but not others; 2. Samples of all states are scored but not for transitional epochs; and 3. Two raters with 67% agreement are simulated. A framework for quasi-supervised classification was devised in which unsupervised statistical models (specifically, Gaussian mixtures and hidden Markov models) are estimated from unlabeled training data, but the training samples are augmented with variables whose values depend on available scores. Classifiers were fitted to signal features incorporating partial scores, and used to predict scores for complete recordings. Performance was assessed using Cohen's kappa statistic. The quasi-supervised classifier performed significantly better than an unsupervised model and sometimes as well as a completely supervised model despite receiving only partial scores. The quasi-supervised algorithm addresses the need for classifiers that mimic scoring patterns of human raters while compensating for their limitations. PMID:25679475
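The augmented-input idea can be sketched as a simple feature-augmentation step: append a score-dependent column so that an unsupervised mixture model fitted to the augmented data is pulled toward the rater's labels wherever they exist. The neutral fill value and single-column layout are assumptions; the paper's augmentation scheme is more elaborate.

```python
import numpy as np

def augment_with_scores(features, scores, neutral=0.0):
    """Append a column whose value depends on the available human score;
    epochs without a score (None) receive a neutral value. A GMM or HMM
    fitted to the augmented matrix then clusters scored epochs toward
    their rater-assigned states."""
    col = np.full((features.shape[0], 1), neutral)
    for i, s in enumerate(scores):
        if s is not None:
            col[i, 0] = s
    return np.hstack([features, col])
```

Partially scored recordings thus feed into an otherwise unsupervised estimation procedure without any change to the mixture-model code itself.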
Co-Labeling for Multi-View Weakly Labeled Learning.
Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W
2016-06-01
It is often expensive and time-consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data are represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem.
Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi-view datasets clearly demonstrate that our proposed co-labeling approach achieves state-of-the-art performance for various multi-view weakly labeled learning problems including multi-view SSL, multi-view MIL and multi-view ROD.
Exploring Representativeness and Informativeness for Active Learning.
Du, Bo; Wang, Zengmao; Zhang, Lefei; Zhang, Liangpei; Liu, Wei; Shen, Jialie; Tao, Dacheng
2017-01-01
How can we find a general way to choose the most suitable samples for training a classifier, even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role in constructing a refined training set to improve classification performance in a variety of applications, such as text analysis, image recognition and social network modeling. Although combining the representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well only under certain data structures. Can we then find a way to fuse the two active sampling criteria without any assumption on the data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertainty measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed that exploits a radial basis function together with the estimated probabilities to construct the triple measures, and a modified best-versus-second-best strategy to construct the uncertainty measure. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over state-of-the-art active learning algorithms.
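The modified best-versus-second-best (BvSB) strategy builds on the plain BvSB margin, a standard uncertainty measure that can be sketched as follows; the paper's modification and the representativeness triple measures are not shown here.

```python
import numpy as np

def best_versus_second_best(probs):
    """Uncertainty of each unlabeled sample as the gap between its top two
    class probabilities: a small gap means the classifier is torn between
    two classes, i.e. the sample is ambiguous and informative."""
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def query_most_uncertain(probs, n):
    """Indices of the n samples with the smallest BvSB margin."""
    return np.argsort(best_versus_second_best(probs))[:n]
```

A sample predicted (0.4, 0.35, 0.25) has margin 0.05 and is queried before a confidently classified (0.9, 0.05, 0.05) sample with margin 0.85.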
NASA Astrophysics Data System (ADS)
Sánchez, Clara I.; Niemeijer, Meindert; Kockelkorn, Thessa; Abràmoff, Michael D.; van Ginneken, Bram
2009-02-01
Computer-aided Diagnosis (CAD) systems for the automatic identification of abnormalities in retinal images are gaining importance in diabetic retinopathy screening programs. A huge number of retinal images is collected during these programs, providing a starting point for the design of machine learning algorithms. However, manual annotations of retinal images are scarce and expensive to obtain. This paper proposes a dynamic CAD system based on active learning for the automatic identification of hard exudates, cotton wool spots and drusen in retinal images. An uncertainty sampling method is applied to select the samples that need to be labeled by an expert from an unlabeled set of 4000 retinal images. It reduces the number of training samples needed to obtain optimum accuracy by dynamically selecting the most informative samples. Results show that the proposed method increases classification accuracy compared to alternative techniques, achieving an area under the ROC curve of 0.87, 0.82 and 0.78 for the detection of hard exudates, cotton wool spots and drusen, respectively.
Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used for image segmentation, with both supervised and unsupervised approaches. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. Semi-supervised learning, however, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less trouble. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. Afterward, we use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than both the supervised methods and the individual semi-supervised classifiers.
Adaptation for Regularization Operators in Learning Theory
2006-09-10
v_m), with the training sets z̃_m composed of m labelled examples and m̃_m − m ≥ 0 unlabelled examples, and z_{v_m} the validation sets composed of m_{v_m} = ω... which belongs to Λ_m, fulfills the assumptions (6) and (7). Hence, using the assumption on m_{v_m}, we get that for every δ ∈ (0, 1), with probability greater
Wu, Jiong; Zhou, Yan; Zhang, Chun-Yan; Song, Bin-Bin; Wang, Bei-Li; Pan, Bai-Shen; Lou, Wen-Hui; Guo, Wei
2014-01-01
The aim of our study was to establish COLD-PCR combined with an unlabeled-probe HRM approach for detecting KRAS codon 12 and 13 mutations in the plasma-circulating DNA of pancreatic adenocarcinoma (PA) cases as a novel and effective diagnostic technique. We tested the sensitivity and specificity of this approach with dilutions of known mutated cell lines. We screened 36 plasma-circulating DNA samples, along with 24 from a disease control group and 25 from a healthy group, which were subsequently sequenced to confirm mutations. Simultaneously, we tested the specimens using conventional PCR followed by HRM, and then used target-DNA cloning and sequencing for verification. The ROC curves and respective AUCs were calculated for KRAS mutations and/or serum CA 19-9. The sensitivity of Sanger sequencing reached 0.5% with COLD-PCR, versus 20% with conventional PCR; that of COLD-PCR based on unlabeled-probe HRM reached 0.1%. KRAS mutations were identified in 26 of the 36 PA cases (72.2%), while none were detected in the disease control or healthy groups. KRAS mutations were identified in both the 26 PA tissues and the corresponding plasma samples. The AUC of COLD-PCR based on unlabeled-probe HRM was 0.861, which increased to 0.934 when combined with CA 19-9. It was concluded that COLD-PCR with unlabeled-probe HRM can be a sensitive and accurate screening technique to detect KRAS codon 12 and 13 mutations in plasma-circulating DNA for diagnosing and treating PA.
Stanescu, Ana; Caragea, Doina
2015-01-01
Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.
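A bare-bones self-training loop, one of the base procedures ensembled in this work, might look like the sketch below. The centroid base learner and the fixed-size selection are placeholders for illustration; the paper's classifiers use confidence-based selection and dynamically balance the labeled set across the highly imbalanced classes.

```python
import numpy as np

class CentroidClassifier:
    """Tiny stand-in for the base learners in a self-training ensemble."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

def self_train(clf, X_lab, y_lab, X_unlab, rounds=3, per_round=2):
    """Simplified self-training: repeatedly fit, pseudo-label the unlabeled
    pool, and move a few pseudo-labeled samples into the labeled set."""
    X_lab, y_lab, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf.fit(X_lab, y_lab)
        pseudo = clf.predict(pool)
        take = slice(0, per_round)       # confidence-based selection omitted
        X_lab = np.vstack([X_lab, pool[take]])
        y_lab = np.concatenate([y_lab, pseudo[take]])
        pool = pool[per_round:]
    return clf.fit(X_lab, y_lab)
```

Starting from one labeled sample per class, the loop absorbs the nearby unlabeled points and ends with a classifier fitted on the enlarged labeled set.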
CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.
We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances according to her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
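The correct/confirm interaction loop can be caricatured with a toy nearest-centroid grouping model (purely illustrative: CHISSL's actual model and interface are more sophisticated, and the class, method, and group names below are invented):

```python
# Toy stand-in for an interactive grouping model (not CHISSL itself).
import numpy as np

class GroupingModel:
    def __init__(self):
        self.groups = {}                      # group name -> list of vectors

    def confirm(self, group, x):
        """User accepts that instance x belongs to `group`."""
        self.groups.setdefault(group, []).append(np.asarray(x, dtype=float))

    def correct(self, old, new, x):
        """User moves a misplaced instance from `old` to `new`."""
        x = np.asarray(x, dtype=float)
        self.groups[old] = [v for v in self.groups[old]
                            if not np.array_equal(v, x)]
        if not self.groups[old]:
            del self.groups[old]
        self.confirm(new, x)

    def predict(self, x):
        """Place an unlabeled instance with the nearest group centroid."""
        x = np.asarray(x, dtype=float)
        cents = {g: np.mean(v, axis=0) for g, v in self.groups.items()}
        return min(cents, key=lambda g: np.linalg.norm(x - cents[g]))

m = GroupingModel()
m.confirm("digits_like_1", [1.0, 9.0])
m.confirm("digits_like_0", [8.0, 1.0])
m.confirm("digits_like_1", [7.5, 1.5])        # a misplaced instance
m.correct("digits_like_1", "digits_like_0", [7.5, 1.5])
print(m.predict([7.0, 2.0]))
```

Each user action immediately reshapes the model that arranges the remaining unlabeled instances, which is the collaboration mode the abstract contrasts with Active Learning.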
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach which is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems because it can exploit already acquired information while exploring new knowledge in the learning space. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of, or even the entire, data set. This paper addresses the error propagation problem caused by these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. The procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. Particles of the same class cooperate with each other, while particles of different classes compete to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-01-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation with both supervised and unsupervised approaches. Supervised segmentation methods achieve high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain. Moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. However, semi-supervised learning, which uses a small amount of labeled data together with a large amount of unlabeled data, achieves higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that simultaneously uses the results of several semi-supervised classifiers. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. Afterward, we use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both the supervised methods and the individual semi-supervised classifiers. PMID:24098863
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
active learning framework for SVM-based and boosting-based rank learning. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort, and achieves superior performance rapidly with as much as 30% relative improvement over the margin-based sampling
Semi-supervised SVM for individual tree crown species classification
NASA Astrophysics Data System (ADS)
Dalponte, Michele; Ene, Liviu Theodor; Marconcini, Mattia; Gobakken, Terje; Næsset, Erik
2015-12-01
In this paper a novel semi-supervised SVM classifier is presented, specifically developed for tree species classification at individual tree crown (ITC) level. In ITC tree species classification, all the pixels belonging to an ITC should have the same label. This assumption is used in the learning of the proposed semi-supervised SVM classifier (ITC-S3VM). This method exploits the information contained in the unlabeled ITC samples in order to improve the classification accuracy of a standard SVM. The ITC-S3VM method can be easily implemented using freely available software libraries. The datasets used in this study include hyperspectral imagery and laser scanning data acquired over two boreal forest areas characterized by the presence of three information classes (Pine, Spruce, and Broadleaves). The experimental results quantify the effectiveness of the proposed approach, which provides classification accuracies significantly higher (from 2% to above 27%) than those obtained by the standard supervised SVM and by a state-of-the-art semi-supervised SVM (S3VM). Particularly, by reducing the number of training samples (i.e. from 100% to 25%, and from 100% to 5% for the two datasets, respectively) the proposed method still exhibits results comparable to the ones of a supervised SVM trained with the full available training set. This property of the method makes it particularly suitable for practical forest inventory applications in which collection of in situ information can be very expensive both in terms of cost and time.
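The ITC-S3VM algorithm itself is not reproduced here; as a hedged point of comparison, a generic semi-supervised SVM pipeline of the kind it improves on can be sketched with scikit-learn's SelfTrainingClassifier wrapped around a standard SVC (the synthetic three-class data and the 90% hidden-label fraction are invented for illustration, standing in for the Pine/Spruce/Broadleaves problem):

```python
# Generic semi-supervised SVM baseline (not the paper's ITC-S3VM).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
y_semi = y.copy()
hide = np.random.default_rng(0).random(len(y)) < 0.9   # hide 90% of labels
y_semi[hide] = -1                                      # -1 marks unlabeled

model = SelfTrainingClassifier(SVC(probability=True, random_state=0))
model.fit(X, y_semi)                                   # uses unlabeled samples
print(round(model.score(X, y), 2))
```

The paper's contribution is the additional ITC-level constraint, that all pixels of one crown share a label, which this generic baseline lacks.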
Receptor Subtype Alterations: Bases of Neuronal Plasticity and Learning
1990-12-18
oxotremorine-M binding in rabbit anterior thalamus and cingulate cortex increased during the course of discriminative avoidance conditioning (DAC). Since there...anterior thalamus between training-induced neuronal plasticities and changes in oxotremorine-M binding. 3) The concentrations of noradrenaline, serotonin...binding protocols included the following: M1, 3H-pirenzepine; M2, 3H-oxotremorine-M in the presence of unlabeled pirenzepine; GABAA, 3H-muscimol; M, and
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
Antineoplastic agents: comparing off-label uses among authoritative drug compendia.
Thompson, D F; Keefe, C C
1993-07-01
Unlabeled indications for antineoplastic drugs listed in the American Hospital Formulary Service-Drug Information, United States Pharmacopeia Dispensing Information-Drug Information for the Health Care Professional (Volume 1), and the American Medical Association-Drug Evaluations were evaluated. Specifically, the total number of unlabeled and unique uses (ie, not listed in either of the other two compendia) of 35 antineoplastic drugs were compared. Using a nonparametric analysis of variance to evaluate the results, significant differences in both the average unlabeled indications per drug and unique unlabeled indications per drug were found among the resources checked. The implications of the study results on reimbursement by private insurance carriers of unlabeled antineoplastic drug use are discussed in this article.
A recurrent neural network for classification of unevenly sampled variable stars
NASA Astrophysics Data System (ADS)
Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan
2018-02-01
Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time (`light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints1-5. With nightly observations of millions of variable stars and transients from upcoming surveys4,6, efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data (`features')7. Here, we present a novel unsupervised autoencoding recurrent neural network8 that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.
Watson, Robert A
2014-08-01
To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
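The Lempel-Ziv complexity metric used above can be sketched as a simple incremental phrase-counting parse (an LZ78-style variant; the study's exact formulation, symbolization of the motion data, and calibrated threshold are not specified here, so the binary strings below are illustrative stand-ins for "expert-like" regular and "novice-like" irregular symbol streams):

```python
# LZ78-style complexity: count distinct phrases in a left-to-right parse.
import random

def lz_complexity(s: str) -> int:
    phrases, phrase = set(), ""
    for ch in s:
        phrase += ch
        if phrase not in phrases:   # new phrase: record it, start a fresh one
            phrases.add(phrase)
            phrase = ""
    if phrase:                      # leftover partial phrase at the end
        phrases.add(phrase)
    return len(phrases)

regular = "01" * 500                # highly repetitive symbol stream
random.seed(1)
irregular = "".join(random.choice("01") for _ in range(1000))

print(lz_complexity(regular), lz_complexity(irregular))
```

A repetitive stream parses into far fewer phrases than an irregular one of the same length, which is the property that lets a single complexity threshold separate the two groups.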
Deering, Cassandra E; Tadjiki, Soheyl; Assemi, Shoeleh; Miller, Jan D; Yost, Garold S; Veranth, John M
2008-01-01
A novel methodology to detect unlabeled inorganic nanoparticles was experimentally demonstrated using a mixture of nano-sized (70 nm) and submicron (250 nm) silicon dioxide particles added to mammalian tissue. The size and concentration of environmentally relevant inorganic particles in a tissue sample can be determined by a procedure consisting of matrix digestion, particle recovery by centrifugation, size separation by sedimentation field-flow fractionation (SdFFF), and detection by light scattering. Background Laboratory nanoparticles that have been labeled by fluorescence, radioactivity, or rare elements have provided important information regarding nanoparticle uptake and translocation, but most nanomaterials that are commercially produced for industrial and consumer applications do not contain a specific label. Methods Both nitric acid digestion and enzyme digestion were tested with liver and lung tissue as well as with cultured cells. Tissue processing with a mixture of protease enzymes is preferred because it is applicable to a wide range of particle compositions. Samples were visualized via fluorescence microscopy and transmission electron microscopy to validate the SdFFF results. We describe in detail the tissue preparation procedures and discuss method sensitivity compared to reported levels of nanoparticles in vivo. Conclusion Tissue digestion and SdFFF complement existing techniques by precisely identifying unlabeled metal oxide nanoparticles and unambiguously distinguishing nanoparticles (diameter<100 nm) from both soluble compounds and from larger particles of the same nominal elemental composition. This is an exciting capability that can facilitate epidemiological and toxicological research on natural and manufactured nanomaterials. PMID:19055780
Classification of foods by transferring knowledge from ImageNet dataset
NASA Astrophysics Data System (ADS)
Heravi, Elnaz J.; Aghdam, Hamed H.; Puig, Domenec
2017-03-01
Automatic classification of foods is a way to control food intake and tackle obesity. However, it is a challenging problem since foods are highly deformable and complex objects. Results on the ImageNet dataset have revealed that Convolutional Neural Networks have great expressive power for modeling natural objects. Nonetheless, it is not trivial to train a ConvNet from scratch for classification of foods. This is due to the fact that ConvNets require large datasets and, to our knowledge, there is no large public dataset of food for this purpose. An alternative solution is to transfer knowledge from trained ConvNets to the domain of foods. In this work, we study how transferable state-of-the-art ConvNets are to the task of food classification. We also propose a method for transferring knowledge from a bigger ConvNet to a smaller ConvNet while keeping its accuracy similar to that of the bigger ConvNet. Our experiments on the UECFood256 dataset show that GoogLeNet, VGG and residual networks produce comparable results if we start transferring knowledge from an appropriate layer. In addition, we show that our method is able to effectively transfer knowledge to the smaller ConvNet using unlabeled samples.
Masunaga, S; Sakurai, Y; Tanaka, H; Hirayama, R; Matsumoto, Y; Uzawa, A; Suzuki, M; Kondo, N; Narabayashi, M; Maruhashi, A; Ono, K
2013-01-01
Objective To detect the radiosensitivity of intratumour quiescent (Q) cells unlabelled with pimonidazole to accelerated carbon ion beams and the boron neutron capture reaction (BNCR). Methods EL4 tumour-bearing C57BL/J mice received 5-bromo-2'-deoxyuridine (BrdU) continuously to label all intratumour proliferating (P) cells. After the administration of pimonidazole, tumours were irradiated with γ-rays, accelerated carbon ion beams or reactor neutron beams with the prior administration of a 10B-carrier. Responses of intratumour Q and total (P+Q) cell populations were assessed based on frequencies of micronucleation and apoptosis using immunofluorescence staining for BrdU. The response of pimonidazole-unlabelled tumour cells was assessed by means of apoptosis frequency using immunofluorescence staining for pimonidazole. Results Following γ-ray irradiation, the pimonidazole-unlabelled tumour cell fraction showed significantly enhanced radiosensitivity compared with the whole tumour cell fraction, more remarkably in the Q than total cell populations. However, a significantly greater decrease in radiosensitivity in the pimonidazole-unlabelled cell fraction, evaluated using a delayed assay or a decrease in radiation dose rate, was more clearly observed among the Q than total cells. These changes in radiosensitivity were suppressed following carbon ion beam and neutron beam-only irradiation. In the BNCR, the use of a 10B-carrier, especially L-para-boronophenylalanine-10B, enhanced the sensitivity of the pimonidazole-unlabelled cells more clearly in the Q than total cells. Conclusion The radiosensitivity of the pimonidazole-unlabelled cell fraction depends on the quality of radiation delivered and characteristics of the 10B-carrier used in the BNCR. Advances in knowledge The pimonidazole-unlabelled subfraction of Q tumour cells may be a critical target in tumour control. PMID:23255546
Masunaga, S; Sakurai, Y; Tanaka, H; Hirayama, R; Matsumoto, Y; Uzawa, A; Suzuki, M; Kondo, N; Narabayashi, M; Maruhashi, A; Ono, K
2013-01-01
To detect the radiosensitivity of intratumour quiescent (Q) cells unlabelled with pimonidazole to accelerated carbon ion beams and the boron neutron capture reaction (BNCR). EL4 tumour-bearing C57BL/J mice received 5-bromo-2'-deoxyuridine (BrdU) continuously to label all intratumour proliferating (P) cells. After the administration of pimonidazole, tumours were irradiated with γ-rays, accelerated carbon ion beams or reactor neutron beams with the prior administration of a (10)B-carrier. Responses of intratumour Q and total (P+Q) cell populations were assessed based on frequencies of micronucleation and apoptosis using immunofluorescence staining for BrdU. The response of pimonidazole-unlabelled tumour cells was assessed by means of apoptosis frequency using immunofluorescence staining for pimonidazole. Following γ-ray irradiation, the pimonidazole-unlabelled tumour cell fraction showed significantly enhanced radiosensitivity compared with the whole tumour cell fraction, more remarkably in the Q than total cell populations. However, a significantly greater decrease in radiosensitivity in the pimonidazole-unlabelled cell fraction, evaluated using a delayed assay or a decrease in radiation dose rate, was more clearly observed among the Q than total cells. These changes in radiosensitivity were suppressed following carbon ion beam and neutron beam-only irradiation. In the BNCR, the use of a (10)B-carrier, especially L-para-boronophenylalanine-(10)B, enhanced the sensitivity of the pimonidazole-unlabelled cells more clearly in the Q than total cells. The radiosensitivity of the pimonidazole-unlabelled cell fraction depends on the quality of radiation delivered and characteristics of the (10)B-carrier used in the BNCR. The pimonidazole-unlabelled subfraction of Q tumour cells may be a critical target in tumour control.
Pope, Lizzy; Wolf, Randi L
2012-01-01
This pilot study examined whether informing children of the presence of vegetables in select snack food items alters taste preference. A random sample of 68 elementary and middle school children tasted identical pairs of 3 snack food items containing vegetables. In each pair, 1 sample's label included the food's vegetable (eg, broccoli gingerbread spice cake), and 1 sample's label did not (eg, gingerbread spice cake). Participants reported whether the samples tasted the same, or whether they preferred one sample. Frequency of vegetable consumption was also assessed. Taste preferences did not differ for the labeled versus the unlabeled sample of zucchini chocolate chip bread, χ²(2, n = 68) = 3.21, P = .20, or broccoli gingerbread spice cake, χ²(2, n = 68) = 2.15, P = .34. However, students preferred the unlabeled cookies (ie, chocolate chip cookies) over the vegetable-labeled version (ie, chickpea chocolate chip cookies), χ²(2, n = 68) = 9.21, P = .01. Chickpeas were consumed less frequently (81% had not tried in past year) as compared to zucchini and broccoli. Informing children of the presence of vegetables hidden within snack food may or may not alter taste preference and may depend on the frequency of prior exposure to the vegetable. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Zhou, Jie; Coles, Lisa D; Kartha, Reena V; Nash, Nardina; Mishra, Usha; Lund, Troy C; Cloyd, James C
2015-08-01
There is an increasing interest in using N-acetylcysteine (NAC) as a treatment for neurodegenerative disorders to increase glutathione (GSH) levels and its redox status. The purpose of this study was to characterize the biosynthesis of NAC to GSH using a novel stable isotope-labeled technique, and investigate the pharmacodynamics of NAC in vivo. Female wild-type mice were given a single intravenous bolus dose of 150 mg kg(-1) stable-labeled NAC. Plasma, red blood cells (RBC), and brain tissues were collected at predesignated time points. Stable-labeled NAC and its metabolite GSH (both labeled and unlabeled forms) were quantified in blood and brain samples. Molar ratios of the reduced and oxidized forms of GSH (GSH divided by glutathione disulfide, redox ratio) were also determined. The elimination phase half-life of NAC was approximately 34 min. Both labeled and unlabeled GSH in RBC were found to increase; however, the area under the curve above baseline (AUCb0-280) of labeled GSH was only 1% of the unlabeled form. These data indicate that NAC is not a direct precursor of GSH. In addition, NAC has prolonged effects in brain even when the drug has been eliminated from systemic circulation. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Zhou, Feng; Noor, M Omair; Krull, Ulrich J
2015-09-24
Bioassays based on cellulose paper substrates are gaining increasing popularity for the development of field-portable and low-cost diagnostic applications. Herein, we report a paper-based nucleic acid hybridization assay using immobilized upconversion nanoparticles (UCNPs) as donors in luminescence resonance energy transfer (LRET). UCNPs with intense green emission served as donors with Cy3 dye as the acceptor. The avidin-functionalized UCNPs were immobilized on cellulose paper and subsequently bioconjugated to biotinylated oligonucleotide probes. Introduction of unlabeled oligonucleotide targets resulted in the formation of probe-target duplexes. A subsequent hybridization of a Cy3-labeled reporter with the remaining single-stranded portion of the target brought the Cy3 dye into close proximity to the UCNPs to trigger LRET-sensitized emission from the acceptor dye. The hybridization assays provided a limit of detection (LOD) of 146.0 fmol and exhibited selectivity for one base pair mismatch discrimination. The assay was functional even in undiluted serum samples. This work embodies important progress in developing DNA hybridization assays on paper. Detection of unlabeled targets is achieved using UCNPs as LRET donors, with minimization of background signal from paper substrates owing to the implementation of low-energy near-infrared (NIR) excitation.
Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J
2016-10-01
Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need of a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
Crowdsourcing reproducible seizure forecasting in human and canine epilepsy
Wagenaar, Joost; Abbot, Drew; Adkins, Phillip; Bosshard, Simone C.; Chen, Min; Tieng, Quang M.; He, Jialune; Muñoz-Almaraz, F. J.; Botella-Rocamora, Paloma; Pardo, Juan; Zamora-Martinez, Francisco; Hills, Michael; Wu, Wei; Korshunova, Iryna; Cukierski, Will; Vite, Charles; Patterson, Edward E.; Litt, Brian; Worrell, Gregory A.
2016-01-01
See Mormann and Andrzejak (doi:10.1093/brain/aww091) for a scientific commentary on this article. Accurate forecasting of epileptic seizures has the potential to transform clinical epilepsy care. However, progress toward reliable seizure forecasting has been hampered by lack of open access to long duration recordings with an adequate number of seizures for investigators to rigorously compare algorithms and results. A seizure forecasting competition was conducted on kaggle.com using open access chronic ambulatory intracranial electroencephalography from five canines with naturally occurring epilepsy and two humans undergoing prolonged wide bandwidth intracranial electroencephalographic monitoring. Data were provided to participants as 10-min interictal and preictal clips, with approximately half of the 60 GB data bundle labelled (interictal/preictal) for algorithm training and half unlabelled for evaluation. The contestants developed custom algorithms and uploaded their classifications (interictal/preictal) for the unknown testing data, and a randomly selected 40% of data segments were scored and results broadcasted on a public leader board. The contest ran from August to November 2014, and 654 participants submitted 17 856 classifications of the unlabelled test data. The top performing entry scored 0.84 area under the classification curve. Following the contest, additional held-out unlabelled data clips were provided to the top 10 participants and they submitted classifications for the new unseen data. The resulting area under the classification curves were well above chance forecasting, but did show a mean 6.54 ± 2.45% (min, max: 0.30, 20.2) decline in performance. The kaggle.com model using open access data and algorithms generated reproducible research that advanced seizure forecasting. 
The overall performance from multiple contestants on unseen data was better than a random predictor, and demonstrates the feasibility of seizure forecasting in canine and human epilepsy. PMID:27034258
Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.
Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie
2016-07-01
Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images.
Christiansen, Eric M; Yang, Samuel J; Ando, D Michael; Javaherian, Ashkan; Skibinski, Gaia; Lipnick, Scott; Mount, Elliot; O'Neil, Alison; Shah, Kevan; Lee, Alicia K; Goyal, Piyush; Fedus, William; Poplin, Ryan; Esteva, Andre; Berndl, Marc; Rubin, Lee L; Nelson, Philip; Finkbeiner, Steven
2018-04-19
Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call "in silico labeling" (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire. Copyright © 2018 Elsevier Inc. All rights reserved.
Aydogan, Bulent; Li, Ji; Rajh, Tijana; Chaudhary, Ahmed; Chmura, Steven J; Pelizzari, Charles; Wietholt, Christian; Kurtoglu, Metin; Redmond, Peter
2010-10-01
To study the feasibility of using 2-deoxy-D-glucose (2-DG)-labeled gold nanoparticle (AuNP-DG) as a computed tomography (CT) contrast agent with tumor targeting capability through in vitro experiments. Gold nanoparticles (AuNP) were fabricated and were conjugated with 2-deoxy-D-glucose. The human alveolar epithelial cancer cell line, A-549, was chosen for the in vitro cellular uptake assay. Two groups of cell samples were incubated with the AuNP-DG and the unlabeled AuNP, respectively. Following the incubation, the cells were washed with sterile PBS to remove the excess gold nanoparticles and spun to cell pellets using a centrifuge. The cell pellets were imaged using a microCT scanner immediately after the centrifugation. The reconstructed CT images were analyzed using a commercial software package. Significant contrast enhancement in the cell samples incubated with the AuNP-DG with respect to the cell samples incubated with the unlabeled AuNP was observed in multiple CT slices. Results from this study demonstrate enhanced uptake of 2-DG-labeled gold nanoparticle by cancer cells in vitro and warrant further experiments to study the exact molecular mechanism by which the AuNP-DG is internalized and retained in the tumor cells.
Jiang, Yizhang; Wu, Dongrui; Deng, Zhaohong; Qian, Pengjiang; Wang, Jun; Wang, Guanjin; Chung, Fu-Lai; Choi, Kup-Sze; Wang, Shitong
2017-12-01
Recognition of epileptic seizures from offline EEG signals is very important in the clinical diagnosis of epilepsy. Compared with manual labeling of EEG signals by doctors, machine learning approaches can be faster and more consistent. However, the classification accuracy is usually not satisfactory, for two main reasons: the distributions of the data used for training and testing may be different, and the amount of training data may not be enough. In addition, most machine learning approaches generate black-box models that are difficult to interpret. In this paper, we integrate transductive transfer learning, semi-supervised learning and a TSK fuzzy system to tackle these three problems. More specifically, we use transfer learning to reduce the discrepancy in data distribution between the training and testing data, employ semi-supervised learning to use the unlabeled testing data to remedy the shortage of training data, and adopt a TSK fuzzy system to increase model interpretability. Two learning algorithms are proposed to train the system. Our experimental results show that the proposed approaches can achieve better performance than many state-of-the-art seizure classification algorithms.
PCA feature extraction for change detection in multidimensional unlabeled data.
Kuncheva, Ludmila I; Faithfull, William J
2014-01-01
When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
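The retained-low-variance idea can be sketched numerically. The snippet below is illustrative only: it scores change as the mean squared deviation in the low-variance subspace rather than the semiparametric log-likelihood criterion used in the paper, and all data are synthetic.

```python
import numpy as np

def low_variance_projection(reference, n_keep):
    """Fit PCA on a reference window and return a projector onto the
    n_keep components with the LOWEST variance (most change-sensitive)."""
    mean = reference.mean(axis=0)
    # Rows of vt are principal directions, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(reference - mean, full_matrices=False)
    low_var_dirs = vt[-n_keep:]          # last rows = smallest variance
    return mean, low_var_dirs

def change_score(window, mean, dirs):
    """Mean squared deviation of the window in the low-variance subspace.
    A change that excites these 'quiet' directions raises the score."""
    proj = (window - mean) @ dirs.T
    return float((proj ** 2).mean())

rng = np.random.default_rng(0)
scales = np.array([3.0, 2.0, 1.0, 0.3, 0.1])
reference = rng.normal(size=(500, 5)) * scales
mean, dirs = low_variance_projection(reference, n_keep=2)

same = rng.normal(size=(200, 5)) * scales
shifted = same + np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # hits the quiet directions

print(change_score(same, mean, dirs) < change_score(shifted, mean, dirs))  # True
```

A shift confined to the low-variance directions barely moves the raw data's overall spread, which is exactly why the paper argues these components are the ones worth monitoring.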
Unlabeled oligonucleotides as internal temperature controls for genotyping by amplicon melting.
Seipp, Michael T; Durtschi, Jacob D; Liew, Michael A; Williams, Jamie; Damjanovich, Kristy; Pont-Kingdon, Genevieve; Lyon, Elaine; Voelkerding, Karl V; Wittwer, Carl T
2007-07-01
Amplicon melting is a closed-tube method for genotyping that does not require probes, real-time analysis, or allele-specific polymerase chain reaction. However, correct differentiation of homozygous mutant and wild-type samples by melting temperature (Tm) requires high-resolution melting and closely controlled reaction conditions. When three different DNA extraction methods were used to isolate DNA from whole blood, amplicon Tm differences of 0.03 to 0.39 degrees C attributable to the extractions were observed. To correct for solution chemistry differences between samples, complementary unlabeled oligonucleotides were included as internal temperature controls to shift and scale the temperature axis of derivative melting plots. This adjustment was applied to a duplex amplicon melting assay for the methylenetetrahydrofolate reductase variants 1298A>C and 677C>T. High- and low-temperature controls bracketing the amplicon melting region decreased the Tm SD within homozygous genotypes by 47 to 82%. The amplicon melting assay was 100% concordant to an adjacent hybridization probe (HybProbe) melting assay when temperature controls were included, whereas a 3% error rate was observed without temperature correction. In conclusion, internal temperature controls increase the accuracy of genotyping by high-resolution amplicon melting and should also improve results on lower resolution instruments.
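The shift-and-scale adjustment described above amounts to an affine correction of the temperature axis, anchored by the two control Tm values. A minimal sketch with hypothetical numbers (not taken from the study):

```python
def temperature_correction(observed_low, observed_high, expected_low, expected_high):
    """Return an affine map t -> a*t + b that sends the observed control
    melting temperatures onto their expected values (shift and scale)."""
    a = (expected_high - expected_low) / (observed_high - observed_low)
    b = expected_low - a * observed_low
    return lambda t: a * t + b

# Hypothetical numbers: controls expected at 65.0 and 85.0 degrees C were
# observed at 64.7 and 84.9 because of extraction-dependent solution chemistry.
correct = temperature_correction(64.7, 84.9, 65.0, 85.0)

print(round(correct(64.7), 2))  # 65.0 (controls land on their expected Tm)
print(round(correct(75.0), 2))  # 75.2 (amplicon Tm moved onto the common axis)
```

Applying the same map to every point of the derivative melting plot places all samples on a common axis, so genotype differences are no longer confounded with extraction chemistry.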
Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information
NASA Astrophysics Data System (ADS)
Jamshidpour, N.; Homayouni, S.; Safari, A.
2017-09-01
Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues that degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method that uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples per class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL for the AVIRIS Indian Pines and Pavia University data sets, respectively.
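A toy sketch of the joint-graph idea follows. The merging rule (a convex combination of the two Laplacians, weighted by beta) and the harmonic label-propagation step are common choices in graph-based SSL, assumed here because the abstract does not give the exact formulas:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def joint_graph_ssl(W_spec, W_spat, y, labeled, beta=0.5, reg=1e-6):
    """Merge spectral and spatial Laplacians into a weighted joint graph,
    then propagate labels by minimizing f^T L f with labeled pixels clamped.
    beta weights the spatial graph against the spectral one (an assumed
    convex combination, not necessarily the paper's exact merging rule)."""
    L = (1 - beta) * laplacian(W_spec) + beta * laplacian(W_spat)
    n = len(y)
    unlabeled = [i for i in range(n) if i not in labeled]
    # Harmonic solution: f_u = -L_uu^{-1} L_ul y_l
    Luu = L[np.ix_(unlabeled, unlabeled)] + reg * np.eye(len(unlabeled))
    Lul = L[np.ix_(unlabeled, labeled)]
    f = np.zeros(n)
    f[labeled] = y[labeled]
    f[unlabeled] = -np.linalg.solve(Luu, Lul @ y[labeled])
    return f

# Toy 4-pixel problem: pixels 0,1 belong to class +1, pixels 2,3 to class -1.
W_spec = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
W_spat = W_spec.copy()   # here spatial neighbours happen to agree with spectral ones
y = np.array([1.0, 0.0, -1.0, 0.0])
f = joint_graph_ssl(W_spec, W_spat, y, labeled=[0, 2])
print(f[1] > 0 and f[3] < 0)  # True: unlabeled pixels follow their graph neighbours
```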
Pacharawongsakda, Eakasit; Theeramunkong, Thanaruk
2013-12-01
Predicting protein subcellular location is one of the major challenges in bioinformatics, since such knowledge helps us understand protein functions and enables us to select targeted proteins during the drug discovery process. While many computational techniques have been proposed to improve predictive performance for protein subcellular location, they have several shortcomings. In this work, we propose a method to solve three main issues in such techniques: i) manipulation of multiplex proteins, which may exist in or move between multiple cellular compartments; ii) handling of high dimensionality in the input and output spaces; and iii) the requirement of sufficient labeled data for model training. Toward these issues, this work presents a new computational method for predicting proteins which have either single or multiple locations. The proposed technique, named iFLAST-CORE, incorporates dimensionality reduction in the feature and label spaces with the co-training paradigm for semi-supervised multi-label classification. For this purpose, Singular Value Decomposition (SVD) is applied to transform the high-dimensional feature and label spaces into lower-dimensional spaces. After that, due to the limited labeled data, co-training regression makes use of unlabeled data by predicting the target values in the lower-dimensional spaces of the unlabeled data. In the last step, the components of the SVD are used to project labels in the lower-dimensional space back to those in the original space, and an adaptive threshold is used to map numeric values to binary values for label determination. A set of experiments on viral proteins and gram-negative bacterial proteins shows that our proposed method improves classification performance in terms of various evaluation metrics such as Aiming (or Precision), Coverage (or Recall) and macro F-measure, compared to the traditional method that uses only labeled data.
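The SVD compression and decoding steps can be sketched as follows; the helper names and the fixed threshold are illustrative, not taken from the paper (which adapts the threshold per label and inserts co-training regression between the two steps):

```python
import numpy as np

def svd_reduce(Y, k):
    """Project a binary label matrix Y (samples x labels) into a
    k-dimensional latent space; return the latent codes and the matrix
    that maps codes back to label scores."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    codes = U[:, :k] * s[:k]         # low-dimensional label representation
    return codes, Vt[:k]

def decode(codes, Vt_k, threshold=0.5):
    """Map latent codes back to the original label space, then binarize
    with a threshold (illustrative fixed value; the paper adapts it)."""
    return (codes @ Vt_k >= threshold).astype(int)

# Toy label matrix of rank 2: two samples share labels {0, 2}, two share {1}.
Y = np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0], [0, 1, 0]])
codes, Vt_k = svd_reduce(Y, k=2)
print((decode(codes, Vt_k) == Y).all())  # True: k=2 preserves this label structure
```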
Lien, Stina K; Kvitvang, Hans Fredrik Nyvold; Bruheim, Per
2012-07-20
GC-MS analysis of silylated metabolites is a sensitive method that covers important metabolite groups such as sugars, amino acids and non-amino organic acids, and it has become one of the most important analytical methods for exploring the metabolome. Absolute quantitative GC-MS analysis of silylated metabolites poses a challenge as different metabolites have different derivatization kinetics and as their silyl-derivates have varying stability. This report describes the development of a targeted GC-MS/MS method for quantification of metabolites. Internal standards for each individual metabolite were obtained by derivatization of a mixture of standards with deuterated N-methyl-N-trimethylsilyltrifluoroacetamide (d9-MSTFA), and spiking this solution into MSTFA derivatized samples prior to GC-MS/MS analysis. The derivatization and spiking protocol needed optimization to ensure that the behaviour of labelled compound responses in the spiked sample correctly reflected the behaviour of unlabelled compound responses. Using labelled and unlabelled MSTFA in this way enabled normalization of metabolite responses by the response of their deuterated counterpart (i.e. individual correction). Such individual correction of metabolite responses reproducibly resulted in significantly higher precision than traditional data correction strategies when tested on samples both with and without serum and urine matrices. The developed method is thus a valuable contribution to the field of absolute quantitative metabolomics. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Smith, Jonell N.; V. White, Gregory; White, Michael I.; Bernstein, Robert; Hochrein, James M.
2012-09-01
Aged materials, such as polymers, can exhibit modifications to their chemical structure and physical properties, which may render the material ineffective for its intended purpose. Isotopic labeling was used to characterize low-molecular weight volatile thermal-oxidative degradation products of nylon 6.6 in an effort to better understand and predict changes in the aged polymer. Headspace gas from aged (up to 243 d at 138 °C) nylon 6.6 monomers (adipic acid and 1,6-hexanediamine) and polymer were preconcentrated, separated, and detected using cryofocusing gas chromatography mass spectrometry (cryo-GC/MS). Observations regarding the relative concentrations observed in each chromatographic peak with respect to aging time were used in conjunction with mass spectra for samples aged under ambient air to determine the presence and identity of 18 degradation products. A comparison of the National Institute of Standards and Technology (NIST) library, unlabeled, and isotopically labeled mass spectra (C-13 or N-15) and expected fragmentation pathways of each degradation product were used to identify the location of isotopically labeled atoms within the product's chemical structure, which can later be used to determine the exact origin of the species. In addition, observations for unlabeled nylon 6.6 aged in an O-18 enriched atmosphere were used to determine if the source of oxygen in the applicable degradation products was from the gaseous environment or the polymer. Approximations for relative isotopic ratios of unlabeled to labeled products are reported, where appropriate.
Kernel Extended Real-Valued Negative Selection Algorithm (KERNSA)
2013-06-01
… are discarded, which is similar to how T-cells function in the BIS. An unlabeled, future sample is considered non-self if any detectors match it. …
Kim, Jeongyong; Song, Hugeun; Park, Inho; Carlisle, Christine R; Bonin, Keith; Guthold, Martin
2011-03-01
Deep ultraviolet (DUV) microscopy is a fluorescence microscopy technique to image unlabeled proteins via the native fluorescence of some of their amino acids. We constructed a DUV fluorescence microscope, capable of 280 nm wavelength excitation by modifying an inverted optical microscope. Moreover, we integrated a nanomanipulator-controlled micropipette into this instrument for precise delivery of picoliter amounts of fluid to selected regions of the sample. In proof-of-principle experiments, we used this instrument to study, in situ, the effect of a denaturing agent on the autofluorescence intensity of single, unlabeled, electrospun fibrinogen nanofibers. Autofluorescence emission from the nanofibers was excited at 280 nm and detected at ∼350 nm. A denaturant solution was discretely applied to small, select sections of the nanofibers and a clear local reduction in autofluorescence intensity was observed. This reduction is attributed to the dissolution of the fibers and the unfolding of proteins in the fibers. Copyright © 2010 Wiley-Liss, Inc.
Hupert, Michelle; Elfgen, Anne; Schartmann, Elena; Schemmert, Sarah; Buscher, Brigitte; Kutzsche, Janine; Willbold, Dieter; Santiago-Schübel, Beatrix
2018-01-15
During preclinical drug development, a method for quantification of unlabeled compounds in blood plasma samples from treatment or pharmacokinetic studies in mice is required. In the current work, a rapid, specific, sensitive and validated liquid chromatography-mass spectrometry (UHPLC-ESI-QTOF-MS) method was developed for the quantification of the therapeutic compound RD2 in mouse plasma. RD2 is an all-D-enantiomeric peptide developed for the treatment of Alzheimer's disease, a progressive neurodegenerative disease finally leading to dementia. Due to RD2's highly hydrophilic properties, the sample preparation and the chromatographic separation and quantification were very challenging. The chromatographic separation of RD2 and its internal standard was accomplished on an Acquity UPLC BEH C18 column (2.1 × 100 mm, 1.7 μm particle size) within 6.5 min at 50 °C with a flow rate of 0.5 mL/min. Mobile phases consisted of water and acetonitrile with 1% formic acid and 0.025% heptafluorobutyric acid, respectively. Ions were generated by electrospray ionization (ESI) in the positive mode and the peptide was quantified by QTOF-MS. The developed extraction method for RD2 from mouse plasma revealed complete recovery. The linearity of the calibration curve was in the range of 5.3 ng/mL to 265 ng/mL (r² > 0.999) with a lower limit of detection (LLOD) of 2.65 ng/mL and a lower limit of quantification (LLOQ) of 5.3 ng/mL. The intra-day and inter-day accuracy and precision of RD2 in plasma ranged from -0.54% to 2.21% and from 1.97% to 8.18%, respectively. Moreover, no matrix effects were observed and RD2 remained stable in extracted mouse plasma under different conditions. Using this validated bioanalytical method, plasma samples of unlabeled RD2 or placebo treated mice were analyzed. The herein developed UHPLC-ESI-QTOF-MS method is a suitable tool for the quantitative analysis of unlabeled RD2 in plasma samples of treated mice. Copyright © 2017 Elsevier B.V. All rights reserved.
Receptor binding kinetics equations: Derivation using the Laplace transform method.
Hoare, Sam R J
Measuring unlabeled ligand receptor binding kinetics is valuable in optimizing and understanding drug action. Unfortunately, deriving equations for estimating kinetic parameters is challenging because it involves calculus; integration can be a frustrating barrier to the pharmacologist seeking to measure simple rate parameters. Here, a well-known tool for simplifying the derivation, the Laplace transform, is applied to models of receptor-ligand interaction. The method transforms differential equations to a form in which simple algebra can be applied to solve for the variable of interest, for example the concentration of ligand-bound receptor. The goal is to provide instruction using familiar examples, to enable investigators familiar with handling equilibrium binding equations to derive kinetic equations for receptor-ligand interaction. First, the Laplace transform is used to derive the equations for association and dissociation of labeled ligand binding. Next, its use for unlabeled ligand kinetic equations is exemplified by a full derivation of the kinetics of competitive binding equation. Finally, new unlabeled ligand equations are derived using the Laplace transform. These equations incorporate a pre-incubation step with unlabeled or labeled ligand. Four equations for measuring unlabeled ligand kinetics were compared and the two new equations verified by comparison with numerical solution. Importantly, the equations have not been verified with experimental data because no such experiments are evident in the literature. Equations were formatted for use in the curve-fitting program GraphPad Prism 6.0 and fitted to simulated data. This description of the Laplace transform method will enable pharmacologists to derive kinetic equations for their model or experimental paradigm under study. 
Application of the transform will expand the set of equations available for the pharmacologist to measure unlabeled ligand binding kinetics, and for other time-dependent pharmacological activities. Copyright © 2017 Elsevier Inc. All rights reserved.
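The derivation pattern described above can be illustrated on the simplest case, association of a labeled ligand at constant free concentration [L] to a receptor present at total concentration [R]tot, with [RL](0) = 0:

```latex
% Rate equation for one-site association:
\frac{d[RL]}{dt} = k_{\text{on}}[L]\bigl([R]_{\text{tot}} - [RL]\bigr) - k_{\text{off}}[RL]

% Laplace transform, with X(s) = \mathcal{L}\{[RL]\},
% using \mathcal{L}\{1\} = 1/s and \mathcal{L}\{dx/dt\} = sX(s) - x(0):
sX(s) = \frac{k_{\text{on}}[L][R]_{\text{tot}}}{s} - \bigl(k_{\text{on}}[L] + k_{\text{off}}\bigr)X(s)

% Simple algebra isolates X(s), with k_{\text{obs}} = k_{\text{on}}[L] + k_{\text{off}}:
X(s) = \frac{k_{\text{on}}[L][R]_{\text{tot}}}{s\,\bigl(s + k_{\text{obs}}\bigr)}

% Inverse transform (partial fractions) gives the familiar exponential rise:
[RL](t) = \frac{k_{\text{on}}[L][R]_{\text{tot}}}{k_{\text{obs}}}\Bigl(1 - e^{-k_{\text{obs}}t}\Bigr)
```

The point of the method is visible in the middle step: once transformed, the differential equation is solved for X(s) by ordinary algebra, and only the final inversion requires a table lookup or partial fractions.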
Mental illness stigma, secrecy and suicidal ideation.
Oexle, N; Ajdacic-Gross, V; Kilian, R; Müller, M; Rodgers, S; Xu, Z; Rössler, W; Rüsch, N
2017-02-01
Whether the public stigma associated with mental illness negatively affects an individual, largely depends on whether the person has been labelled 'mentally ill'. For labelled individuals concealing mental illness is a common strategy to cope with mental illness stigma, despite secrecy's potential negative consequences. In addition, initial evidence points to a link between stigma and suicidality, but quantitative data from community samples are lacking. Based on previous literature about mental illness stigma and suicidality, as well as about the potential influence of labelling processes and secrecy, a theory-driven model linking perceived mental illness stigma and suicidal ideation by a mediation of secrecy and hopelessness was established. This model was tested separately among labelled and unlabelled persons using data derived from a Swiss cross-sectional population-based study. A large community sample of people with elevated psychiatric symptoms was examined by interviews and self-report, collecting information on perceived stigma, secrecy, hopelessness and suicidal ideation. Participants who had ever used mental health services were considered as labelled 'mentally ill'. A descriptive analysis, stratified logistic regression models and a path analysis testing a three-path mediation effect were conducted. While no significant differences between labelled and unlabelled participants were observed regarding perceived stigma and secrecy, labelled individuals reported significantly higher frequencies of suicidal ideation and feelings of hopelessness. More perceived stigma was associated with suicidal ideation among labelled, but not among unlabelled individuals. In the path analysis, this link was mediated by increased secrecy and hopelessness. Results from this study indicate that among persons labelled 'mentally ill', mental illness stigma is a contributor to suicidal ideation. 
One explanation for this association is the relation perceived stigma has with secrecy, which introduces negative emotional consequences. If our findings are replicated, they would suggest that programmes empowering people in treatment for mental illness to cope with anticipated and experienced discrimination as well as interventions to reduce public stigma within society could improve suicide prevention.
Foster, David J R; Morton, Erin B; Heinkele, Georg; Mürdter, Thomas E; Somogyi, Andrew A
2006-08-01
There is evidence that the apparent oral clearance of rac-methadone is induced during the early phase of methadone maintenance treatment. However, it is not known if this is due to changes in bioavailability or if this phenomenon is stereoselective. This knowledge can be obtained by administering a dose of stable-labeled methadone at selected times during ongoing treatment. Therefore, the authors developed a stereoselective high performance liquid chromatography-atmospheric pressure chemical ionization mass spectrometry assay for the quantification of the enantiomers of methadone and a d(6)-labeled isotopomer. The compounds were quantified in a single assay after liquid-liquid extraction and stereoselective high performance liquid chromatography with atmospheric pressure chemical ionization-mass spectrometry detection. The following ions were monitored: m/z 310.15 for unlabeled methadone; m/z 316.15 for methadone-d(6); and m/z 313.15 for methadone-d(3) (internal standard). Calibration curves ranged from 0.5 to 75 ng/mL for each compound. Extraction recovery was approximately 80% for all analytes, without evidence of differences between the unlabeled and stable-labeled compounds or of concentration dependency. Minor ion promotion was observed (<15%), but this was identical for all analytes including the d(3)-labeled internal standard, with peak area ratios in extracted samples identical to control injections. The isotopomers did not alter each other's ionisation, even at 10:1 concentration ratios, and 10-fold diluted samples were within 10% of the nominal concentration. Assay performance was acceptable, with interassay and intra-assay bias and precision <10% for all compounds, including the upper and lower limits of quantitation. 
In conclusion, the assay was successfully applied to quantify the concentration of the methadone enantiomers of both orally administered unlabeled methadone and an intravenous 5 mg dose of methadone-d(6) in a patient receiving chronic oral methadone maintenance therapy.
Active Learning by Querying Informative and Representative Examples.
Huang, Sheng-Jun; Jin, Rong; Zhou, Zhi-Hua
2014-10-01
Active learning reduces the labeling cost by iteratively selecting the most valuable data to query their labels. It has attracted a lot of interests given the abundance of unlabeled data and the high cost of labeling. Most active learning approaches select either informative or representative unlabeled instances to query their labels, which could significantly limit their performance. Although several active learning algorithms were proposed to combine the two query selection criteria, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this limitation by developing a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an unlabeled instance. Further, by incorporating the correlation among labels, we extend the QUIRE approach to multi-label learning by actively querying instance-label pairs. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of-the-art active learning approaches in both single-label and multi-label learning.
Crowdsourcing reproducible seizure forecasting in human and canine epilepsy.
Brinkmann, Benjamin H; Wagenaar, Joost; Abbot, Drew; Adkins, Phillip; Bosshard, Simone C; Chen, Min; Tieng, Quang M; He, Jialune; Muñoz-Almaraz, F J; Botella-Rocamora, Paloma; Pardo, Juan; Zamora-Martinez, Francisco; Hills, Michael; Wu, Wei; Korshunova, Iryna; Cukierski, Will; Vite, Charles; Patterson, Edward E; Litt, Brian; Worrell, Gregory A
2016-06-01
See Mormann and Andrzejak (doi:10.1093/brain/aww091) for a scientific commentary on this article. Accurate forecasting of epileptic seizures has the potential to transform clinical epilepsy care. However, progress toward reliable seizure forecasting has been hampered by lack of open access to long duration recordings with an adequate number of seizures for investigators to rigorously compare algorithms and results. A seizure forecasting competition was conducted on kaggle.com using open access chronic ambulatory intracranial electroencephalography from five canines with naturally occurring epilepsy and two humans undergoing prolonged wide bandwidth intracranial electroencephalographic monitoring. Data were provided to participants as 10-min interictal and preictal clips, with approximately half of the 60 GB data bundle labelled (interictal/preictal) for algorithm training and half unlabelled for evaluation. The contestants developed custom algorithms and uploaded their classifications (interictal/preictal) for the unknown testing data, and a randomly selected 40% of data segments were scored and the results broadcast on a public leaderboard. The contest ran from August to November 2014, and 654 participants submitted 17 856 classifications of the unlabelled test data. The top performing entry scored 0.84 area under the classification curve. Following the contest, additional held-out unlabelled data clips were provided to the top 10 participants and they submitted classifications for the new unseen data. The resulting areas under the classification curves were well above chance forecasting, but did show a mean 6.54 ± 2.45% (min, max: 0.30, 20.2) decline in performance. The kaggle.com model using open access data and algorithms generated reproducible research that advanced seizure forecasting. 
The overall performance from multiple contestants on unseen data was better than a random predictor, demonstrating the feasibility of seizure forecasting in canine and human epilepsy. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain.
Reducing Annotation Effort Using Generalized Expectation Criteria
2007-11-30
… constraints additionally consider input variables. Active learning is a related problem in which the learner can choose the particular instances to be labeled. In pool-based active learning [Cohn et al., 1994], the learner has access to a set of unlabeled instances, and can choose the instance that … has the highest expected utility according to some metric. A standard pool-based active learning method is uncertainty sampling [Lewis and Catlett …]
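Uncertainty sampling, the standard pool-based method named in the snippet above, can be sketched in a few lines for the binary case (the toy model and pool are illustrative, not from the report):

```python
import math

def uncertainty_sampling(pool, predict_proba):
    """Pool-based uncertainty sampling: query the unlabeled instance whose
    predicted class probability is closest to 0.5 (binary case), i.e. the
    one the current model is least sure about."""
    return max(pool, key=lambda x: -abs(predict_proba(x) - 0.5))

# Toy model: P(y=1 | x) rises with x, so the most uncertain point sits near 0.
probs = lambda x: 1.0 / (1.0 + math.exp(-x))
pool = [-3.0, -0.2, 1.5, 4.0]
print(uncertainty_sampling(pool, probs))  # -0.2 (predicted probability closest to 0.5)
```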
Li, Yanpeng; Hu, Xiaohua; Lin, Hongfei; Yang, Zhihao
2011-01-01
Feature representation is essential to machine learning and text mining. In this paper, we present a feature coupling generalization (FCG) framework for generating new features from unlabeled data. It selects two special types of features, i.e., example-distinguishing features (EDFs) and class-distinguishing features (CDFs), from the original feature set, and then generalizes EDFs into higher-level features based on their coupling degrees with CDFs in unlabeled data. The advantage is that EDFs with extreme sparsity in labeled data can be enriched by their co-occurrences with CDFs in unlabeled data, so that the performance of these low-frequency features can be greatly boosted and new information from unlabeled data can be incorporated. We apply this approach to three tasks in biomedical literature mining: gene named entity recognition (NER), protein-protein interaction extraction (PPIE), and text classification (TC) for gene ontology (GO) annotation. New features are generated from over 20 GB of unlabeled PubMed abstracts. The experimental results on BioCreative 2, the AIMED corpus, and the TREC 2005 Genomics Track show that 1) FCG can make good use of the sparse features ignored by supervised learning; 2) it improves the performance of supervised baselines by 7.8 percent, 5.0 percent, and 5.8 percent, respectively, in the three tasks; and 3) our methods achieve F-scores of 89.1 and 64.5, and a normalized utility of 60.1, on the three benchmark data sets.
Phan, Jenny-Ann; Landau, Anne M; Jakobsen, Steen; Wong, Dean F; Gjedde, Albert
2017-11-22
We describe a novel method of kinetic analysis of radioligand binding to neuroreceptors in brain in vivo, here applied to noradrenaline receptors in rat brain. The method uses positron emission tomography (PET) of [11C]yohimbine binding in brain to quantify the density and affinity of α2-adrenoceptors under conditions of changing radioligand binding to plasma proteins. We obtained dynamic PET recordings from brains of Sprague Dawley rats at baseline, followed by pharmacological challenge with unlabeled yohimbine (0.3 mg/kg). The challenge with unlabeled ligand failed to diminish radioligand accumulation in brain tissue, because the blocking of radioligand binding to plasma proteins elevated the free fraction of the radioligand in plasma. We devised a method that graphically resolved the masking of unlabeled ligand binding by the increase of the radioligand free fraction in plasma. The Extended Inhibition Plot introduced here yielded an estimate of the volume of distribution of non-displaceable ligand in brain tissue that increased with the increase of the free fraction of the radioligand in plasma. The resulting binding potentials of the radioligand declined by 50-60% in the presence of unlabeled ligand. The kinetic unmasking of inhibited binding, reflected in the increase of the reference volume of distribution, yielded estimates of receptor saturation consistent with the binding of unlabeled ligand.
Application of DBNs for concerned internet information detecting
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Gao, Song
2017-03-01
In recent years, deep learning has achieved great success in many fields, ranging from voice recognition and image classification to computer vision. In this study we apply deep belief networks (DBNs) to the problem of detecting concerned internet information in Chinese, since there are inherent differences between English and Chinese. Contrastive divergence (CD) is employed in the DBNs to learn a multi-layer generative model from numerous unlabeled data. The features obtained by this model are used to initialize a feed-forward neural network, which is then fine-tuned with backpropagation. Experimental results indicate that the model and training method we propose can detect concerned internet information effectively and accurately.
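A minimal sketch of one contrastive-divergence (CD-1) update for a single binary RBM layer, the building block of DBN pre-training; the dimensions and data are toy values, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_v, b_h, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM:
    a single Gibbs step approximates the model's expectation."""
    ph0 = sigmoid(v0 @ W + b_h)                 # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sample binary hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)               # 'reconstruction' of the visibles
    ph1 = sigmoid(pv1 @ W + b_h)                # hidden probabilities given reconstruction
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h

# Toy setup: 6 visible units, 3 hidden units, a batch of 4 binary vectors.
v = rng.integers(0, 2, size=(4, 6)).astype(float)
W = rng.normal(scale=0.01, size=(6, 3))
b_v, b_h = np.zeros(6), np.zeros(3)
for _ in range(100):
    W, b_v, b_h = cd1_step(W, b_v, b_h, v)
print(W.shape)  # (6, 3)
```

In a DBN, layers trained this way are stacked (the hidden activations of one layer become the input of the next) before the whole stack is fine-tuned with backpropagation, as the abstract describes.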
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is presented that combines a filtering strategy with a progressive labeling strategy within a transductive framework. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of estimating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential in cancer prediction based on gene expression.
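The progressive labeling idea (iteratively promoting the most confidently classified unlabeled samples into the labeled set) can be sketched with a simple nearest-centroid learner. The paper's actual filtering strategy is more involved; every name and parameter below is illustrative.

```python
import numpy as np

def progressive_label(X_lab, y_lab, X_unlab, rounds=3, per_round=2):
    """Self-training sketch: each round, label the unlabeled points whose
    margin between the two class centroids is largest, then retrain."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    X_unlab = X_unlab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        cents = np.array([X_lab[y_lab == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X_unlab[:, None, :] - cents[None], axis=2)
        conf = np.abs(d[:, 0] - d[:, 1])      # distance margin = confidence
        pick = np.argsort(-conf)[:per_round]  # most confident points first
        X_lab = np.vstack([X_lab, X_unlab[pick]])
        y_lab = np.concatenate([y_lab, d[pick].argmin(axis=1)])
        X_unlab = np.delete(X_unlab, pick, axis=0)
    return X_lab, y_lab

X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.2, 0.1], [3.9, 4.2], [0.1, 0.3], [4.1, 3.8]])
X_all, y_all = progressive_label(X_lab, y_lab, X_unlab)
```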
Percha, Bethany; Altman, Russ B
2013-01-01
The biomedical literature presents a uniquely challenging text mining problem. Sentences are long and complex, the subject matter is highly specialized with a distinct vocabulary, and producing annotated training data for this domain is time consuming and expensive. In this environment, unsupervised text mining methods that do not rely on annotated training data are valuable. Here we investigate the use of random indexing, an automated method for producing vector-space semantic representations of words from large, unlabeled corpora, to address the problem of term normalization in sentences describing drugs and genes. We show that random indexing produces similarity scores that capture some of the structure of PHARE, a manually curated ontology of pharmacogenomics concepts. We further show that random indexing can be used to identify likely word candidates for inclusion in the ontology, and can help localize these new labels among classes and roles within the ontology.
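Random indexing as described, in which each word receives a sparse random index vector and a context vector accumulated from the index vectors of its neighbors, can be sketched as follows. The toy corpus, dimensionality, and window size are assumptions for illustration only.

```python
import numpy as np

def random_indexing(sentences, dim=256, n_nonzero=8, window=2, seed=0):
    """Build context vectors by summing the sparse random index vectors
    of neighboring words (the core of random indexing)."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for s in sentences for w in s})
    index = {}
    for w in vocab:  # sparse ternary index vector per word
        v = np.zeros(dim)
        pos = rng.choice(dim, size=n_nonzero, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=n_nonzero)
        index[w] = v
    context = {w: np.zeros(dim) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    context[w] += index[s[j]]
    return context

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical mini-corpus: two drugs sharing a context, one unrelated term.
sentences = [
    ["aspirin", "inhibits", "cox1", "activity"],
    ["ibuprofen", "inhibits", "cox1", "activity"],
    ["granite", "contains", "quartz", "crystals"],
]
ctx = random_indexing(sentences)
```

Words that occur in similar contexts end up with similar context vectors, which is the property exploited for term normalization in the study.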
NASA Astrophysics Data System (ADS)
Marchitto, T. M., Jr.; Mitra, R.; Zhong, B.; Ge, Q.; Kanakiya, B.; Lobaton, E.
2017-12-01
Identification and picking of foraminifera from sediment samples is often a laborious and repetitive task. Previous attempts to automate this process have met with limited success, but we show that recent advances in machine learning can be brought to bear on the problem. As a 'proof of concept' we have developed a system that is capable of recognizing six species of extant planktonic foraminifera that are commonly used in paleoceanographic studies. Our pipeline begins with digital photographs taken under 16 different illuminations using an LED ring, which are then fused into a single 3D image. Labeled image sets were used to train various types of image classification algorithms, and performance on unlabeled image sets was measured in terms of precision (whether IDs are correct) and recall (what fraction of the target species are found). We find that Convolutional Neural Network (CNN) approaches achieve precision and recall values between 80 and 90%, which represents comparable precision and better recall relative to human expert performance on the same type of photographs. We have also trained a CNN to segment the 3D images into individual chambers and apertures, which can not only improve identification performance but also automate the measurement of foraminifera for morphometric studies. Given that there are only 35 species of extant planktonic foraminifera larger than 150 μm, we suggest that a fully automated characterization of this assemblage is attainable. This is the first step toward the realization of a foram picking robot.
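The precision and recall measures used above follow the standard definitions; a minimal helper makes them concrete. The image IDs in the example are invented.

```python
def precision_recall(predicted, actual):
    """predicted, actual: sets of image IDs assigned to a target species.

    precision: fraction of predicted IDs that are correct
    recall:    fraction of actual target IDs that were found
    """
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Toy example: 10 images of the target species; the classifier flags 8,
# of which 7 are correct.
p, r = precision_recall(predicted={1, 2, 3, 4, 5, 6, 7, 11},
                        actual=set(range(1, 11)))
# p = 7/8 = 0.875, r = 7/10 = 0.7
```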
Automatic Gleason grading of prostate cancer using quantitative phase imaging and machine learning
NASA Astrophysics Data System (ADS)
Nguyen, Tan H.; Sridharan, Shamira; Macias, Virgilia; Kajdacsy-Balla, Andre; Melamed, Jonathan; Do, Minh N.; Popescu, Gabriel
2017-03-01
We present an approach for automatic diagnosis of tissue biopsies. Our methodology consists of a quantitative phase imaging tissue scanner and machine learning algorithms to process these data. We illustrate the performance by automatic Gleason grading of prostate specimens. The imaging system operates on the principle of interferometry and, as a result, reports on the nanoscale architecture of the unlabeled specimen. We use these data to train a random forest classifier to learn textural behaviors of prostate samples and classify each pixel in the image into different classes. Automatic diagnosis results were computed from the segmented regions. By combining morphological features with quantitative information from the glands and stroma, logistic regression was used to discriminate regions with Gleason grade 3 versus grade 4 cancer in prostatectomy tissue. The overall accuracy of this classification derived from a receiver operating curve was 82%, which is in the range of human error when interobserver variability is considered. We anticipate that our approach will provide a clinically objective and quantitative metric for Gleason grading, allowing us to corroborate results across instruments and laboratories and feed the computer algorithms for improved accuracy.
The helpfulness of category labels in semi-supervised learning depends on category structure.
Vong, Wai Keen; Navarro, Daniel J; Perfors, Amy
2016-02-01
The study of semi-supervised category learning has generally focused on how additional unlabeled information with given labeled information might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit to unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous and that people's responses are driven by the specific set of labels they see. We present an extension of Anderson's Rational Model of Categorization that captures this effect.
Revised Toxicological Assessment of ISS Air Quality: May 2012 - August 2012
NASA Technical Reports Server (NTRS)
Meyers, Valerie
2012-01-01
A summary of the analytical results from 12 grab sample containers (GSCs) collected on ISS and returned aboard 30S is shown in Table 1. The average recoveries of the 3 surrogate standards from the GSCs were as follows: 12C-acetone, 115 +/- 11%; fluorobenzene, 108 +/- 8%; and chlorobenzene, 102 +/- 16%. Shaded rows indicate data that are limited due to low sample pressures. For completeness, previously reported data from the US Lab collected in May 2012 are included here as well. The revised report provides results from one returned sample that was unlabeled and originally assumed to be unused. The sample was prepared and analyzed for the purpose of measuring the surrogate compounds. It was later determined, based on serial number, that this was the HTB3 first ingress sample.
Millard, Yvette C; Slaughter, Robin J; Shieffelbien, Lucy M; Schep, Leo J
2014-09-26
To investigate poisoning exposures to chemicals that were unlabelled, mislabelled or not in their original containers in New Zealand over the last 10 years, based on calls to the New Zealand National Poisons Centre (NZNPC). Call data from the NZNPC between 2003 and 2012 were analysed retrospectively. Parameters reviewed included patient age, route and site of exposure, product classification and recommended intervention. Of the 324,411 calls received between 2003 and 2012, 100,465 calls were associated with acute human exposure to chemicals. There were 757 inquiries related to human exposure to mislabelled or unlabelled chemicals, constituting 0.75% of chemical exposures. Adults were involved in 51% of incidents; children <5 years in 32%, children 5-10 years in 10%, and adolescents in 5%. Child exploratory behaviour was responsible for 38% of calls and adult unintentional exposures for 61%. Medical attention was advised in 26% of calls. Inadvertent exposure to toxic products stored in unlabelled or mislabelled containers is a problem for all age groups. Although it represents a small proportion of total calls to the NZNPC, it remains a potential risk for serious poisoning. It is important that chemicals are stored securely, in their original containers, and never stored in drinking vessels.
Radioassay kit for method of determining methotrexate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charm, S.E.; Blair, H.E.
1978-07-25
A radioassay system for the determination of methotrexate in biological fluids based on the competitive binding of labeled and unlabeled methotrexate to the enzyme dihydrofolate reductase. Samples of unknown methotrexate level are mixed with 125I-labeled methotrexate. A portion of the total methotrexate present is bound by the addition of enzyme, and the unbound methotrexate is removed with charcoal. The level of bound 125I-labeled methotrexate is measured in a gamma counter. To calculate the methotrexate level of the unknown samples, the displacement of bound labeled methotrexate caused by the unknowns is compared to the displacement caused by known methotrexate standards.
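The final calculation, comparing the displacement caused by an unknown to that caused by known standards, amounts to reading the unknown's bound counts off a standard curve. A minimal sketch using linear interpolation, with invented concentrations and counts:

```python
import numpy as np

def methotrexate_from_counts(sample_counts, std_conc, std_counts):
    """Estimate concentration by interpolating the sample's bound counts
    on a standard curve (counts fall as unlabeled drug displaces tracer)."""
    # np.interp needs increasing x, so interpolate on counts sorted ascending.
    order = np.argsort(std_counts)
    return float(np.interp(sample_counts,
                           np.asarray(std_counts)[order],
                           np.asarray(std_conc)[order]))

# Hypothetical standard curve: higher methotrexate -> fewer bound counts.
std_conc = [0.0, 10.0, 50.0, 100.0]      # nM standards
std_counts = [9000, 7000, 3000, 1000]    # gamma counts bound
est = methotrexate_from_counts(5000, std_conc, std_counts)
# 5000 counts falls midway between the 10 nM and 50 nM standards -> 30 nM
```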
Immunoassays for Identification of Biological Agents in Sample Unknowns: NATO SIBCA Exercise VI
2005-12-01
Yersinia pestis 10^3 cfu/mL. Enzyme-linked immunosorbent assays used unlabelled antibodies; antibody stocks developed under DRES contract by SciLab Consulting Inc. ... goat anti-rabbit IgG (whole molecule, lot no. 90H8990). Antibodies produced by SciLab Consulting Inc. were purified on a Bio... (Contract No. W7702-4-R430, Final Report; DRDC Suffield TM 2005-223.) Fulton, R.E. and Thompson, H.G. Evaluation of the Rapid
Rare Cell Separation and Analysis by Magnetic Sorting
Zborowski, Maciej; Chalmers, Jeffrey J.
2011-01-01
The separation and/or isolation of rare cells using magnetic forces is commonly used and growing in use, ranging from simple sample preparation for further studies to an FDA-approved clinical diagnostic test. This growth is the result of both the demand to obtain homogeneous rare cells for molecular analysis and the dramatic increases in the power of permanent magnets, which even allow the separation of some unlabeled cells based on intrinsic magnetic moments, such as malaria parasite-infected red blood cells. PMID:21812408
Jin, Wen; Jiang, Hai; Liu, Yimin; Klampfl, Erica
2017-01-01
Discrete choice experiments have been widely applied to elicit behavioral preferences in the literature. In many of these experiments, the alternatives are named alternatives, meaning that they are naturally associated with specific names. For example, in a mode choice study, the alternatives can be associated with names such as car, taxi, bus, and subway. A fundamental issue that arises in stated choice experiments is whether to treat the alternatives' names as labels (that is, labeled treatment), or as attributes (that is, unlabeled treatment) in the design as well as the presentation phases of the choice sets. In this research, we investigate the impact of labeled versus unlabeled treatments of alternatives' names on the outcome of stated choice experiments, a question that has not been thoroughly investigated in the literature. Using results from a mode choice study, we find that the labeled or the unlabeled treatment of alternatives' names in either the design or the presentation phase of the choice experiment does not statistically affect the estimates of the coefficient parameters. We then proceed to measure the influence toward the willingness-to-pay (WTP) estimates. By using a random-effects model to relate the conditional WTP estimates to the socioeconomic characteristics of the individuals and the labeled versus unlabeled treatments of alternatives' names, we find that: a) Given the treatment of alternatives' names in the presentation phase, the treatment of alternatives' names in the design phase does not statistically affect the estimates of the WTP measures; and b) Given the treatment of alternatives' names in the design phase, the labeled treatment of alternatives' names in the presentation phase causes the corresponding WTP estimates to be slightly higher.
Adverse-drug-event data provided by pharmaceutical companies.
Cudny, Magdalena E; Graham, Angie S
2008-06-01
Pharmaceutical company drug information center (PCDIC) responses to queries about adverse drug events (ADEs) were studied to determine whether PCDICs search sources other than the prescribing information on the package insert (PI) and whether the PCDICs' approach differs according to whether an ADE is listed in the PI (labeled) or not (unlabeled). Companies were selected from a list of PCDICs in the Physicians' Desk Reference. One oral or injectable prescription drug from each company was selected. For each drug, a labeled ADE and an unlabeled ADE about which to query the PCDICs were randomly selected from the index of an annual publication on ADEs. The investigators telephoned the PCDICs with an open-ended inquiry about the incidence, timing, and management of the ADE as reported in the literature and the company's internal data; they clarified that the request did not concern a specific patient. Whether or not information was provided, the source searched was recorded (PI, literature, internal database), and the percentages of PCDICs that used each source for labeled and for unlabeled ADEs were analyzed. Results were obtained from 100 companies to questions about 100 drugs (200 ADEs). For ADEs overall, 80% used the PI, 50% the medical literature, and 38% internal data. For labeled versus unlabeled ADEs, respectively, the PI was used by 84% and 76%; literature, both 50%; and internal data, 35% and 41%. The PCDIC specialists referencing the PI did not always provide accurate or up-to-date information. Some specialists, when asked to query internal databases, said that was not an option. For both labeled and unlabeled ADEs, the PI was the primary source used by PCDICs to answer safety questions about their products, and internal data were the least-used source. Most resources used by PCDICs are readily available to practicing pharmacists.
Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks
Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S.
2017-01-01
Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a=(u,v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed in such a perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constraint active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages. PMID:28771201
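The one-to-one constraint at the heart of the method, where confirming an anchor link (u, v) as positive automatically labels every other candidate link touching u or v as negative, can be written down directly. The account names below are hypothetical.

```python
def propagate_positive(unlabeled, positive):
    """Apply the one-to-one anchor-link constraint.

    unlabeled: set of candidate (u, v) account pairs across two networks
    positive:  a pair confirmed to belong to the same user
    Returns (auto_negative, still_unlabeled).
    """
    u, v = positive
    # Any other candidate sharing account u or account v must be negative.
    auto_negative = {(a, b) for (a, b) in unlabeled
                     if (a, b) != positive and (a == u or b == v)}
    remaining = unlabeled - auto_negative - {positive}
    return auto_negative, remaining

candidates = {("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v2")}
neg, rest = propagate_positive(candidates, ("u1", "v1"))
# Labeling (u1, v1) positive rules out (u1, v2) and (u2, v1); (u2, v2) stays open.
```

This propagation is why querying likely positive links is so rewarding in the active learning setting: one positive label resolves many candidates at once.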
Learning without labeling: domain adaptation for ultrasound transducer localization.
Heimann, Tobias; Mountney, Peter; John, Matthias; Ionasec, Razvan
2013-01-01
The fusion of image data from trans-esophageal echography (TEE) and X-ray fluoroscopy is attracting increasing interest in minimally-invasive treatment of structural heart disease. In order to calculate the needed transform between both imaging systems, we employ a discriminative learning based approach to localize the TEE transducer in X-ray images. Instead of time-consuming manual labeling, we generate the required training data automatically from a single volumetric image of the transducer. In order to adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. An evaluation on more than 1900 images reveals that our approach reduces detection failures by 95% compared to cross validation on the test set and improves the localization error from 1.5 to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal efforts.
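Instance weighting for covariate shift, as used here, weights each synthetic training sample by an estimated target-to-source density ratio in feature space. The actual method operates on richer image features; this 1-D histogram sketch, with Gaussian toy features and an assumed bin count, only illustrates the principle.

```python
import numpy as np

def density_ratio_weights(source, target, bins=5):
    """Estimate w(x) = p_target(x) / p_source(x) with shared histograms,
    then weight each source sample accordingly."""
    edges = np.histogram_bin_edges(np.concatenate([source, target]), bins=bins)
    p_s, _ = np.histogram(source, bins=edges, density=True)
    p_t, _ = np.histogram(target, bins=edges, density=True)
    idx = np.clip(np.digitize(source, edges) - 1, 0, bins - 1)
    return p_t[idx] / np.maximum(p_s[idx], 1e-12)

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, 500)   # e.g. features of rendered training images
target = rng.normal(0.5, 1.0, 500)   # real (unlabeled) fluoroscopy features, shifted
w = density_ratio_weights(source, target)
```

Training a detector on the source samples with these weights emphasizes the regions of feature space that the real, unlabeled data actually occupies.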
MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification
NASA Astrophysics Data System (ADS)
Lin, Daoyu; Fu, Kun; Wang, Yang; Xu, Guangluan; Sun, Xian
2017-11-01
With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional neural networks (CNNs). However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we proposed an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model $G$ and a discriminative model $D$. We treat $D$ as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge the mid-level and global features. $G$ can produce numerous images that are similar to the training data; therefore, $D$ can learn better representations of remotely sensed images using the training data provided by $G$. The classification results on two widely used remote sensing image databases show that the proposed method significantly improves the classification performance compared with other state-of-the-art methods.
Analyzing Distributional Learning of Phonemic Categories in Unsupervised Deep Neural Networks
Räsänen, Okko; Nagamine, Tasha; Mesgarani, Nima
2017-01-01
Infants’ speech perception adapts to the phonemic categories of their native language, a process assumed to be driven by the distributional properties of speech. This study investigates whether deep neural networks (DNNs), the current state-of-the-art in distributional feature learning, are capable of learning phoneme-like representations of speech in an unsupervised manner. We trained DNNs with unlabeled and labeled speech and analyzed the activations of each layer with respect to the phones in the input segments. The analyses reveal that the emergence of phonemic invariance in DNNs is dependent on the availability of phonemic labeling of the input during the training. No increased phonemic selectivity of the hidden layers was observed in the purely unsupervised networks despite successful learning of low-dimensional representations for speech. This suggests that additional learning constraints or more sophisticated models are needed to account for the emergence of phone-like categories in distributional learning operating on natural speech. PMID:29359204
NASA Astrophysics Data System (ADS)
Nadeau, Jay; Cho, YongBin; Kühn, Jonas; Liewer, Kurt
2016-04-01
Digital holographic microscopy (DHM) is an emerging imaging technique that permits instantaneous capture of a relatively large sample volume. However, large volumes usually come at the expense of lower spatial resolution, and the technique has rarely been used with prokaryotic cells due to their small size and low contrast. In this paper we demonstrate the use of a Mach-Zehnder dual-beam instrument for imaging of labeled and unlabeled bacteria and microalgae. Spatial resolution of 0.3 micrometers is achieved, providing a sampling of several pixels across a typical prokaryotic cell. Both cellular motility and morphology are readily recorded. The use of dyes provides both amplitude and phase contrast improvement and is of use to identify cells in dense samples.
A novel histological technique for distinguishing between epithelial cells in forensic casework.
French, Claire E V; Jensen, Cynthia G; Vintiner, Susan K; Elliot, Douglas A; McGlashan, Susan R
2008-06-10
There are a number of forensic cases in which the identification of the epithelial cell type from which DNA originated would provide important probative evidence. This study aimed to develop a technique using histological staining of fixed cells to distinguish between skin, buccal and vaginal epithelium. First, 11 different stains were screened on formalin-fixed, wax-embedded cells from five women. Samples were analysed qualitatively by examining staining patterns (colour) and morphology (absence or presence of nuclei). Three of the staining methods--Dane's, Csaba's and Ayoub-Shklar--were successful in distinguishing skin epithelial cells from buccal and vaginal. Second, cells were smeared directly onto slides, fixed with one of five fixatives and stained with one of the three stains mentioned above. Methanol fixation, coupled with the Dane's staining method, specific to keratin, was the only technique that distinguished between all three cell types. Skin cells stained magenta, red and orange and lacked nuclei; buccal cells stained predominantly orange-pink with red nuclei; while vaginal cells stained bright orange with orange nuclei and a blue extracellular hue. This staining pattern in vaginal cells was consistent in samples collected from 50 women aged between 18 and 67. Identification of cell type from unlabelled micrographs by 10 trained observers showed a mean success rate of 95%. The results of this study demonstrate that histological staining may provide forensic scientists with a technique for distinguishing between skin, buccal and vaginal epithelial cells and thus would enable more conclusive analyses when investigating sexual assault cases.
Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao
2017-08-01
Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician, was compared. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by self-learning program correlated to the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.
Fosbraey, P.; Johnson, E. S.
1980-01-01
1 Acetylcholine (ACh) stores within neurones of the myenteric plexus of the guinea-pig were labelled with [3H]-choline and the influence of unlabelled ACh, atropine, or atropine and unlabelled ACh on the electrically-evoked output of [3H]-ACh was evaluated. 2 Electrical transmural stimulation (5 Hz) of the ileum led to an increase in the output of [3H]-ACh over that released spontaneously. Superfusion with unlabelled ACh (6.8 microM) caused a marked reduction in the release of [3H]-ACh which was reversed by atropine (3.5 microM). Atropine itself had no effect on the electrically-evoked [3H]-ACh. 3 These experiments provide further evidence for the existence in the guinea-pig ileum of neuronal muscarinic receptors for ACh subserving an inhibitory role on transmitter release. PMID:7378653
Linking Sister Chromatid Cohesion to Apoptosis and Aneuploidy in the Development of Breast Cancer
2005-07-01
antibody. Jurkat cells were transfected with blank vectors (B), treated with ... (nonisotopic) hRad21 in wheat germ extracts by FPLC fractions 13-20. Samples were analyzed on an SDS-6% PAGE gel followed by Western blotting with hRad21 ... in vitro translated unlabelled hRad21 in wheat germ extracts and assaying the cleavage in Rad21 immunoblots (Fig. 5C). The broad-spectrum caspase
Magnetic Levitation as a Platform for Competitive Protein-Ligand Binding Assays
Shapiro, Nathan D.; Soh, Siowling; Mirica, Katherine A.; Whitesides, George M.
2012-01-01
This paper describes a method based on magnetic levitation (MagLev) that is capable of indirectly measuring the binding of unlabeled ligands to unlabeled protein. We demonstrate this method by measuring the affinity of unlabeled bovine carbonic anhydrase (BCA) for a variety of ligands (most of which are benzene sulfonamide derivatives). This method utilizes porous gel beads that are functionalized with a common aryl sulfonamide ligand. The beads are incubated with BCA and allowed to reach an equilibrium state in which the majority of the immobilized ligands are bound to BCA. Since the beads are less dense than the protein, protein binding to the bead increases the overall density of the bead. This change in density can be monitored using MagLev. Transferring the beads to a solution containing no protein creates a situation where net protein efflux from the bead is thermodynamically favorable. The rate at which protein leaves the bead for the solution can be calculated from the rate at which the levitation height of the bead changes. If another small molecule ligand of BCA is dissolved in the solution, the rate of protein efflux is accelerated significantly. This paper develops a reaction-diffusion (RD) model to explain both this observation, and the physical-organic chemistry that underlies it. Using this model, we calculate the dissociation constants of several unlabeled ligands from BCA, using plots of levitation height versus time. Notably, although this method requires no electricity, and only a single piece of inexpensive equipment, it can measure accurately the binding of unlabeled proteins to small molecules over a wide range of dissociation constants (Kd’s within the range of ~ 10 nM to 100 µM are measured easily). Assays performed using this method generally can be completed within a relatively short time period (20 minutes – 2 hours). 
A deficiency of this system is that it is not, in its present form, applicable to proteins with molecular weight greater than approximately 65 kDa. PMID:22686324
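The K_d readout above comes from fitting the time course of levitation height as protein leaves the bead. The authors use a full reaction-diffusion model; the sketch below is only a first-order stand-in (the function name and synthetic data are invented), assuming the height decays single-exponentially toward a plateau h_inf:

```python
import numpy as np

def efflux_rate(t, h, h_inf):
    """Estimate a first-order efflux rate constant k from levitation
    height h(t), assuming h(t) = h_inf + (h0 - h_inf) * exp(-k * t).
    Log-linearizes the decaying component and fits by least squares."""
    y = np.log(h - h_inf)            # straight line with slope -k
    slope, _ = np.polyfit(t, y, 1)
    return -slope

# Synthetic trace with true k = 0.05 per minute
t = np.linspace(0, 60, 30)           # minutes
h = 1.2 + 0.8 * np.exp(-0.05 * t)    # levitation height, arbitrary units
print(round(efflux_rate(t, h, h_inf=1.2), 4))  # 0.05
```

A faster fitted k in the presence of a dissolved competing ligand is what the paper converts, via its RD model, into the competitor's dissociation constant.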
Amino acid selective unlabeling for sequence specific resonance assignments in proteins
Krishnarjuna, B.; Jaipuria, Garima; Thakur, Anushikha
2010-01-01
Sequence specific resonance assignment constitutes an important step towards high-resolution structure determination of proteins by NMR and is aided by selective identification and assignment of amino acid types. The traditional approach to selective labeling yields only the chemical shifts of the particular amino acid being selected and does not help in establishing a link between adjacent residues along the polypeptide chain, which is important for sequential assignments. An alternative approach is the method of amino acid selective 'unlabeling' or reverse labeling, which involves selectively unlabeling specific amino acid types against a uniformly 13C/15N labeled background. Based on this method, we present a novel approach for sequential assignments in proteins. The method involves a new NMR experiment named {12CO(i)-15N(i+1)}-filtered HSQC, which aids in linking the 1HN/15N resonances of the selectively unlabeled residue, i, and its C-terminal neighbor, i + 1, in HN-detected double and triple resonance spectra. This leads to the assignment of a tri-peptide segment from the knowledge of the amino acid types of residues i - 1, i and i + 1, thereby speeding up the sequential assignment process. The method has the advantage of being relatively inexpensive, is applicable to 2H labeled proteins, and can be coupled with cell-free synthesis and/or automated assignment approaches. A detailed survey involving unlabeling of different amino acid types individually or in pairs reveals that the proposed approach is also robust to misincorporation of 14N at undesired sites. Taken together, this study represents the first application of selective unlabeling for sequence specific resonance assignments and opens up new avenues to using this methodology in protein structural studies. PMID:21153044
High Class-Imbalance in pre-miRNA Prediction: A Novel Approach Based on deepSOM.
Stegmayer, Georgina; Yones, Cristian; Kamenetzky, Laura; Milone, Diego H
2017-01-01
The computational prediction of novel microRNA within a full genome involves identifying the sequences having the highest chance of being a miRNA precursor (pre-miRNA). These sequences are usually referred to as miRNA candidates. The well-known pre-miRNAs are few in comparison to the hundreds of thousands of potential candidates that have to be analyzed, which makes this task a high class-imbalance classification problem. The classical way of approaching it has been to train a binary classifier in a supervised manner, using well-known pre-miRNAs as the positive class and artificially defining the negative class. However, although the selection of positively labeled examples is straightforward, it is very difficult to build a set of negative examples that yields a good training set for a supervised method. In this work, we propose a novel and effective way of approaching this problem with machine learning, without defining negative examples. The proposal is based on clustering unlabeled sequences of a genome together with well-known miRNA precursors for the organism under study, which allows for the quick identification of the best miRNA candidates as those sequences clustered with known precursors. Furthermore, we propose a deep model to overcome the problem of having very few positive class labels: known precursors are always maintained as the positive class in the deeper levels, while less likely pre-miRNA sequences are filtered out level after level. Our approach has been compared with other methods for pre-miRNA prediction in several species, showing effective prediction of novel miRNAs. Additionally, we show that our approach has a lower training time and allows for better graphical navigability and interpretation of the results. A web-demo interface to try deepSOM is available at http://fich.unl.edu.ar/sinc/web-demo/deepsom/.
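The selection step described above, keeping the unlabeled sequences that land in clusters containing known precursors, can be illustrated with plain k-means standing in for the deep SOM (the data, k, and names below are invented):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm); a toy stand-in for the deep SOM."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy features: two known pre-miRNAs near the origin, three unlabeled sequences
known = np.array([[0.0, 0.1], [0.1, 0.0]])
unlabeled = np.array([[0.05, 0.05], [5.0, 5.0], [5.1, 4.9]])
X = np.vstack([known, unlabeled])
labels = kmeans(X, k=2)
positive_clusters = set(labels[:len(known)])
candidates = [i for i, c in enumerate(labels[len(known):]) if c in positive_clusters]
print(candidates)  # indices of unlabeled sequences clustered with known precursors
```

In the paper the positive labels are propagated through successive SOM levels; a single clustering pass conveys the idea.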
An online semi-supervised brain-computer interface.
Gu, Zhenghui; Yu, Zhuliang; Shen, Zhifang; Li, Yuanqing
2013-09-01
Practical brain-computer interface (BCI) systems should require only low training effort from the user, and the algorithms used to classify the user's intent should be computationally efficient. However, due to inter- and intra-subject variations in the EEG signal, intermittent training/calibration is often unavoidable. In this paper, we present an online semi-supervised P300 BCI speller system. After a short initial training (around or less than 1 min in our experiments), the system is switched to a mode where the user can input characters through selective attention. In this mode, a self-training least squares support vector machine (LS-SVM) classifier is gradually enhanced in the back end with the unlabeled EEG data collected online after every character input. Even though the user may experience some input errors at the beginning due to the small initial training dataset, the accuracy approaches that of the fully supervised method within a few minutes. The algorithm based on LS-SVM and its sequential update has low computational complexity; thus, it is suitable for online applications. The effectiveness of the algorithm has been validated through data analysis on BCI Competition III dataset II (P300 speller BCI data). The performance of the online system was evaluated through experimental results on eight healthy subjects, all of whom achieved a spelling accuracy of 85% or above within an average online semi-supervised learning time of around 3 min.
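The self-training loop, classifying unlabeled EEG epochs and folding confident predictions back into the model, can be sketched with a toy nearest-centroid classifier standing in for the LS-SVM (the class name, threshold, and data are invented):

```python
import numpy as np

class SelfTrainingCentroid:
    """Toy stand-in for the paper's self-training LS-SVM: a two-class
    nearest-centroid classifier whose class means absorb its own
    confident predictions on unlabeled data."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold  # margin required to accept a pseudo-label

    def fit(self, X, y):
        self.centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def _distances(self, X):
        classes = sorted(self.centroids)
        D = np.stack([np.linalg.norm(X - self.centroids[c], axis=1)
                      for c in classes], axis=1)
        return classes, D

    def predict(self, X):
        classes, D = self._distances(X)
        return np.array([classes[i] for i in D.argmin(axis=1)])

    def update(self, X_unlabeled):
        """Fold confidently classified unlabeled samples back into the model."""
        classes, D = self._distances(X_unlabeled)
        pred = D.argmin(axis=1)
        confident = np.abs(D[:, 0] - D[:, 1]) > self.threshold  # 2-class margin
        for j, c in enumerate(classes):
            sel = X_unlabeled[confident & (pred == j)]
            if len(sel):
                self.centroids[c] = np.vstack([self.centroids[c], sel]).mean(axis=0)
        return self

# Short "initial training", then one online update with unlabeled epochs
X_lab = np.array([[0.0], [0.2], [1.0], [1.2]])
y_lab = np.array([0, 0, 1, 1])
clf = SelfTrainingCentroid(threshold=0.3).fit(X_lab, y_lab)
clf.update(np.array([[0.1], [1.1]]))
print(clf.predict(np.array([[0.05], [1.15]])))  # [0 1]
```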
Human semi-supervised learning.
Gibson, Bryan R; Rogers, Timothy T; Zhu, Xiaojin
2013-01-01
Most empirical work in human categorization has studied learning in either fully supervised or fully unsupervised scenarios. Most real-world learning scenarios, however, are semi-supervised: Learners receive a great deal of unlabeled information from the world, coupled with occasional experiences in which items are directly labeled by a knowledgeable source. A large body of work in machine learning has investigated how learning can exploit both labeled and unlabeled data provided to a learner. Using equivalences between models found in human categorization and machine learning research, we explain how these semi-supervised techniques can be applied to human learning. A series of experiments are described which show that semi-supervised learning models prove useful for explaining human behavior when exposed to both labeled and unlabeled data. We then discuss some machine learning models that do not have familiar human categorization counterparts. Finally, we discuss some challenges yet to be addressed in the use of semi-supervised models for modeling human categorization. Copyright © 2013 Cognitive Science Society, Inc.
Nikfarjam, Azadeh; Sarker, Abeed; O'Connor, Karen; Ginn, Rachel; Gonzalez, Graciela
2015-05-01
Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media. We introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique. ADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance. It is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
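The word-cluster features described above can be sketched as follows: cluster pretrained embeddings once, then emit each token's cluster ID as a CRF feature (the vocabulary, vectors, centres, and feature names here are all invented):

```python
import numpy as np

def cluster_ids(vectors, centers):
    """Assign each word vector to its nearest cluster centre."""
    d = ((vectors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

vocab = ["headache", "migraine", "tablet"]               # toy vocabulary
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])    # toy embeddings
centers = np.array([[1.0, 0.0], [0.0, 1.0]])             # toy cluster centres
ids = dict(zip(vocab, cluster_ids(vecs, centers)))

def token_features(tok):
    """CRF feature dict for one token; -1 marks out-of-vocabulary words."""
    return {"word": tok.lower(), "cluster": int(ids.get(tok, -1))}

print(token_features("headache"))  # {'word': 'headache', 'cluster': 0}
```

The point of the cluster feature is that "headache" and "migraine" share a cluster ID, so the CRF can generalize from one to the other even when a surface form never appears in the annotated data.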
Semi-supervised tracking of extreme weather events in global spatio-temporal climate datasets
NASA Astrophysics Data System (ADS)
Kim, S. K.; Prabhat, M.; Williams, D. N.
2017-12-01
Deep neural networks have been successfully applied to the detection of extreme weather events in large-scale climate datasets, attaining performance that surpasses all previous hand-crafted methods. Recent work has shown that a multichannel spatiotemporal encoder-decoder CNN architecture can localize events with semi-supervised bounding boxes. Motivated by this work, we propose a new learning method based on Variational Auto-Encoders (VAE) and Long Short-Term Memory (LSTM) networks to track extreme weather events in spatio-temporal datasets. We treat spatio-temporal object tracking as learning a probabilistic distribution over the continuous latent features of an auto-encoder using stochastic variational inference. For this, we assume that our datasets are i.i.d. and that the latent features can be modeled by a Gaussian distribution. In the proposed method, we first train a VAE to generate an approximate posterior given multichannel climate input containing an extreme climate event at a fixed time. We then predict the bounding box, location, and class of extreme climate events using convolutional layers whose input concatenates three features: the embedding, the sampled mean, and the standard deviation. Lastly, we train an LSTM on the concatenated input to learn the temporal structure of the dataset by recurrently feeding the output back into the next time-step's input of the VAE. Our contribution is two-fold. First, we present the first semi-supervised end-to-end architecture based on a VAE for tracking extreme weather events, applicable to massive unlabeled climate datasets. Second, the temporal movement of events is incorporated into bounding-box prediction via the LSTM, which can improve localization accuracy. To our knowledge, this technique has been explored neither in the climate community nor in the machine learning community.
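The latent sampling this abstract relies on (Gaussian latent features trained by stochastic variational inference) is the standard VAE reparameterization trick; a minimal numpy sketch, with shapes invented:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    so the sample stays differentiable w.r.t. the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)   # a 4-dim unit-Gaussian latent
z = reparameterize(mu, log_var, rng)
# In the described pipeline, z would be concatenated with mu and the
# standard deviation and fed to the bounding-box head and the LSTM.
print(z.shape)  # (4,)
```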
Maier, Barbara; Vogeser, Michael
2013-04-01
Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the relation of the target analyte peak area to the peak area of a corresponding stable isotope labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed in a comparative validation protocol for cortisol as an exemplary analyte by LC-MS/MS. Accuracy and reproducibility were compared between quantification involving either a six-point external calibration function or a result calculation merely based on the peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte, without the need for a time-consuming multi-point calibration series, is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory, where short turnaround times often have high priority.
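The DIDA calculation itself reduces to a peak-area ratio scaled by the internal-standard concentration; a sketch with invented numbers (a response factor of 1.0 assumes the analyte and its labelled standard ionize identically):

```python
def dida_concentration(area_analyte, area_is, conc_is, response_factor=1.0):
    """Direct isotope dilution: analyte concentration from the peak-area
    ratio of unlabelled analyte to its stable-isotope-labelled internal
    standard. response_factor corrects for unequal MS response."""
    return (area_analyte / area_is) * conc_is * response_factor

# Labelled internal standard spiked at 100 nmol/L; analyte peak is
# 1.5x the internal-standard peak (illustrative values)
print(dida_concentration(3.0e5, 2.0e5, 100.0))  # 150.0
```

This is exactly why no multi-point series is needed: every injection carries its own one-point calibration in the form of the labelled standard.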
Soler, Stephan; Rittore, Cécile; Touitou, Isabelle; Philibert, Laurent
2011-02-20
From the wide range of methods currently available for genotyping, we wished to identify a quick, reliable and affordable approach for routine use in our laboratory for LTA+252 C>T SNP screening. We set up and compared three genotyping methods for SNP detection: restriction fragment length polymorphism (RFLP), tetra primer amplification refractory mutation system PCR (TPAP) and unlabeled probe melting analysis (UPMA). The SNP model used was LTA+252 C>T, a cytokine gene polymorphism that has been associated with response to treatment in rheumatoid arthritis. The study was performed using 46 samples from healthy Caucasian volunteers. Allele and genotype distribution was similar to that previously described in the same population. All three genotyping methods showed good reproducibility and are suitable for a medium scale throughput molecular platform. UPMA was the most cost effective, reliable and safe method since it required the shortest technician time, could be performed in a single closed tube and involved automatic data analysis. This work is the first to compare these three genotyping techniques and provides evidence for UPMA being the method of choice for LTA+252 C>T SNP genotyping. Copyright © 2010 Elsevier B.V. All rights reserved.
CNNdel: Calling Structural Variations on Low Coverage Data Based on Convolutional Neural Networks
2017-01-01
Many structural variation (SV) detection methods have been proposed due to the popularization of next-generation sequencing (NGS). These SV calling methods use different SV-property-dependent features; however, they all suffer from poor accuracy when running on low-coverage sequences. The union of results from these tools achieves fairly high sensitivity but still produces low accuracy on low-coverage sequence data; that is, these methods produce many false positives. In this paper, we present CNNdel, an approach for calling deletions from paired-end reads. CNNdel gathers SV candidates reported by multiple tools and then extracts features from aligned BAM files at the positions of the candidates. With labeled feature-expressed candidates as a training set, CNNdel trains convolutional neural networks (CNNs) to distinguish true unlabeled candidates from false ones. Results show that CNNdel works well with NGS reads from 26 low-coverage genomes of the 1000 Genomes Project. The paper demonstrates that convolutional neural networks can automatically assign the priority of SV features and reduce false positives efficaciously. PMID:28630866
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
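The paper scores anomalousness via the False Discovery Rate; the Benjamini-Hochberg step-up procedure below illustrates FDR-controlled flagging in general (it is not the paper's hypergraph estimator, and the p-values are invented):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: return indices flagged at FDR level alpha."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, cutoff = len(pvals), 0
    for rank, i in enumerate(order, start=1):
        # compare each sorted p-value to its rank-scaled threshold
        if pvals[i] <= alpha * rank / m:
            cutoff = rank
    return sorted(order[:cutoff])

print(benjamini_hochberg([0.001, 0.02, 0.4, 0.9]))  # [0, 1]
```

In the paper's setting, the per-observation "p-values" would come from the variational density estimate over the hypergraph domain rather than from a classical test.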
Mwangi, Benson; Soares, Jair C; Hasan, Khader M
2014-10-30
Neuroimaging machine learning studies have largely utilized supervised algorithms - meaning they require both neuroimaging scan data and corresponding target variables (e.g. healthy vs. diseased) to be successfully 'trained' for a prediction task. Noticeably, this approach may not be optimal or possible when the global structure of the data is not well known and the researcher does not have an a priori model to fit the data. We set out to investigate the utility of an unsupervised machine learning technique; t-distributed stochastic neighbour embedding (t-SNE) in identifying 'unseen' sample population patterns that may exist in high-dimensional neuroimaging data. Multimodal neuroimaging scans from 92 healthy subjects were pre-processed using atlas-based methods, integrated and input into the t-SNE algorithm. Patterns and clusters discovered by the algorithm were visualized using a 2D scatter plot and further analyzed using the K-means clustering algorithm. t-SNE was evaluated against classical principal component analysis. Remarkably, based on unlabelled multimodal scan data, t-SNE separated study subjects into two very distinct clusters which corresponded to subjects' gender labels (cluster silhouette index value=0.79). The resulting clusters were used to develop an unsupervised minimum distance clustering model which identified 93.5% of subjects' gender. Notably, from a neuropsychiatric perspective this method may allow discovery of data-driven disease phenotypes or sub-types of treatment responders. Copyright © 2014 Elsevier B.V. All rights reserved.
Rapid NMR Assignments of Proteins by Using Optimized Combinatorial Selective Unlabeling.
Dubey, Abhinav; Kadumuri, Rajashekar Varma; Jaipuria, Garima; Vadrevu, Ramakrishna; Atreya, Hanudatta S
2016-02-15
A new approach for rapid resonance assignments in proteins based on amino acid selective unlabeling is presented. The method involves choosing a set of multiple amino acid types for selective unlabeling and identifying specific tripeptides surrounding the labeled residues from specific 2D NMR spectra in a combinatorial manner. The methodology directly yields sequence-specific assignments without requiring a contiguous stretch of amino acid residues to be linked, and is applicable to deuterated proteins. We show that a 2D [15N,1H] HSQC spectrum together with two additional 2D spectra can result in ~50% of assignments. The methodology was applied to two proteins: an intrinsically disordered protein (12 kDa) and the 29 kDa (268-residue) α-subunit of Escherichia coli tryptophan synthase, which presents a challenging case with spectral overlaps and missing peaks. The method can augment existing approaches and will be useful for applications such as identifying active-site residues involved in ligand binding, phosphorylation, or protein-protein interactions, even prior to complete resonance assignments. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sanchez, J M; Cacace, V; Kusnier, C F; Nelson, R; Rubashkin, A A; Iserovich, P; Fischbarg, J
2016-08-01
We have presented prior evidence suggesting that fluid transport results from electro-osmosis at the intercellular junctions of the corneal endothelium. Such a phenomenon ought to drag other extracellular solutes. We have investigated this using fluorescein-Na2 as an extracellular marker. We measured unidirectional fluxes across layers of cultured human corneal endothelial (HCE) cells. SV-40-transformed HCE layers were grown to confluence on permeable membrane inserts. The medium was DMEM with high glucose and no phenol red. Fluorescein-labeled medium was placed either on the basolateral or the apical side of the inserts; the other side carried unlabeled medium. The inserts were held in a CO2 incubator for 1 h (at 37 °C), after which the entire volume of the unlabeled side was collected. After that, label was placed on the opposite side, and the corresponding paired sample was collected after another hour. Fluorescein counts were determined with a DeltaScan fluorometer (Photon Technology; excitation 380 nm; emission 550 nm; 2 nm bandwidth). Samples were read for 60 s. The cells utilized are known to transport fluid from the basolateral to the apical side, just as they do in vivo in several species. We used 4 inserts for influx and efflux (total: 20 1-h periods). We found a net flux of fluorescein from the basolateral to the apical side. The flux ratio was 1.104 ± 0.056. That difference was statistically significant (p = 0.00006, t test, paired samples). The endothelium has a definite restriction at the junctions. Hence, an asymmetry in unidirectional fluxes cannot arise from osmosis, and can only point instead to paracellular solvent drag. We suggest, once more, that such drag is due to electro-osmotic coupling at the paracellular junctions.
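The statistics reported above reduce to a mean unidirectional flux ratio and a paired t-test on matched influx/efflux measurements; a sketch with invented toy values (not the study's raw data):

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched measurements x_i, y_i."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Invented paired unidirectional fluxes (arbitrary units)
influx = [1.10, 1.12, 1.08, 1.11]   # basolateral -> apical
efflux = [1.00, 1.01, 0.99, 1.00]   # apical -> basolateral
ratio = sum(i / e for i, e in zip(influx, efflux)) / len(influx)
print(round(ratio, 3))  # 1.102
```

A flux ratio reliably above 1 with a significant paired t statistic is the asymmetry the authors attribute to paracellular solvent drag.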
Kovatcheva-Datchary, Petia; Egert, Markus; Maathuis, Annet; Rajilić-Stojanović, Mirjana; de Graaf, Albert A; Smidt, Hauke; de Vos, Willem M; Venema, Koen
2009-04-01
Carbohydrates, including starches, are an important energy source for humans, and are known for their interactions with the microbiota in the digestive tract. Largely, those interactions are thought to promote human health. Using 16S ribosomal RNA (rRNA)-based stable isotope probing (SIP), we identified starch-fermenting bacteria under human colon-like conditions. To the microbiota of the TIM-2 in vitro model of the human colon 7.4 g l(-1) of [U-(13)C]-starch was added. RNA extracted from lumen samples after 0 (control), 2, 4 and 8 h was subjected to density-gradient ultracentrifugation. Terminal-restriction fragment length polymorphism (T-RFLP) fingerprinting and phylogenetic analyses of the labelled and unlabelled 16S rRNA suggested populations related to Ruminococcus bromii, Prevotella spp. and Eubacterium rectale to be involved in starch metabolism. Additionally, 16S rRNA related to that of Bifidobacterium adolescentis was abundant in all analysed fractions. While this might be due to the enrichment of high-GC RNA in high-density fractions, it could also indicate an active role in starch fermentation. Comparison of the T-RFLP fingerprints of experiments performed with labelled and unlabelled starch revealed Ruminococcus bromii as the primary degrader in starch fermentation in the studied model, as it was found to solely predominate in the labelled fractions. LC-MS analyses of the lumen and dialysate samples showed that, for both experiments, starch fermentation primarily yielded acetate, butyrate and propionate. Integration of molecular and metabolite data suggests metabolic cross-feeding in the system, where populations related to Ruminococcus bromii are the primary starch degrader, while those related to Prevotella spp., Bifidobacterium adolescentis and Eubacterium rectale might be further involved in the trophic chain.
Free fatty acid metabolism of the human heart at rest
Most, Albert S.; Brachfeld, Norman; Gorlin, Richard; Wahren, John
1969-01-01
Myocardial substrate metabolism was studied in 13 subjects at the time of diagnostic cardiac catheterization by means of palmitic acid-14C infusion with arterial and coronary sinus sampling. Two subjects were considered free of cardiac pathology and all, with one exception, demonstrated lactate extraction across the portion of heart under study. Data for this single lactate-producing subject were treated separately. The fractional extraction of 14C-labeled free fatty acids (FFA) (44.4±9.5%) was nearly twice that of unlabeled FFA (23.2±7.8%) and raised the possibility of release of FFA into the coronary sinus. FFA uptake, based on either the arterial minus coronary sinus concentration difference or the FFA-14C fractional extraction, was directly proportional to the arterial FFA concentration. Gas-liquid chromatography failed to demonstrate selective handling of any individual FFA by the heart. Fractional oxidation of FFA was 53.5±12.7%, accounting for 53.2±14.4% of the heart's oxygen consumption while nonlipid substrates accounted for an additional 30.0±17.3%. Determinations of both labeled and unlabeled triglycerides suggested utilization of this substrate by the fasting human heart. Direct measurement of FFA fractional oxidation as well as FFA uptake, exclusive of possible simultaneous FFA release, would appear necessary in studies concerned with human myocardial FFA metabolism. PMID:5794244
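Fractional extraction as used above is the arteriovenous concentration difference normalized to the arterial level; a one-line sketch with illustrative numbers (not the study's raw data):

```python
def fractional_extraction(arterial, coronary_sinus):
    """Myocardial fractional extraction: (A - CS) / A across the coronary bed."""
    return (arterial - coronary_sinus) / arterial

# Illustrative values chosen to give ~44% extraction, as reported for
# 14C-labeled FFA in the abstract
print(round(100 * fractional_extraction(1.00, 0.56), 1))  # 44.0
```

The paper's key observation is that this quantity computed from labeled FFA (~44%) is nearly twice that from unlabeled FFA (~23%), hinting at simultaneous FFA release into the coronary sinus.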
Bumbaca, Daniela; Xiang, Hong; Boswell, C Andrew; Port, Ruediger E; Stainton, Shannon L; Mundo, Eduardo E; Ulufatu, Sheila; Bagri, Anil; Theil, Frank-Peter; Fielder, Paul J; Khawli, Leslie A; Shen, Ben-Quan
2012-01-01
BACKGROUND AND PURPOSE: Neuropilin-1 (NRP1) is a VEGF receptor that is widely expressed in normal tissues and is involved in tumour angiogenesis. MNRP1685A is a rodent and primate cross-binding human monoclonal antibody against NRP1 that inhibits tumour growth in NRP1-expressing preclinical models. However, widespread NRP1 expression in normal tissues may affect MNRP1685A tumour uptake. The objective of this study was to assess MNRP1685A biodistribution in tumour-bearing mice to understand the relationships between dose, non-tumour tissue uptake and tumour uptake. EXPERIMENTAL APPROACH: Non-tumour-bearing mice were given unlabelled MNRP1685A at 10 mg·kg−1. Tumour-bearing mice were given 111In-labelled MNRP1685A along with increasing amounts of unlabelled antibody. Blood and tissues were collected from all animals to determine drug concentration (unlabelled) or radioactivity level (radiolabelled). Some animals were imaged using single photon emission computed tomography with X-ray computed tomography. KEY RESULTS: MNRP1685A displayed faster serum clearance than pertuzumab, indicating that target binding affected MNRP1685A clearance. I.v. administration of 111In-labelled MNRP1685A to tumour-bearing mice yielded minimal radioactivity in the plasma and tumour, but high levels in the lungs and liver. Co-administration of unlabelled MNRP1685A with the radiolabelled antibody competitively blocked lung and liver radioactivity uptake in a dose-dependent manner while augmenting plasma and tumour radioactivity levels. CONCLUSIONS AND IMPLICATIONS: These results indicate that saturation of non-tumour tissue uptake is required in order to achieve tumour uptake and acceptable exposure to antibody. Use of a rodent and primate cross-binding antibody allows for translation of these results to clinical settings. PMID:22074316
Binding, uptake, and release of nicotine by human gingival fibroblasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanes, P.J.; Schuster, G.S.; Lubas, S.
1991-02-01
Previous studies of the effects of nicotine on fibroblasts have reported an altered morphology and attachment of fibroblasts to substrates and disturbances in protein synthesis and secretion. This altered functional and attachment response may be associated with changes in the cell membrane resulting from binding of the nicotine, or to disturbances in cell metabolism as a result of high intracellular levels of nicotine. The purpose of the present study, therefore, was to (1) determine whether gingival fibroblasts bound nicotine and if any binding observed was specific or non-specific in nature; (2) determine whether gingival fibroblasts internalized nicotine, and if so, at what rate; (3) determine whether gingival fibroblasts also released nicotine back into the extracellular environment; and (4) determine whether gingival fibroblasts release nicotine intact or as a metabolite. Cultures of gingival fibroblasts were prepared from gingival connective tissue biopsies. Binding was evaluated at 4°C using a mixture of 3H-nicotine and unlabeled nicotine. Specific binding was calculated as the difference between 3H-nicotine bound in the presence and absence of unlabeled nicotine. The cells bound 1.44 (+/- 0.42) pmols/10(6) cells in the presence of unlabeled nicotine and 1.66 (+/- 0.55) pmols/10(6) cells in the absence of unlabeled nicotine. The difference was not significant. Uptake of nicotine was measured at 37°C after treating cells with 3H-nicotine for time periods up to 4 hours. Uptake in pmols/10(6) cells was 4.90 (+/- 0.34) at 15 minutes, 8.30 (+/- 0.75) at 30 minutes, 12.28 (+/- 2.62) at 1 hour, and 26.31 (+/- 1.15) at 4 hours.
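The specific-binding arithmetic in this abstract is the classic total-minus-nonspecific subtraction from radioligand assays; a sketch using the reported values:

```python
def specific_binding(total, nonspecific):
    """Specific binding = total binding (labeled ligand alone) minus
    nonspecific binding (labeled ligand plus excess unlabeled competitor)."""
    return total - nonspecific

# Values from the abstract, in pmol per 10^6 cells: binding without
# competitor (1.66) vs with unlabeled nicotine present (1.44)
print(round(specific_binding(1.66, 1.44), 2))  # 0.22
```

The small, statistically non-significant difference is the abstract's evidence that nicotine binding to these fibroblasts is essentially non-specific.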
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) provide new communication interfaces that translate brain activities into control signals for devices such as computers and robots. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM to translate the features extracted from electrical recordings of the brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training the semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to online BCI systems. Additionally, many studies suggest that the common spatial pattern (CSP) is very effective in discriminating between two different brain states. However, CSP needs a sufficiently large labeled data set. To overcome this drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.
2016-01-01
Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685
Holographic deep learning for rapid optical screening of anthrax spores
Jo, YoungJu; Park, Sangjin; Jung, JaeHwang; Yoon, Jonghee; Joo, Hosung; Kim, Min-hyeok; Kang, Suk-Jo; Choi, Myung Chul; Lee, Sang Yup; Park, YongKeun
2017-01-01
Establishing early warning systems for anthrax attacks is crucial in biodefense. Despite numerous studies for decades, the limited sensitivity of conventional biochemical methods essentially requires preprocessing steps and thus has limitations to be used in realistic settings of biological warfare. We present an optical method for rapid and label-free screening of Bacillus anthracis spores through the synergistic application of holographic microscopy and deep learning. A deep convolutional neural network is designed to classify holographic images of unlabeled living cells. After training, the network outperforms previous techniques in all accuracy measures, achieving single-spore sensitivity and subgenus specificity. The unique “representation learning” capability of deep learning enables direct training from raw images instead of manually extracted features. The method automatically recognizes key biological traits encoded in the images and exploits them as fingerprints. This remarkable learning ability makes the proposed method readily applicable to classifying various single cells in addition to B. anthracis, as demonstrated for the diagnosis of Listeria monocytogenes, without any modification. We believe that our strategy will make holographic microscopy more accessible to medical doctors and biomedical scientists for easy, rapid, and accurate point-of-care diagnosis of pathogens. PMID:28798957
Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
Karpatne, A.; Kumar, V.
2017-12-01
Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data, such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. groundwater flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success) and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science, which we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
Learning feature representations with a cost-relevant sparse autoencoder.
Längkvist, Martin; Loutfi, Amy
2015-02-01
There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.
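A minimal sketch of a weighted reconstruction error of the kind described above, assuming user-supplied per-feature weights (the paper's exact cost function may differ):

```python
import numpy as np

# Weighted reconstruction error: each input dimension gets a weight so
# that noisy, task-irrelevant inputs contribute less to the loss. The
# per-feature weighting scheme here is an illustrative assumption, not
# the paper's exact cost-relevant formulation.

def weighted_reconstruction_error(x, x_hat, w):
    """Mean of w_j * (x_j - x_hat_j)^2 over feature dimensions j."""
    return float(np.mean(w * (x - x_hat) ** 2))

x = np.array([1.0, 2.0, 3.0])
x_hat = np.array([1.0, 0.0, 3.0])       # large error on feature 1
w_uniform = np.ones(3)
w_down = np.array([1.0, 0.1, 1.0])      # down-weight the noisy feature

print(weighted_reconstruction_error(x, x_hat, w_uniform))  # ≈ 1.333
print(weighted_reconstruction_error(x, x_hat, w_down))     # ≈ 0.133
```

Down-weighting the noisy dimension keeps the autoencoder's capacity focused on task-relevant structure rather than on reconstructing noise.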
Active Learning with Irrelevant Examples
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri L.; Burl, Michael
2006-01-01
Active learning algorithms attempt to accelerate the learning process by requesting labels for the most informative items first. In real-world problems, however, there may exist unlabeled items that are irrelevant to the user's classification goals. Queries about these points slow down learning because they provide no information about the problem of interest. We have observed that when irrelevant items are present, active learning can perform worse than random selection, requiring more time (queries) to achieve the same level of accuracy. Therefore, we propose a novel approach, Relevance Bias, in which the active learner combines its default selection heuristic with the output of a simultaneously trained relevance classifier to favor items that are likely to be both informative and relevant. In our experiments on a real-world problem and two benchmark datasets, the Relevance Bias approach significantly improved the learning rate of three different active learning approaches.
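The core idea of Relevance Bias, combining the active learner's default informativeness score with the output of a relevance classifier, can be sketched as follows; the multiplicative combination rule is an assumption for illustration, not necessarily the paper's exact heuristic:

```python
# Relevance Bias favors query candidates that are likely to be BOTH
# informative and relevant. Here each candidate carries an
# (informativeness, p_relevant) pair; the product combination is an
# illustrative assumption.

def relevance_biased_score(informativeness: float, p_relevant: float) -> float:
    return informativeness * p_relevant

candidates = {
    "a": (0.9, 0.10),   # very informative but probably irrelevant
    "b": (0.6, 0.90),   # moderately informative and likely relevant
    "c": (0.3, 0.95),
}
best = max(candidates, key=lambda k: relevance_biased_score(*candidates[k]))
print(best)  # 'b': 0.54 beats 0.09 ('a') and 0.285 ('c')
```

Without the bias, a pure informativeness criterion would query 'a' and waste a label on an irrelevant item.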
Unlabelled advertorials in Slovenian life-style press: a study of the promotion of health products.
Kovacic, Melita Poler; Erjavec, Karmen; Stular, Katarina
2011-01-01
The paper analyses unlabelled advertorials about health products in four life-style magazines and three daily newspapers' life-style supplements in Slovenia. Based on 250 hours of observing the production practice, 20 in-depth interviews with the main participants and a textual analysis of 247 advertorials, supported by three detailed case studies, the process of unlabelled advertorial production was unveiled, reasons for their production explained and their discursive elements of promotion uncovered. Despite their typical news-like appearance, advertorials focus on a product's positive characteristics only and represent an oversimplified viewpoint on health, primarily oriented towards the interest of the pharmaceutical industry. In advertorials, readers are instructed in healthy living and caring about their health through buying the promoted product. No particular differences were found between the magazines and the quality dailies' supplements, indicating that the advertorial practice has become a common part of the Slovenian press media scene. The proliferation of advertorials in Slovenia is striking given the country's short democratic history, problems with the supervision of legal transgressions, the small media and advertising market, economic downturns and the financial weakness of the media.
Jing, Li; Amster, I Jonathan
2009-10-15
Offline high performance liquid chromatography combined with matrix-assisted laser desorption and Fourier transform ion cyclotron resonance mass spectrometry (HPLC-MALDI-FTICR/MS) provides the means to rapidly analyze complex mixtures of peptides, such as those produced by proteolytic digestion of a proteome. This method is particularly useful for making quantitative measurements of changes in protein expression by using (15)N-metabolic labeling. Proteolytic digestion of combined labeled and unlabeled proteomes produces complex mixtures with many mass overlaps when analyzed by HPLC-MALDI-FTICR/MS. A significant challenge to data analysis is the matching of pairs of peaks which represent an unlabeled peptide and its labeled counterpart. We have developed an algorithm and incorporated it into a computer program which significantly accelerates the interpretation of (15)N metabolic labeling data by automating the process of identifying unlabeled/labeled peak pairs. The algorithm takes advantage of the high resolution and mass accuracy of FTICR mass spectrometry. The algorithm is shown to successfully identify the (15)N/(14)N peptide pairs and calculate peptide relative abundance ratios in highly complex mixtures from the proteolytic digest of a whole-organism protein extract.
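The pairing logic can be sketched as follows: a fully (15)N-labeled peptide with n nitrogen atoms is shifted in mass by n times the (15)N-(14)N mass difference, so for each candidate light peak one searches the peak list for a partner near the predicted heavy mass. The tolerance and peak values below are illustrative, not from the paper:

```python
# For a peptide containing n nitrogen atoms, full 15N labeling shifts the
# monoisotopic mass by n * (m(15N) - m(14N)) ≈ n * 0.997035 Da. FTICR mass
# accuracy justifies a tight matching tolerance.

DELTA_15N = 15.0001089 - 14.0030740   # ≈ 0.9970349 Da per nitrogen atom

def find_pair(light_mass, n_nitrogens, peaks, tol=0.005):
    """Return the first peak within tol Da of the predicted heavy mass,
    or None if no labeled partner is found."""
    target = light_mass + n_nitrogens * DELTA_15N
    for p in peaks:
        if abs(p - target) <= tol:
            return p
    return None

peaks = [1000.500, 1010.467, 1234.567]          # illustrative peak list (Da)
heavy = find_pair(1000.500, 10, peaks)          # predicted ≈ 1010.470 Da
print(heavy)  # matches the 1010.467 peak within tolerance
```

The ratio of the light and heavy peak intensities then gives the relative abundance for that peptide.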
A Simple Label Switching Algorithm for Semisupervised Structural SVMs.
Balamurugan, P; Shevade, Shirish; Sundararajan, S
2015-10-01
In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and avoiding poor local minima, which are not very useful. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.
The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Barry Y.
The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or “features” are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world’s largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.
A clustering algorithm for sample data based on environmental pollution characteristics
NASA Astrophysics Data System (ADS)
Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun
2015-04-01
Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
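The main clustering loop described above can be sketched as a leader-style algorithm: the first unlabelled point seeds a cluster centre, each subsequent point joins its most similar centre if it passes the user-defined threshold (otherwise it seeds a new cluster), and centres are then refined k-Means-style. Euclidean distance as the (dis)similarity measure and a single refinement pass are assumptions for illustration:

```python
import numpy as np

def epc_cluster(X, threshold):
    """Leader-style clustering sketch: threshold-gated assignment to the
    nearest centre, new-cluster creation otherwise, then one k-Means-style
    centre update. Singleton clusters can be flagged as outliers."""
    centres, labels = [], []
    for x in X:
        if centres:
            d = [np.linalg.norm(x - c) for c in centres]
            j = int(np.argmin(d))
            if d[j] <= threshold:
                labels.append(j)
                continue
        centres.append(x.copy())          # point seeds a new cluster
        labels.append(len(centres) - 1)
    labels = np.array(labels)
    for j in range(len(centres)):         # one refinement pass
        centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [20.0, 20.0]])
labels, centres = epc_cluster(X, threshold=1.0)
print(labels.tolist())  # [0, 0, 1, 1, 2]: the last point is a singleton/outlier
```

In the EPC setting, the similarity function would encode pollution characteristics (sources and concentrations) rather than plain Euclidean distance.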
Heimann, Tobias; Mountney, Peter; John, Matthias; Ionasec, Razvan
2014-12-01
The fusion of image data from trans-esophageal echography (TEE) and X-ray fluoroscopy is attracting increasing interest in minimally-invasive treatment of structural heart disease. In order to calculate the needed transformation between both imaging systems, we employ a discriminative learning (DL) based approach to localize the TEE transducer in X-ray images. The successful application of DL methods is strongly dependent on the available training data, which entails three challenges: (1) the transducer can move with six degrees of freedom, meaning it requires a large number of images to represent its appearance, (2) manual labeling is time consuming, and (3) manual labeling has inherent errors. This paper proposes to generate the required training data automatically from a single volumetric image of the transducer. In order to adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. Two approaches for instance weighting, probabilistic classification and Kullback-Leibler importance estimation (KLIEP), are evaluated for different stages of the proposed DL pipeline. An analysis on more than 1900 images reveals that our approach reduces detection failures from 7.3% in cross validation on the test set to zero and improves the localization error from 1.5 to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal efforts. Copyright © 2014 Elsevier B.V. All rights reserved.
Adaptive sequential Bayesian classification using Page's test
NASA Astrophysics Data System (ADS)
Lynch, Robert S., Jr.; Willett, Peter K.
2002-03-01
In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
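Page's test itself can be sketched as a clipped cumulative sum of log-likelihood ratios: the statistic is reset at zero (the repeated-SPRT lower threshold) and a change, or here a class, is declared when the upper threshold h is crossed. The Gaussian mean-shift likelihood below is an illustrative choice, not the paper's feature model:

```python
# CUSUM form of Page's test: s_k = max(0, s_{k-1} + LLR(x_k)), alarm when
# s_k >= h. With Gaussian observations, the LLR of mean mu1 vs. mu0 is
# ((x - mu0)^2 - (x - mu1)^2) / (2 * sigma^2). The threshold h trades
# detection delay against the mean time between false alerts.

def page_test(samples, mu0, mu1, sigma, h):
    """Return the 1-based index of the alarm sample, or None."""
    s = 0.0
    for i, x in enumerate(samples, start=1):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)   # lower log-threshold clipped at zero
        if s >= h:
            return i
    return None

pre = [0.1, -0.2, 0.0, 0.1]    # observations around mu0 = 0 (no target)
post = [1.1, 0.9, 1.2, 1.0]    # change to mu1 = 1 (target class active)
alarm = page_test(pre + post, mu0=0.0, mu1=1.0, sigma=0.5, h=5.0)
print(alarm)  # alarm at sample 7: three post-change samples after index 4
```

Raising h lengthens the mean time between false alerts at the cost of a longer delay to declaring the target class, which is exactly the trade-off plotted in the paper.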
Brown, Laurie M; Casamassimo, Paul S; Griffen, Ann; Tatakis, Dimitris
2006-01-01
This study assessed the anti-calculus benefit of Crest Dual Action Whitening Toothpaste in gastrostomy (GT) children compared to a control anti-caries dentifrice. A double-blind randomized crossover design was used to compare the two dentifrices. A convenience sample of 24 GT subjects, 3-12 years old, was given a consensus baseline Volpe-Manhold Index calculus score by 2 trained examiners, followed by a dental prophylaxis to remove all calculus. Each child was randomly assigned to either the study or control dentifrice group. Caregivers brushed subjects' teeth twice daily with the unlabelled dentifrice for at least 45 seconds. Calculus was scored at 8 weeks (+/- 1 week) by the same investigators. Subjects then had a prophylaxis and received the alternative dentifrice. Subjects returned 8 weeks (+/- 1 week) later for final calculus scoring. The study dentifrice significantly reduced supragingival calculus from baseline by 58% compared to the control dentifrice (p<0.005). Calculus levels decreased by 68% over the study duration, irrespective of dentifrice. ANOVA found no significant differences in calculus scores based on gender, race, history of reflux, aspiration pneumonia, or oral intake of food. Calculus was significantly related to history of aspiration pneumonia (p<0.05). Crest Dual Action Whitening Toothpaste was effective and better than the anti-caries control dentifrice in reducing calculus in GT children.
Screening unlabeled DNA targets with randomly ordered fiber-optic gene arrays.
Steemers, F J; Ferguson, J A; Walt, D R
2000-01-01
We have developed a randomly ordered fiber-optic gene array for rapid, parallel detection of unlabeled DNA targets with surface immobilized molecular beacons (MB) that undergo a conformational change accompanied by a fluorescence change in the presence of a complementary DNA target. Microarrays are prepared by randomly distributing MB-functionalized 3-microm diameter microspheres in an array of wells etched in a 500-microm diameter optical imaging fiber. Using several MBs, each designed to recognize a different target, we demonstrate the selective detection of genomic cystic fibrosis related targets. Positional registration and fluorescence response monitoring of the microspheres was performed using an optical encoding scheme and an imaging fluorescence microscope system.
Method for the Simultaneous Quantitation of Apolipoprotein E Isoforms using Tandem Mass Spectrometry
Wildsmith, Kristin R.; Han, Bomie; Bateman, Randall J.
2009-01-01
Using Apolipoprotein E (ApoE) as a model protein, we developed a protein isoform analysis method utilizing Stable Isotope Labeling Tandem Mass Spectrometry (SILT MS). ApoE isoforms are quantitated using the intensities of the b and y ions of the 13C-labeled tryptic isoform-specific peptides versus unlabeled tryptic isoform-specific peptides. The ApoE protein isoform analysis using SILT allows for the simultaneous detection and relative quantitation of different ApoE isoforms from the same sample. This method provides a less biased assessment of ApoE isoforms compared to antibody-dependent methods, and may lead to a better understanding of the biological differences between isoforms. PMID:19653990
Black Nitrogen as a source for the build-up of microbial biomass in soils
NASA Astrophysics Data System (ADS)
López-Martín, María; Milter, Anja; Knicker, Heike
2016-04-01
In areas with frequent wildfires, soil organic nitrogen (SON) is sequestered in pyrogenic organic matter (PyOM) due to heat-induced transformation of proteinaceous compounds into N-heterocycles, i.e. pyrrole, imidazole and indole compounds. These newly formed structures, known as Black Nitrogen (BN), have been assumed to be hardly degradable by microorganisms, thus being efficiently sequestered from the N cycle. On the other hand, a previous study showed that nitrogen of BN can be used by plants for the build-up of their biomass (de la Rosa and Knicker 2011). Thus, BN may play an important role as an N source during the recovery of the forest after a fire event. In order to obtain a more profound understanding of the role of BN within the N cycle in soils, we studied the bioavailability and incorporation of N derived from PyOM into microbial amino acids. For that, pots with soil from a burnt and an unburnt Cambisol located under a Mediterranean forest were covered with different amendments. The toppings were mixtures of unlabeled KNO3 with 15N-labeled grass or 15N-labeled PyOM from burned grass, and K15NO3 mixed with unlabeled grass material or PyOM. The pots were kept in the greenhouse under controlled conditions for 16 months and were sampled after 0.5, 1, 5, 8 and 16 months. From all samples the amino acids were extracted after hydrolysis (6 M HCl, 22 h, 110 °C) and quantified via gas chromatography mass spectrometry (GC/MS). The fate of 15N was followed by isotope ratio mass spectrometry (IRMS). The results show that the contribution of extractable amino acids to total soil organic matter was always higher in the unburnt than in the burnt soil. However, with ongoing incubation their amount decreased. Already after 0.5 months, some PyOM-derived 15N was incorporated into the extractable amino acids, and the amount increased over the course of the experiment.
Since this can only occur after prior microbial degradation of PyOM, our results clearly support a lower biochemical recalcitrance of N-rich charred residues than formerly assumed. Our experiment further demonstrated that, aside from being incorporated into plants (de la Rosa and Knicker 2011), the released PyOM-N can also be used for the build-up of new microbial biomass. ACKNOWLEDGEMENT The Ministerio de Economía y Competitividad de España, the European Regional Development Fund (ERDF) and the IHSS Training Award are acknowledged for financial support of the project (CGL2009-10557) and the travel and stay of María López-Martín at the Helmholtz Center for Environmental Research UFZ. The latter is thanked greatly for hosting the awardee. REFERENCES de la Rosa, J. M. and H. Knicker (2011). "Bioavailability of N released from N-rich pyrogenic organic matter: An incubation study." Soil Biology and Biochemistry 43(12): 2368-2373.
Jia, Yun-Fang; Gao, Chun-Ying; He, Jia; Feng, Dao-Fu; Xing, Ke-Li; Wu, Ming; Liu, Yang; Cai, Wen-Sheng; Feng, Xi-Zeng
2012-08-21
Multi-biomarker assays are of great significance in clinical diagnosis. A label-free system for the parallel detection of multiple tumor markers was proposed based on a light addressable potentiometric sensor (LAPS). Arrayed LAPS chips with the basic structure Si(3)N(4)-SiO(2)-Si were prepared on silicon wafers, and a label-free parallel detection system was developed for this component with user-friendly controlling interfaces. An l-3,4-dihydroxyphenylalanine (L-Dopa) hydrochloric solution was then used to initiate the surface of the LAPS. The L-Dopa immobilization state was investigated by theoretical calculation. The L-Dopa-initiated LAPS chip was biofunctionalized with the antigens and antibodies of four tumor markers: α-fetoprotein (AFP), carcinoembryonic antigen (CEA), cancer antigen 19-9 (CA19-9) and ferritin. Unlabeled antibodies and antigens of these four biomarkers were then detected by the proposed detection system. Furthermore, the physical and measuring principles of this system are described, and a qualitative understanding of the experimental data is given. The measured response ranges were compared with their clinical cutoff values, and sensitivities were calculated with OriginLab. The results indicate that this bio-initiated LAPS-based label-free detection system may offer a new choice for realizing label-free clinical assays of multiple tumor markers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadam, K.K.; Drew, S.W.
1986-01-01
The biodegradation of lignin by fungi was studied in shake flasks using (14)C-labeled kraft lignin and in a deep-tank fermentor using unlabeled kraft lignin. Among the fungi screened, A. fumigatus, isolated in our laboratories, was the most potent in lignin biotransformation. Dialysis-type fermentation, designed to study possible accumulation of low-MW lignin-derived products, showed no such accumulation. Recalcitrant carbohydrates like microcrystalline cellulose supported higher lignolytic activity than easily metabolized carbohydrates like cellobiose. An assay developed to distinguish between CO2 evolved from lignin and carbohydrate substrates demonstrated no stoichiometric correlation between the metabolism of the two cosubstrates. The submerged fermentations with unlabeled lignin are difficult to monitor since chemical assays do not give accurate and true results. Lignolytic efficiencies that allowed monitoring of such fermentations were defined. A. fumigatus was clearly superior to C. versicolor in all aspects of lignin degradation; A. fumigatus brought about substantial demethoxylation and dehydroxylation, whereas C. versicolor-degraded lignins closely resembled undegraded kraft lignin. There was good agreement among the different indices of lignin degradation, namely (14)CO2 evolution, OCH3 loss, OH loss, and monomer and dimer yield after permanganate oxidation.
Zhang, Jie; Chen, Yuewen; Shao, Yong; Wu, Qi; Guan, Ming; Zhang, Wei; Wan, Jun; Yu, Bo
2012-01-01
Background. TNFα-induced protein 3 (TNFAIP3)-interacting protein 1 (TNIP1) acts as a negative regulator of NF-κB and plays an important role in maintaining the homeostasis of the immune system. A recent genome-wide association study (GWAS) showed that a polymorphism of TNIP1 was associated with the disease risk of SLE in Caucasians. In this study, we investigated whether the association of TNIP1 with SLE was replicated in a Chinese population. Methods. The association of TNIP1 SNP rs7708392 (G/C) was determined by high resolution melting (HRM) analysis with an unlabeled probe in 285 SLE patients and 336 healthy controls. Results. A new SNP, rs79937737, located 5 bp upstream of rs7708392, was discovered during the HRM analysis. No association of rs7708392 or rs79937737 with the disease risk of SLE was found. Furthermore, rs7708392 and rs79937737 were in weak linkage disequilibrium (LD). Haplotype analysis of the two SNPs also showed no association with SLE in the Chinese population. Conclusions. High resolution melting analysis with unlabeled probes proves to be a powerful and efficient genotyping method for identifying and screening SNPs. No association of rs7708392 or rs79937737 with the disease risk of SLE was observed in the Chinese population. PMID:22852072
Quantification of Superparamagnetic Iron Oxide (SPIO)-labeled Cells Using MRI
Rad, Ali M; Arbab, Ali S; Iskander, ASM; Jiang, Quan; Soltanian-Zadeh, Hamid
2015-01-01
Purpose To show the feasibility of using magnetic resonance imaging (MRI) to quantify superparamagnetic iron oxide (SPIO)-labeled cells. Materials and Methods Lymphocytes and 9L rat gliosarcoma cells were labeled with Ferumoxides-Protamine Sulfate complex (FE-PRO). Cells were labeled efficiently (more than 95%) and iron concentration inside each cell was measured by spectrophotometry (4.77-30.21 picograms). Phantom tubes containing different number of labeled or unlabeled cells as well as different concentrations of FE-PRO were made. In addition, labeled and unlabeled cells were injected into fresh and fixed rat brains. Results Cellular viability and proliferation of labeled and unlabeled cells were shown to be similar. T2-weighted images were acquired using 7 T and 3 T MRI systems and R2 maps of the tubes containing cells, free FE-PRO, and brains were made. There was a strong linear correlation between R2 values and labeled cell numbers but the regression lines were different for the lymphocytes and gliosarcoma cells. Similarly, there was strong correlation between R2 values and free iron. However, free iron had higher R2 values than the labeled cells for the same concentration of iron. Conclusion Our data indicated that in vivo quantification of labeled cells can be done by careful consideration of different factors and specific control groups. PMID:17623892
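The quantification above rests on a linear calibration between the MRI relaxation rate R2 and the number of labeled cells, fitted separately per cell type. A minimal sketch, using synthetic phantom values rather than the paper's data:

```python
import numpy as np

# Fit R2 = a * n_cells + b on phantom measurements, then invert the line
# to estimate labeled cell number from a measured R2. All numbers below
# are synthetic; the paper fits separate lines for lymphocytes and
# gliosarcoma cells and controls for free (unincorporated) iron.

n_cells = np.array([0.0, 1e5, 2e5, 4e5, 8e5])   # cells per phantom tube
r2 = 10.0 + 2.5e-5 * n_cells                    # synthetic R2 values (1/s)
a, b = np.polyfit(n_cells, r2, 1)               # slope, intercept

def cells_from_r2(r2_measured):
    """Invert the calibration line to estimate labeled cell number."""
    return (r2_measured - b) / a

print(round(cells_from_r2(20.0)))  # ≈ 400000 cells for R2 = 20/s
```

Because free iron yields higher R2 than the same iron inside cells, applying the wrong calibration line would bias the estimate, which is why the study stresses cell-type-specific controls.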
Semi-supervised learning for ordinal Kernel Discriminant Analysis.
Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C
2016-12-01
Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
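Computing neighbourhood information directly in the feature space induced by the kernel, as proposed above, requires only kernel evaluations, since ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y). A minimal sketch with an RBF kernel (function names are ours):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_distance(x, y, kernel):
    # ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)
    d2 = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
    return np.sqrt(max(d2, 0.0))  # guard against tiny negative round-off

x = np.array([0.0, 0.0])
y = np.array([1.0, 1.0])
d = feature_space_distance(x, y, rbf_kernel)
```

The same identity lets nearest-neighbour graphs over unlabelled data be built in the kernel-induced space without ever forming φ explicitly.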
Bumbaca, Daniela; Xiang, Hong; Boswell, C Andrew; Port, Ruediger E; Stainton, Shannon L; Mundo, Eduardo E; Ulufatu, Sheila; Bagri, Anil; Theil, Frank-Peter; Fielder, Paul J; Khawli, Leslie A; Shen, Ben-Quan
2012-05-01
Neuropilin-1 (NRP1) is a VEGF receptor that is widely expressed in normal tissues and is involved in tumour angiogenesis. MNRP1685A is a rodent- and primate-cross-binding human monoclonal antibody against NRP1 that inhibits tumour growth in NRP1-expressing preclinical models. However, widespread NRP1 expression in normal tissues may affect MNRP1685A tumour uptake. The objective of this study was to assess MNRP1685A biodistribution in tumour-bearing mice to understand the relationships between dose, non-tumour tissue uptake and tumour uptake. Non-tumour-bearing mice were given unlabelled MNRP1685A at 10 mg·kg(-1). Tumour-bearing mice were given (111)In-labelled MNRP1685A along with increasing amounts of unlabelled antibody. Blood and tissues were collected from all animals to determine drug concentration (unlabelled) or radioactivity level (radiolabelled). Some animals were imaged using single photon emission computed tomography - X-ray computed tomography. MNRP1685A displayed faster serum clearance than pertuzumab, indicating that target binding affected MNRP1685A clearance. I.v. administration of (111)In-labelled MNRP1685A to tumour-bearing mice yielded minimal radioactivity in the plasma and tumour, but high levels in the lungs and liver. Co-administration of unlabelled MNRP1685A with the radiolabelled antibody competitively blocked lung and liver radioactivity uptake in a dose-dependent manner while augmenting plasma and tumour radioactivity levels. These results indicate that saturation of non-tumour tissue uptake is required in order to achieve tumour uptake and acceptable exposure to antibody. Utilization of a rodent- and primate-cross-binding antibody allows for translation of these results to clinical settings. © 2011 Genentech Inc. British Journal of Pharmacology © 2011 The British Pharmacological Society.
Phaeton, Rébécca; Wang, Xing Guo; Einstein, Mark H.; Goldberg, Gary L.; Casadevall, Arturo; Dadachova, Ekaterina
2009-01-01
Background Human Papillomavirus (HPV) infection is considered a necessary step for the development of cervical cancer, and >95% of all cervical cancers have detectable HPV sequences. We have recently demonstrated the efficacy of radioimmunotherapy (RIT) targeting the viral oncoprotein E6 in the treatment of experimental cervical cancer. We hypothesized that pre-treatment of tumor cells with various agents which cause cell death and/or elevation of E6 levels would increase the accumulation of radiolabeled antibodies to E6 in cervical tumors. Methods HPV-16-positive CasKi cells were treated in vitro with up to 6 Gy of external radiation, the proteasome inhibitor MG-132, or unlabeled anti-E6 antibody C1P5, and cell death was assessed. Biodistribution of 188Rhenium (188Re)-labeled C1P5 antibody was performed in both control and radiation- or MG-132-treated CasKi tumor-bearing nude mice. Results 188Re-C1P5 antibody demonstrated tumor specificity and very low uptake in, and fast clearance from, the major organs. Tumor uptake was enhanced by MG-132 but was unaffected by pre-treatment with radiation. In addition, in vitro studies demonstrated an unanticipated effect of unlabeled antibody on the amount of cell death, a finding that was suggested by our previous in vivo studies in the CasKi tumor model. Conclusion We demonstrated that pre-treatment of cervical tumors with the proteasome inhibitor MG-132 and with unlabeled antibody to E6 can serve as a means to generate non-viable cancer cells and to elevate the levels of target oncoproteins in the cells, increasing the accumulation of targeted radiolabeled antibodies in tumors. These results favor further development of RIT of cervical cancers targeting viral antigens. PMID:20127955
Detection of eardrum abnormalities using ensemble deep learning approaches
NASA Astrophysics Data System (ADS)
Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.
2018-02-01
In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network that takes advantage of auto-encoders, using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were 84.4% (+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
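Ensembling two networks whose errors only partially overlap (32% here) can be as simple as averaging their output probabilities; a minimal sketch (the equal weighting below is our illustration, not necessarily the paper's exact combination rule):

```python
import numpy as np

def ensemble_predict(p1, p2, w1=0.5, w2=0.5):
    """Average the 'abnormal' probabilities of two networks and threshold.
    Returns 1 for abnormal, 0 for normal."""
    p = w1 * np.asarray(p1) + w2 * np.asarray(p2)
    return (p >= 0.5).astype(int)

# Toy outputs for three eardrum images from Network 1 and Network 2.
net1 = [0.9, 0.4, 0.2]
net2 = [0.7, 0.8, 0.1]
labels = ensemble_predict(net1, net2)
```

Because the second image is classified differently by the two networks, the ensemble decision depends on both probabilities rather than on either network alone.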
DNA stable-isotope probing (DNA-SIP).
Dunford, Eric A; Neufeld, Josh D
2010-08-02
DNA stable-isotope probing (DNA-SIP) is a powerful technique for identifying active microorganisms that assimilate particular carbon substrates and nutrients into cellular biomass. As such, this cultivation-independent technique has been an important methodology for assigning metabolic function to the diverse communities inhabiting a wide range of terrestrial and aquatic environments. Following the incubation of an environmental sample with stable-isotope labelled compounds, extracted nucleic acid is subjected to density gradient ultracentrifugation and subsequent gradient fractionation to separate nucleic acids of differing densities. Purification of DNA from cesium chloride retrieves labelled and unlabelled DNA for subsequent molecular characterization (e.g. fingerprinting, microarrays, clone libraries, metagenomics). This JoVE video protocol provides visual step-by-step explanations of the protocol for density gradient ultracentrifugation, gradient fractionation and recovery of labelled DNA. The protocol also includes sample SIP data and highlights important tips and cautions that must be considered to ensure a successful DNA-SIP analysis.
NASA Astrophysics Data System (ADS)
Langer, Gregor; Buchegger, Bianca; Jacak, Jaroslaw; Pfeffer, Karoline; Wohlfarth, Sven; Hannesschläger, Günther; Klar, Thomas A.; Berer, Thomas
2018-02-01
In this paper, multimodal optical-resolution frequency-domain photoacoustic and fluorescence scanning microscopy is demonstrated on labeled and unlabeled cells. In many molecules, excited electrons relax both radiatively and non-radiatively, leading to fluorescence and photoacoustic signals, respectively. Both signals can then be detected simultaneously. There also exist molecules, e.g. hemoglobin, which do not exhibit fluorescence but provide only photoacoustic signals. Other molecules, especially fluorescent dyes, preferentially exhibit fluorescence. The fluorescence quantum yield of a molecule, and with it the relative strength of the photoacoustic and fluorescence signals, depends on the local environment, e.g. on the pH. Therefore, the local distribution of the simultaneously recorded photoacoustic and fluorescence signals may be used to obtain information about the local chemistry.
Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...
2016-09-22
Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation (RSVP) task. For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data is available. This demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.
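STIG's ranking and combination of ensemble predictions can be loosely illustrated by its simplest degenerate case: an unweighted majority vote across per-subject classifiers (this sketch omits STIG's information-geometry-based ranking and weighting entirely; all names and data are ours):

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary (+1/-1) predictions from an ensemble of classifiers.
    predictions: (n_classifiers, n_trials) array of votes."""
    votes = np.sign(np.sum(predictions, axis=0))
    votes[votes == 0] = 1  # break ties toward the positive class
    return votes

# Toy predictions from three per-subject classifiers on four unlabeled trials.
preds = np.array([
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
])
combined = majority_vote(preds)
```

The point of the full method is precisely that such equal weighting is suboptimal: classifiers transferred from different subjects should be ranked and weighted before combination.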
FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection.
Noto, Keith; Brodley, Carla; Slonim, Donna
2012-01-01
Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called "normal" instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach.
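The feature-modeling idea above — learn to predict each feature from the others on normal data, then flag instances whose observed features disagree with those predictions — can be sketched with per-feature least-squares models (a simplification of FRaC, which uses ensembles of learners and an information-theoretic surprisal score):

```python
import numpy as np

def frac_scores(train, test):
    """Simplified feature-modeling anomaly scores: fit a linear model for
    each feature from the remaining features on normal training data, then
    score test instances by their summed squared prediction error,
    normalized by each model's residual variance."""
    n_features = train.shape[1]
    scores = np.zeros(len(test))
    for j in range(n_features):
        others = [k for k in range(n_features) if k != j]
        X = np.c_[train[:, others], np.ones(len(train))]  # add bias column
        coef, *_ = np.linalg.lstsq(X, train[:, j], rcond=None)
        sigma2 = (train[:, j] - X @ coef).var() + 1e-9
        Xt = np.c_[test[:, others], np.ones(len(test))]
        err = test[:, j] - Xt @ coef
        scores += err ** 2 / sigma2  # high "surprisal" = anomalous
    return scores

# Normal data obeys feature_1 ~= 2 * feature_0; the anomaly violates it.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
train = np.c_[x, 2 * x + rng.normal(scale=0.1, size=(200, 1))]
normal_pt = np.array([[1.0, 2.0]])
anomaly = np.array([[1.0, -2.0]])
```

Note that the anomaly is unremarkable in each feature separately; only the cross-feature models expose it, which is the core of the feature-modeling approach.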
The influence of ferucarbotran on the chondrogenesis of human mesenchymal stem cells
Henning, Tobias D; Sutton, Elizabeth J; Kim, Anne; Golovko, Daniel; Horvai, Andrew; Ackerman, Larry; Sennino, Barbara; McDonald, Donald; Lotz, Jeffrey; Daldrup-Link, Heike E
2010-01-01
For in vivo applications of magnetically labeled stem cells, biological effects of the labeling procedure have to be precluded. This study evaluates the effect of different Ferucarbotran cell labeling protocols on chondrogenic differentiation of human mesenchymal stem cells (hMSC) as well as their implications for MR imaging. hMSC were labeled with Ferucarbotran using various protocols: cells were labeled with 100 μg Fe/ml for 4 h and 18 h, and additional samples were cultured for 6 or 12 days after the 18-hour labeling. Supplementary samples were labeled by transfection with protamine sulfate. Iron uptake was quantified by ICP-spectrometry, and labeled cells were investigated by transmission electron microscopy and by immunostaining for Ferucarbotran. The differentiation potential of labeled cells was compared to unlabeled controls by staining with alcian blue and hematoxylin & eosin, then quantified by measurements of glycosaminoglycans (GAG). The contrast agent effect at 3 T was investigated on day 1 and day 14 of chondrogenic differentiation by measuring signal-to-noise ratios on T2-SE and T2*-GE sequences. Iron uptake was significant for all labeling protocols (p < 0.05). The uptake was highest after transfection with protamine sulfate (25.65 ± 3.96 pg/cell) and lowest at an incubation time of 4 h without transfection (3.21 ± 0.21 pg/cell). While chondrogenic differentiation was decreased using all labeling protocols, the decrease in GAG synthesis was not significant after labeling for 4 h without transfection. After labeling by simple incubation, chondrogenesis was found to be dose-dependent. MR imaging showed markedly lower SNR values for all labeled cells compared to the unlabeled controls. This contrast agent effect persisted for 14 days and the duration of differentiation. Magnetic labeling of hMSC with Ferucarbotran inhibits chondrogenesis in a dose-dependent manner when using simple incubation techniques. When the incubation time was decreased to 4 h, the inhibition of chondrogenesis was not significant. PMID:19670250
NASA Astrophysics Data System (ADS)
Fatehi, Moslem; Asadi, Hooshang H.
2017-04-01
In this study, the application of a transductive support vector machine (TSVM), an innovative semi-supervised learning algorithm, is proposed for mapping potential drill targets at a detailed exploration stage. Semi-supervised learning is a hybrid of the supervised and unsupervised learning approaches that simultaneously uses both training and non-training data to design a classifier. Using the TSVM algorithm, exploration layers at the Dalli porphyry Cu-Au deposit in central Iran were integrated to locate the boundary of the Cu-Au mineralization for further drilling. By applying this algorithm to the non-training (unlabeled) and limited training (labeled) Dalli exploration data, the study area was classified into two domains: Cu-Au ore and waste. The results were then validated against earlier block models created using the available borehole and trench data. In addition to TSVM, the support vector machine (SVM) algorithm was also implemented on the study area for comparison. Thirty percent of the labeled exploration data was used to evaluate the performance of these two algorithms. The results revealed 87 percent correct recognition accuracy for the TSVM algorithm and 82 percent for the SVM algorithm. The deepest inclined borehole, recently drilled in the western part of the Dalli deposit, indicated that the boundary of Cu-Au mineralization identified by the TSVM algorithm was only 15 m off the actual boundary intersected by this borehole. Based on the results of the TSVM algorithm, six new boreholes were suggested for further drilling at the Dalli deposit. This study showed that the TSVM algorithm can be a useful tool for delineating mineralization zones and, consequently, for more accurate drill hole planning.
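The transductive idea of exploiting unlabeled exploration data can be illustrated with a deliberately simplified self-training loop around a standard SVM (a stand-in sketch, not the TSVM used in the study, which optimizes the decision boundary jointly over labeled and unlabeled points; all data and names below are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, rounds=5, margin=0.5):
    """Iteratively absorb the most confidently classified unlabeled points
    (those far from the decision boundary) into the training set."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = SVC(kernel="rbf")
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        score = clf.decision_function(pool)  # signed margin-like score
        confident = np.abs(score) >= margin
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, (score[confident] > 0).astype(int)])
        pool = pool[~confident]
    return clf

# Toy "ore vs waste" data: 3 labeled points per class, the rest unlabeled.
rng = np.random.default_rng(1)
ore = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(40, 2))
waste = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(40, 2))
X_lab = np.vstack([ore[:3], waste[:3]])
y_lab = np.array([1, 1, 1, 0, 0, 0])
X_unlab = np.vstack([ore[3:], waste[3:]])
clf = self_training_svm(X_lab, y_lab, X_unlab)
```

The margin threshold controls how aggressively unlabeled points are pseudo-labeled; a true TSVM avoids this heuristic by penalizing unlabeled points that fall inside the margin during optimization.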
Heat Management Strategies for Solid-state NMR of Functional Proteins
Fowler, Daniel J.; Harris, Michael J.; Thompson, Lynmarie K.
2012-01-01
Modern solid-state NMR methods can acquire high-resolution protein spectra for structure determination. However, these methods use rapid sample spinning and intense decoupling fields that can heat and denature the protein being studied. Here we present a strategy to avoid destroying valuable samples. We advocate first creating a sacrificial sample, which contains unlabeled protein (or no protein) in buffer conditions similar to the intended sample. This sample is then doped with the chemical shift thermometer Sm2Sn2O7. We introduce a pulse scheme called TCUP (for Temperature Calibration Under Pulseload) that can characterize the heating of this sacrificial sample rapidly, under a variety of experimental conditions, and with high temporal resolution. Sample heating is discussed with respect to different instrumental variables such as spinning speed, decoupling strength and duration, and cooling gas flow rate. The effects of different sample preparation variables are also discussed, including ionic strength, the inclusion of cryoprotectants, and the physical state of the sample (i.e. liquid, solid, or slurry). Lastly, we discuss probe detuning as a measure of sample thawing that does not require retuning the probe or using chemical shift thermometer compounds. Use of detuning tests and chemical shift thermometers with representative sample conditions makes it possible to maximize the efficiency of the NMR experiment while retaining a functional sample. PMID:22868258
Wang, Sa; He, Hai-bo; Xiao, Shu-zhang; Wang, Jun-zhi; Bai, Cai-hong; Wei, Na; Zou, Kun
2014-08-01
It is well known that fluorescent labeling has recently become a major research tool in molecular and cellular biology for demonstrating therapeutic mechanisms and metabolic pathways. However, few studies have reported the use of fluorescent labeling of natural products. We recently explored the boron 2-(2'-pyridyl) imidazole (BOPIM) derivative analogs, which are highly fluorescent, non-aggregated, and nontoxic. In the present study, the natural product oleanolic acid (OA) was functionalized and labeled with BOPIM, thus yielding a highly fluorescent probe, the comparison of cardioprotective effects of labeled and unlabeled OAs with BOPIM on primary neonatal rat cardiomyocytes with hypoxia/reoxygenation (H/R) injury were investigated. Pretreatment with OA and BOPIM-OA significantly prevented the H/R induced cell death in primary neonatal rat cardiomyocytes. However, BOPIM exhibited no improvements on the H/R injury cardiomyocytes, and which were similar to those of the H/R group. The results of comparison of cardioprotective effects between labeled and unlabeled OAs with BOPIM showed that introducing the BOPIM chromophore did not make a difference with H/R injury cardiomyocytes. BOPIM chromophore is a suitable probe for investigating the pharmacological mechanisms of natural products. Copyright © 2014 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
Early steps of supported bilayer formation probed by single vesicle fluorescence assays.
Johnson, Joseph M; Ha, Taekjip; Chu, Steve; Boxer, Steven G
2002-01-01
We have developed a single vesicle assay to study the mechanisms of supported bilayer formation. Fluorescently labeled, unilamellar vesicles (30-100 nm diameter) were first adsorbed to a quartz surface at low enough surface concentrations to visualize single vesicles. Fusion and rupture events during the bilayer formation, induced by the subsequent addition of unlabeled vesicles, were detected by measuring two-color fluorescence signals simultaneously. Lipid-conjugated dyes monitored the membrane fusion while encapsulated dyes reported on the vesicle rupture. Four dominant pathways were observed, each exhibiting characteristic two-color fluorescence signatures: 1) primary fusion, in which an unlabeled vesicle fuses with a labeled vesicle on the surface, is signified by the dequenching of the lipid-conjugated dyes followed by rupture and final merging into the bilayer; 2) simultaneous fusion and rupture, in which a labeled vesicle on the surface ruptures simultaneously upon fusion with an unlabeled vesicle; 3) no dequenching, in which loss of fluorescence signal from both dyes occur simultaneously with the final merger into the bilayer; and 4) isolated rupture (pre-ruptured vesicles), in which a labeled vesicle on the surface spontaneously undergoes content loss, a process that occurs with high efficiency in the presence of a high concentration of Texas Red-labeled lipids. Vesicles that have undergone content loss appear to be more fusogenic than intact vesicles. PMID:12496104
Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adal, Kedir M.; Sidebe, Desire; Ali, Sharib
2014-01-07
Despite several attempts, automated detection of microaneurysms (MA) from digital fundus images remains an open issue, owing to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images.
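Blob finding with automatic local-scale selection is commonly built on the scale-normalized Laplacian of Gaussian, whose response peaks at the scale matching the blob size; a small sketch of that generic idea on a synthetic dark lesion (this illustrates scale selection in general, not the paper's exact detector or descriptors):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_response(image, sigmas):
    """Scale-normalized Laplacian-of-Gaussian responses for dark blobs.
    The per-pixel argmax over sigma is a simple form of automatic local
    scale selection."""
    stack = np.stack([(s ** 2) * gaussian_laplace(image, s) for s in sigmas])
    return stack.max(axis=0), stack.argmax(axis=0)

# Synthetic dark, Gaussian-shaped blob (an MA-like lesion) on a flat background.
size, sigma_true = 64, 4.0
yy, xx = np.mgrid[:size, :size]
img = 1.0 - 0.8 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * sigma_true ** 2))

sigmas = [2.0, 4.0, 6.0]
response, best_scale = log_blob_response(img, sigmas)
```

For a Gaussian blob of width sigma_true, the normalized response at the center is proportional to s²/(s² + sigma_true²)², which is maximized at s = sigma_true; the selected scale therefore recovers the blob size and can parameterize the region descriptors.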
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope with this problem effectively, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Generalization Analysis of Fredholm Kernel Regularized Classifiers.
Gong, Tieliang; Xu, Zongben; Chen, Hong
2017-07-01
Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate, expressed in terms of the number of labeled samples, can achieve a fast rate in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.
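For orientation, the Fredholm learning setup referenced above replaces the usual kernel expansion with a discretized kernel integral over the (largely unlabeled) sample, so the regularized problem has, schematically, the shape below (a sketch of the framework's general form in our notation, not the letter's exact formulation; here $\ell$ is a loss, and $l$ and $u$ are the numbers of labeled and unlabeled samples):

```latex
\min_{g}\; \frac{1}{l}\sum_{i=1}^{l} \ell\big((\mathcal{K}_u g)(x_i),\, y_i\big) + \lambda\,\lVert g \rVert^{2},
\qquad
(\mathcal{K}_u g)(x) = \frac{1}{u}\sum_{j=1}^{u} k(x, z_j)\, g(z_j),
```

where the $z_j$ are the unlabeled samples discretizing the Fredholm integral operator, which is how unlabeled data enters the classifier.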
McCutchen-Maloney, Sandra L.
2002-01-01
DNA mutation binding proteins, alone and as chimeric proteins with nucleases, are used with solid supports to detect DNA sequence variations, DNA mutations and single nucleotide polymorphisms. The solid supports may be flow cytometry beads, DNA chips, glass slides or DNA dipsticks. DNA molecules are coupled to solid supports to form DNA-support complexes. Labeled DNA is used with unlabeled DNA mutation binding proteins such as TthMutS to detect DNA sequence variations, DNA mutations and single nucleotide length polymorphisms by binding, which gives an increase in signal. Unlabeled DNA is utilized with labeled chimeras to detect DNA sequence variations, DNA mutations and single nucleotide length polymorphisms by nuclease activity of the chimera, which gives a decrease in signal.
Three learning phases for radial-basis-function networks.
Schwenker, F; Kestler, H A; Palm, G
2001-05-01
In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or a pseudoinverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented in three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This, we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. 
SV learning can be considered, in this context of learning, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
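The two-phase scheme described above — unsupervised placement of RBF centers (which can use unlabeled data), followed by a supervised solve for the output weights via the pseudoinverse — can be sketched as follows; the toy regression task and all parameter choices are ours:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Phase 1: place RBF centers by clustering (labels are not needed)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def rbf_design(X, centers, width):
    """Gaussian activations of every sample at every center."""
    d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def train_output_weights(X, y, centers, width):
    """Phase 2: supervised output weights via the pseudoinverse."""
    H = rbf_design(X, centers, width)
    return np.linalg.pinv(H) @ y

# Toy 1D regression: learn y = sin(x) on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
centers = kmeans(X, k=10)
W = train_output_weights(X, y, centers, width=0.8)
pred = rbf_design(X, centers, width=0.8) @ W
```

A third phase, as the paper describes, would then fine-tune centers, widths, and weights jointly by gradient descent.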
Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou
2017-01-01
Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. 
The subject-to-subject offline experimental results demonstrate that our component achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all the 15 subjects in a dataset was 75.11% and 7.65%, respectively, gaining a significant performance improvement compared to the best baseline LR which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, computational efficiency of the proposed ASFM method is much better than standard domain adaptation; if the numbers of training samples and test samples are controlled within certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition. PMID:28467371
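The marginal-distribution matching step described above can be illustrated with classic, unregularized subspace alignment. This is a sketch with synthetic data, not the authors' ASFM implementation, and it omits ASFM's conditional-distribution handling:

```python
import numpy as np

def subspace_align(Xs, Xt, d):
    """Classic unregularized subspace alignment (an illustrative sketch
    of the marginal-matching idea, not the ASFM code itself)."""
    Xs_c, Xt_c = Xs - Xs.mean(0), Xt - Xt.mean(0)
    # PCA bases: top-d right singular vectors of the centered data.
    Ps = np.linalg.svd(Xs_c, full_matrices=False)[2][:d].T
    Pt = np.linalg.svd(Xt_c, full_matrices=False)[2][:d].T
    M = Ps @ (Ps.T @ Pt)   # closed-form alignment, no regularization term
    return Xs_c @ M, Xt_c @ Pt

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 32))        # source-domain feature matrix
Xt = rng.normal(size=(80, 32)) + 0.5   # shifted target domain
Zs, Zt = subspace_align(Xs, Xt, d=5)
print(Zs.shape, Zt.shape)              # (100, 5) (80, 5)
```

A classifier such as logistic regression (as in the paper) would then be trained on the aligned source projection `Zs` and applied to the target projection `Zt`.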
Occupational safety and health status of medical laboratories in Kajiado County, Kenya.
Tait, Fridah Ntinyari; Mburu, Charles; Gikunju, Joseph
2018-01-01
Despite the increasing interest in Occupational Safety and Health (OSH), few studies on OSH in medical laboratories in developing countries are available, even though a high number of injuries occur without proper documentation. It is estimated that every day 6,300 people die as a result of occupational accidents or work-related diseases, resulting in over 2.3 million deaths per year. Medical laboratories handle a wide range of materials, including potentially dangerous pathogenic agents, and expose health workers to numerous potential hazards. This study evaluated the status of OSH in medical laboratories in Kajiado County, Kenya. The objectives included establishing biological, chemical and physical hazards; reviewing medical laboratories' control measures; and enumerating factors hindering implementation of good practices in OSH. This was a cross-sectional descriptive study design. Observation checklists, interview schedules and structured questionnaires were used. The study was carried out in 108 medical laboratories among 204 sampled respondents. Data were analysed using Statistical Package for the Social Sciences (SPSS) version 20 software. The commonest hazards in medical laboratories included bacteria (80%) for biological hazards; handling unlabelled and unmarked chemicals (38.2%) for chemical hazards; and dangerously placed laboratory equipment (49.5%) for physical hazards. According to Pearson's product-moment correlation analysis, not wearing personal protective equipment was statistically associated with exposure to hazards. Individual control measures were statistically significant at the 0.01 significance level. Only 65.1% of the factors influencing implementation of OSH in medical laboratories were identified. Training made the highest contribution to good OSH practices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ARIMURA, A.; SATO, H.; KUMASAKA, T.
1973-11-01
Repeated injections of synthetic LH-RH decapeptide, adsorbed on polyvinylpyrrolidone and emulsified with complete Freund's adjuvant, resulted in the production of a specific antiserum to LH-RH in two of three rabbits. The animals that produced this antiserum showed a reduction of pituitary LH content and marked atrophy of the testes. The antiserum-antibody complex was detected by the complement fixation test. The antiserum was capable of binding ¹²⁵I-labeled LH-RH. After iodination of LH-RH (using ¹²⁵I and either the chloramine T or lactoperoxidase method), separation of the iodination products on CMC yielded three main peaks of radioactivity: the first was free iodide, the second was labeled peptide with low immunoreactivity, and the third was immunoreactive peptide. This third peak consisted of two or three subpeaks; the leading subpeak(s) were more readily bound by antiserum than the trailing one(s). Binding of these fractions to antiserum was increased in the presence of small amounts of unlabeled LH-RH (a phenomenon called paradoxical binding or hook effect) but inhibited by larger amounts. Both the augmentation and the inhibition effects were dose-related, allowing the development of two different radioimmunoassay (RIA) systems for LH-RH. An ordinary (competitive) type of RIA was developed in which a small amount (0.31 ng/assay tube) of unlabeled LH-RH was added to the labeled peptide. This saturated the antiserum's capacity for paradoxical binding, so that further addition of LH-RH (from 0.04 to 2.5 ng/tube) inhibited binding of labeled LH-RH. The assay developed using paradoxical binding omitted the premixing of labeled and unlabeled LH-RH; in this assay, addition of very small amounts (0.5 to 310 pg) of unlabeled LH-RH to the assay tubes increased the amount of label bound to antiserum and allowed construction of a parabolic curve of positive slope when B/T was plotted against arithmetic dose.
The assays seem to be highly specific for LH-RH, although both polymers and degradation products of LH-RH appeared to have some immunoreactivity.
Hur, E E; Edwards, R H; Rommer, E; Zaborszky, L
2009-12-29
The basal forebrain (BF) comprises morphologically and functionally heterogeneous cell populations, including cholinergic and non-cholinergic corticopetal neurons that are implicated in sleep-wake modulation, learning, memory and attention. Several studies suggest that glutamate may be among inputs affecting cholinergic corticopetal neurons but such inputs have not been demonstrated unequivocally. We examined glutamatergic axon terminals in the sublenticular substantia innominata in rats using double-immunolabeling for vesicular glutamate transporters (Vglut1 and Vglut2) and choline acetyltransferase (ChAT) at the electron microscopic level. In a total surface area of 30,000 μm², we classified the pre- and postsynaptic elements of 813 synaptic boutons. Vglut1 and Vglut2 boutons synapsed with cholinergic dendrites, and occasionally Vglut2 axon terminals also synapsed with cholinergic cell bodies. Vglut1 terminals formed synapses with unlabeled dendrites and spines with equal frequency, while Vglut2 boutons were mainly in synaptic contact with unlabeled dendritic shafts and occasionally with unlabeled spines. In general, Vglut1 boutons contacted more distal dendritic compartments than Vglut2 boutons. About 21% of all synaptic boutons (n=347) detected in tissue that was stained for Vglut1 and ChAT were positive for Vglut1, and 14% of the Vglut1 synapses were made on cholinergic profiles. From separate cases stained for Vglut2 and ChAT, 35% of all synaptic boutons (n=466) were positive for Vglut2, and 23% of the Vglut2 synapses were made on cholinergic profiles. On average, Vglut1 boutons were significantly smaller than Vglut2 synaptic boutons. The Vglut2 boutons that synapsed on cholinergic profiles tended to be larger than the Vglut2 boutons that contacted unlabeled, non-cholinergic postsynaptic profiles. 
The presence of two different subtypes of Vgluts, the size differences of the Vglut synaptic boutons, and their preference for different postsynaptic targets suggest that the action of glutamate on BF neurons is complex and may arise from multiple afferent sources.
Identification of Alfalfa Leaf Diseases Using Image Recognition Technology
Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang
2016-01-01
Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767
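The feature-selection-plus-SVM pipeline described above can be sketched as follows. The data here are synthetic, and a univariate F-test stands in for ReliefF, which scikit-learn does not ship; this is an illustration of the workflow, not the authors' code:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the lesion data: 129 features, 4 disease classes.
X, y = make_classification(n_samples=400, n_features=129,
                           n_informative=45, n_classes=4,
                           n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=45),  # keep the top 45 features
                      SVC(kernel="rbf"))
model.fit(Xtr, ytr)
print(model.named_steps["selectkbest"].get_support().sum())  # 45
print(round(model.score(Xte, yte), 2))  # held-out accuracy
```

Selecting features inside the pipeline (rather than on the full dataset) keeps the test set out of the feature-selection step, mirroring the train/test separation reported in the paper.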
OpenCL based machine learning labeling of biomedical datasets
NASA Astrophysics Data System (ADS)
Amoros, Oscar; Escalera, Sergio; Puig, Anna
2011-03-01
In this paper, we propose a two-stage labeling method of large biomedical datasets through a parallel approach in a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised to allow biomedical datasets to be easily analyzed by a non-expert user. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, allowing parallel programming paradigms to be applied on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied on the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL.
We provide numerical experiments based on large available data sets and we compare our results to CPU-based strategies in terms of time and labeling speeds.
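The Adaboost testing stage described above, a weighted vote of single-feature decision stumps, can be sketched in a few lines. The stump parameters below are illustrative, not taken from the paper; the vectorized inner loop shows the data-parallel structure that maps naturally onto a GPU:

```python
import numpy as np

# Each weak classifier is a decision stump on one feature:
# (feature_index, threshold, polarity, weight alpha). Illustrative values.
stumps = [(0, 0.5, +1, 0.8),
          (2, -0.1, -1, 0.5),
          (1, 1.2, +1, 0.3)]

def adaboost_test(X, stumps):
    """Strong classifier = sign of the alpha-weighted stump votes."""
    score = np.zeros(len(X))
    for j, thr, pol, alpha in stumps:
        # Evaluate the stump on every sample in one vectorized pass.
        h = np.where(pol * (X[:, j] - thr) > 0, 1.0, -1.0)
        score += alpha * h
    return np.sign(score)

X = np.array([[0.9, 2.0, -0.5],
              [0.1, 0.0,  0.3]])
print(adaboost_test(X, stumps))   # [ 1. -1.]
```

Because every (sample, stump) evaluation is independent, the whole testing stage can be expressed as one matrix of weak responses followed by a weighted reduction, which is the reformulation the paper exploits in OpenCL.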
Classification of ROTSE Variable Stars using Machine Learning
NASA Astrophysics Data System (ADS)
Wozniak, P. R.; Akerlof, C.; Amrose, S.; Brumby, S.; Casperson, D.; Gisler, G.; Kehoe, R.; Lee, B.; Marshall, S.; McGowan, K. E.; McKay, T.; Perkins, S.; Priedhorsky, W.; Rykoff, E.; Smith, D. A.; Theiler, J.; Vestrand, W. T.; Wren, J.; ROTSE Collaboration
2001-12-01
We evaluate several Machine Learning algorithms as potential tools for automated classification of variable stars. Using the ROTSE sample of ~1800 variables from a pilot study of 5% of the whole sky, we compare the effectiveness of a supervised technique (Support Vector Machines, SVM) versus unsupervised methods (K-means and Autoclass). There are 8 types of variables in the sample: RR Lyr AB, RR Lyr C, Delta Scuti, Cepheids, detached eclipsing binaries, contact binaries, Miras and LPVs. Preliminary results suggest a very high (~95%) efficiency of SVM in isolating a few best-defined classes against the rest of the sample, and good accuracy (~70-75%) for all classes considered simultaneously. This includes some degeneracies, irreducible with the information at hand. Supervised methods naturally outperform unsupervised methods in terms of final error rate, but unsupervised methods offer many advantages for large sets of unlabeled data. Therefore, both types of methods should be considered as promising tools for mining vast variability surveys. We project that there are more than 30,000 periodic variables in the ROTSE-I database covering the entire local sky between V=10 and 15.5 mag. This sample size is already stretching the time capabilities of human analysts.
A convenient Simple Method for Synthesis of Meta-iodobenzylguanidine (MIBG).
Sheikholislam, Zahra; Soleimani, Zohreh; Moghimi, Abolghasem; Shahhosseini, Soraya
2013-01-01
Radioiodinated meta-iodobenzylguanidine (MIBG) is one of the important radiopharmaceuticals in nuclear medicine. [¹²³/¹³¹I]MIBG is used for imaging of the adrenal medulla, studying cardiac sympathetic nerves, and treatment of pheochromocytoma and neuroblastoma. For clinical application, radioiodinated MIBG is prepared through an isotopic exchange method, in which cold iodine (¹²⁷I) is replaced by radioactive iodine in a nucleophilic substitution reaction. The unlabelled MIBG hemisulfate is synthesized by the procedure described by Wieland et al. (1980). The need for a more practical and cost-effective procedure for MIBG preparation encouraged us to study MIBG synthesis methods. In this study, the preparation of MIBG through different methods was evaluated, and a new method, which is one-step, simple and cost-effective, is introduced. The method can be scaled up for production of unlabelled MIBG.
Boulton, David W.; Kasichayanula, Sreeneeranj; Keung, Chi Fung (Anther); Arnold, Mark E.; Christopher, Lisa J.; Xu, Xiaohui (Sophia); LaCreta, Frank
2013-01-01
Aim To determine the absolute oral bioavailability (Fp.o.) of saxagliptin and dapagliflozin using simultaneous intravenous 14C‐microdose/therapeutic oral dosing (i.v.micro + oraltherap). Methods The Fp.o. values of saxagliptin and dapagliflozin were determined in healthy subjects (n = 7 and 8, respectively) following the concomitant administration of single i.v. micro doses with unlabelled oraltherap doses. Accelerator mass spectrometry and liquid chromatography‐tandem mass spectrometry were used to quantify the labelled and unlabelled drug, respectively. Results The geometric mean point estimates (90% confidence interval) Fp.o. values for saxagliptin and dapagliflozin were 50% (48, 53%) and 78% (73, 83%), respectively. The i.v.micro had similar pharmacokinetics to oraltherap. Conclusions Simultaneous i.v.micro + oraltherap dosing is a valuable tool to assess human absolute bioavailability. PMID:22823746
A high-throughput label-free nanoparticle analyser.
Fraikin, Jean-Luc; Teesalu, Tambet; McKenney, Christopher M; Ruoslahti, Erkki; Cleland, Andrew N
2011-05-01
Synthetic nanoparticles and genetically modified viruses are used in a range of applications, but high-throughput analytical tools for the physical characterization of these objects are needed. Here we present a microfluidic analyser that detects individual nanoparticles and characterizes complex, unlabelled nanoparticle suspensions. We demonstrate the detection, concentration analysis and sizing of individual synthetic nanoparticles in a multicomponent mixture with sufficient throughput to analyse 500,000 particles per second. We also report the rapid size and titre analysis of unlabelled bacteriophage T7 in both salt solution and mouse blood plasma, using just ~1 × 10⁻⁶ l of analyte. Unexpectedly, in the native blood plasma we discover a large background of naturally occurring nanoparticles with a power-law size distribution. The high-throughput detection capability, scalable fabrication and simple electronics of this instrument make it well suited for diverse applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedynyshyn, J.P.
The opioid binding characteristics of the rat periaqueductal gray (PAG) and the signal transduction mechanisms of the opioid receptors were examined with in vitro radioligand binding, GTPase, adenylyl cyclase, and inositol phosphate assays. The nonselective ligand ³H-ethylketocyclazocine (EKC), the μ- and δ-selective ligand ³H-(D-Ala², D-Leu⁵)enkephalin (DADLE), the μ-selective ligand ³H-(D-Ala², N-methyl-Phe⁴, Gly-ol⁵)enkephalin (DAGO), and the δ-selective ligand ³H-(D-Pen², D-Pen⁵)enkephalin (DPDPE) were separately used as tracer ligands to label opioid binding sites in rat PAG-enriched P₂ membrane in competition with unlabeled DADLE, DAGO, DPDPE, or the κ-selective ligand trans-3,4-dichloro-N-(2-(1-pyrrolidinyl)cyclohexyl)benzeneacetamide, methane sulfonate, hydrate (U50,488H). Only μ-selective high affinity opioid binding was observed. No high affinity δ- or κ-selective binding was detected. ³H-DAGO was used as a tracer ligand to label μ-selective high affinity opioid binding sites in PAG-enriched P₂ membrane in competition with unlabeled β-endorphin, dynorphin A (1-17), BAM-18, methionine enkephalin, dynorphin A (1-8), and leucine enkephalin. Of these endogenous opioid peptides, only those with previously reported high affinity μ-type opioid binding activity competed with ³H-DAGO for binding sites in rat PAG-enriched P₂ membrane with affinities similar to that of unlabeled DAGO.
Transformation of toluene and benzene by mixed methanogenic cultures.
Grbić-Galić, D; Vogel, T M
1987-01-01
The aromatic hydrocarbons toluene and benzene were anaerobically transformed by mixed methanogenic cultures derived from ferulic acid-degrading sewage sludge enrichments. In most experiments, toluene or benzene was the only semicontinuously supplied carbon and energy source in the defined mineral medium. No exogenous electron acceptors other than CO2 were present. The cultures were fed 1.5 to 30 mM unlabeled or 14C-labeled aromatic substrates (ring-labeled toluene and benzene or methyl-labeled toluene). Gas production from unlabeled substrates and 14C activity distribution in products from the labeled substrates were monitored over a period of 60 days. At least 50% of the substrates were converted to CO2 and methane (greater than 60%). A high percentage of 14CO2 was recovered from the methyl group-labeled toluene, suggesting nearly complete conversion of the methyl group to CO2 and not to methane. However, a low percentage of 14CO2 was produced from ring-labeled toluene or from benzene, indicating incomplete conversion of the ring carbon to CO2. Anaerobic transformation pathways for unlabeled toluene and benzene were studied with the help of gas chromatography-mass spectrometry. The intermediates detected are consistent with both toluene and benzene degradation via initial oxidation by ring hydroxylation or methyl oxidation (toluene), which would result in the production of phenol, cresols, or aromatic alcohol. Additional reactions, such as demethylation and ring reduction, are also possible. Tentative transformation sequences based upon the intermediates detected are discussed. PMID:3105454
Protein recycling in growing rabbits: contribution of microbial lysine to amino acid metabolism.
Belenguer, Alvaro; Balcells, Joaquim; Guada, Jose A; Decoux, Marc; Milne, Eric
2005-11-01
To study the absorption of microbial lysine in growing rabbits, a labelled diet (supplemented with ¹⁵NH₄Cl) was administered to six animals (group ISOT); a control group (CTRL, four rabbits) received a similar, but unlabelled, diet. Diets were administered for 30 d. An additional group of six animals were fed the unlabelled diet for 20 d and then the labelled diet for 10 d while wearing a neck collar to avoid caecotrophy (group COLL), in order to discriminate it from direct intestinal absorption. At day 30 animals were slaughtered and caecal bacteria and liver samples taken. The ¹⁵N enrichment in amino acids of caecal bacteria and liver was determined by GC-combustion/isotope ratio MS. Lysine showed a higher enrichment in caecal microflora (0.925 atom% excess, APE) than liver (0.215 APE) in group ISOT animals, confirming the double origin of body lysine: microbial and dietary. The COLL group showed a much lower enrichment in tissue lysine (0.007 (se 0.0029) APE for liver). Any enrichment in the latter animals was due to direct absorption of microbial lysine along the digestive tract, since recycling of microbial protein (caecotrophy) was avoided. In such conditions liver enrichment was low, indicating a small direct intestinal absorption. From the ratio of [¹⁵N]lysine enrichment between liver and bacteria, the contribution of microbes to body lysine was estimated at 23%, with 97% of this arising through caecotrophy. Absorption of microbial lysine through caecotrophy was 119 (se 4.0) mg/d, compared with 406 (se 1.8) mg/d available from the diet. This study confirms the importance of caecotrophy in rabbit nutrition (15% of total protein intake).
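The 23% contribution estimate follows from simple ratios of the figures reported in the abstract, and the enrichment-based and mass-based routes agree:

```python
# Values taken from the abstract above; simple ratio checks.
liver_ape = 0.215      # 15N-lysine enrichment in liver (atom% excess)
bacteria_ape = 0.925   # 15N-lysine enrichment in caecal bacteria
print(round(100 * liver_ape / bacteria_ape))   # 23 (% of body lysine microbial)

# Cross-check against the absorbed amounts (mg/d):
microbial, dietary = 119, 406
print(round(100 * microbial / (microbial + dietary)))   # 23
```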
A Method to Determine 18O Kinetic Isotope Effects in the Hydrolysis of Nucleotide Triphosphates
Du, Xinlin; Ferguson, Kurt; Sprang, Stephen R.
2007-01-01
A method to determine 18O kinetic isotope effects (KIE) in the hydrolysis of GTP is described that is generally applicable to reactions involving other nucleotide triphosphates. Internal competition, wherein the substrate of the reaction is a mixture of 18O-labeled and unlabeled nucleotides, is employed and the change in relative abundance of the two species in the course of the reaction is used to calculate KIE. The nucleotide labeled with 18O at sites of mechanistic interest also contains 13C at all carbon positions, while the 16O-nucleotide is depleted of 13C. The relative abundance of the labeled and unlabeled substrates or products is reflected in the carbon isotope ratio (13C/12C) in GTP or GDP, which is determined by use of a liquid chromatography-coupled isotope ratio mass spectrometer (LC-coupled IRMS). The LC is coupled to the IRMS by an Isolink™ interface (ThermoFinnigan). Carbon isotope ratios can be determined with accuracy and precision greater than 0.04%, and are consistent over an order of magnitude in sample amount. KIE values for Ras/NF1333-catalyzed hydrolysis of [β18O3,13C]GTP were determined by change in the isotope ratio of GTP or GDP or the ratio of the isotope ratio of GDP to that of GTP. KIE values computed in the three ways agree within 0.1%, although the method using the ratio of isotope ratios of GDP and GTP gives superior precision (< 0.1%). A single KIE measurement can be conducted in 25 minutes with less than 5 μg nucleotide reaction product. PMID:17963711
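Computing a KIE from the change in isotope ratio at a known fraction of reaction typically uses the standard internal-competition (Bigeleisen-type) relations. The sketch below uses those textbook equations with illustrative numbers; the paper's exact data treatment, including its ratio-of-ratios variant, may differ:

```python
import math

# f = fraction of reaction; R/R0 = measured heavy/light isotope ratio
# relative to its initial value. Textbook internal-competition forms.

def kie_from_substrate(f, Rs_over_R0):
    # A normal KIE enriches the residual substrate in the heavy isotope.
    return math.log(1 - f) / math.log((1 - f) * Rs_over_R0)

def kie_from_product(f, Rp_over_R0):
    # The accumulated product is correspondingly depleted.
    return math.log(1 - f) / math.log(1 - f * Rp_over_R0)

# Illustrative numbers only: 50% conversion, 1% substrate enrichment.
print(round(kie_from_substrate(0.5, 1.01), 4))
```

At small enrichments the two estimators give consistent values, which is why the authors can compute the KIE from GTP, from GDP, or from their ratio and compare the precision of the three routes.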
Maloney, James P; Ambruso, Daniel R; Voelkel, Norbert F; Silliman, Christopher C
The occurrence of non-hemolytic transfusion reactions is highest with platelet and plasma administration. Some of these reactions are characterized by endothelial leak, especially transfusion related acute lung injury (TRALI). Elevated concentrations of inflammatory mediators secreted by contaminating leukocytes during blood product storage may contribute to such reactions, but platelet-secreted mediators may also contribute. We hypothesized that platelet storage leads to accumulation of the endothelial permeability mediator vascular endothelial growth factor (VEGF), and that intravascular administration of exogenous VEGF leads to extensive binding to its lung receptors. Single donor, leukocyte-reduced apheresis platelet units were sampled over 5 days of storage. VEGF protein content of the centrifuged supernatant was determined by ELISA, and the potential contribution of VEGF from contaminating leukocytes was quantified. Isolated-perfused rat lungs were used to study the uptake of radiolabeled VEGF administered intravascularly, and the effect of unlabeled VEGF on lung leak. There was a time-dependent release of VEGF into the plasma fraction of the platelet concentrates (62 ± 9 pg/ml on day one, 149 ± 23 pg/ml on day 5; mean ± SEM, p<0.01, n=8) and a contribution by contaminating leukocytes was excluded. Exogenous 125I-VEGF bound avidly and specifically to the lung vasculature, and unlabeled VEGF in the lung perfusate caused vascular leak. Rising concentrations of VEGF occur during storage of single donor platelet concentrates due to platelet secretion or disintegration, but not due to leukocyte contamination. Exogenous VEGF at these concentrations rapidly binds to its receptors in the lung vessels. At higher VEGF concentrations, VEGF causes vascular leak in uninjured lungs. These data provide further evidence that VEGF may contribute to the increased lung permeability seen in TRALI associated with platelet products.
Kennedy, Oran D; Brennan, Orlaith; Mauer, Peter; O'Brien, Fergal J; Rackard, Susan M; Taylor, David; Lee, T Clive
2008-01-01
This study investigates the effect of microdamage on bone quality using an ovariectomised (OVX) sheep model of osteoporosis. Thirty-four sheep were divided into an OVX group (n=16) and a control group (n=18). Fluorochromes were administered intravenously at 3 monthly intervals after surgery to label bone turnover. After sacrifice, beams were removed from the metatarsal and tested in three-point bending. Following failure, microcracks were identified and quantified in terms of region, location and interaction with osteons. Number of cycles to failure (Nf) was lower in the OVX group relative to controls by approximately 7%. Crack density (CrDn) was higher in the OVX group compared to controls. CrDn was 2.5 and 3.5 times greater in the compressive region compared to tensile in control and OVX bone respectively. Combined results from both groups showed that 91% of cracks remained in interstitial bone, approximately 8% of cracks penetrated unlabelled osteons and less than 1% penetrated into labelled osteons. All cases of labelled osteon penetration occurred in controls. Crack surface density (CrSDn) was 25% higher in the control group compared to OVX. It is known that crack behaviour on meeting microstructural features such as osteons will depend on crack length. We have shown that osteon age also affects crack propagation. Long cracks penetrated unlabelled osteons but not labelled ones. Some cracks in the control group did penetrate labelled osteons. This may be due to the fact that control bone is more highly mineralized. Further study of these fracture mechanisms will help determine the effect of microdamage on bone quality and how this contributes to bone fragility.
Fomsgaard, Inge S; Spliid, Niels Henrik; Felding, Gitte
2003-01-01
Isoproturon is a herbicide that was used in Denmark against grass weeds and broad-leaved weeds until 1998, and it has frequently been detected in ground water monitoring studies. Leaching of isoproturon (N,N-dimethyl-N'-(4-(1-methylethyl)-phenyl)urea) and its metabolites, N'-(4-isopropylphenyl)-N-methylurea and N'-(4-isopropylphenyl)urea, was studied in four lysimeters, two of them being replicates from a low-tillage field (lysimeters 3 and 4) and the other two being replicates from a normal-tillage field (lysimeters 5 and 6). In both cases the soil was a sandy loam with 13-14% clay. The lysimeters had a surface area of 0.5 m2 and a depth of 110 cm. Lysimeters 3 and 4 were sprayed with unlabelled isoproturon, while lysimeters 5 and 6 were sprayed with a mixture of 14C-labelled and unlabelled isoproturon. The total amount of isoproturon sprayed onto each lysimeter was 63 mg, corresponding to 1.25 kg active ingredient per ha. The lysimeters were sprayed with isoproturon on October 26, 1997. The lysimeters were installed in an outdoor system at Research Centre Flakkebjerg and were thus exposed to the normal climatic conditions of the area. A mean of 360 litres of drainage water was collected from lysimeters 3 and 4, and a mean of 375 litres from lysimeters 5 and 6. Only negligible amounts of isoproturon and its primary metabolites were found in the drainage water samples, and thus no significant difference between the two lysimeter sets was shown. In a total of 82 drainage water samples, evenly distributed between the four lysimeters, isoproturon was found in detectable amounts in two samples and N'-(4-isopropylphenyl)urea was found in detectable amounts in two other samples. The detection limit for all the compounds was 0.02 microg/l. 48% and 54% of the added radioactivity were recovered from the upper 10 cm soil layer in lysimeters 5 and 6, respectively, and 17% and 14% from 10-20 cm depth.
By extraction first with an aqueous CaCl2 solution, 0.49% of the added radioactivity was extracted from the upper 10 cm layer in lysimeter 5. In the subsequent extraction with acetonitrile, 1.19% of the added radioactivity was extracted. In lysimeter 6, upper 10 cm, 0.2% was extracted with water and 0.56% was extracted with acetonitrile. Below 10 cm depth, no measurable amounts could be extracted.
The Effects of Parental Advisory Labels on Adolescent Music Preferences.
ERIC Educational Resources Information Center
Christenson, Peter
1992-01-01
Investigates the effect of parental advisory labels (on album covers) on the music taste and preference of adolescent students 12 to 15 years old. Finds that labeled music was liked less than unlabeled music. (SR)
Logic Gate Operation by DNA Translocation through Biological Nanopores.
Yasuga, Hiroki; Kawano, Ryuji; Takinoue, Masahiro; Tsuji, Yutaro; Osaki, Toshihisa; Kamiya, Koki; Miki, Norihisa; Takeuchi, Shoji
2016-01-01
Logical operations using biological molecules, such as DNA computing or programmable diagnosis using DNA, have recently received attention. Challenges remain with respect to the development of such systems, including label-free output detection and the rapidity of operation. Here, we propose integration of biological nanopores with DNA molecules for development of a logical operating system. We configured outputs "1" and "0" as single-stranded DNA (ssDNA) that is or is not translocated through a nanopore; unlabeled DNA was detected electrically. A negative-AND (NAND) operation was successfully conducted within approximately 10 min, which is rapid compared with previous studies using unlabeled DNA. In addition, this operation was executed in a four-droplet network. DNA molecules and associated information were transferred among droplets via biological nanopores. This system would facilitate linking of molecules and electronic interfaces. Thus, it could be applied to molecular robotics, genetic engineering, and even medical diagnosis and treatment.
Lactate is a preferential oxidative energy substrate over glucose for neurons in culture.
Bouzier-Sore, Anne-Karine; Voisin, Pierre; Canioni, Paul; Magistretti, Pierre J; Pellerin, Luc
2003-11-01
The authors investigated concomitant lactate and glucose metabolism in primary neuronal cultures using 13C- and 1H-NMR spectroscopy. Neurons were incubated in a medium containing either [1-13C]glucose and different unlabeled lactate concentrations, or unlabeled glucose and different [3-13C]lactate concentrations. Overall, 13C-NMR spectra of cellular extracts showed that more 13C was incorporated into glutamate when lactate was the enriched substrate. Glutamate 13C-enrichment was also found to be much higher in lactate-labeled than in glucose-labeled conditions. When glucose and lactate concentrations were identical (5.5 mmol/L), relative contributions of glucose and lactate to neuronal oxidative metabolism amounted to 21% and 79%, respectively. Results clearly indicate that when neurons are in the presence of both glucose and lactate, they preferentially use lactate as their main oxidative substrate.
Orcutt, Kelly D; Adams, Gregory P; Wu, Anna M; Silva, Matthew D; Harwell, Catey; Hoppin, Jack; Matsumura, Manabu; Kotsuma, Masakatsu; Greenberg, Jonathan; Scott, Andrew M; Beckman, Robert A
2017-10-01
Competitive radiolabeled antibody imaging can determine the unlabeled intact antibody dose that fully blocks target binding, but may be confounded by heterogeneous tumor penetration. We evaluated the hypothesis that smaller radiolabeled constructs can be used to more accurately evaluate tumor-expressed receptors. The Krogh cylinder distributed model, including bivalent binding and variable intervessel distances, simulated the distribution of smaller constructs in the presence of increasing doses of labeled antibody forms. Smaller constructs (<25 kDa) accessed binding sites more uniformly at large distances from blood vessels compared with larger constructs and intact antibody. These observations were consistent across different affinity and internalization characteristics of the constructs. As predicted, a higher dose of unlabeled intact antibody was required to block binding to these distant receptor sites. Small radiolabeled constructs provide more accurate information on total receptor expression in tumors and reveal the need for higher antibody doses for target receptor blockade.
Unsupervised feature learning for autonomous rock image classification
NASA Astrophysics Data System (ADS)
Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond
2017-09-01
Autonomous rock image classification can enhance the capability of robots for geological detection and increase scientific returns, both for investigations on Earth and for planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and hand-crafted features are not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that they can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.
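A common minimal instance of unsupervised feature learning is to learn a dictionary of centroids from unlabeled data and then represent any sample by its distances to those centroids. The sketch below is an illustrative analogue of that idea in pure Python, not the network architecture used in the paper; the toy 2-D "patches" and all names are invented:

```python
import math
import random

def kmeans(data, k, iters=20, seed=0):
    """Plain k-means over lists of feature vectors (unlabeled data)."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            # assign each point to its nearest centroid
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(x, centroids[j])))
            clusters[i].append(x)
        for j, c in enumerate(clusters):
            if c:  # recompute centroid as the cluster mean
                centroids[j] = [sum(col) / len(c) for col in zip(*c)]
    return centroids

def encode(x, centroids):
    """Learned-feature representation: distances to each centroid."""
    return [math.sqrt(sum((a - b) ** 2 for a, b in zip(x, c)))
            for c in centroids]

# Toy stand-in for unlabeled image patches: two well-separated 2-D clumps.
gen = random.Random(1)
unlabeled = [[gen.gauss(0, 0.3), gen.gauss(0, 0.3)] for _ in range(50)] + \
            [[gen.gauss(3, 0.3), gen.gauss(3, 0.3)] for _ in range(50)]
centroids = kmeans(unlabeled, k=2)
features = encode([0.1, -0.2], centroids)  # 2-D learned-feature vector
```

A downstream classifier would then be trained on `features` rather than on hand-crafted descriptors, which is the self-taught-learning pattern the abstract describes.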
Reverse isotope dilution method for determining benzene and metabolites in tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bechtold, W.E.; Sabourin, P.J.; Henderson, R.F.
1988-07-01
A method utilizing reverse isotope dilution for the analysis of benzene and its organic-soluble metabolites in tissues of rats and mice is presented. Tissues from rats and mice that had been exposed to radiolabeled benzene were extracted with ethyl acetate containing known, excess quantities of unlabeled benzene and metabolites. Butylated hydroxytoluene was added as an antioxidant. The ethyl acetate extracts were analyzed by semipreparative reversed-phase HPLC. Isolated peaks were collected and analyzed for radioactivity (by liquid scintillation spectrometry) and for mass (by UV absorption). The total amount of each compound present was calculated from the mass dilution of the radiolabeled isotope. This method has the advantages of high sensitivity, because of the high specific activity of benzene, and relative stability of the analytes, because of the addition of large amounts of unlabeled carrier analogue.
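The mass-dilution calculation behind reverse isotope dilution can be written in one line: if a known mass of unlabeled carrier drops the specific activity from its initial (dosed) value to a measured final value, the analyte mass follows from conservation of activity. A sketch under those standard assumptions (function and variable names are ours, not from the paper):

```python
def amount_by_reverse_isotope_dilution(carrier_mass, sa_initial, sa_final):
    """Mass of radiolabeled analyte originally present, from the drop in
    specific activity after adding a known mass of unlabeled carrier.

    sa_initial: specific activity of the dosed (labeled) compound
    sa_final:   specific activity measured after carrier addition
                (activity per unit mass, same units for both)
    """
    # Total activity C is conserved: S_final = C / (x + m), x = C / S_initial
    # => x = m * S_final / (S_initial - S_final)
    return carrier_mass * sa_final / (sa_initial - sa_final)

# If adding carrier exactly halves the specific activity, the analyte mass
# must equal the carrier mass added.
print(amount_by_reverse_isotope_dilution(10.0, 2.0, 1.0))  # -> 10.0
```

Because only ratios of activity to mass enter the formula, losses during the HPLC workup cancel out, which is the practical appeal of the method.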
Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.
Chen, Ke; Wang, Shihai
2011-01-01
Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.
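The general shape of such a cost functional, a margin loss on labeled points plus a smoothness penalty on unlabeled points, can be illustrated with a toy. This is a schematic of the same form, not the paper's exact functional; the exponential loss, Gaussian similarity, and all names are our choices:

```python
import math

def ssl_cost(f, labeled, unlabeled, weight, lam=0.1):
    """Illustrative semi-supervised cost: exponential margin loss on labeled
    points plus a manifold-smoothness penalty that charges for predicting
    different scores on similar unlabeled points.

    f:         callable x -> real-valued score
    labeled:   list of (x, y) pairs with y in {-1, +1}
    unlabeled: list of x
    weight:    callable (xi, xj) -> similarity in [0, 1]
    """
    margin = sum(math.exp(-y * f(x)) for x, y in labeled)
    smooth = sum(weight(xi, xj) * (f(xi) - f(xj)) ** 2
                 for i, xi in enumerate(unlabeled)
                 for xj in unlabeled[i + 1:])
    return margin + lam * smooth

scorer = lambda x: x                       # toy scorer on 1-D inputs
sim = lambda a, b: math.exp(-(a - b) ** 2)  # Gaussian similarity
labeled = [(-1.0, -1), (1.0, +1)]
unlabeled = [-0.5, 0.5, 2.0]
c = ssl_cost(scorer, labeled, unlabeled, sim)
```

A boosting procedure of the kind the abstract describes would greedily add weak learners to `scorer` so as to reduce this combined cost stage by stage.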
Absolute quantitation of intracellular metabolite concentrations by an isotope ratio-based approach
Bennett, Bryson D; Yuan, Jie; Kimball, Elizabeth H; Rabinowitz, Joshua D
2009-01-01
This protocol provides a method for quantitating the intracellular concentrations of endogenous metabolites in cultured cells. The cells are grown in stable isotope-labeled media to near-complete isotopic enrichment and then extracted in organic solvent containing unlabeled internal standards in known concentrations. The ratio of endogenous metabolite to internal standard in the extract is determined using mass spectrometry (MS). The product of this ratio and the unlabeled standard amount equals the amount of endogenous metabolite present in the cells. The cellular concentration of the metabolite can then be calculated on the basis of intracellular volume of the extracted cells. The protocol is exemplified using Escherichia coli and primary human fibroblasts fed uniformly with 13C-labeled carbon sources, with detection of 13C-assimilation by liquid chromatography–tandem MS. It enables absolute quantitation of several dozen metabolites over ~1 week of work. PMID:18714298
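The quantitation arithmetic in this protocol is a two-step ratio calculation: endogenous amount = (measured labeled/unlabeled peak ratio) × (known standard amount), then divide by intracellular volume. A minimal sketch (units and variable names are illustrative, not prescribed by the protocol):

```python
def intracellular_concentration(ratio_endog_to_std, std_amount_nmol,
                                cell_volume_uL):
    """Absolute metabolite quantitation by isotope ratio: the fully
    13C-labeled endogenous pool is compared against a known amount of
    unlabeled internal standard measured in the same extract.

    amount = ratio * standard amount; concentration = amount / cell volume.
    """
    amount_nmol = ratio_endog_to_std * std_amount_nmol
    return amount_nmol / cell_volume_uL  # nmol/uL is numerically mM

# e.g. a labeled/unlabeled peak ratio of 2.5 against 1 nmol of standard,
# from cells with 2 uL total intracellular volume:
conc_mM = intracellular_concentration(2.5, 1.0, 2.0)  # -> 1.25 mM
```

The near-complete isotopic enrichment is what makes this clean: the endogenous and standard pools occupy distinct mass channels, so a single MS ratio suffices.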
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikenishi, K.; Okuda, T.; Nakazato, S.
1984-05-01
A single blastomere containing the "germ plasm" of 32-cell stage Xenopus embryos was cultured with [3H]thymidine until the control embryos developed to the neurula stage. The explants, showing a spherical mass in which the nuclei of all cells were labeled, were implanted into the prospective place of presumptive primordial germ cells (pPGCs) in the endodermal cell mass of unlabeled host embryos of the neurula stage. Labeled PGCs as well as unlabeled, host PGCs were found in the genital ridges of experimental tadpoles. This indicates that the precursor of germ cells, corresponding to pPGCs in normal embryos of the neurula stage, in the explants migrated to genital ridges just at the right moment to become PGCs, and suggests that the developmental process progressed normally, even in the explants, as far as the differentiation of pPGCs is concerned.
Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan
2016-12-02
In the postgenomic era, the number of unreviewed protein sequences is far larger, and grows much faster, than that of reviewed ones. However, existing methods for protein subchloroplast localization often ignore the information in these unlabeled proteins. This paper proposes a multi-label predictor based on ensemble linear neighborhood propagation (LNP), namely LNP-Chlo, which leverages hybrid sequence-based feature information from both labeled and unlabeled proteins for predicting localization of both single- and multi-label chloroplast proteins. Experimental results on a stringent benchmark dataset and a novel independent dataset suggest that LNP-Chlo performs at least 6% (absolute) better than state-of-the-art predictors. This paper also demonstrates that ensemble LNP significantly outperforms LNP based on individual features. For readers' convenience, the online web server LNP-Chlo is freely available at http://bioinfo.eie.polyu.edu.hk/LNPChloServer/.
Sensitive detection of unlabeled oligonucleotides using a paired surface plasma waves biosensor.
Li, Ying-Chang; Chiou, Chiuan-Chian; Luo, Ji-Dung; Chen, Wei-Ju; Su, Li-Chen; Chang, Ying-Feng; Chang, Yu-Sun; Lai, Chao-Sung; Lee, Cheng-Chung; Chou, Chien
2012-05-15
Detection of unlabeled oligonucleotides using surface plasmon resonance (SPR) is difficult because of the oligonucleotides' relatively low molecular weight compared with proteins. In this paper, we describe a method for detecting unlabeled oligonucleotides at low concentration using a paired surface plasma waves biosensor (PSPWB). The biosensor uses a sensor chip with an immobilized probe to detect a target oligonucleotide via sequence-specific hybridization. PSPWB measures the demodulated amplitude of the heterodyne signal in real time. Meanwhile, the ratio of the amplitudes between the detected output signal and the reference reduces the excess noise from laser intensity fluctuation, and the common-path propagation of the p and s waves cancels the common phase noise induced by temperature variation. Thus, a heterodyne signal with a high signal-to-noise ratio (SNR) is detected. The sequence specificity of oligonucleotide hybridization ensures that the platform precisely discriminates between target and non-target oligonucleotides. Under optimized experimental conditions, the detected heterodyne signal increases linearly with the logarithm of the concentration of target oligonucleotide over the range 0.5-500 pM; the detection limit in this experiment is 0.5 pM. In addition, non-target oligonucleotide at concentrations of 10 pM and 10 nM generated signals only slightly higher than background, indicating the high selectivity and specificity of this method. Perfectly matched oligonucleotide targets of different lengths (10-mer, 15-mer and 20-mer) were identified at a concentration of 150 pM. Copyright © 2012 Elsevier B.V. All rights reserved.
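A signal that is linear in the logarithm of concentration, as reported here over 0.5-500 pM, is typically exploited by fitting signal = m·log10(C) + b to calibration standards and inverting the fit for unknowns. A sketch of that calibration workflow on synthetic data (the fitting approach and all names are ours; the abstract does not specify how its calibration was performed):

```python
import math

def fit_log_linear(concs_pM, signals):
    """Least-squares fit of signal = m*log10(C) + b over the linear range."""
    xs = [math.log10(c) for c in concs_pM]
    n = len(xs)
    mx, my = sum(xs) / n, sum(signals) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, signals)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    return m, b

def conc_from_signal(signal, m, b):
    """Invert the calibration to read out an unknown concentration."""
    return 10 ** ((signal - b) / m)

# Round trip on synthetic calibration data spanning the reported range:
concs = [0.5, 5.0, 50.0, 500.0]
m_true, b_true = 2.0, 1.0
sigs = [m_true * math.log10(c) + b_true for c in concs]
m, b = fit_log_linear(concs, sigs)
```

Inverting a measured signal through `conc_from_signal` then recovers the target concentration within the calibrated range.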
Use of low-altitude aerial photography to identify submersed aquatic macrophytes
Schloesser, Donald W.; Manny, Bruce A.; Brown, Charles L.; Jaworski, Eugene
1987-01-01
The feasibility of using low-altitude aerial photography to identify beds of submersed macrophytes is demonstrated. True color aerial photos and collateral ground survey information for submersed aquatic macrophyte beds at 10 sites in the St. Clair-Detroit River system were obtained in September 1978. Using the photos and collateral ground survey information, a dichotomous key was developed for the identification of six classes - beds of five genera of macrophytes and one substrate type. A test was prepared to determine how accurately photo interpreters could identify the six classes. The test required an interpreter to examine an unlabeled, outlined area on photographs and identify it using the key. Six interpreters were tested. One pair of interpreters was trained in the interpretation of a variety of aerial photos, a second pair had field experience in the collection and identification of submersed macrophytes in the river system, and a third pair had neither training in the interpretation of aerial photos nor field experience. The criteria that we developed were applied equally well by the interpreters, regardless of their training or experience. Overall accuracy (i.e., omission errors) of all six classes combined was 68% correct, whereas overall accuracy of individual classes ranged from 50 to 100% correct. Mapping accuracy (i.e., omission and commission errors) of individual classes ranged from 36 to 75%. Although the key developed for this study has only limited application outside the context of the data and sites examined in this study, it is concluded that low-altitude aerial photography, together with limited amounts of collateral ground survey information, can be used to economically identify beds of submersed macrophytes in the St. Clair-Detroit River system and other similar water bodies.
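The abstract distinguishes accuracy that counts only omission errors from mapping accuracy that also charges commission errors; both fall out of a per-class confusion matrix. A sketch of that distinction (the class names and counts are invented examples, not data from the study):

```python
def class_accuracies(confusion, cls):
    """Per-class accuracies from a confusion matrix given as
    {true_class: {predicted_class: count}}.

    interpretation accuracy counts omission errors only (row-wise);
    mapping accuracy also charges commission errors, as in the abstract.
    """
    row = confusion[cls]
    correct = row.get(cls, 0)
    omitted = sum(row.values()) - correct
    committed = sum(confusion[t].get(cls, 0)
                    for t in confusion if t != cls)
    interp = correct / (correct + omitted)
    mapping = correct / (correct + omitted + committed)
    return interp, mapping

# Hypothetical results for two macrophyte classes:
cm = {"class_a": {"class_a": 6, "class_b": 2},
      "class_b": {"class_b": 5, "class_a": 3}}
interp, mapping = class_accuracies(cm, "class_a")  # 0.75 and 6/11
```

Mapping accuracy is never higher than interpretation accuracy, which is why the reported mapping range (36-75%) sits below the omission-only range (50-100%).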
Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice
2014-04-01
Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue, owing to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Clustering for unsupervised fault diagnosis in nuclear turbine shut-down transients
NASA Astrophysics Data System (ADS)
Baraldi, Piero; Di Maio, Francesco; Rigamonti, Marco; Zio, Enrico; Seraoui, Redouane
2015-06-01
Empirical methods for fault diagnosis usually entail a process of supervised training based on a set of examples of signal evolutions "labeled" with the corresponding, known classes of fault. However, in practice, the signals collected during plant operation may be, very often, "unlabeled", i.e., the information on the corresponding type of occurred fault is not available. To cope with this practical situation, in this paper we develop a methodology for the identification of transient signals showing similar characteristics, under the conjecture that operational/faulty transient conditions of the same type lead to similar behavior in the measured signals evolution. The methodology is founded on a feature extraction procedure, which feeds a spectral clustering technique, embedding the unsupervised fuzzy C-means (FCM) algorithm, which evaluates the functional similarity among the different operational/faulty transients. A procedure for validating the plausibility of the obtained clusters is also propounded based on physical considerations. The methodology is applied to a real industrial case, on the basis of 148 shut-down transients of a Nuclear Power Plant (NPP) steam turbine.
Ask-the-expert: Active Learning Based Knowledge Discovery Using the Expert
NASA Technical Reports Server (NTRS)
Das, Kamalika; Avrekh, Ilya; Matthews, Bryan; Sharma, Manali; Oza, Nikunj
2017-01-01
Often the manual review of large data sets, either for purposes of labeling unlabeled instances or for classifying meaningful results from uninteresting (but statistically significant) ones is extremely resource intensive, especially in terms of subject matter expert (SME) time. Use of active learning has been shown to diminish this review time significantly. However, since active learning is an iterative process of learning a classifier based on a small number of SME-provided labels at each iteration, the lack of an enabling tool can hinder the process of adoption of these technologies in real-life, in spite of their labor-saving potential. In this demo we present ASK-the-Expert, an interactive tool that allows SMEs to review instances from a data set and provide labels within a single framework. ASK-the-Expert is powered by an active learning algorithm for training a classifier in the backend. We demonstrate this system in the context of an aviation safety application, but the tool can be adopted to work as a simple review and labeling tool as well, without the use of active learning.
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
Face representation-based classification methods can achieve high recognition rates when enough training samples are available for each subject. In practical applications, however, training samples are limited. To compensate, many methods use the original training samples together with corresponding virtual samples to strengthen the representation of the test sample. One approach directly combines the original training samples with their mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of original and mirror samples may not represent the test sample well. To tackle this problem, we propose a novel method that generates virtual samples by averaging the original training samples and their corresponding mirror samples. The original training samples and these virtual samples are then integrated to recognize the test sample. Experimental results on five face databases show that the proposed method partly overcomes the challenges posed by variations in pose, facial expression, and illumination in the original face images.
Direct Competitive Enzyme-Linked Immunosorbent Assay (ELISA).
Kohl, Thomas O; Ascoli, Carl A
2017-07-05
The competitive enzyme-linked immunosorbent assay (ELISA) (cELISA; also called an inhibition ELISA) is designed so that purified antigen competes with antigen in the test sample for binding to an antibody that has been immobilized in microtiter plate wells. The same concept works if the immobilized molecule is antigen and the competing molecules are purified labeled antibody versus antibody in a test sample. Direct cELISAs incorporate labeled antigen or antibody, whereas indirect assay configurations use reporter-labeled secondary antibodies. The cELISA is very useful for determining the concentration of small-molecule antigens in complex sample mixtures. In the direct cELISA, antigen-specific capture antibody is adsorbed onto the microtiter plate before incubation with either known standards or unknown test samples. Enzyme-linked antigen (i.e., labeled antigen) is also added, which can bind to the capture antibody only when the antibody's binding site is not occupied by either the antigen standard or antigen in the test samples. Unbound labeled and unlabeled antigens are washed away and substrate is added. The amount of antigen in the standard or the test sample determines the amount of reporter-labeled antigen bound to antibody, yielding a signal that is inversely proportional to antigen concentration within the sample. Thus, the higher the antigen concentration in the test sample, the less labeled antigen is bound to the capture antibody, and hence the weaker is the resultant signal. © 2017 Cold Spring Harbor Laboratory Press.
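The inverse signal-concentration relationship of a competitive ELISA is conventionally modeled with a decreasing four-parameter logistic (4PL) standard curve, which is then inverted to read unknown concentrations off measured signals. A sketch of that standard practice (the 4PL convention is general immunoassay practice, not stated in this protocol; parameter values below are invented):

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic: signal as a function of analyte
    concentration. For a competitive ELISA the curve is decreasing:
    a = signal at zero analyte (maximal labeled-antigen binding),
    d = signal at saturating analyte, c = midpoint (IC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Interpolate an unknown concentration from its measured signal."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Round trip with illustrative parameters: more analyte -> weaker signal.
a, b, c, d = 2.0, 1.0, 10.0, 0.1
y = four_pl(25.0, a, b, c, d)
x = inverse_four_pl(y, a, b, c, d)  # recovers 25.0
```

In practice the four parameters are fitted to the known standards first; the inversion is only trusted between the asymptotes `a` and `d`.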
Microarray slide hybridization using fluorescently labeled cDNA.
Ares, Manuel
2014-01-01
Microarray hybridization is used to determine the amount and genomic origins of RNA molecules in an experimental sample. Unlabeled probe sequences for each gene or gene region are printed in an array on the surface of a slide, and fluorescently labeled cDNA derived from the RNA target is hybridized to it. This protocol describes a blocking and hybridization protocol for microarray slides. The blocking step is particular to the chemistry of "CodeLink" slides, but it serves to remind us that almost every kind of microarray has a treatment step that occurs after printing but before hybridization. We recommend making sure of the precise treatment necessary for the particular chemistry used in the slides to be hybridized because the attachment chemistries differ significantly. Hybridization is similar to northern or Southern blots, but on a much smaller scale.
The iodide space in rabbit brain
Ahmed, Nawal; Van Harreveld, A.
1969-01-01
1. The iodide space in rabbit brain varies greatly depending on the conditions under which it is determined. 2. When 131I- only is used the iodide space 4 hr after administration of the marker is of the order of 2%. The iodide content of the cerebrospinal fluid (c.s.f.) is about 1% of that of the serum. 3. Depression of the active iodide transport by perchlorate increases the space to 8·2% and the iodide content of the c.s.f. to 26% of that of the serum. 4. The active iodide transport can also be depressed by saturation with unlabelled iodide. Up to a serum iodide concentration of 5 mM the space determined after 5 hr remained constant at 2·7%. The iodide space grew when the serum iodide content was enhanced from 5 to 20 mM, to become constant at a value of 10·6% on further increase of the serum iodide (up to 50 mM). The iodide content of the c.s.f. increased in a similar manner as the space with the iodide concentration of the serum to about 1/3 of the serum concentration. The iodide space of the muscle was independent of the plasma iodide content. 5. From 4 to 8 hr after administration of 131I- alone or with unlabelled iodide (to a serum concentration of 15 mM) the iodide space remained relatively constant. 6. When 131I- was administered in the fluid with which the ventricles were perfused an iodide space of about 7% was attained after about 5 hr. 7. In experiments in which 131I- was administered intravenously and the sink action of the c.s.f. was eliminated by perfusion of the ventricles with a perfusate containing as much 131I- as the plasma, the iodide space was 10·2%. When in addition active iodide transport was depressed by perchlorate the space increased to 16·8%. 8. Intravenous administration of labelled and unlabelled iodide (to a serum concentration of 20-40 mM) and ventricle perfusion with the same concentration of 131I- and unlabelled iodide as in the plasma yielded an iodide space of 20·8%. 
In similar experiments the iodide concentration of the perfusate was so adjusted that after 5 hr perfusion its iodide content hardly changed during the passage through the ventricles. Under these conditions the iodide concentrations of the extracellular and perfusion fluids can be considered to be near equal. The iodide space computed on the basis of the iodide content of the outflowing fluid was 22·5%. 9. The large iodide space could be equated with the extracellular space if the iodide remained extracellular. This seems to be the case in the muscle where the iodide space is similar to the inulin space. 10. The large effects on the iodide space of perchlorate and of saturation with unlabelled iodide in experiments in which the marker was administered intravenously and in the perfusate (7 and 8) suggest the presence of an active iodide transport from the brain extracellular fluid into the blood over the blood-brain barrier. PMID:4310942
A machine learning pipeline for automated registration and classification of 3D lidar data
NASA Astrophysics Data System (ADS)
Rajagopal, Abhejit; Chellappan, Karthik; Chandrasekaran, Shivkumar; Brown, Andrew P.
2017-05-01
Despite the large availability of geospatial data, registration and exploitation of these datasets remains a persistent challenge in geoinformatics. Popular signal processing and machine learning algorithms, such as non-linear SVMs and neural networks, rely on well-formatted input models as well as reliable output labels, which are not always immediately available. In this paper we outline a pipeline for gathering, registering, and classifying initially unlabeled wide-area geospatial data. As an illustrative example, we demonstrate the training and testing of a convolutional neural network to recognize 3D models in the OGRIP 2007 LiDAR dataset using fuzzy labels derived from OpenStreetMap as well as other datasets available on OpenTopography.org. When auxiliary label information is required, various text and natural language processing filters are used to extract and cluster keywords useful for identifying potential target classes. A subset of these keywords is subsequently used to form multi-class labels, with no assumption of independence. Finally, we employ class-dependent geometry extraction routines to identify candidates from both training and testing datasets. Our regression networks are able to identify the presence of 6 structural classes, including roads, walls, and buildings, in volumes as large as 8000 m³ in as little as 1.2 seconds on a commodity 4-core Intel CPU. Owing to the registration process, the presented framework is limited neither to a particular dataset nor to a particular sensor modality, and is capable of multi-sensor data fusion.
Huang, Yue; Zheng, Han; Liu, Chi; Ding, Xinghao; Rohde, Gustavo K
2017-11-01
Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require collection of large volumes of labeled data in order to train a new neural network when there are changes to the image acquisition procedure. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which results in limitations in real-world applications. In this paper, a very simple but effective deep learning method is proposed that introduces the concept of unsupervised domain adaptation to a simple convolutional neural network (CNN). Inspired by transfer learning, our paper assumes that the training data and testing data follow different distributions, and applies an adaptation operation to more accurately estimate the kernels of the CNN during feature extraction, in order to enhance performance by transferring knowledge from labeled data in the source domain to unlabeled data in the target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validations. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered a better option for real-world applications in histopathological image analysis, since there is no longer a requirement for large-scale labeled data in each specified domain.
Discovery of Deep Structure from Unlabeled Data
2014-11-01
GPU processors. To evaluate the unsupervised learning component of the algorithms (which has become of less importance in the era of “big data...representations to those in biological visual, auditory, and somatosensory cortex; and ran numerous control experiments investigating the impact of
USDA-ARS?s Scientific Manuscript database
A methodology is presented to characterize complex protein assembly pathways by fluorescence correlation spectroscopy. We have derived the total autocorrelation function describing the behavior of mixtures of labeled and unlabeled protein under equilibrium conditions. Our modeling approach allows us...
Microfluidic labeling of biomolecules with radiometals for use in nuclear medicine.
Wheeler, Tobias D; Zeng, Dexing; Desai, Amit V; Önal, Birce; Reichert, David E; Kenis, Paul J A
2010-12-21
Radiometal-based radiopharmaceuticals, used as imaging and therapeutic agents in nuclear medicine, consist of a radiometal that is bound to a targeting biomolecule (BM) using a bifunctional chelator (BFC). Conventional, macroscale radiolabeling methods use an excess of the BFC-BM conjugate (ligand) to achieve high radiolabeling yields. Subsequently, to achieve maximal specific activity (minimal amount of unlabeled ligand), extensive chromatographic purification is required to remove unlabeled ligand, often resulting in longer synthesis times and loss of imaging sensitivity due to radioactive decay. Here we describe a microreactor that overcomes the above issues through integration of efficient mixing and heating strategies while working with small volumes of concentrated reagents. As a model reaction, we radiolabel 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) conjugated to the peptide cyclo(Arg-Gly-Asp-DPhe-Lys) with (64)Cu(2+). We show that the microreactor (made from polydimethylsiloxane and glass) can withstand 260 mCi of activity over 720 hours and retains only minimal amounts of (64)Cu(2+) (<5%) upon repeated use. A direct comparison between the radiolabeling yields obtained using the microreactor and conventional radiolabeling methods shows that improved mixing and heat transfer in the microreactor leads to higher yields for identical reaction conditions. Most importantly, by using small volumes (~10 µL) of concentrated solutions of reagents (>50 µM), yields of over 90% can be achieved in the microreactor when using a 1:1 stoichiometry of radiometal to BFC-BM. These high yields eliminate the need for use of excess amounts of often precious BM and obviate the need for a chromatographic purification process to remove unlabeled ligand. The results reported here demonstrate the potential of microreactor technology to improve the production of patient-tailored doses of radiometal-based radiopharmaceuticals in the clinic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ledee, Dolena; Smith, Lincoln; Bruce, Margaret
2015-08-12
Pressure overload cardiac hypertrophy alters substrate metabolism. Prior work showed that myocardial inactivation of c-Myc (Myc) attenuated hypertrophy and decreased expression of metabolic genes after aortic constriction. Accordingly, we hypothesize that Myc regulates substrate preferences for the citric acid cycle during pressure overload hypertrophy from transverse aortic constriction (TAC) and that these metabolic changes impact cardiac function and growth. To test this hypothesis, we subjected mice with cardiac specific, inducible Myc inactivation (MycKO-TAC) and non-transgenic littermates (Cont-TAC) to transverse aortic constriction (TAC; n=7/group). A separate group underwent sham surgery (Sham, n=5). After two weeks, function was measured in isolated working hearts along with substrate fractional contributions to the citric acid cycle by using perfusate with 13C labeled mixed fatty acids, lactate, ketone bodies and unlabeled glucose and insulin. Cardiac function was similar between groups after TAC although +dP/dT and -dP/dT trended towards improvement in MycKO-TAC versus Cont-TAC. Compared to Sham, Cont-TAC had increased free fatty acid fractional contribution with a concurrent decrease in unlabeled (predominately glucose) contribution. The changes in free fatty acid and unlabeled fractional contributions were abrogated by Myc inactivation during TAC (MycKO-TAC). Additionally, protein posttranslational modification by O-GlcNAc was significantly greater in Cont-TAC versus both Sham and MycKO-TAC. Lastly, Myc alters substrate preferences for the citric acid cycle during early pressure overload hypertrophy without negatively affecting cardiac function. Myc also affects protein posttranslational modifications by O-GlcNAc during hypertrophy.
Vranish, James N.; Russell, William K.; Yu, Lusa E.; ...
2014-12-05
Iron–sulfur (Fe–S) clusters are protein cofactors that are constructed and delivered to target proteins by elaborate biosynthetic machinery. Mechanistic insights into these processes have been limited by the lack of sensitive probes for tracking Fe–S cluster synthesis and transfer reactions. Here we present fusion protein- and intein-based fluorescent labeling strategies that can probe Fe–S cluster binding. The fluorescence is sensitive to different cluster types ([2Fe–2S] and [4Fe–4S] clusters), ligand environments ([2Fe–2S] clusters on Rieske, ferredoxin (Fdx), and glutaredoxin), and cluster oxidation states. The power of this approach is highlighted with an extreme example in which the kinetics of Fe–S cluster transfer reactions are monitored between two Fdx molecules that have identical Fe–S spectroscopic properties. This exchange reaction between labeled and unlabeled Fdx is catalyzed by dithiothreitol (DTT), a result that was confirmed by mass spectrometry. DTT likely functions in a ligand substitution reaction that generates a [2Fe–2S]–DTT species, which can transfer the cluster to either labeled or unlabeled Fdx. The ability to monitor this challenging cluster exchange reaction indicates that real-time Fe–S cluster incorporation can be tracked for a specific labeled protein in multicomponent assays that include several unlabeled Fe–S binding proteins or other chromophores. Such advanced kinetic experiments are required to untangle the intricate networks of transfer pathways and the factors affecting flux through branch points. High sensitivity and suitability with high-throughput methodology are additional benefits of this approach. Lastly, we anticipate that this cluster detection methodology will transform the study of Fe–S cluster pathways and potentially other metal cofactor biosynthetic pathways.
Su, Kyung-Min; Hairston, W David; Robbins, Kay
2018-01-01
In controlled laboratory EEG experiments, researchers carefully mark events and analyze subject responses time-locked to these events. Unfortunately, such markers may not be available or may come with poor timing resolution for experiments conducted in less-controlled naturalistic environments. We present an integrated event-identification method for identifying particular responses that occur in unlabeled continuously recorded EEG signals based on information from recordings of other subjects potentially performing related tasks. We introduce the idea of timing slack and timing-tolerant performance measures to deal with jitter inherent in such non-time-locked systems. We have developed an implementation available as an open-source MATLAB toolbox (http://github.com/VisLab/EEG-Annotate) and have made test data available in a separate data note. We applied the method to identify visual presentation events (both target and non-target) in data from an unlabeled subject using labeled data from other subjects with good sensitivity and specificity. The method also identified actual visual presentation events in the data that were not previously marked in the experiment. Although the method uses traditional classifiers for initial stages, the problem of identifying events based on the presence of stereotypical EEG responses is the converse of the traditional stimulus-response paradigm and has not been addressed in its current form. In addition to identifying potential events in unlabeled or incompletely labeled EEG, these methods also allow researchers to investigate whether particular stereotypical neural responses are present in other circumstances. Timing-tolerance has the added benefit of accommodating inter- and intra-subject timing variations. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Smith, Gordon I; Patterson, Bruce W; Klein, Seth J; Mittendorfer, Bettina
2015-09-15
Accurate measurement of muscle protein turnover is critical for understanding the physiological processes underlying muscle atrophy and hypertrophy. Several mathematical approaches, used in conjunction with a tracer amino acid infusion, have been described to derive protein synthesis and breakdown rates from a two-pool (artery-vein) model. Despite apparently common underlying principles, these approaches differ significantly (some seem to not take into account arterio-venous shunting of amino acids, which comprises ∼80-90% of amino acids appearing in the vein) and most do not specify how tracer enrichment (i.e. mole percent excess (MPE) or tracer-to-tracee ratio (TTR)) and amino acid concentration (i.e. unlabelled only or total labelled plus unlabelled) should be expressed, which could have a significant impact on the outcome when using stable isotope labelled tracers. We developed equations that avoid these uncertainties and used them to calculate leg phenylalanine (Phe) kinetics in subjects who received a [2H5]Phe tracer infusion during postabsorptive conditions and during a hyperinsulinaemic-euglycaemic clamp with concomitant protein ingestion. These results were compared with those obtained by analysing the same data with previously reported equations. Only some of them computed the results correctly when used with MPE as the enrichment measure and total (tracer+tracee) Phe concentrations; errors up to several-fold in magnitude were noted when the same approaches were used in conjunction with TTR and/or unlabelled concentration only, or when using the other approaches (irrespective of how concentration and enrichment are expressed). Our newly developed equations should facilitate accurate calculation of protein synthesis and breakdown rates. © 2015 The Authors. The Journal of Physiology © 2015 The Physiological Society.
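The distinction between the two enrichment measures discussed above can be made concrete. The sketch below follows the standard definitions TTR = tracer/tracee and MPE = tracer/(tracer + tracee) × 100; the function names are illustrative and these are not the authors' new equations:

```python
# Conversions between tracer-enrichment measures and concentration
# conventions for stable-isotope tracer calculations.

def ttr_to_mpe(ttr):
    """Mole percent excess from tracer-to-tracee ratio."""
    return ttr / (1.0 + ttr) * 100.0

def mpe_to_ttr(mpe):
    """Tracer-to-tracee ratio from mole percent excess."""
    f = mpe / 100.0
    return f / (1.0 - f)

def total_concentration(unlabelled, ttr):
    """Total (tracer + tracee) concentration from the unlabelled
    concentration and the enrichment, as needed when an equation
    expects total rather than unlabelled-only concentrations."""
    return unlabelled * (1.0 + ttr)
```

Feeding TTR into an equation derived for MPE, or an unlabelled-only concentration into one expecting the total, is exactly the kind of convention mismatch the abstract warns can produce several-fold errors.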
The Effect of Asymmetrical Sample Training on Retention Functions for Hedonic Samples in Rats
ERIC Educational Resources Information Center
Simmons, Sabrina; Santi, Angelo
2012-01-01
Rats were trained in a symbolic delayed matching-to-sample task to discriminate sample stimuli that consisted of the presence of food or the absence of food. Asymmetrical sample training was provided in which one group was initially trained with only the food sample and the other group was initially trained with only the no-food sample. In…
B, Vinoth; Lai, Xin-Ji; Lin, Yu-Chih; Tu, Han-Yen; Cheng, Chau-Jern
2018-04-13
Digital holographic microtomography is a promising technique for three-dimensional (3D) measurement of the refractive index (RI) profiles of biological specimens. Measurement of the RI distribution of a free-floating single living cell with an isotropic superresolution had not previously been accomplished. To the best of our knowledge, this is the first study focusing on the development of an integrated dual-tomographic (IDT) imaging system for RI measurement of an unlabelled free-floating single living cell with an isotropic superresolution by combining the spatial frequencies of full-angle specimen rotation with those of beam rotation. A novel 'UFO'-like (unidentified flying object) shaped coherent transfer function is obtained. The IDT imaging system does not require any complex image-processing algorithm for 3D reconstruction. The working principle was successfully demonstrated and a 3D RI profile of a single living cell, Candida rugosa, was obtained with an isotropic superresolution. This technology is expected to set a benchmark for free-floating single live sample measurements without labeling or any special sample preparations for the experiments.
Malik, Nikita; Kumar, Ashutosh
2016-09-01
NMR resonance assignment of intrinsically disordered proteins poses a challenge because of the limited dispersion of amide proton chemical shifts. This becomes even more complex with an increase in the size of the system. Residue-specific selective labeling/unlabeling experiments have been used to resolve the overlap, but require multiple sample preparations. Here, we demonstrate an assignment strategy requiring only a single sample of uniformly labeled (13)C,(15)N-protein. We have used a combinatorial approach, involving 3D-HNN, CC(CO)NH and 2D-MUSIC, which allowed us to assign a denatured centromeric protein Cse4 of 229 residues. Further, we show that even the less sensitive experiments, when used in an efficient manner, can lead to the complete assignment of a complex system without the use of specialized probes in a relatively short time frame. The assignment of the amino acids discloses the presence of local structural propensities even in the denatured state, accompanied by restricted motion in certain regions, which provides insights into the early folding events of the protein.
Galmozzi, E; Facchetti, F; Degasperi, E; Aghemo, A; Lampertico, P
2013-02-01
Recently, genome-wide association studies (GWAS) in patients with chronic hepatitis C virus (HCV) infection have identified two functional single nucleotide polymorphisms (SNPs) in the inosine triphosphatase (ITPA) gene that are associated strongly and independently with hemolytic anemia in patients exposed to pegylated-interferon (Peg-IFN) plus ribavirin (RBV) combination therapy. Here, a simplified allele discrimination polymerase chain reaction (PCR) assay named allelic inhibition of displacement activity (AIDA) has been developed for the evaluation of ITPA polymorphisms. The AIDA system relies on only three unlabeled primers: two outer common primers and one inner primer with an allele-specific 3' terminus mismatch. DNA samples from 192 patients with chronic HCV infection were used to validate the AIDA system and results were compared with the gold standard TaqMan(®) SNP genotyping assay. Concordant data were obtained for all samples, attesting to the high specificity of the method. In conclusion, AIDA is a practical one-tube method to reproducibly and accurately assess the rs7270101 and rs1127354 ITPA SNPs. Copyright © 2012 Elsevier B.V. All rights reserved.
Analysis of the NMI01 marker for a population database of cannabis seeds.
Shirley, Nicholas; Allgeier, Lindsay; Lanier, Tommy; Coyle, Heather Miller
2013-01-01
We have analyzed the distribution of genotypes at a single hexanucleotide short tandem repeat (STR) locus in a Cannabis sativa seed database along with seed-packaging information. This STR locus is defined by the polymerase chain reaction amplification primers CS1F and CS1R and is referred to as NMI01 (for National Marijuana Initiative) in our study. The population database consists of seed seizures of two categories: seed samples from packages labeled and unlabeled regarding seed bank source. In a population database of 93 processed seeds, including 12 labeled Cannabis varieties, the observed genotypes generated from single seeds exhibited between one and three peaks (potentially six alleles if in homozygous state). The total number of observed genotypes was 54, making this marker highly specific and highly individualizing even among seeds of common lineage. Cluster analysis associated many, but not all, of the handwritten labeled seed varieties tested to date, as well as the National Park seizure, to our known reference database containing Mr. Nice Seedbank and Sensi Seeds commercially packaged reference samples. © 2012 American Academy of Forensic Sciences.
Domain-Invariant Partial-Least-Squares Regression.
Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne
2018-05-11
Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.
St. Petersburg Coastal and Marine Science Center's Core Archive Portal
Reich, Chris; Streubert, Matt; Dwyer, Brendan; Godbout, Meg; Muslic, Adis; Umberger, Dan
2012-01-01
This Web site contains information on rock cores archived at the U.S. Geological Survey (USGS) St. Petersburg Coastal and Marine Science Center (SPCMSC). Archived cores consist of 3- to 4-inch-diameter coral cores, 1- to 2-inch-diameter rock cores, and a few unlabeled loose coral and rock samples. This document - and specifically the archive Web site portal - is intended to be a 'living' document that will be updated continually as additional cores are collected and archived. This document may also contain future references and links to a catalog of sediment cores. Sediment cores will include vibracores, pushcores, and other loose sediment samples collected for research purposes. This document will: (1) serve as a database for locating core material currently archived at the USGS SPCMSC facility; (2) provide a protocol for entry of new core material into the archive system; and, (3) set the procedures necessary for checking out core material for scientific purposes. Core material may be loaned to other governmental agencies, academia, or non-governmental organizations at the discretion of the USGS SPCMSC curator.
Can Semi-Supervised Learning Explain Incorrect Beliefs about Categories?
ERIC Educational Resources Information Center
Kalish, Charles W.; Rogers, Timothy T.; Lang, Jonathan; Zhu, Xiaojin
2011-01-01
Three experiments with 88 college-aged participants explored how unlabeled experiences--learning episodes in which people encounter objects without information about their category membership--influence beliefs about category structure. Participants performed a simple one-dimensional categorization task in a brief supervised learning phase, then…
Systematic Thinking Fostered by Illustrations in Scientific Text.
ERIC Educational Resources Information Center
Mayer, Richard E.
1989-01-01
In two experiments, a total of 78 female college students, who were novices about automobile mechanics, read technical passages about vehicle braking systems with and without illustrations that were labeled or unlabeled. Results indicate that illustrations help readers focus attention and form mental models. (SLD)
Measurement of deuterium-labeled phylloquinone in plasma by LC-APCI-MS
USDA-ARS?s Scientific Manuscript database
Deuterium-labeled vegetables were fed to humans for the measurement of both unlabeled and deuterium-labeled phylloquinone in plasma. We developed a technique to determine the quantities of these compounds using liquid chromatography/mass spectrometry with atmospheric pressure chemical ionization (LC...
In vitro metabolism of radiolabeled carbohydrates by protective cecal anaerobic bacteria.
Hume, M E; Beier, R C; Hinton, A; Scanlan, C M; Corrier, D E; Peterson, D V; DeLoach, J R
1993-12-01
Cecal anaerobic bacteria from adult broilers were cultured in media containing .25% glucose or .25% lactose. Media also contained either [14C]-labeled lactose, glucose, galactose, or lactic acid as metabolic tracers. Cultures were analyzed at 4, 8, and 12 h for pH, radiolabeled and unlabeled volatile fatty acids, and lactic acid. The pH values of cultures containing .25% lactose were significantly (P < .05) higher than the pH values of cultures containing .25% glucose. Lactose cultures reached their lowest pH more slowly than glucose cultures. Concentrations of unlabeled volatile fatty acids increased and lactic acid decreased during incubation of the cultures. Radiolabeled sugars and lactic acid were more readily metabolized to volatile fatty acids in media containing lactose than in media containing glucose. The preferred metabolism of [14C]substrates, independent of media carbohydrate, was in the following order: lactic acid > galactose, lactose > glucose. The volatile fatty acids in which radiolabel was most concentrated were acetic acid, propionic acid, or butyric acid.
Schulze, Philipp; Ludwig, Martin; Kohler, Frank; Belder, Detlev
2005-03-01
Deep UV fluorescence detection at 266-nm excitation wavelength has been realized for sensitive detection in microchip electrophoresis. For this purpose, an epifluorescence setup was developed enabling the coupling of a deep UV laser into a commercial fluorescence microscope. Deep UV laser excitation utilizing a frequency quadrupled pulsed laser operating at 266 nm shows an impressive performance for native fluorescence detection of various compounds in fused-silica microfluidic devices. Aromatic low molecular weight compounds such as serotonin, propranolol, a diol, and tryptophan could be detected at low-micromolar concentrations. Deep UV fluorescence detection was also successfully employed for the detection of unlabeled basic proteins. For this purpose, fused-silica chips dynamically coated with hydroxypropylmethyl cellulose were employed to suppress analyte adsorption. Utilizing fused-silica chips permanently coated with poly(vinyl alcohol), it was also possible to separate and detect egg white chicken proteins. These data show that deep UV fluorescence detection significantly widens the application range of fluorescence detection in chip-based analysis techniques.
Object-graphs for context-aware visual category discovery.
Lee, Yong Jae; Grauman, Kristen
2012-02-01
How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
A novel heterogeneous training sample selection method on space-time adaptive processing
NASA Astrophysics Data System (ADS)
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
2018-04-01
The ground-target detection performance of space-time adaptive processing (STAP) degrades when non-homogeneity of clutter power arises from training samples contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with larger values are preferentially selected to realize dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
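The similarity-based rejection step can be illustrated with a small sketch. This is not the authors' implementation: the point-set treatment of each training snapshot, the symmetric averaging in the distance, and the keep-the-most-similar rule are illustrative assumptions.

```python
# Sketch: reject contaminated STAP training samples by mutual
# mean-Hausdorff similarity (samples unlike the majority are dropped).
import numpy as np

def mean_hausdorff(A, B):
    """Symmetric mean-Hausdorff distance between point sets A (m x d)
    and B (n x d): average nearest-neighbour distance, both directions."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

def select_homogeneous(samples, keep):
    """Score each candidate sample by its average mean-Hausdorff
    distance to all other samples; keep the `keep` most mutually
    similar ones (likely uncontaminated by target-like signals)."""
    n = len(samples)
    score = np.zeros(n)
    for i in range(n):
        score[i] = np.mean([mean_hausdorff(samples[i], samples[j])
                            for j in range(n) if j != i])
    order = np.argsort(score)   # most similar (lowest score) first
    return order[:keep]
```

A sample contaminated by a target-like signal sits far from the bulk of the training set under this distance, so it lands at the end of the ranking and is excluded.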
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the restrictions on improving face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples genuinely reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more essential information about the face is retained. Moreover, uncertainty in the training data is reduced as the number of training samples increases, which benefits the training phase. The devised representation-based classifier uses both the original and newly generated samples to perform the classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
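The two stages described above (virtual-sample generation and the representation-based classifier) can be sketched as follows. This is a simplified reading of the abstract, assuming vectorized images; the choice of K, the least-squares solver, and the class-wise residual rule are illustrative, not the authors' exact formulation.

```python
# Sketch: expand the training set with pairwise products of
# same-subject images, then classify by linear representation
# over the K nearest training samples.
import numpy as np

def virtual_samples(X, y):
    """Elementwise-multiply each pair of training images from the
    same subject to create additional virtual training samples."""
    VX, Vy = [], []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        for i in idx:
            for j in idx:
                if i < j:
                    VX.append(X[i] * X[j])
                    Vy.append(c)
    return np.array(VX), np.array(Vy)

def classify(test, X, y, K=5):
    """Select the K nearest training samples by Euclidean distance,
    fit a least-squares linear combination to the test sample, and
    assign the class whose selected samples best reconstruct it."""
    d = np.linalg.norm(X - test, axis=1)
    near = np.argsort(d)[:K]
    A, labels = X[near].T, y[near]            # columns = selected samples
    coef, *_ = np.linalg.lstsq(A, test, rcond=None)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        err = np.linalg.norm(test - A[:, mask] @ coef[mask])
        if err < best_err:
            best, best_err = c, err
    return best
```

In practice the original and virtual samples would be stacked into one training matrix before calling `classify`, so the representation can draw on both.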
Time Series Proteome Profiling
Formolo, Catherine A.; Mintz, Michelle; Takanohashi, Asako; Brown, Kristy J.; Vanderver, Adeline; Halligan, Brian; Hathout, Yetrib
2014-01-01
This chapter provides a detailed description of a method used to study temporal changes in the endoplasmic reticulum (ER) proteome of fibroblast cells exposed to ER stress agents (tunicamycin and thapsigargin). Differential stable isotope labeling by amino acids in cell culture (SILAC) is used in combination with crude ER fractionation, SDS–PAGE and LC-MS/MS to define altered protein expression in tunicamycin- or thapsigargin-treated cells versus untreated cells. Treated and untreated cells are harvested at different time points, mixed at a 1:1 ratio and processed for ER fractionation. Samples containing labeled and unlabeled proteins are separated by SDS–PAGE, bands are digested with trypsin and the resulting peptides analyzed by LC-MS/MS. Proteins are identified using Bioworks software and the Swiss-Prot database, whereas ratios of protein expression between treated and untreated cells are quantified using ZoomQuant software. Data visualization is facilitated by GeneSpring software. PMID:21082445
Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F
2010-06-01
Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
Fluorescence polarization immunoassays for rapid, accurate and sensitive determination of mycotoxins
USDA-ARS?s Scientific Manuscript database
Fluorescence polarization immunoassay (FPIA) is a type of homogeneous assay. For low molecular weight antigens, such as mycotoxins, it is based on the competition between an unlabeled antigen and its fluorescent-labeled derivative (tracer) for an antigen-specific antibody. The antigen content is det...
27 CFR 20.177 - Encased containers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Encased containers. 20.177... Users of Specially Denatured Spirits Operations by Dealers § 20.177 Encased containers. (a) A dealer may package specially denatured spirits in unlabeled containers which are completely encased in wood...
27 CFR 20.145 - Encased containers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Encased containers. 20.145... Denatured Alcohol § 20.145 Encased containers. Completely denatured alcohol may be packaged by distributors in unlabeled containers which are completely encased in wood, fiberboard, or similar material so that...
USDA-ARS?s Scientific Manuscript database
Monosaccharide C-glycoside ketones have been prepared by aqueous-based Knoevenagel condensation of isotopically-labeled and unlabeled aldoses with cyclic diketones, 5,5-dimethyl-1,3-cyclohexanedione (dimedone) and 1,3-cyclohexanedione (1,3-CHD). The reaction products and their corresponding acetyla...
The synthesis of tritium-labelled human corticotropin of high specific radioactivity.
Brundish, D E; Wade, R
1977-01-01
Human [[3,5-3H2]Tyr23]corticotropin-(1-39)-nonatriacontapeptide of specific radioactivity 25.2 Ci/mmol, identical with unlabelled human corticotropin by several criteria, was prepared via the fully protected di-iodotyrosine compound. The latter was synthesized by classical procedures. PMID:196594
Learning Pronunciations from Unlabeled Evidence
ERIC Educational Resources Information Center
Reddy, Sravana
2012-01-01
The pronunciation of a word represented in an alphabetic writing system (such as this one) is relatively transparent--but a language's sounds change over time and vary across space, while its spellings remain relatively static, resulting in some amount of divergence between the written and spoken forms. The introduction of loanwords and…
Szymanski, Witold G.; Kierszniowska, Sylwia; Schulze, Waltraud X.
2013-01-01
Plasma membrane microdomains are features based on the physical properties of the lipid and sterol environment and have particular roles in signaling processes. Extracting sterol-enriched membrane microdomains from plant cells for proteomic analysis is a difficult task mainly due to multiple preparation steps and sources for contaminations from other cellular compartments. The plasma membrane constitutes only about 5-20% of all the membranes in a plant cell, and therefore isolation of a highly purified plasma membrane fraction is challenging. A frequently used method involves aqueous two-phase partitioning in polyethylene glycol and dextran, which yields plasma membrane vesicles with a purity of 95% [1]. Sterol-rich membrane microdomains within the plasma membrane are insoluble upon treatment with cold nonionic detergents at alkaline pH. This detergent-resistant membrane fraction can be separated from the bulk plasma membrane by ultracentrifugation in a sucrose gradient [2]. Subsequently, proteins can be extracted from the low-density band of the sucrose gradient by methanol/chloroform precipitation. The extracted protein is then trypsin digested, desalted and finally analyzed by LC-MS/MS. Our extraction protocol for sterol-rich microdomains is optimized for the preparation of clean detergent-resistant membrane fractions from Arabidopsis thaliana cell cultures. We use full metabolic labeling of Arabidopsis thaliana suspension cell cultures with K15NO3 as the only nitrogen source for quantitative comparative proteomic studies following a biological treatment of interest [3]. By mixing equal ratios of labeled and unlabeled cell cultures for joint protein extraction, the influence of preparation steps on the final quantitative result is kept at a minimum. Loss of material during extraction will also affect both control and treatment samples in the same way, and therefore the ratio of light and heavy peptides will remain constant.
In the proposed method either the labeled or the unlabeled cell culture undergoes a biological treatment, while the other serves as control [4]. PMID:24121251
Trettin, Arne; Jordan, Jens; Tsikas, Dimitrios
2014-09-01
Paracetamol (acetaminophen, APAP) is a commonly used analgesic drug. Known paracetamol metabolites include the glucuronide, sulfate and mercapturate. N-Acetyl-p-benzoquinone imine (NAPQI) is considered the toxic intermediate metabolite of paracetamol. In vitro and in vivo studies indicate that paracetamol is also metabolized to additional, poorly characterized metabolites. For example, metabolomic studies in urine samples of APAP-treated mice revealed metabolites such as APAP-sulfate-APAP and APAP-S-S-APAP in addition to the classical phase II metabolites. Here, we report on the development and application of LC-MS and LC-MS/MS approaches to study reactions of unlabelled and (2)H-labelled APAP with unlabelled and (15)N-labelled nitrite in aqueous phosphate buffers (pH 7.4) upon their immersion into liquid nitrogen (-196°C). In mechanistic studies, these reactions were also studied in aqueous buffer prepared in (18)O-labelled water. LC-MS and LC-MS/MS analyses were performed on a reverse-phase material (C18) using gradient elution (2 mM ammonium acetate/acetonitrile), in positive and negative electrospray mode. We identified a series of APAP metabolites including di-, tri- and tetra-APAP, mono- and di-nitro-APAP and the nitric ester of di-APAP. Our study indicates that nitrite induces oxidation, i.e., polymerization and nitration of APAP, when buffered APAP/nitrite solutions are immersed into liquid nitrogen. These reactions are specific for nitrite with respect to nitrate and do not proceed via intermediate formation of NAPQI. Potassium ions and physiological saline, but not thiols, inhibit the nitrite- and shock-freeze-induced reactions of paracetamol. The underlying mechanism likely involves in situ formation of NO2 radicals from nitrite secondary to profound pH reduction (down to pH 1) and disproportionation. Polymeric paracetamol species can be analyzed as pentafluorobenzyl derivatives by LC-MS but not by GC-MS. Copyright © 2013 Elsevier B.V. All rights reserved.
Aung, Winn; Tsuji, Atsushi B.; Sudo, Hitomi; Sugyo, Aya; Ukai, Yoshinori; Kouda, Katsushi; Kurosawa, Yoshikazu; Furukawa, Takako; Saga, Tsuneo
2016-01-01
The contribution of integrin α6β4 (α6β4) overexpression to pancreatic cancer invasion and metastasis has been previously shown. We have reported immunotargeting of α6β4 for radionuclide-based and near-infrared fluorescence imaging in a pancreatic cancer model. In this study, we prepared yttrium-90 labeled anti-α6β4 antibody (90Y-ITGA6B4) and evaluated its radioimmunotherapeutic efficacy against pancreatic cancer xenografts in nude mice. Mice bearing xenograft tumors were randomly divided into 5 groups: (1) single administration of 90Y-ITGA6B4 (3.7MBq), (2) double administrations of 90Y-ITGA6B4 on a once-weekly schedule (3.7MBq × 2), (3) single administration of unlabeled ITGA6B4, (4) double administrations of unlabeled ITGA6B4 on a once-weekly schedule and (5) the untreated control. Biweekly tumor volume measurements and immunohistochemical analyses of tumors at 2 days post-administration were performed to monitor the response to treatments. To assess toxicity, body weight was measured biweekly. Additionally, at 27 days post-administration, blood samples were collected through cardiac puncture, and hematological parameters and hepatic and renal functions were analyzed. Both 90Y-ITGA6B4 treatment groups showed reduction in tumor volumes (P < 0.04), decreased cell proliferation marker Ki-67-positive cells and increased DNA damage marker p-H2AX-positive cells, compared with the other groups. Mice treated with double administrations of 90Y-ITGA6B4 exhibited myelosuppression. There were no significant differences in hepatic and renal functions between the 2 treatment groups and the other groups. Our results suggest that 90Y-ITGA6B4 is a promising radioimmunotherapeutic agent against α6β4-overexpressing tumors. In future studies, dose adjustment for fractionated RIT should be considered carefully in order to achieve the optimal effect while avoiding myelotoxicity. PMID:27246980
Herrmann, Elena; Young, Wayne; Reichert-Grimm, Verena; Weis, Severin; Riedel, Christian U.; Rosendale, Douglas; Stoklosinski, Halina; Hunt, Martin; Egert, Markus
2018-01-01
Resistant starch (RS) is the digestion-resistant fraction of the complex polysaccharide starch. By reaching the large bowel, RS can function as a prebiotic carbohydrate, i.e., it can shape the structure and activity of bowel bacterial communities towards a profile that confers health benefits. However, knowledge about the fate of RS in complex intestinal communities and the microbial members involved in its degradation is limited. In this study, 16S ribosomal RNA (rRNA)-based stable isotope probing (RNA-SIP) was used to identify mouse bowel bacteria involved in the assimilation of RS or its derivatives directly in their natural gut habitat. Stable-isotope [U-13C]-labeled native potato starch was administered to mice, and caecal contents were collected at 0 h (before administration) and at 2 h and 4 h after administration. ‘Heavy’, isotope-labeled [13C]RNA species, presumably derived from bacteria that had metabolized the labeled starch, were separated from ‘light’, unlabeled [12C]RNA species by fractionation of isolated total RNA in isopycnic-density gradients. Inspection of different density gradients showed a continuous increase in ‘heavy’ 16S rRNA in caecal samples over the course of the experiment. Sequencing analyses of unlabeled and labeled 16S amplicons suggested a group of unclassified Clostridiales, Dorea, and a few other taxa (Bacteroides, Turicibacter) to be most actively involved in starch assimilation in vivo. In addition, metabolic product analyses revealed that the predominant 13C-labeled short chain fatty acid (SCFA) produced in caecal contents from the [U-13C] starch was butyrate. For the first time, this study provides insights into the metabolic transformation of RS by intestinal bacterial communities directly within a gut ecosystem, which will help to better understand its prebiotic potential and possible applications in human health. PMID:29415499
Cy5 total protein normalization in Western blot analysis.
Hagner-McWhirter, Åsa; Laurin, Ylva; Larsson, Anita; Bjerneld, Erik J; Rönn, Ola
2015-10-01
Western blotting is a widely used method for analyzing specific target proteins in complex protein samples. Housekeeping proteins are often used for normalization to correct for uneven sample loads, but these require careful validation since expression levels may vary with cell type and treatment. We present a new, more reliable method for normalization using Cy5-prelabeled total protein as a loading control. We used a prelabeling protocol based on Cy5 N-hydroxysuccinimide ester labeling that produces a linear signal response. We obtained a low coefficient of variation (CV) of 7% between the ratio of extracellular signal-regulated kinase (ERK1/2) target to Cy5 total protein control signals over the whole loading range from 2.5 to 20.0μg of Chinese hamster ovary cell lysate protein. Corresponding experiments using actin or tubulin as controls for normalization resulted in CVs of 13 and 18%, respectively. Glyceraldehyde-3-phosphate dehydrogenase did not produce a proportional signal and was not suitable for normalization in these cells. A comparison of ERK1/2 signals from labeled and unlabeled samples showed that Cy5 prelabeling did not affect antibody binding. By using total protein normalization we analyzed PP2A and Smad2/3 levels with high confidence. Copyright © 2015 Elsevier Inc. All rights reserved.
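The normalization arithmetic behind the reported coefficients of variation can be illustrated with a toy computation; the signal values and function names below are invented for illustration, not taken from the study.

```python
import statistics

def normalized_ratios(target_signals, total_protein_signals):
    # per-lane ratio of target band signal to Cy5 total-protein signal
    return [t / p for t, p in zip(target_signals, total_protein_signals)]

def cv_percent(values):
    # coefficient of variation of the normalized ratios, in percent
    return 100 * statistics.pstdev(values) / statistics.mean(values)
```

A perfectly proportional loading control yields identical ratios across lanes and hence a CV of 0%; the 7% vs. 13-18% CVs reported above quantify how far each control deviates from that ideal.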
Detection of plant-based adulterants in turmeric powder using DNA barcoding.
Parvathy, V A; Swetha, V P; Sheeja, T E; Sasikumar, B
2015-01-01
In its powdered form, turmeric [Curcuma longa L. (Zingiberaceae)], a spice of medical importance, is often adulterated, lowering its quality. This study sought to detect plant-based adulterants in traded turmeric powder using DNA barcoding. Accessions of Curcuma longa L., Curcuma zedoaria Rosc. (Zingiberaceae), and cassava starch served as reference samples. Three barcoding loci, namely ITS, rbcL, and matK, were used for PCR amplification of the reference samples and commercial samples representing 10 different companies. PCR success rate, sequencing efficiency, occurrence of SNPs, and BLAST analysis were used to assess the potential of the barcoding loci in authenticating the traded samples of turmeric. The PCR and sequencing success of the loci rbcL and ITS were found to be 100%, whereas matK showed no amplification. ITS proved to be the ideal locus because it showed greater variability than rbcL in discriminating the Curcuma species. The presence of C. zedoaria could be detected in one of the samples, whereas cassava starch, wheat, barley, and rye were detected in two other samples, although the labels claimed nothing other than turmeric powder. Unlabeled materials in turmeric powder are considered adulterants or fillers, added to increase the bulk weight and starch content of the commodity for economic gain. These adulterants pose potential health hazards to consumers who are allergic to these plants, lower the product's medicinal value and belie any claim that the product is gluten free. The study proved DNA barcoding to be an efficient tool for testing the integrity and authenticity of commercial turmeric products.
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
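The correlation-based pruning of autoencoder weight vectors can be sketched as below; the 0.9 threshold and the greedy keep-first order are assumptions for illustration, not values from the paper.

```python
# Sketch: drop one of any pair of hidden-unit weight vectors whose
# absolute Pearson correlation exceeds a threshold, keeping the rest.
import math

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def select_weights(weights, threshold=0.9):
    kept = []
    for w in weights:
        # keep w only if it is not strongly correlated with anything kept
        if all(abs(pearson(w, k)) < threshold for k in kept):
            kept.append(w)
    return kept
```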
Cytopathological image analysis using deep-learning networks in microfluidic microscopy.
Gopakumar, G; Hari Babu, K; Mishra, Deepak; Gorthi, Sai Siva; Sai Subrahmanyam, Gorthi R K
2017-01-01
Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill. Associated high cost and low throughput drew considerable interest in automating the testing process. Several neural network architectures were designed to provide human expertise to machines. In this paper, we explore and propose the feasibility of using deep-learning networks for cytopathologic analysis by performing the classification of three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60). The cell images used in the classification are captured using a low-cost, high-throughput cell imaging technique: microfluidics-based imaging flow cytometry. We demonstrate that without any conventional fine segmentation followed by explicit feature extraction, the proposed deep-learning algorithms effectively classify the coarsely localized cell lines. We show that the designed deep belief network as well as the deeply pretrained convolutional neural network outperform the conventionally used decision systems and are important in the medical domain, where the availability of labeled data is limited for training. We hope that our work enables the development of a clinically significant high-throughput microfluidic microscopy-based tool for disease screening/triaging, especially in resource-limited settings.
Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation.
Alex, Varghese; Vaidhya, Kiran; Thirunavukkarasu, Subramaniam; Kesavadas, Chandrasekharan; Krishnamurthi, Ganapathy
2017-10-01
The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients ([Formula: see text], 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.
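The novelty-detector idea above, flagging patches that a model of non-lesion tissue reconstructs poorly, can be sketched with a deliberately simple stand-in model: a mean patch replaces the trained DAE, and the error threshold is an assumption.

```python
# Sketch: reconstruction-error-based lesion flagging with a mean-patch
# stand-in for the single-layer DAE described in the abstract.

def train_mean_model(nonlesion_patches):
    # "model" = element-wise mean of the non-lesion training patches
    n = len(nonlesion_patches)
    size = len(nonlesion_patches[0])
    return [sum(p[i] for p in nonlesion_patches) / n for i in range(size)]

def reconstruction_error(model, patch):
    # mean squared error between the model's reconstruction and the patch
    return sum((m - x) ** 2 for m, x in zip(model, patch)) / len(patch)

def flag_lesions(model, patches, threshold):
    # patches reconstructed poorly are flagged as lesion candidates
    return [reconstruction_error(model, p) > threshold for p in patches]
```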
From image captioning to video summary using deep recurrent networks and unsupervised segmentation
NASA Astrophysics Data System (ADS)
Morosanu, Bogdan-Andrei; Lemnaru, Camelia
2018-04-01
Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
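The divergence-based segmentation step can be sketched directly from the description above: treat hidden activations as unnormalised log probabilities, normalise with a softmax, and cut the video where the symmetric KL divergence between consecutive frames is large. The threshold value here is an assumption.

```python
# Sketch: unsupervised context segmentation of video frames by
# divergence between softmax-normalised hidden activations.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sym_kl(p, q):
    # symmetric Kullback-Leibler divergence between two distributions
    return sum(pi * math.log(pi / qi) + qi * math.log(qi / pi)
               for pi, qi in zip(p, q))

def segment(frame_logits, threshold):
    # return indices where a new context begins
    cuts = [0]
    dists = [softmax(l) for l in frame_logits]
    for i in range(1, len(dists)):
        if sym_kl(dists[i - 1], dists[i]) > threshold:
            cuts.append(i)
    return cuts
```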
Classification of cirrhotic liver in Gadolinium-enhanced MR images
NASA Astrophysics Data System (ADS)
Lee, Gobert; Uchiyama, Yoshikazu; Zhang, Xuejun; Kanematsu, Masayuki; Zhou, Xiangrong; Hara, Takeshi; Kato, Hiroki; Kondo, Hiroshi; Fujita, Hiroshi; Hoshi, Hiroaki
2007-03-01
Cirrhosis of the liver is characterized by the presence of widespread nodules and fibrosis in the liver. The fibrosis and nodule formation cause distortion of the normal liver architecture, resulting in characteristic texture patterns. Texture patterns are commonly analyzed with the use of co-occurrence-matrix-based features measured on regions of interest (ROIs). A classifier is subsequently used for the classification of cirrhotic or non-cirrhotic livers. A problem arises if the classifier employed is a supervised classifier, which is a popular choice, because the 'true disease states' of the ROIs are required for training the classifier but are generally not available. A common approach is to adopt the 'true disease state' of the liver as that of all ROIs in that liver. This paper investigates the use of an unsupervised classifier, the k-means clustering method, in classifying livers as cirrhotic or non-cirrhotic using unlabelled ROI data. A preliminary result with a sensitivity and specificity of 72% and 60%, respectively, demonstrates the feasibility of using the k-means unsupervised clustering method to generate a characteristic cluster structure that could facilitate the classification of cirrhotic and non-cirrhotic livers.
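The unsupervised route described above can be sketched with a plain Lloyd's k-means on unlabelled scalar features; the feature values and the two-cluster setup stand in for the co-occurrence-matrix texture measurements used in the paper.

```python
# Sketch: Lloyd's k-means on unlabelled 1-D feature values.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append(p)
        # update step: each center moves to its cluster mean
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```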
Learning a Novel Detection Metric for the Detection of O’Connell Effect Eclipsing Binaries
NASA Astrophysics Data System (ADS)
Johnston, Kyle; Haber, Rana; Knote, Matthew; Caballero-Nieves, Saida Maria; Peter, Adrian; Petit, Véronique
2018-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. Here we focus on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern detection algorithm for the targeted identification of eclipsing binaries which demonstrate a feature known as the O’Connell Effect. A methodology for the reduction of stellar variable observations (time-domain data) into Distribution Fields (DF) is presented. Push-Pull metric learning, a variant of LMNN learning, is used to generate a learned distance metric for the specific detection problem proposed. The metric is trained on a set of labelled Kepler eclipsing binary data, in particular systems showing the O’Connell effect. Performance estimates are presented, as well as the results of the detector applied to an unlabeled Kepler EB data set; this work is a crucial step in the upcoming era of big data from the next generation of big telescopes, such as LSST.
An improved SRC method based on virtual samples for face recognition
NASA Astrophysics Data System (ADS)
Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao
2018-07-01
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in the real world the number of available training samples is limited and, due to noise interference, the training samples cannot accurately represent the test sample linearly. Therefore, in this paper we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we take the intra-class difference as a data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample linearly according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused to obtain the final classification result. Experimental results on multiple face databases show that the proposed method achieves very satisfactory classification performance.
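The weighted score-level fusion step can be sketched in a few lines; the residual values, class labels, and 0.7 weight below are illustrative assumptions, not values from the paper.

```python
# Sketch: fuse per-class representation residuals from the original
# and virtual training samples; the smallest fused residual wins.

def fuse_scores(orig_residuals, virt_residuals, weight=0.7):
    # residual dictionaries keyed by class label; lower is better
    return {c: weight * orig_residuals[c] + (1 - weight) * virt_residuals[c]
            for c in orig_residuals}

def classify(orig_residuals, virt_residuals, weight=0.7):
    fused = fuse_scores(orig_residuals, virt_residuals, weight)
    return min(fused, key=fused.get)
```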
Khoo, Bee Luan; Warkiani, Majid Ebrahimi; Tan, Daniel Shao-Weng; Bhagat, Ali Asgar S; Irwin, Darryl; Lau, Dawn Pingxi; Lim, Alvin S T; Lim, Kiat Hon; Krisna, Sai Sakktee; Lim, Wan-Teck; Yap, Yoon Sim; Lee, Soo Chin; Soo, Ross A; Han, Jongyoon; Lim, Chwee Teck
2014-01-01
Circulating tumor cells (CTCs) are cancer cells that can be isolated via liquid biopsy from blood and can be phenotypically and genetically characterized to provide critical information for guiding cancer treatment. Current analysis of CTCs is hindered by the throughput, selectivity and specificity of devices or assays used in CTC detection and isolation. Here, we enriched and characterized putative CTCs from blood samples of patients with both advanced stage metastatic breast and lung cancers using a novel multiplexed spiral microfluidic chip. This system detected putative CTCs with high sensitivity (100%, n = 56) (breast cancer samples: 12-1275 CTCs/ml; lung cancer samples: 10-1535 CTCs/ml) rapidly from clinically relevant blood volumes (7.5 ml under 5 min). Blood samples were completely separated into plasma, CTC and PBMC components, and each fraction was characterized with immunophenotyping (Pan-cytokeratin/CD45, CD44/CD24, EpCAM), fluorescence in-situ hybridization (FISH) (EML4-ALK) or targeted somatic mutation analysis. We used an ultra-sensitive mass spectrometry based system to highlight the presence of an EGFR-activating mutation in both isolated CTCs and plasma cell-free DNA (cf-DNA), and demonstrate concordance with the original tumor-biopsy samples. We have clinically validated our multiplexed microfluidic chip for the ultra high-throughput, low-cost and label-free enrichment of CTCs. Retrieved cells were unlabeled and viable, enabling potential propagation and real-time downstream analysis using next generation sequencing (NGS) or proteomic analysis.
Optimization of the radioimmunoassays for measuring fentanyl and alfentanil in human serum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuettler, J.; White, P.F.
Measurement of serum fentanyl and alfentanil concentrations by radioimmunoassay (RIA) may result in significant errors and high variability when the technique described in the available fentanyl and alfentanil RIA kits is used. The authors found a 29-94% overestimation of measured fentanyl and alfentanil serum levels when 3H-fentanyl or 3H-alfentanil was added last to the mixture of antiserum and sample. This finding is related to a reduction in binding sites for the labeled compounds after preincubation of sample and antiserum. If this sequence is used, it becomes necessary to extend the incubation period up to 6 h for fentanyl and up to 10 h for alfentanil in order to achieve equilibration between unlabeled and labeled drug with respect to antiserum binding. However, when antiserum is added last to the mixture of sample and labeled drug, measurement accuracy and precision for fentanyl and alfentanil serum concentrations are enhanced markedly. In addition, it is important to perform the calibration curves and sample measurements using the same medium (i.e., serum alone or a serum/buffer dilution). In summary, to optimize the RIA for fentanyl and alfentanil, the authors recommend the following: 1) adding the antiserum last to the mixture of sample and labeled drug; 2) performing calibration curves using the patient's blank serum when possible; 3) carefully examining and standardizing each step of the RIA procedure to reduce variability; and, finally, 4) comparing results with those of other established RIA laboratories.
Efficient dynamic graph construction for inductive semi-supervised learning.
Dornaika, F; Dahbi, R; Bosaghzadeh, A; Ruichek, Y
2017-10-01
Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Addressing graph construction in an inductive setting, in which data arrive sequentially, has received much less attention. For inductive settings, constructing the graph from scratch can be very time consuming. This paper introduces a generic framework that is able to make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph. As a case study, we use the recently proposed Two Phase Weighted Regularized Least Square (TPWRLS) graph construction method. The paper has two main contributions. First, we use the TPWRLS coding scheme to represent new samples with respect to an existing database. The representative coefficients are then used to update the graph affinity matrix. The proposed method not only appends the new samples to the graph but also updates the whole graph structure by discovering which nodes are affected by the introduction of new samples and by updating their edge weights. The second contribution of the article is the application of the proposed framework to the problem of graph-based label propagation using multiple observations for vision-based recognition tasks. Experiments on several image databases show that, without any significant loss in the accuracy of the final classification, the proposed dynamic graph construction is more efficient than batch graph construction. Copyright © 2017 Elsevier Ltd. All rights reserved.
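The incremental idea, appending a row and column for each arriving sample rather than rebuilding the affinity matrix from scratch, can be sketched generically; a Gaussian affinity on scalar samples stands in here for the TPWRLS coding coefficients used in the paper.

```python
# Sketch: incremental growth of a graph affinity matrix as new
# (labeled or unlabeled) samples arrive.
import math

def affinity(x, y, sigma=1.0):
    # stand-in Gaussian affinity between two scalar samples
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def add_sample(matrix, samples, new_sample):
    # extend each existing row with the new sample's affinity,
    # then append the new sample's full row
    for i, row in enumerate(matrix):
        row.append(affinity(samples[i], new_sample))
    samples.append(new_sample)
    matrix.append([affinity(new_sample, s) for s in samples])
    return matrix
```

Only one row and one column are computed per arrival, which is the source of the efficiency gain over batch reconstruction; a full re-weighting of affected edges, as in the paper, would touch additional entries.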
USDA-ARS's Scientific Manuscript database
D-glucaric acid was characterized in solution by comparing NMR spectra from the isotopically unlabeled molecule with those from D-glucaric acid labeled with deuterium or carbon-13 atoms. The NMR studies provided unequivocal assignments for all carbon atoms and non-hydroxyl protons of the molecule. ...
Genotyping Sugarcane for the Brown Rust Resistance Locus Bru1 Using Unlabeled Probe Melting
USDA-ARS's Scientific Manuscript database
Brown rust, caused by the fungus Puccinia melanocephala, is a major disease of sugarcane (Saccharum spp.) in Florida, Louisiana, and other sugarcane growing regions. The Bru1 locus has been used as a durable and effective source of resistance, and markers are available to select for the trait. The...
NASA Astrophysics Data System (ADS)
Podlesak, David; Manner, Virginia; Amato, Ronald; Dattelbaum, Dana; Gusavsen, Richard; Huber, Rachel
2017-06-01
Detonation of high explosives (HE) is an exothermic process whereby metastable complex molecules are converted to simple stable molecules such as H₂O, N₂, CO, CO₂, and solid carbon. The solid carbon contains various allotropes such as detonation nanodiamonds, graphite, and amorphous carbon. It is well known that certain HE formulations such as Composition B (60% RDX, 40% TNT) produce greater amounts of solid carbon than other, more oxygen-balanced formulations. To develop a greater understanding of how formulation and environment influence solid carbon formation, we synthesized TNT and RDX with ¹³C and ¹⁵N at levels slightly above natural abundance. Synthesized RDX and TNT were mixed at a ratio of 60:40 to form Composition B, and solid carbon residues were collected from detonations of isotopically labeled as well as unlabeled Composition B. The raw HE and detonation residues were analyzed for their C, N, and O isotopic compositions. We will discuss differences between treatment groups as a function of formulation and environment. LA-UR-17-21266.
Mass Defect Labeling of Cysteine for Improving Peptide Assignment in Shotgun Proteomic Analyses
Hernandez, Hilda; Niehauser, Sarah; Boltz, Stacey A.; Gawandi, Vijay; Phillips, Robert S.; Amster, I. Jonathan
2006-01-01
A method for improving the identification of peptides in a shotgun proteome analysis using accurate mass measurement has been developed. The improvement is based upon the derivatization of cysteine residues with a novel reagent, 2,4-dibromo-(2′-iodo)acetanilide. The derivatization changes the mass defect of cysteine-containing proteolytic peptides in a manner that increases their identification specificity. Peptide masses were measured using matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry. Reactions with protein standards show that the derivatization of cysteine is rapid and quantitative, and the data suggest that the derivatized peptides are more easily ionized or detected than unlabeled cysteine-containing peptides. The reagent was tested on a ¹⁵N-metabolically labeled proteome from M. maripaludis. Proteins were identified by their accurate mass values and from their nitrogen stoichiometry. A total of 47% of the labeled peptides were identified versus 27% of the unlabeled peptides. This procedure permits the identification of proteins from the M. maripaludis proteome that are not usually observed by the standard protocol and shows that better protein coverage is obtained with this methodology. PMID:16689545
Characterization of Cadmium Uptake by Plant Tissue
Cutler, Jay M.; Rains, Donald W.
1974-01-01
The uptake of cadmium by excised root tissue of barley (Hordeum vulgare L. cv. Arivat) was investigated with respect to kinetics, concentration, and interactions with various cations. The role of metabolism in Cd absorption was examined using a range of temperatures, anaerobic treatments, and chemical inhibitors. The uptake and distribution of Cd in intact barley plants were also determined. A large fraction of the Cd taken up by excised barley roots was apparently the result of exchange adsorption and was displaced by subsequent desorption with unlabeled Cd, Zn, Cu, or Hg. Another fraction of Cd, which could not be displaced by desorption in unlabeled Cd, was thought to result from strong irreversible binding of Cd, perhaps at sites on the cell wall. The fraction of Cd taken up beyond that attributable to exchange adsorption by fresh roots was a linear function of temperature and was inhibited by low-oxygen conditions and by the presence of 2,4-dinitrophenol. It was concluded that this fraction of Cd entered excised barley roots by diffusion. Diffusion, when followed by sequestering, probably accounts for the accumulation of Cd observed in intact barley plants. PMID:16658840
Burnum-Johnson, Kristin E.; Nie, Song; Casey, Cameron P.; Monroe, Matthew E.; Orton, Daniel J.; Ibrahim, Yehia M.; Gritsenko, Marina A.; Clauss, Therese R. W.; Shukla, Anil K.; Moore, Ronald J.; Purvine, Samuel O.; Shi, Tujin; Qian, Weijun; Liu, Tao; Baker, Erin S.; Smith, Richard D.
2016-01-01
Current proteomic approaches include both broad discovery measurements and quantitative targeted analyses. In many cases, discovery measurements are initially used to identify potentially important proteins (e.g. candidate biomarkers), and then targeted studies are employed to quantify a limited number of selected proteins. Both approaches, however, suffer from limitations. Discovery measurements aim to sample the whole proteome but have lower sensitivity, accuracy, and quantitation precision than targeted approaches, whereas targeted measurements are significantly more sensitive but sample only a limited portion of the proteome. Herein, we describe a new approach that performs both discovery and targeted monitoring (DTM) in a single analysis by combining liquid chromatography, ion mobility spectrometry and mass spectrometry (LC-IMS-MS). In DTM, heavy-labeled target peptides are spiked into tryptic digests and both the labeled and unlabeled peptides are detected using LC-IMS-MS instrumentation. Compared with broad LC-MS discovery measurements, DTM yields greater peptide/protein coverage and detects lower-abundance species. DTM also achieved detection limits similar to selected reaction monitoring (SRM), indicating its potential for combined high-quality discovery and targeted analyses, which is a significant step toward the convergence of discovery and targeted approaches. PMID:27670688
Fettke, Joerg; Leifels, Lydia; Brust, Henrike; Herbst, Karoline; Steup, Martin
2012-01-01
Parenchyma cells from tubers of Solanum tuberosum L. convert several externally supplied sugars to starch but the rates vary largely. Conversion of glucose 1-phosphate to starch is exceptionally efficient. In this communication, tuber slices were incubated with either of four solutions containing equimolar [U-14C]glucose 1-phosphate, [U-14C]sucrose, [U-14C]glucose 1-phosphate plus unlabelled equimolar sucrose or [U-14C]sucrose plus unlabelled equimolar glucose 1-phosphate. 14C-incorporation into starch was monitored. In slices from freshly harvested tubers each unlabelled compound strongly enhanced 14C incorporation into starch indicating closely interacting paths of starch biosynthesis. However, enhancement disappeared when the tubers were stored. The two paths (and, consequently, the mutual enhancement effect) differ in temperature dependence. At lower temperatures, the glucose 1-phosphate-dependent path is functional, reaching maximal activity at approximately 20 °C but the flux of the sucrose-dependent route strongly increases above 20 °C. Results are confirmed by in vitro experiments using [U-14C]glucose 1-phosphate or adenosine-[U-14C]glucose and by quantitative zymograms of starch synthase or phosphorylase activity. In mutants almost completely lacking the plastidial phosphorylase isozyme(s), the glucose 1-phosphate-dependent path is largely impeded. Irrespective of the size of the granules, glucose 1-phosphate-dependent incorporation per granule surface area is essentially equal. Furthermore, within the granules no preference of distinct glucosyl acceptor sites was detectable. Thus, the path is integrated into the entire granule biosynthesis. In vitro 14C-incorporation into starch granules mediated by the recombinant plastidial phosphorylase isozyme clearly differed from the in situ results. Taken together, the data clearly demonstrate that two closely but flexibly interacting general paths of starch biosynthesis are functional in potato tuber cells. 
PMID:22378944
Evidence for two distinct intracellular pools of inorganic sulfate in Penicillium notatum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, D.R.; Segel, I.H.
1985-06-01
A strain of Penicillium notatum unable to metabolize inorganic sulfate can accumulate sulfate internally to an apparent equilibrium concentration 10⁵ times greater than that remaining in the medium. The apparent K_eq is nearly constant at all initial external sulfate concentrations below that which would eventually exceed the internal capacity of the cells. Under equilibrium conditions of zero net flux, external ³⁵SO₄²⁻ exchanges with internal, unlabeled SO₄²⁻ at a rate consistent with the kinetic constants of the sulfate transport system. Efflux experiments demonstrated that sulfate occupies two distinct intracellular pools. Pool 1 is characterized by the rapid release of ³⁵SO₄²⁻ when the suspension of preloaded cells is adjusted to 10 mM azide at pH 8.4 (t½, 0.38 min). ³⁵SO₄²⁻ in pool 1 also rapidly exchanges with unlabeled medium sulfate. Pool 2 is characterized by the slow release of ³⁵SO₄²⁻ induced by azide at pH 8.4 or by unlabeled sulfate (t½, 32 to 49 min). Early in the ³⁵SO₄²⁻ accumulation process, up to 78% of the total transported substrate is found in pool 1. At equilibrium, pool 1 accounts for only about 2% of the total accumulated ³⁵SO₄²⁻. Monensin (33 µM) accelerates the transfer of ³⁵SO₄²⁻ from pool 1 to pool 2. Valinomycin (0.2 µM) and tetraphenylboron⁻ (1 mM) retard the transfer of ³⁵SO₄²⁻ from pool 1 to pool 2. Pool 2 may reside in a vacuole or other intracellular organelle. A model for the transfer of sulfate from pool 1 to pool 2 is presented.
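The fast- and slow-pool release behaviour above can be sketched as simple first-order decays. The exponential form, the 78%/22% split, and the 40 min half-life used for pool 2 are illustrative assumptions taken loosely from the abstract, not the authors' fitted model.

```python
import math

def remaining(frac0, t, t_half):
    """Fraction of labeled sulfate still in a pool after t minutes,
    assuming simple first-order release with half-life t_half (min)."""
    return frac0 * math.exp(-math.log(2.0) * t / t_half)

# Illustrative numbers: a fast pool holding 78% of the label
# (t1/2 = 0.38 min) and a slow pool (t1/2 ~ 40 min).
print(remaining(0.78, 1.0, 0.38))   # fast pool is nearly emptied after 1 min
print(remaining(0.22, 1.0, 40.0))   # slow pool is barely changed
```

After one minute the fast pool has gone through almost three half-lives while the slow pool has barely moved, which is the qualitative signature the efflux experiments exploit.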
Law, Marilyn P; Wagner, Stefan; Kopka, Klaus; Pike, Victor W; Schober, Otmar; Schäfers, Michael
2008-01-01
Radioligand binding studies show that β₁-adrenoceptor (β₁-AR) density may be reduced in heart disease without downregulation of β₂-ARs. Radioligands are available for measuring total β-AR density non-invasively with clinical positron emission tomography (PET), but none are selective for β₁- or β₂-ARs. The aim was to evaluate ICI 89,406, a β₁-AR-selective antagonist amenable to labelling with positron emitters, for PET. The S-enantiomer of an [O-methyl-¹¹C] derivative of ICI 89,406 ((S)-[¹¹C]ICI-OMe) was synthesised. Tissue radioactivity after i.v. injection of (S)-[¹¹C]ICI-OMe (<2 nmol·kg⁻¹) into adult Wistar rats was assessed by small-animal PET and post mortem dissection. Metabolism was assessed by HPLC of extracts prepared from plasma and tissues and by measuring [¹¹C]CO₂ in exhaled air. The heart was visualised by PET after injection of (S)-[¹¹C]ICI-OMe, but neither unlabelled (S)-ICI-OMe nor propranolol (a non-selective β-AR antagonist) injected 15 min after (S)-[¹¹C]ICI-OMe affected myocardial radioactivity. Ex vivo dissection showed that injecting unlabelled (S)-ICI-OMe, propranolol or CGP 20712A (a β₁-selective AR antagonist) at high dose (>2 µmol·kg⁻¹) before (S)-[¹¹C]ICI-OMe had only a small effect on myocardial radioactivity. HPLC demonstrated that radioactivity in myocardium was due to unmetabolised (S)-[¹¹C]ICI-OMe, although ¹¹C-labelled metabolites rapidly appeared in plasma and liver and [¹¹C]CO₂ was detected in exhaled air. Myocardial uptake of (S)-[¹¹C]ICI-OMe after i.v. injection was low, possibly due to rapid metabolism in other tissues. Injection of unlabelled ligand or β-AR antagonists had little effect, indicating that binding was mainly to non-specific myocardial sites, thus precluding the use of (S)-[¹¹C]ICI-OMe to assess β₁-ARs with PET.
Classifying GABAergic interneurons with semi-supervised projected model-based clustering.
Mihaljević, Bojan; Benavides-Piccione, Ruth; Guerra, Luis; DeFelipe, Javier; Larrañaga, Pedro; Bielza, Concha
2015-09-01
A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification. We used a set of 118 digitally reconstructed interneuronal morphologies classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons). Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type).
We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [π, 2π) angle interval being particularly useful. The applied semi-supervised clustering method can accurately discriminate among CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types. Copyright © 2015 Elsevier B.V. All rights reserved.
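The semi-supervised mixture-of-Gaussians idea, EM in which labeled cells keep their cluster membership fixed while only unlabeled cells receive soft assignments, can be sketched as follows. This simplified version omits the per-cluster feature-relevance estimation the authors describe, and the cluster count, iteration budget, and regularizer are assumptions.

```python
import numpy as np

def ssgmm(X, y, k, iters=50, reg=1e-6):
    """Semi-supervised Gaussian mixture clustering.

    Samples with y >= 0 have their cluster fixed (hard responsibility);
    samples with y == -1 get soft assignments via EM.
    """
    n, d = X.shape
    R = np.full((n, k), 1.0 / k)                 # responsibilities
    labeled = y >= 0
    R[labeled] = np.eye(k)[y[labeled]]           # fixed for labeled cells
    for _ in range(iters):
        # M-step: mixing weights, means, covariances from responsibilities
        Nk = R.sum(axis=0) + 1e-12
        pis = Nk / n
        mu = (R.T @ X) / Nk[:, None]
        log_p = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            cov = (R[:, j, None] * diff).T @ diff / Nk[j] + reg * np.eye(d)
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            maha = np.einsum('ni,ij,nj->n', diff, inv, diff)
            log_p[:, j] = np.log(pis[j]) - 0.5 * (logdet + maha + d * np.log(2 * np.pi))
        # E-step: update only the unlabeled responsibilities
        log_p -= log_p.max(axis=1, keepdims=True)
        P = np.exp(log_p)
        P /= P.sum(axis=1, keepdims=True)
        R[~labeled] = P[~labeled]
    return R.argmax(axis=1)

# usage: two well-separated groups, one labeled cell per group
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
y = np.full(40, -1)
y[0], y[20] = 0, 1
print(ssgmm(X, y, k=2))
```

Keeping the labeled responsibilities fixed is what lets released cells either rejoin their original type or split off into a new cluster (a candidate subtype).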
Long-term direct visualization of passively transferred fluorophore-conjugated antibodies.
Schneider, Jeffrey R; Carias, Ann M; Bastian, Arangaserry R; Cianci, Gianguido C; Kiser, Patrick F; Veazey, Ronald S; Hope, Thomas J
2017-11-01
The use of therapeutic antibodies, delivered by intravenous (IV) instillation, is a rapidly expanding area of biomedical treatment for a variety of conditions. However, little is known about how the antibodies are anatomically distributed following infusion, and the underlying mechanism mediating therapeutic antibody distribution to specific anatomical sites remains to be elucidated. Current efforts utilize low-resolution, low-sensitivity methods such as ELISA and indirect labeling imaging techniques, which often lead to high background and difficulty in assessing biodistribution. Here, using the in vivo non-human primate model, we demonstrate that it is possible to utilize the fluorophores Cy5 and Cy3 directly conjugated to antibodies for direct visualization and quantification of passively transferred antibodies in plasma, tissue, and mucosal secretions. Antibodies were formulated with 1-2 fluorophores per antibody to minimally influence antibody function. Fluorophore-conjugated Gamunex-C (pooled human IgG) was tested for binding to protein A via surface plasmon resonance and showed similar levels of binding when compared to unlabeled Gamunex-C. In order to assess the effect fluorophore labeling has on turnover and localization, rhesus macaques were IV infused with either labeled or unlabeled Gamunex-C. Plasma, vaginal Weck-Cel fluid, cervicovaginal mucus, and vaginal/rectal tissue biopsies were collected for up to 8 weeks. Similar turnover and biodistribution were observed between labeled and unlabeled antibodies, showing that the labeling process did not have an obvious deleterious effect on localization or turnover. Cy5- and Cy3-labeled antibodies were readily detected in the same pattern regardless of fluorophore. Tissue distribution was measured in macaque vaginal and rectal biopsies. The labeled antibody in macaque biopsies was found to have a biodistribution pattern similar to endogenous antibodies in macaque and human tissues.
In the vaginal and rectal mucosa, endogenous and infused antibodies were found primarily within the lamina propria. In the mucosal squamous epithelium of the vaginal vault, significant antibody was also observed in a striated pattern in the superficial, nonviable stratum corneum. Endogenous antibody distribution in both human and macaque squamous tissues exhibited a pattern similar to that seen with the labeled and unlabeled antibodies. This proof-of-principle study reveals that the labeled antibody is stable and physiologically similar to endogenous antibody, setting the stage for future work to better understand the mechanisms by which antibodies reach unique anatomical sites. Direct visualization of fluorophore-conjugated antibodies following passive infusion can be used to assess the kinetics of biodistribution of infused antibodies and may be a useful approach to monitor and predict the efficacy of therapeutic antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pochan, M.J.; Massey, M.J.
1979-02-01
This report discusses the results of actual raw product gas sampling efforts and includes: rationale for raw product gas sampling efforts; design and operation of the CMU gas sampling train; development and analysis of a sampling train data base; and conclusions and future application of results. The results of sampling activities at the CO₂-Acceptor and Hygas pilot plants proved that: the CMU gas sampling train is a valid instrument for characterization of environmental parameters in coal gasification gas-phase process streams; depending on the particular process configuration, the CMU gas sampling train can reduce gasifier effluent characterization activity to a single location in the raw product gas line; and, in contrast to the slower operation of the EPA SASS Train, CMU's gas sampling train can collect representative effluent data at a rapid rate (approximately 2 points per hour) consistent with the rate of change of process variables, and thus function as a tool for process engineering-oriented analysis of environmental characteristics.
Appearance-based representative samples refining method for palmprint recognition
NASA Astrophysics Data System (ADS)
Wen, Jiajun; Chen, Yan
2012-07-01
Sparse representation can deal with the lack-of-sample problem by utilizing all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that aims to find a compromise between discrimination ability and the lack-of-sample problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we explore the principles governing the key parameters of the proposed algorithm, which facilitates obtaining high recognition accuracy.
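The iterative refinement step can be sketched as follows. The ridge-regularized least-squares representation and the |coefficient| × norm contribution measure are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def refine_representatives(X_train, x_test, keep=5, ridge=1e-3):
    """Iteratively drop the training sample that contributes least to a
    linear representation of x_test, until `keep` samples remain."""
    idx = list(range(X_train.shape[0]))
    while len(idx) > keep:
        A = X_train[idx].T                       # d x m dictionary
        G = A.T @ A + ridge * np.eye(len(idx))   # regularized Gram matrix
        c = np.linalg.solve(G, A.T @ x_test)     # x_test ~ A @ c
        contrib = np.abs(c) * np.linalg.norm(X_train[idx], axis=1)
        idx.pop(int(np.argmin(contrib)))         # discard least useful sample
    return idx

# usage: the test sample equals training sample 3, so 3 should survive
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 20))                    # 10 training samples
kept = refine_representatives(X, X[3], keep=2)
print(kept)
```

Dropping one sample per iteration (rather than thresholding once) lets the representation re-balance after each exclusion, which is the point of the refinement loop described above.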
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Training. 75.338 Section 75.338 Mineral... SAFETY STANDARDS-UNDERGROUND COAL MINES Ventilation § 75.338 Training. (a) Certified persons conducting sampling shall be trained in the use of appropriate sampling equipment, procedures, location of sampling...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Training. 75.338 Section 75.338 Mineral... SAFETY STANDARDS-UNDERGROUND COAL MINES Ventilation § 75.338 Training. (a) Certified persons conducting sampling shall be trained in the use of appropriate sampling equipment, procedures, location of sampling...
NASA Technical Reports Server (NTRS)
Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.
2012-01-01
An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP aims to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant for urbanization studies but also needed for global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classifications. Here we developed global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2), and then aggregated the fine resolution impervious cover maps to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone there are 174 training sites, with site sizes ranging from 4.5 km by 4.5 km to 8.1 km by 3.6 km and over six million training samples in total. Therefore, we developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling within each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar process with a looser threshold is applied to account for possible variance due to site differences.
We do not perform the screening process across scenes because scenes may differ due to phenology, solar-view geometry, atmospheric conditions, and other factors rather than actual land cover differences. Finally, we will compare the classification results from screened and unscreened training samples to assess the improvement achieved by cleaning up the training samples.
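A minimal sketch of the site-level screening, ten 10%-wide impervious-cover bins with a univariate z-score check and a multivariate Mahalanobis check per bin, might look like this. The thresholds and the minimum group size are illustrative assumptions, not the project's actual settings.

```python
import numpy as np

def screen_group(F, z_max=3.0, maha_max=9.0):
    """Keep samples passing univariate (|z| <= z_max) and multivariate
    (squared Mahalanobis distance <= maha_max) outlier checks."""
    mu, sd = F.mean(axis=0), F.std(axis=0) + 1e-12
    uni_ok = (np.abs((F - mu) / sd) <= z_max).all(axis=1)
    cov = np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])
    inv = np.linalg.inv(cov)
    diff = F - mu
    maha = np.einsum('ni,ij,nj->n', diff, inv, diff)
    return uni_ok & (maha <= maha_max)

def screen_site(features, impervious_pct):
    """Site-level screening: split samples into ten 10%-wide bins of
    impervious cover and screen each bin separately."""
    keep = np.zeros(len(features), dtype=bool)
    bins = np.clip((impervious_pct // 10).astype(int), 0, 9)
    for b in range(10):
        m = bins == b
        if m.sum() > features.shape[1] + 1:      # enough samples to screen
            keep[m] = screen_group(features[m])
        else:
            keep[m] = True                       # too few samples; keep all
    return keep

# usage: 200 samples in one bin, one gross outlier
rng = np.random.default_rng(2)
F = rng.normal(0, 1, (200, 3))
F[0] = [50.0, 50.0, 50.0]
keep = screen_site(F, np.full(200, 35.0))
print(keep[0], int(keep.sum()))
```

A looser scene-level pass would reuse `screen_group` with larger `z_max` and `maha_max`, mirroring the two-level design described above.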
Fidani, M; Gamberini, M C; Pasello, E; Palazzoli, F; De Iuliis, P; Montana, M; Arioli, F
2009-01-01
Proper storage conditions for biological samples are fundamental to avoid the microbiological contamination that can cause chemical modifications of the target analytes. A simple liquid chromatography/tandem mass spectrometry (LC/MS/MS) method based on direct injection of diluted samples, without prior extraction, was used to evaluate the stability of phase II metabolites of boldenone and testosterone (glucuronides and sulphates) in intentionally poorly stored equine urine samples. We also considered the stability of some deuterated conjugated steroids generally used as internal standards, such as deuterated testosterone and epitestosterone glucuronides, and deuterated boldenone and testosterone sulphates. The urine samples were kept for 1 day at room temperature, to mimic poor storage conditions, then spiked with the above steroids and kept at different temperatures (-18 °C, 4 °C, room temperature). It was possible to confirm the instability of glucuronide compounds when added to poorly stored equine urine samples. In particular, both 17beta- and 17alpha-glucuronide steroids were subject to hydrolysis leading to non-conjugated steroids. Only 17beta-hydroxy steroids were subject to oxidation to their keto derivatives, whereas the 17alpha-hydroxy steroids were highly stable. The sulphate compounds were completely stable. The deuterated compounds showed the same behaviour as the unlabelled compounds. The transformations were observed in urine samples kept at room temperature and at 4 °C (at a slower rate). No modifications were observed in frozen urine samples. In light of these results, immediate freezing at -18 °C of the collected samples and their analysis immediately after thawing is the proposed procedure for preventing the transformations that occur in urine, usually due to microbiological contamination. (c) 2008 John Wiley & Sons, Ltd.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We consider that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to generate possible variations of those samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, such as CRC and Kernel CRC.
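A sketch of the core idea, Gaussian-noised virtual samples plus collaborative representation in a kernel space, is shown below. The RBF kernel, regularization value, and class-residual decision rule are assumptions, and the paper's objective coupling original and virtual coefficients is not reproduced here.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kcrc_classify(X, y, x_test, lam=0.01, gamma=0.5, noise=0.05, copies=3, seed=0):
    """Kernel collaborative representation with virtual samples.

    Each training sample is augmented with Gaussian-noised copies (a
    stand-in for illumination/expression/posture variation); the test
    sample is collaboratively represented in RBF kernel space and
    assigned to the class with the smallest reconstruction residual.
    """
    rng = np.random.default_rng(seed)
    Xv, yv = [X], [y]
    for _ in range(copies):                      # build the virtual set
        Xv.append(X + rng.normal(0, noise, X.shape))
        yv.append(y)
    Xa, ya = np.vstack(Xv), np.concatenate(yv)
    K = rbf(Xa, Xa, gamma)
    kx = rbf(Xa, x_test[None, :], gamma).ravel()
    c = np.linalg.solve(K + lam * np.eye(len(Xa)), kx)   # collaborative coding
    best, best_r = None, np.inf
    for cls in np.unique(ya):
        m = ya == cls
        # residual of representing phi(x_test) with class-cls atoms only;
        # k(x, x) = 1 for the RBF kernel
        r = 1.0 - 2 * c[m] @ kx[m] + c[m] @ K[np.ix_(m, m)] @ c[m]
        if r < best_r:
            best, best_r = cls, r
    return best

# usage: two tiny, well-separated classes
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
y = np.array([0, 0, 1, 1])
print(kcrc_classify(X, y, np.array([0.05, 0.02])))   # nearest class is 0
```

All classes share one coding step (the collaborative part); only the residual computation is per-class.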
Predicting Drug-Target Interactions With Multi-Information Fusion.
Peng, Lihong; Liao, Bo; Zhu, Wen; Li, Zejun; Li, Keqin
2017-03-01
Identifying potential associations between drugs and targets is a critical prerequisite for modern drug discovery and repurposing. However, predicting these associations is difficult because of the limitations of existing computational methods. Most models consider only chemical structures and protein sequences, and other models are oversimplified. Moreover, datasets used for analysis contain only true-positive interactions, and experimentally validated negative samples are unavailable. To overcome these limitations, we developed a semi-supervised learning framework called NormMulInf based on collaborative filtering theory that uses both labeled and unlabeled interaction information. The proposed method initially determines similarity measures, such as similarities among samples and local correlations among the labels of the samples, by integrating biological information. The similarity information is then integrated into a robust principal component analysis model, which is solved using augmented Lagrange multipliers. Experimental results on four classes of drug-target interaction networks suggest that the proposed approach can accurately classify and predict drug-target interactions. Some of the predicted interactions are reported in public databases. The proposed method can also predict possible targets for new drugs and can be used to determine whether atropine may interact with alpha1B- and beta1-adrenergic receptors. Furthermore, the developed technique identifies potential drugs for new targets and can be used to assess whether olanzapine and propiomazine may target 5HT2B. Finally, the proposed method can potentially address limitations in studies of multitarget drugs and multidrug targets.
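The robust PCA step mentioned above is commonly solved with an inexact augmented Lagrange multiplier (IALM) scheme that alternates singular-value thresholding for the low-rank part with soft thresholding for the sparse part. A generic sketch follows; NormMulInf's similarity-integration terms are omitted, and the default λ and µ values are the conventional choices, not necessarily the paper's.

```python
import numpy as np

def rpca_ialm(M, lam=None, mu=None, iters=200, tol=1e-7):
    """Inexact ALM solver for robust PCA: M ~= L + S with L low-rank
    and S sparse (the generic core that similarity-regularized models
    such as NormMulInf build on)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(M, 2)
    norm_M = np.linalg.norm(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual update
        Z = M - L - S
        Y += mu * Z
        mu *= 1.5
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S

# usage: recover a rank-1 matrix corrupted by two large sparse spikes
rng = np.random.default_rng(3)
L0 = rng.normal(size=(20, 1)) @ rng.normal(size=(1, 20))
S0 = np.zeros((20, 20))
S0[2, 3], S0[10, 7] = 5.0, -4.0
L, S = rpca_ialm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0) < 0.1)
```

In the drug-target setting, M would be an interaction/similarity matrix and the recovered low-rank component supplies scores for the unlabeled pairs.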
Matrix-enhanced degradation of p,p'-DDT during gas chromatographic analysis: A consideration
Foreman, W.T.; Gates, Paul M.
1997-01-01
Analysis of p,p′-DDT in environmental samples requires monitoring the GC-derived breakdown of this insecticide, which produces p,p′-DDD and/or p,p′-DDE, both also primary environmental degradation products. A performance evaluation standard (PES) containing p,p′-DDT but not p,p′-DDD or p,p′-DDE can be injected at regular intervals throughout an analytical sequence to monitor GC degradation. Some U.S. EPA methods limit GC breakdown of DDT in the PES to ≤20%. GC/MS analysis of large-volume natural water samples fortified with deuterium- and 13C-labeled p,p′-DDT exhibited up to 65% DDT breakdown by the GC inlet. These matrix-enhanced GC degradation amounts substantially exceeded the <20% breakdown levels indicated by bracketing injections of the PES containing unlabeled and labeled DDT. Substantial matrix-enhanced GC degradation was not observed during analysis of a limited number of fractionated bed-sediment extracts containing labeled DDT. Use of isotopically labeled DDT seems to provide an effective tool for monitoring sample-specific DDT breakdown during GC/MS analysis. However, analyte co-elutions render its use impractical in GC/ECD analysis. The occurrence of matrix-enhanced GC degradation might have important implications for data quality and the resultant interpretations of the environmental degradation of DDT and other thermolabile contaminants.
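The PES breakdown check described above reduces to a simple ratio over the GC peak responses, conventionally counting DDD and DDE as the breakdown products. A minimal sketch, assuming peak areas as inputs:

```python
def ddt_breakdown_percent(area_ddt, area_ddd, area_dde):
    """Percent DDT breakdown as used in EPA-style GC inlet checks:
    breakdown = (DDD + DDE) / (DDT + DDD + DDE) * 100."""
    total = area_ddt + area_ddd + area_dde
    return 100.0 * (area_ddd + area_dde) / total
```

For example, peak areas of 80 (DDT), 12 (DDD), and 8 (DDE) give 20% breakdown, right at the ≤20% limit cited above.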
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking
Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun
2017-01-01
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sampling strategy to alleviate the redundancy in training samples caused by cyclic shifts and to eliminate the inconsistencies between training and detection samples, and introduces space anisotropic regularization to constrain the correlation filter and alleviate drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs). PMID:29231876
NASA Astrophysics Data System (ADS)
Jiang, Li; Xuan, Jianping; Shi, Tielin
2013-12-01
Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
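The final step above, feeding the extracted low-dimensional features into the simplest K-nearest-neighbor classifier, can be sketched with a generic KNN (Euclidean distance, majority vote); this is a plain illustration of the classifier, not the authors' implementation:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=5):
    """K-nearest-neighbor classification on already-extracted
    low-dimensional feature vectors."""
    preds = []
    for x in test_X:
        # Euclidean distances from x to every training sample
        d = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        # majority vote among the k nearest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

In the fault-diagnosis setting, `train_X`/`test_X` would hold the SSKMFA features and `train_y` the known fault categories and severities.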
Learning Semantic Tags from Big Data for Clinical Text Representation.
Li, Yanpeng; Liu, Hongfang
2015-01-01
In clinical text mining, one of the biggest challenges is to represent medical terminologies and n-gram terms in sparse medical reports using either supervised or unsupervised methods. To address this issue, we propose a novel method for word and n-gram representation at the semantic level. We first represent each word by its distance to a set of reference features calculated by a reference distance estimator (RDE) learned from labeled and unlabeled data, and then generate new features using simple techniques of discretization, random sampling, and merging. The new features are a set of binary rules that can be interpreted as semantic tags derived from words and n-grams. We show that the new features significantly outperform classical bag-of-words and n-grams in the task of heart disease risk factor extraction in the i2b2 2014 challenge. It is promising that semantic tags can replace the original text entirely with even better prediction performance, as well as derive new rules beyond the lexical level.
Solid-phase receptor binding assay for /sup 125/I-hCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bortolussi, M.; Selmin, O.; Colombatti, A.
1987-01-01
A solid-phase radioligand-receptor assay (RRA) to measure the binding of ¹²⁵I-labelled human chorionic gonadotropin (¹²⁵I-hCG) to target cell membranes has been developed. The binding of ¹²⁵I-hCG to membranes immobilized on the wells of microtitration plates reached a maximum at about 3 hours at 37 degrees C, was saturable, displayed a high affinity (Ka = 2.4 × 10⁹ M⁻¹) and was specifically inhibited by unlabelled hCG. In comparison with RRAs carried out with membranes in suspension, the solid-phase RRA is significantly simpler and much faster to perform as it avoids centrifugation or filtration procedures. The solid-phase RRA was adapted profitably to process large numbers of samples at the same time. It proved particularly useful as a screening assay to detect anti-hCG monoclonal antibodies with high inhibitory activity for binding of ¹²⁵I-hCG to its receptors.
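For a saturable single-site binding model like the one characterized above, the reported association constant translates directly into receptor occupancy at a given free ligand concentration. A minimal sketch of that law-of-mass-action algebra (an illustration of the binding model, not part of the assay protocol):

```python
def fraction_bound(ka, ligand_conc):
    """Equilibrium fraction of receptor sites occupied in a
    single-site model: f = [L] / (Kd + [L]), with Kd = 1/Ka.
    ka in M^-1, ligand_conc in M."""
    kd = 1.0 / ka
    return ligand_conc / (kd + ligand_conc)
```

With the reported Ka = 2.4 × 10⁹ M⁻¹ (Kd ≈ 0.42 nM), half of the receptor sites are occupied when the free ¹²⁵I-hCG concentration equals Kd.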
Automated structure determination of proteins with the SAIL-FLYA NMR method.
Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune
2007-01-01
The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to collect rapidly and evaluate fully automatically the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract and their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.
Zettl, Thomas; Mathew, Rebecca S.; Seifert, Sönke; ...
2016-05-31
Accurate determination of molecular distances is fundamental to understanding the structure, dynamics, and conformational ensembles of biological macromolecules. Here we present a method to determine the full distance distribution between small (~7 Å) gold labels attached to macromolecules with very high precision (≤1 Å) and on an absolute distance scale. Our method uses anomalous small-angle X-ray scattering (ASAXS) close to a gold absorption edge to separate the gold-gold interference pattern from other scattering contributions. Results for 10-30 bp DNA constructs achieve excellent signal-to-noise and are in good agreement with previous results obtained by single-energy SAXS measurements, without requiring the preparation and measurement of single-labeled and unlabeled samples. Finally, the use of small gold labels in combination with ASAXS readout provides an attractive approach to determining molecular distance distributions that will be applicable to a broad range of macromolecular systems.
Suicide Note Sentiment Classification: A Supervised Approach Augmented by Web Data
Xu, Yan; Wang, Yue; Liu, Jiahua; Tu, Zhuowen; Sun, Jian-Tao; Tsujii, Junichi; Chang, Eric
2012-01-01
Objective: To create a sentiment classification system for the Fifth i2b2/VA Challenge Track 2, which can identify thirteen subjective categories and two objective categories. Design: We developed a hybrid system using Support Vector Machine (SVM) classifiers with augmented training data from the Internet. Our system consists of three types of classification-based subsystems: the first uses spanning n-gram features for subjective categories, the second uses bag-of-n-gram features for objective categories, and the third uses pattern matching for infrequent or subtle emotion categories. The spanning n-gram features are selected by a feature selection algorithm that leverages an emotional corpus from weblogs. Special normalization of objective sentences is generalized with shallow parsing and external web knowledge. We utilize three sources of web data: the LiveJournal weblog, which helps to improve feature selection; the eBay List, which assists in special normalization of the information and instructions categories; and the Suicide Project website, which provides unlabeled data with properties similar to suicide notes. Measurements: Performance is evaluated by the overall micro-averaged precision, recall, and F-measure. Result: Our system achieved an overall micro-averaged F-measure of 0.59. Happiness_peacefulness had the highest F-measure of 0.81. We were ranked the second best out of 26 competing teams. Conclusion: Our results indicated that classifying fine-grained sentiments at the sentence level is a non-trivial task. It is effective to divide categories into different groups according to their semantic properties. In addition, our system's performance benefits from external knowledge extracted from publicly available web data created for other purposes; performance can be further enhanced when more training data are available. PMID:22879758
Cocos, Anne; Fiks, Alexander G; Masino, Aaron J
2017-07-01
Social media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media. We developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training. Our best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision. Our model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models. ADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Knowledge based word-concept model estimation and refinement for biomedical text mining.
Jimeno Yepes, Antonio; Berlanga, Rafael
2015-02-01
Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KBs) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have been devised not for text mining tasks but for human interpretation, so the performance of KB-based methods is usually lower than that of supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems; KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method takes into account not only the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model were estimated without training data. Patterns from MEDLINE were built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained higher accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier
2017-07-15
In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n ≤ 35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with that of recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in accuracy when segmenting WM lesions compared with the rest of the evaluated methods, also correlating highly (r ≥ 0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
Effects of Picture Labeling on Science Text Processing and Learning: Evidence from Eye Movements
ERIC Educational Resources Information Center
Mason, Lucia; Pluchino, Patrik; Tornatora, Maria Caterina
2013-01-01
This study investigated the effects of reading a science text illustrated by either a labeled or unlabeled picture. Both the online process of reading the text and the offline conceptual learning from the text were examined. Eye-tracking methodology was used to trace text and picture processing through indexes of first- and second-pass reading or…
2011-01-01
Background: Elucidating the genetic basis of human diseases is a central goal of genetics and molecular biology. While traditional linkage analysis and modern high-throughput techniques often provide long lists of tens or hundreds of disease gene candidates, the identification of disease genes among the candidates remains time-consuming and expensive. Efficient computational methods are therefore needed to prioritize genes within the list of candidates, by exploiting the wealth of information available about the genes in various databases. Results: We propose ProDiGe, a novel algorithm for Prioritization of Disease Genes. ProDiGe implements a novel machine learning strategy based on learning from positive and unlabeled examples, which makes it possible to integrate various sources of information about the genes, to share information about known disease genes across diseases, and to perform genome-wide searches for new disease genes. Experiments on real data show that ProDiGe outperforms state-of-the-art methods for the prioritization of genes in human diseases. Conclusions: ProDiGe implements a new machine learning paradigm for gene prioritization, which could help the identification of new disease genes. It is freely available at http://cbio.ensmp.fr/prodige. PMID:21977986
Relative roles of synthesis and degradation in regulating metallothionein accretion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurin, D.E.
1989-01-01
Decay kinetics of ³⁵S-cysteine (Cys) in metallothionein (MT) were used to simultaneously measure rates of MT synthesis and degradation in a HD11 chicken-macrophage cell line. A reverse-phase (RP) high-performance liquid chromatography procedure was used to purify 2 MT isoforms from cytosols with approximately 94% purity. The medium in which the macrophages were incubated was validated to ensure that it contained enough unlabeled Cys to adequately chase ³⁵S-Cys released by the degradation of labeled protein. The addition of Zn²⁺ and unlabeled Cys to the medium did not change the fractional rates of MT synthesis (FRS) and degradation (FRD). These measurements were also validated by showing that the measured fractional rate of MT accretion closely approximated the difference between FRS and FRD. When macrophages were incubated in medium supplemented with 50 or 25 μM Zn²⁺, the absolute rate of MT synthesis (ARS) and the FRD increased and decreased, respectively. When macrophages were incubated in medium supplemented with 20 or 10 μM Cd²⁺, the ARS increased but the FRD was not changed.
Vibrational Mode Assignment of α-Pinene by Isotope Editing: One Down, Seventy-One To Go
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upshur, Mary Alice; Chase, Hilary M.; Strick, Benjamin F.
This study aims to reliably assign the vibrational sum frequency generation (SFG) spectrum of α-pinene at the vapor/solid interface using a method involving deuteration of various methyl groups. The synthesis of five different deuterated isotopologues of α-pinene is presented in order to determine the impact that removing contributions from methyl group C–H oscillators has on its SFG response. SFG spectra of these isotopologues at 0.6 cm⁻¹ resolution show varying degrees of differences in the C–H stretching region when compared to the SFG response of unlabeled α-pinene. The largest spectral changes were observed for the isotopologue containing a fully deuterated vinyl methyl group. Noticeable losses in signal intensities allow us to reliably assign the 2860 cm⁻¹ peak to the vinyl methyl symmetric stretch. Furthermore, upon removing the vinyl methyl group entirely by synthesizing apopinene, the steric influence of the unlabeled C₉H₁₄ fragment on the SFG response of α-pinene can be readily observed. The work presented here brings us one step closer to understanding the vibrational spectroscopy of α-pinene.
Deacetylation of forskolin catalyzed by bovine brain membranes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selfe, S.; Storm, D.R.
1985-11-27
Radiolabeled forskolin, 7-(³H-acetyl)-forskolin, was synthesized to explore interactions between forskolin and bovine brain membrane preparations. The radiolabeled derivative was chemically characterized and found to be indistinguishable from unlabeled forskolin in its ability to stimulate bovine brain adenylate cyclase. Preliminary binding data demonstrated that binding of 7-(³H-acetyl)-forskolin to membranes was concentration dependent. However, competition binding studies using a constant concentration of 7-(³H-acetyl)-forskolin with increasing levels of unlabeled forskolin showed enhanced binding of the labeled derivative. This suggested that 7-(³H-acetyl)-forskolin was degraded by membranes and protected by native forskolin. Incubation of forskolin with membranes and analysis of the products by thin-layer chromatography and mass spectrometry showed the formation of 7-desacetylforskolin. The deacetylation of forskolin was monitored by quantitating the release of [³H]acetate from 7-(³H-acetyl)-forskolin. The reaction was linear with time and protein concentration. These data illustrate that forskolin can be degraded by membranes and indicate that ligand binding studies using labeled forskolin and membrane preparations should be interpreted cautiously.
Balayadi, M; Jule, Y; Cupo, A
1988-10-05
The occurrence and distribution of methionine-enkephalin (ME), leucine-enkephalin (LE) and methionine-enkephalin-Arg6-Gly7-Leu8 (MERGL)-like (LI) immunoreactive material in the inferior mesenteric ganglion (IMG) of the cat were studied by immunohistochemical techniques using the peroxidase-antiperoxidase method. Numerous ME-Li, LE-Li and MERGL-Li immunoreactive fibres with the same distribution pattern were observed. They were varicose and often surrounded closely neighbouring unlabelled ganglion cell bodies. Sometimes they ran in strands between ganglion cells. ME-Li immunoreactive material was detected in a number of cell bodies, the diameter of which was similar to that of unlabelled principal ganglion cell bodies, and which were probably Enk-Li-containing principal ganglion cells. These immunoreactive cells were often surrounded by ME-Li immunoreactive fibres. No LE-Li or MERGL-Li immunoreactive ganglion cell bodies were observed. The presence of ME-Li immunoreactive principal ganglion cells raises the possibility that the Enk-Li immunoreactive fibres present in the IMG may have a prevertebral ganglionic source. The possibility that the Enk-Li material present in nerve fibres might be derived from preproenkephalin-A was suggested by the occurrence of MERGL-Li immunoreactivity.
Candida albicans Adheres to Chitin by Recognizing N-acetylglucosamine (GlcNAc).
Ishijima, Sanae A; Yamada, Tsuyoshi; Maruyama, Naho; Abe, Shigeru
2017-01-01
The binding of Candida albicans cells to chitin was examined in a cell-binding assay. Microscopic observations indicated that both living and heat-killed Candida cells bound to chitin-coated substrates. C. albicans preferentially bound to chitin-coated plastic plates over chitosan-coated and uncoated plates. We prepared ¹²⁵I-labeled Candida cells for quantitative analysis of their binding to chitin. Heat-killed ¹²⁵I-labeled Candida cells bound to chitin-coated plates in a time-dependent manner until 1.5 hours after the start of incubation at 4°C. The binding of ¹²⁵I-labeled Candida cells to chitin-coated plates was inhibited by adding unlabeled living or unlabeled heat-killed Candida cells. The binding of Candida to chitin was also reduced, by up to 10%, by the addition of 25 mg/ml chitin or chitosan. N-acetylglucosamine (GlcNAc), which is a constituent of chitin, inhibited binding of Candida to chitin in a dose-dependent manner between 12.5 and 200 mM. Glucosamine, which is a constituent of chitosan, showed no such inhibitory effect. These findings suggest that the binding of Candida to chitin may be mediated by recognition of GlcNAc.
Apigenin and quercetin promote ΔpH-dependent accumulation of IAA in membrane vesicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woolard, D.D.; Clark, K.A.
1990-05-01
Flavonoids may act as regulators of polar auxin transport. In the presence of a pH gradient (pH 8 inside/6 outside), the flavonoids quercetin and apigenin, as well as the synthetic herbicide naphthylphthalamic acid (NPA), promote the accumulation of IAA in membrane vesicles from dark-grown zucchini hypocotyls. Simultaneous accumulation of ³H-IAA (10 nM) and ¹⁴C-butyric acid (5 μM; included as a pH probe) was determined by a filtration assay after incubating the vesicles with 3 nM to 100 μM quercetin, apigenin, NPA, or unlabeled IAA. Maximal stimulation (% of control) was observed with 3 μM NPA (130%), 1 μM quercetin (120%), or 3 μM apigenin (115%); ΔpH was not affected by these concentrations. As reported by others, IAA uptake was saturable: 1 μM unlabeled IAA eliminated ΔpH-dependent uptake of ³H-IAA without altering ΔpH. However, at 30 to 100 μM, every compound tested collapsed the imposed pH gradient and therefore abolished specific ³H-IAA uptake.
Metabolic efficiency and turnover of soil microbial communities in biodegradation tests.
Shen, J; Bartha, R
1996-01-01
Biodegradability screening tests of soil commonly measure ¹⁴CO₂ evolution from radiolabeled test compounds, and glucose has often served as a positive control. When constant amounts of radiolabel were added to soil in combination with increasing amounts of unlabeled substrates, glucose and some related hexoses behaved in an anomalous manner. In contrast to formate, benzoate, n-hexadecane, or bis(2-ethylhexyl) phthalate, dilution of glucose radiocarbon with unlabeled glucose increased rather than decreased the rate and extent of ¹⁴CO₂ evolution. [¹⁴C]glucose incorporation into biomass and Vmax values were consistent with the interpretation that application of relatively high concentrations of glucose to soil shifts the balance of the soil microbial community from the autochthonous (humus-degrading) to the zymogenous (opportunistic) segment. The higher growth and turnover rates that define zymogenous microorganisms, combined with a lower level of carbon incorporation into their biomass, result in the evolution of disproportionate percentages of ¹⁴CO₂. When used as positive controls, glucose and related hexoses may raise the expectations for percent ¹⁴CO₂ evolution to levels that are not realistic for other biodegradable compounds. PMID:8779580
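The expectation that glucose violates here is simple isotope dilution: adding unlabeled carrier substrate to a fixed quantity of radiolabel lowers the specific activity of the substrate pool, so the label evolved per mole of substrate mineralized should fall proportionally. A minimal sketch of that arithmetic (illustrative only, not the authors' assay protocol):

```python
def specific_activity(label_bq, labeled_mol, unlabeled_mol):
    """Specific activity (Bq/mol) of a substrate pool after diluting
    a fixed amount of radiolabel with unlabeled carrier."""
    return label_bq / (labeled_mol + unlabeled_mol)
```

For example, diluting 1 nmol of labeled substrate with 9 nmol of unlabeled carrier cuts the specific activity tenfold; for formate, benzoate, or hexadecane the ¹⁴CO₂ yield tracks this dilution, whereas for glucose it anomalously did not.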
Isotope-labeling studies on the formation pathway of acrolein during heat processing of oils.
Ewert, Alice; Granvogl, Michael; Schieberle, Peter
2014-08-20
Acrolein (2-propenal) is classified as a foodborne toxicant and was shown to be present in significant amounts in heated edible oils. Until now, its formation was mainly attributed to the glycerol part of triacylglycerides, although a clear influence of the degree of unsaturation of the fatty acid moiety was also evident in previous studies. To unequivocally clarify the roles of the glycerol and fatty acid parts in acrolein formation, two series of labeled triacylglycerides were synthesized: [¹³C₃]-triacylglycerides of stearic, oleic, linoleic, and linolenic acid, and [¹³C₅₄]-triacylglycerides with labeled stearic, oleic, and linoleic acid but unlabeled glycerol. Heating each of the seven intermediates singly in silicone oil and measuring the amounts of labeled and unlabeled acrolein formed clearly proved the fatty acid backbone to be the key precursor structure. Enzymatically synthesized pure linoleic acid and linolenic acid hydroperoxides were shown to be the key intermediates in acrolein formation, thus allowing the discussion of a radical-induced reaction pathway leading to the formation of the aldehyde. Surprisingly, although several oils contained high amounts of acrolein after heating, deep-fried foods themselves, such as donuts or French fries, were low in the aldehyde.
Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan
2015-01-01
Dementia, Alzheimer's disease (AD) in particular, is a global problem and a major threat to the aging population. An image-based computer-aided dementia diagnosis method is needed to help doctors during medical image examination. Many machine learning-based dementia classification methods using medical imaging have been proposed, and most of them achieve accurate results. However, most of these methods use supervised learning, which requires a fully labeled image dataset that is usually not available in real clinical environments. Using large amounts of unlabeled images can improve dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM), and a supervised SVM are applied to classify AD and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves classification performance, and that our method outperforms LapSVM on the same dataset.
Zhou, Mu; Zhang, Qiao; Xu, Kunjie; Tian, Zengshan; Wang, Yanmeng; He, Wei
2015-01-01
Due to the wide deployment of wireless local area networks (WLAN), received signal strength (RSS)-based indoor WLAN localization has attracted considerable attention in both academia and industry. In this paper, we propose a novel page rank-based indoor mapping and localization (PRIMAL) by using the gene-sequenced unlabeled WLAN RSS for simultaneous localization and mapping (SLAM). Specifically, first of all, based on the observation of the motion patterns of the people in the target environment, we use the Allen logic to construct the mobility graph to characterize the connectivity among different areas of interest. Second, the concept of gene sequencing is utilized to assemble the sporadically-collected RSS sequences into a signal graph based on the transition relations among different RSS sequences. Third, we apply the graph drawing approach to exhibit both the mobility graph and signal graph in a more readable manner. Finally, the page rank (PR) algorithm is proposed to construct the mapping from the signal graph into the mobility graph. The experimental results show that the proposed approach achieves satisfactory localization accuracy and meanwhile avoids the intensive time and labor cost involved in the conventional location fingerprinting-based indoor WLAN localization. PMID:26404274
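The page rank (PR) step that maps the signal graph onto the mobility graph builds on the standard PageRank power iteration. A generic sketch of that underlying algorithm (not the PRIMAL system itself; the damping factor and convergence tolerance are conventional defaults, not values from the paper):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank on an adjacency matrix
    (row i lists the out-links of node i)."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; dangling nodes link uniformly
    P = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - damping) / n + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r
```

In PRIMAL the nodes would be RSS sequences or areas of interest and the edges their transition relations; here the input is just any adjacency matrix.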
Anatomical entity mention recognition at literature scale
Pyysalo, Sampo; Ananiadou, Sophia
2014-01-01
Motivation: Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. Results: We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends previously available resources, and apply the resulting tagger to automatically annotate the entire open access scientific domain literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. Availability and implementation: All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. Contact: sophia.ananiadou@manchester.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24162468
Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines
Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram
2014-01-01
When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
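The manifold-smoothness idea behind such graph-based methods can be sketched with a tiny harmonic label-propagation loop. This plain-Python sketch shows the basic graph method that prototype schemes accelerate, not the prototype method itself; the chain graph and clamped labels are illustrative assumptions:

```python
def propagate_labels(W, labels, iters=200):
    """Harmonic label propagation: each unlabeled node repeatedly takes the
    weighted average of its neighbors' scores; labeled nodes stay clamped."""
    n = len(W)
    f = [labels.get(i, 0.0) for i in range(n)]
    for _ in range(iters):
        g = list(f)
        for i in range(n):
            if i in labels:          # clamp labeled nodes
                continue
            degree = sum(W[i])
            if degree > 0:
                g[i] = sum(W[i][j] * f[j] for j in range(n)) / degree
        f = g
    return f

# chain graph 0 - 1 - 2 - 3 with one positive and one negative labeled endpoint
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
scores = propagate_labels(W, {0: 1.0, 3: -1.0})
```

The two unlabeled middle nodes converge to +1/3 and -1/3, smoothly interpolating the labels along the graph.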
Binning in Gaussian Kernel Regularization
2005-04-01
Using the OSU-SVM Matlab package, the SVM trained on 966 bins has a comparable test classification rate as the SVM trained on 27,179 samples (71.40% on 966 randomly sampled data), but reduces the…
ERIC Educational Resources Information Center
Grant, Douglas S.
2006-01-01
Pigeons were trained in a matching task with either color (group color-first) or line (group line-first) samples. After asymmetrical training in which each group was initially trained with the same sample on all trials, marked retention asymmetries were obtained. In both groups, accuracy dropped precipitously on trials involving the initially…
2018-01-01
Hyperspectral image classification with a limited number of training samples and without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of the proposed method, which compares favorably with the state-of-the-art methods. PMID:29304512
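The fuzziness-based ranking idea can be sketched minimally in plain Python. The membership vectors, the entropy-style fuzziness measure, and the top-k selection are illustrative assumptions, not the FALF implementation:

```python
import math

def fuzziness(probs):
    """Fuzziness of one sample's class-membership vector: averaged binary
    entropy of each membership value (maximal when memberships are ambiguous)."""
    total = 0.0
    for mu in probs:
        if 0.0 < mu < 1.0:
            total -= mu * math.log(mu) + (1.0 - mu) * math.log(1.0 - mu)
    return total / len(probs)

def select_candidates(prob_rows, k):
    """Rank unlabeled samples by fuzziness and return the indices of the
    top-k, i.e. the samples lying closest to the estimated class boundaries."""
    order = sorted(range(len(prob_rows)),
                   key=lambda i: fuzziness(prob_rows[i]), reverse=True)
    return order[:k]

# toy class-membership outputs for three unlabeled samples (two classes)
probs = [[0.95, 0.05], [0.55, 0.45], [0.70, 0.30]]
picked = select_candidates(probs, 1)   # → [1], the most ambiguous sample
```

The 0.55/0.45 sample sits nearest the decision boundary and is therefore selected first as a training candidate.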
Chen, Guan-yuan; Chiu, Huai-hsuan; Lin, Shu-wen; Tseng, Yufeng Jane; Tsai, Sung-jeng; Kuo, Ching-hua
2015-01-01
As fatty acids play an important role in biological regulation, the profiling of fatty acid expression has been used to discover various disease markers and to understand disease mechanisms. This study developed an effective and accurate comparative fatty acid analysis method using differential labeling to speed up the metabolic profiling of fatty acids. Fatty acids were derivatized with unlabeled (D0) or deuterated (D3) methanol, followed by GC-MS analysis. The comparative fatty acid analysis method was validated using a series of samples with different ratios of D0/D3-labeled fatty acid standards and with mouse liver extracts. Using a lipopolysaccharide (LPS)-treated mouse model, we found that the fatty acid profiles after LPS treatment were similar between the conventional single-sample analysis approach and the proposed comparative approach, with a Pearson's correlation coefficient of approximately 0.96. We applied the comparative method to investigate voriconazole-induced hepatotoxicity and revealed the toxicity mechanism as well as the potential of using fatty acids as toxicity markers. In conclusion, the comparative fatty acid profiling technique was determined to be fast and accurate and allowed the discovery of potential fatty acid biomarkers in a more economical and efficient manner. Copyright © 2014 Elsevier B.V. All rights reserved.
Burnum-Johnson, Kristin E; Nie, Song; Casey, Cameron P; Monroe, Matthew E; Orton, Daniel J; Ibrahim, Yehia M; Gritsenko, Marina A; Clauss, Therese R W; Shukla, Anil K; Moore, Ronald J; Purvine, Samuel O; Shi, Tujin; Qian, Weijun; Liu, Tao; Baker, Erin S; Smith, Richard D
2016-12-01
Current proteomic approaches include both broad discovery measurements and quantitative targeted analyses. In many cases, discovery measurements are initially used to identify potentially important proteins (e.g. candidate biomarkers) and then targeted studies are employed to quantify a limited number of selected proteins. Both approaches, however, suffer from limitations. Discovery measurements aim to sample the whole proteome but have lower sensitivity, accuracy, and quantitation precision than targeted approaches, whereas targeted measurements are significantly more sensitive but only sample a limited portion of the proteome. Herein, we describe a new approach that performs both discovery and targeted monitoring (DTM) in a single analysis by combining liquid chromatography, ion mobility spectrometry and mass spectrometry (LC-IMS-MS). In DTM, heavy labeled target peptides are spiked into tryptic digests and both the labeled and unlabeled peptides are detected using LC-IMS-MS instrumentation. Compared with the broad LC-MS discovery measurements, DTM yields greater peptide/protein coverage and detects lower abundance species. DTM also achieved detection limits similar to selected reaction monitoring (SRM) indicating its potential for combined high quality discovery and targeted analyses, which is a significant step toward the convergence of discovery and targeted approaches. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Loch, Alexandre Andrade; Hengartner, Michael Pascal; Guarniero, Francisco Bevilacqua; Lawson, Fabio Lorea; Wang, Yuan-Pang; Gattaz, Wagner Farid; Rössler, Wulf
2013-02-28
Findings on stigmatizing attitudes toward individuals with schizophrenia have been inconsistent in comparisons between mental health professionals and members of the general public. In this regard, it is important to obtain data from understudied sociocultural settings, and to examine how attitudes toward mental illness vary in such settings. Nationwide samples of 1015 general population individuals and 1414 psychiatrists from Brazil were recruited between 2009 and 2010. Respondents from the general population were asked to identify an unlabeled schizophrenia case vignette. Psychiatrists were instructed to consider "someone with stabilized schizophrenia". Stereotypes, perceived prejudice and social distance were assessed. For the general population, stigma determinants replicated findings from the literature. The level of the vignette's identification constituted an important correlate. For psychiatrists, determinants correlated in the opposite direction. When both samples were compared, psychiatrists showed the highest scores in stereotypes and perceived prejudice; for the general population, the better they recognized the vignette, the higher they scored in those dimensions. Psychiatrists reported the lowest social distance scores compared with members of the general population. Knowledge about schizophrenia thus constituted an important determinant of stigma; consequently, factors influencing stigma should be further investigated in the general population and in psychiatrists as well. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Comparing Deaf and Hearing Dutch Infants: Changes in the Vowel Space in the First 2 Years
ERIC Educational Resources Information Center
van der Stelt, Jeannette M.; Wempe, Ton G.; Pols, Louis C. W.
2008-01-01
The influence of the mother tongue on vowel productions in infancy is different for deaf and hearing babies. Audio material of five hearing and five deaf infants acquiring Dutch was collected monthly from month 5-18, and at 24 months. Fifty unlabelled utterances were digitized for each recording. This study focused on developmental paths in vowel…
Evaluation of FODMAP Carbohydrates Content in Selected Foods in the United States.
Chumpitazi, Bruno P; Lim, Jongbin; McMeans, Ann R; Shulman, Robert J; Hamaker, Bruce R
2018-04-26
We analyzed the fermentable oligosaccharide, disaccharide, monosaccharide, and polyols (FODMAP) content of several foods potentially low in FODMAP which are commonly consumed by children. We determined that several processed foods (eg, gluten-free baked products) had unlabeled FODMAP content. Determining FODMAP content within foods distributed in the US may support educational and dietary interventions. Copyright © 2018 Elsevier Inc. All rights reserved.
Young Adults Do Not Think World Knowledge Is Vital
ERIC Educational Resources Information Center
Manzo, Kathleen Kennedy
2006-01-01
A new survey has found that most young adults in the United States have difficulty identifying Iraq on an unlabeled map of the Middle East, or are unaware that the population of China is more than four times that of the United States. This lack of geographic literacy goes beyond simple gaps in knowledge and skills for most of these people do not…
Using Deep UV Raman Spectroscopy to Identify In Situ Microbial Activity
NASA Astrophysics Data System (ADS)
Sapers, H. M.; Wanger, G.; Amend, J.; Orphan, V. J.; Bhartia, R.
2017-12-01
Microbial communities living in close association with lithic substrates play a critical role in biogeochemical cycles. Understanding the interactions between microorganisms and their abiotic substrates requires knowledge of microbial activity. Identifying active cells adhered to complex environmental substrates, especially in low biomass systems, remains a challenge. Stable isotope probing (SIP) provides a means to trace microbial activity in environmental systems. Active members of the community take up labeled substrates and incorporate the labels into biomolecules that can be detected through downstream analyses. Here we show for the first time that Deep UV (248 nm) Raman spectroscopy can differentiate microbial cells labeled with stable isotopes. Previous studies have used Raman spectroscopy with a 532 nm source to identify active bacterial cells by measuring a Raman shift between peaks corresponding to amino acids incorporating 13C compared to controls. However, excitation at 532 nm precludes detection on complex substrates due to high autofluorescence of native minerals. Excitation in the DUV range offers non-destructive imaging on mineral surfaces - retaining critical contextual information. We prepared cultures of E. coli grown in 50 atom% 13C glucose spotted onto Al wafers to test the ability of DUV Raman spectroscopy to differentiate labeled and unlabeled cells. For the first time, we are able to demonstrate a distinct and repeatable shift between cells grown in labeled media and unlabeled media when imaged on Al wafers with DUV Raman spectroscopy. The Raman spectra are dominated by the characteristic Raman bands of guanine. The dominant marker peak for guanine attributed to N7-C8 and C8-N9 ring stretching and C8-H in-plane bending, is visible at 1480 cm-1 in the unlabeled cells and is blue-shifted by 20 wavenumbers to 1461 cm-1 in the labeled cells. 
The ability of DUV Raman to effectively identify regions containing cells that have incorporated isotopic labels will allow in situ detection of metabolically-targeted active community members on complex natural substrates providing a crucial link between microbial activity and environmental context.
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. 
These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
The development of radioactive sample surrogates for training and exercises
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martha Finck; Bevin Brush; Dick Jansen
2012-03-01
Source term information is required to reconstruct a dispersed radiological dispersal device. Simulating a radioactive environment to train and exercise sampling and sample characterization methods with suitable sample materials is a continued challenge. The Idaho National Laboratory has developed and permitted a Radioactive Response Training Range (RRTR), an 800 acre test range that is approved for open-air dispersal of activated KBr, for training first responders in the entry and exit from radioactively contaminated areas, and testing protocols for environmental sampling and field characterization. Members from the Department of Defense, Law Enforcement, and the Department of Energy participated in the first contamination exercise that was conducted at the RRTR in July 2011. The range was contaminated using the short-lived radioactive isotope Br-82 (activated KBr). Soil samples contaminated with KBr (dispersed as a solution) and glass particles containing activated potassium bromide that emulated dispersed radioactive materials (such as ceramic-based sealed source materials) were collected to assess environmental sampling and characterization techniques. This presentation summarizes the performance of a radioactive materials surrogate for use as a training aid for nuclear forensics.
2014-01-01
Background Cancer detection using sniffer dogs is a potential technology for clinical use and research. Our study sought to determine whether dogs could be trained to discriminate the odour of urine from men with prostate cancer from controls, using rigorous testing procedures and well-defined samples from a major research hospital. Methods We attempted to train ten dogs by initially rewarding them for finding and indicating individual prostate cancer urine samples (Stage 1). If dogs were successful in Stage 1, we then attempted to train them to discriminate prostate cancer samples from controls (Stage 2). The number of samples used to train each dog varied depending on their individual progress. Overall, 50 unique prostate cancer and 67 controls were collected and used during training. Dogs that passed Stage 2 were tested for their ability to discriminate 15 (Test 1) or 16 (Tests 2 and 3) unfamiliar prostate cancer samples from 45 (Test 1) or 48 (Tests 2 and 3) unfamiliar controls under double-blind conditions. Results Three dogs reached training Stage 2 and two of these learnt to discriminate potentially familiar prostate cancer samples from controls. However, during double-blind tests using new samples the two dogs did not indicate prostate cancer samples more frequently than expected by chance (Dog A sensitivity 0.13, specificity 0.71, Dog B sensitivity 0.25, specificity 0.75). The other dogs did not progress past Stage 1 as they did not have optimal temperaments for the sensitive odour discrimination training. Conclusions Although two dogs appeared to have learnt to select prostate cancer samples during training, they did not generalise on a prostate cancer odour during robust double-blind tests involving new samples. Our study illustrates that these rigorous tests are vital to avoid drawing misleading conclusions about the abilities of dogs to indicate certain odours. 
Dogs may memorise the individual odours of large numbers of training samples rather than generalise on a common odour. The results do not exclude the possibility that dogs could be trained to detect prostate cancer. We recommend that canine olfactory memory is carefully considered in all future studies and rigorous double-blind methods used to avoid confounding effects. PMID:24575737
NASA Astrophysics Data System (ADS)
Swan, B.; Laverdiere, M.; Yang, L.
2017-12-01
In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, present open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly-scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects the model performance, such as precision and recall rates. 
By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process and in sample creation.
Reduction in training time of a deep learning model in detection of lesions in CT
NASA Astrophysics Data System (ADS)
Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji
2018-02-01
Deep learning (DL) emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of images for training, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study is aimed at developing a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model to detect polyps in CT colonography (CTC), without compromising the classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for the training of a massive training artificial neural network based DL model, to be used for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15, while maintaining the classification performance equivalent to the model trained using the full training set. We compare the performance using the area under the receiver-operating-characteristic curve (AUC).
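K-means-based representative sample selection can be sketched as follows. This plain-Python sketch uses toy 2-D feature vectors, a fixed cluster count, and a nearest-to-center rule, all illustrative assumptions rather than the paper's exact procedure:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns the k cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:
                centers[j] = [sum(dim) / len(members) for dim in zip(*members)]
    return centers

def representatives(points, k):
    """Cluster the full training pool and keep the sample nearest to each
    center, giving a small representative subset for training."""
    centers = kmeans(points, k)
    return [min(points, key=lambda p: dist2(p, c)) for c in centers]

# toy feature vectors: two well-separated groups of training samples
pool = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
subset = representatives(pool, 2)
```

The reduced set keeps one sample from each group, so the subset still covers both modes of the pool.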
Sampling Methods and the Accredited Population in Athletic Training Education Research
ERIC Educational Resources Information Center
Carr, W. David; Volberding, Jennifer
2009-01-01
Context: We describe methods of sampling the widely-studied, yet poorly defined, population of accredited athletic training education programs (ATEPs). Objective: There are two purposes to this study; first to describe the incidence and types of sampling methods used in athletic training education research, and second to clearly define the…
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. 
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
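One way to realize the uniform-sampling idea is to grid a low-dimensional summary of the genetic space and keep one genotype per cell. This 1-D plain-Python sketch over a single principal-component score is an illustrative simplification of the paper's method, with toy values:

```python
def uniform_training_set(scores, n_bins):
    """Split the range of a 1-D genetic-space coordinate (e.g. a principal
    component score) into equal-width bins and keep the first genotype seen
    in each bin, so the training set covers the space uniformly instead of
    following the density of a structured calibration set."""
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_bins or 1.0   # guard against a degenerate range
    chosen, filled = [], set()
    for idx, s in enumerate(scores):
        b = min(int((s - lo) / width), n_bins - 1)
        if b not in filled:
            filled.add(b)
            chosen.append(idx)
    return chosen

# two dense subpopulations plus an isolated pair along PC1
pc1 = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9, 10.0]
train_idx = uniform_training_set(pc1, 5)   # → [0, 3, 5]
```

A random draw would likely over-sample the dense subpopulations, whereas the binning keeps one genotype per occupied region of the space.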
DOE Office of Scientific and Technical Information (OSTI.GOV)
Symons, Christopher T; Arel, Itamar
2011-01-01
Budgeted learning under constraints on both the amount of labeled information and the availability of features at test time pertains to a large number of real world problems. Ideas from multi-view learning, semi-supervised learning, and even active learning have applicability, but a common framework whose assumptions fit these problem spaces is non-trivial to construct. We leverage ideas from these fields based on graph regularizers to construct a robust framework for learning from labeled and unlabeled samples in multiple views that are non-independent and include features that are inaccessible at the time the model would need to be applied. We describe examples of applications that fit this scenario, and we provide experimental results to demonstrate the effectiveness of knowledge carryover from training-only views. As learning algorithms are applied to more complex applications, relevant information can be found in a wider variety of forms, and the relationships between these information sources are often quite complex. The assumptions that underlie most learning algorithms do not readily or realistically permit the incorporation of many of the data sources that are available, despite an implicit understanding that useful information exists in these sources. When multiple information sources are available, they are often partially redundant, highly interdependent, and contain noise as well as other information that is irrelevant to the problem under study. In this paper, we are focused on a framework whose assumptions match this reality, as well as the reality that labeled information is usually sparse. Most significantly, we are interested in a framework that can also leverage information in scenarios where many features that would be useful for learning a model are not available when the resulting model will be applied. As with constraints on labels, there are many practical limitations on the acquisition of potentially useful features. 
A key difference in the case of feature acquisition is that the same constraints often don't pertain to the training samples. This difference provides an opportunity to allow features that are impractical in an applied setting to nevertheless add value during the model-building process. Unfortunately, there are few machine learning frameworks built on assumptions that allow effective utilization of features that are only available at training time. In this paper we formulate a knowledge carryover framework for the budgeted learning scenario with constraints on features and labels. The approach is based on multi-view and semi-supervised learning methods that use graph-encoded regularization. Our main contributions are the following: (1) we propose and provide justification for a methodology for ensuring that changes in the graph regularizer using alternate views are performed in a manner that is target-concept specific, allowing value to be obtained from noisy views; and (2) we demonstrate how this general set-up can be used to effectively improve models by leveraging features unavailable at test time. The rest of the paper is structured as follows. In Section 2, we outline real-world problems to motivate the approach and describe relevant prior work. Section 3 describes the graph construction process and the learning methodologies that are employed. Section 4 provides preliminary discussion regarding theoretical motivation for the method. In Section 5, effectiveness of the approach is demonstrated in a series of experiments employing modified versions of two well-known semi-supervised learning algorithms. Section 6 concludes the paper.
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples, while the negative set must represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average a better partial AUC and a smaller standard deviation than the other compared cascade detectors.
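The stage-to-stage selection step can be sketched as hard-negative mining (a generic illustration of the idea, not the paper's exact criterion; `stage_score` is a hypothetical scoring function):

```python
def mine_hard_negatives(stage_score, negatives, batch_size):
    """Sketch of informative false-positive selection for a cascade:
    a negative the current stage accepts (positive score) is a false
    positive; the highest-scoring mistakes feed the next stage."""
    false_pos = [(stage_score(x), x) for x in negatives if stage_score(x) > 0]
    # Hardest first: the most confidently wrong negatives are kept.
    false_pos.sort(key=lambda t: t[0], reverse=True)
    return [x for _, x in false_pos[:batch_size]]
```

For example, with a toy stage that accepts values above 3, `mine_hard_negatives(lambda x: x - 3, [1, 2, 4, 5, 9], 2)` keeps the two most confidently misclassified negatives.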
Wang, Rong
2015-01-01
In real-world applications, face images vary with illumination, facial expression, and pose, so more training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generated mirror faces from the original training samples and combined both kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.
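The virtual-sample step itself is simple to sketch (a minimal illustration with images as nested lists of pixel rows; the function name is an assumption, and the downstream MSEC classifier is omitted):

```python
def add_mirror_faces(images):
    """Sketch of mirror-face augmentation: horizontally flip each face
    image and pool originals with their mirrors into one training set."""
    mirrored = [[row[::-1] for row in img] for img in images]
    return images + mirrored
```

Doubling the training set this way exploits the approximate left-right symmetry of faces without collecting new samples.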
2009-08-01
tubular mode driven by electroosmotic flow and the inherent electrophoretic mobility of the analytes under the influence of an applied electric field...could be due to unlabeled beads. Figure 3 (C and D) also shows an electropherogram of a neutral electroosmotic flow (EOF) marker dye BODIPY and...internal turbulent mixing. The current microfabricated electromagnets cannot produce sufficient fields to trap the NPs against large flow forces
Grief and Group Recovery Following a Military Air Disaster
1990-01-01
stages of human development, the unlabelled cells are meant to suggest individuals who either accelerate through phases more quickly than the norm...and far-reaching (Erikson, 1976; Titchener and Kapp, 1976). Such was apparently the case following the December 1988 crash of Pan Am flight 103 in...the normal processes of group recovery and reintegration after sudden, traumatic loss. The present study takes a necessary step in developing this
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golfinopoulos, A.; Soupioni, M.; Kanellaki, M.
The effect of initial lactose concentration on lactose uptake rate by kefir free cells during lactose fermentation was studied in this work. For the investigation, {sup 14}C-labelled lactose was used, because labeled and unlabeled molecules are fermented in the same way. The results illustrated that lactose uptake rates are up to two-fold higher at lower initial °Be densities than at higher initial °Be densities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tilden, A.B.; Cauda, R.; Grossi, C.E.
1986-06-01
Infection with varicella-zoster virus (VZV) rendered RAJI cells more susceptible to lysis by non-adherent blood lymphocytes. At an effector to target ratio of 80:1 the mean percentage of /sup 51/Cr release of VZV-infected RAJI cells was 41 +/- 12%, whereas that of uninfected RAJI cells was 15 +/- 6%. The increased susceptibility to lysis was associated with increased effector to target conjugate formation in immunofluorescence binding assays. The effector cells cytotoxic for VZV-infected RAJI cells were predominantly Leu-11a/sup +/ Leu-4/sup -/ granular lymphocytes as demonstrated by fluorescence-activated cell sorting. The effector cells active against VZV-infected RAJI cells appeared similar to those active against herpes simplex virus (HSV)-infected cells, because in cold target competition experiments the lysis of /sup 51/Cr-labeled VZV-infected RAJI cells was efficiently inhibited by either unlabeled VZV-infected RAJI cells (mean 71% inhibition, 2:1 ratio unlabeled to labeled target) or HSV-infected RAJI cells (mean 69% inhibition) but not by uninfected RAJI cells (mean 10% inhibition). In contrast, competition experiments revealed donor heterogeneity in the overlap between effector cells for VZV- or HSV-infected RAJI vs K-562 cells.
smiFISH and FISH-quant - a flexible single RNA detection approach with super-resolution capability.
Tsanov, Nikolay; Samacoits, Aubin; Chouaib, Racha; Traboulsi, Abdel-Meneem; Gostan, Thierry; Weber, Christian; Zimmer, Christophe; Zibara, Kazem; Walter, Thomas; Peter, Marion; Bertrand, Edouard; Mueller, Florian
2016-12-15
Single molecule FISH (smFISH) allows studying transcription and RNA localization by imaging individual mRNAs in single cells. We present smiFISH (single molecule inexpensive FISH), an easy-to-use and flexible RNA visualization and quantification approach that uses unlabelled primary probes and a fluorescently labelled secondary detector oligonucleotide. The gene-specific probes are unlabelled and can therefore be synthesized at low cost, allowing more probes per mRNA to be used, which results in a substantial increase in detection efficiency. smiFISH is also flexible since differently labelled secondary detector probes can be used with the same primary probes. We demonstrate that this flexibility allows multicolor labelling without the need to synthesize new probe sets. We further demonstrate that the use of a specific acrydite detector oligonucleotide allows smiFISH to be combined with expansion microscopy, enabling the resolution of transcripts in 3D below the diffraction limit on a standard microscope. Lastly, we provide improved, fully automated software tools from probe-design to quantitative analysis of smFISH images. In short, we provide a complete workflow to obtain counts of individual RNA molecules in single cells automatically. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Adaptive Batch Mode Active Learning.
Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman
2015-08-01
Active learning techniques have gained popularity as a way to reduce the human effort of labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
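The flavor of a BMAL objective can be sketched with a greedy selection that trades off uncertainty against diversity (an illustrative toy, not the paper's formulation; the entropy-plus-distance gain and the `beta` weight are assumptions for the example):

```python
import math

def select_batch(probs, features, batch_size, beta=1.0):
    """Sketch of batch-mode active learning: greedily pick points with
    high predictive uncertainty (binary entropy) that are also far from
    points already in the batch (diversity term)."""
    def entropy(p):
        p = min(max(p, 1e-12), 1 - 1e-12)
        return -p * math.log(p) - (1 - p) * math.log(1 - p)

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    selected, remaining = [], list(range(len(probs)))
    while len(selected) < batch_size and remaining:
        def gain(i):
            # Distance to the nearest already-selected point (0 if none).
            div = min((dist(features[i], features[j]) for j in selected),
                      default=0.0)
            return entropy(probs[i]) + beta * div
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Greedy maximization of such a gain is the standard way to exploit the submodularity the abstract mentions, since each addition's marginal benefit shrinks as the batch grows.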
Influence of E. coli endotoxin on ACTH induced adrenal cell steroidogenesis.
Garcia, R; Viloria, M D; Municio, A M
1985-03-01
The effect of endotoxin (lipopolysaccharide from E. coli) on isolated adrenocortical cells was examined. Lipopolysaccharide decreased the ACTH-induced steroidogenesis. This effect was observed at all corticotropin concentrations studied, and the longer the incubation time, the greater the effect produced. The rate of decrease of ACTH-induced steroidogenesis was dependent on the concentration of lipopolysaccharide in the medium. Binding of [125I]ACTH to adrenocortical cells was modified by lipopolysaccharide; this modification was related to a decrease of the ACTH-induced steroidogenesis. This effect supports the hypothesis of a direct interaction between lipopolysaccharide and the cell membrane, with a concomitant distortion of the cell surface affecting the ACTH receptor sites or their environment. [14C]Lipopolysaccharide binds to isolated adrenocortical cells. Binding specificity was investigated by competition experiments in the presence of various types of endotoxins, polypeptide hormones and proteins. Unlabelled lipopolysaccharide from the same bacterial strain, isolated under conditions identical to those for the labelled lipopolysaccharide, exerted the strongest inhibitory activity. Unlabelled lipopolysaccharide from strains other than the one used to prepare the labelled lipopolysaccharide produced less displacement. This would imply a certain degree of specificity, but the decrease in the binding of lipopolysaccharide produced by ACTH and glucagon suggests the existence of non-specific interactions between lipopolysaccharide and the cell membrane.
Autoradiographic localization of specific (/sup 3/H)dexamethasone binding in fetal lung
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, D.G.; Butley, M.S.; Cunha, G.R.
1984-10-01
The cellular and subcellular localization of specific (/sup 3/H)dexamethasone binding was examined in fetal mouse lung at various stages of development and in human fetal lung at 8 weeks of gestation using a rapid in vitro steroid incubation technique followed by thaw-mount autoradiography. Competition studies with unlabeled steroids demonstrate the specificity of (/sup 3/H)dexamethasone labeling, and indicate that fetal lung mesenchyme is a primary glucocorticoid target during lung development. Autoradiographs of (/sup 3/H)dexamethasone binding in lung tissue at early stages of development demonstrate that the mesenchyme directly adjacent to the more proximal portions of the bronchiolar network is heavily labeled. In contrast, the epithelium, which will later differentiate into bronchi and bronchioles, is relatively unlabeled. Distal portions of the growing epithelium, destined to become alveolar ducts and alveoli, do show nuclear localization of (/sup 3/H)dexamethasone. In addition, by utilizing a technique which allows the simultaneous examination of extracellular matrix components and (/sup 3/H)dexamethasone binding, a relationship is observed between extensive mesenchymal (/sup 3/H)dexamethasone binding and extensive extracellular matrix accumulation. Since glucocorticoids stimulate the synthesis of many extracellular matrix components, these results suggest a role for these hormones in affecting mesenchymal-epithelial interactions during lung morphogenesis.
Selective Individual Primary Cell Capture Using Locally Bio-Functionalized Micropores
Liu, Jie; Bombera, Radoslaw; Leroy, Loïc; Roupioz, Yoann; Baganizi, Dieudonné R.; Marche, Patrice N.; Haguet, Vincent; Mailley, Pascal; Livache, Thierry
2013-01-01
Background Solid-state micropores have been widely employed for six decades to recognize and size flowing unlabeled cells. However, the resistive-pulse technique presents limitations when the cells to be differentiated have overlapping dimension ranges, such as B and T lymphocytes. An alternative approach would be to specifically capture cells by solid-state micropores. Here, the inner wall of 15-µm pores made in 10 µm-thick silicon membranes was covered with antibodies specific to cell surface proteins of B or T lymphocytes. The selective trapping of individual unlabeled cells in a bio-functionalized micropore makes them recognizable using optical microscopy alone. Methodology/Principal Findings We locally deposited oligodeoxynucleotide (ODN) and ODN-conjugated antibody probes on the inner wall of the micropores by forming thin films of polypyrrole-ODN copolymers using contactless electro-functionalization. The trapping capabilities of the bio-functionalized micropores were validated using optical microscopy and the resistive-pulse technique by selectively capturing polystyrene microbeads coated with complementary ODN. B or T lymphocytes from a mouse splenocyte suspension were specifically immobilized on micropore walls functionalized with complementary ODN-conjugated antibodies targeting cell surface proteins. Conclusions/Significance The results showed that locally bio-functionalized micropores can isolate target cells from a suspension during their translocation throughout the pore, including among cells of similar dimensions in complex mixtures. PMID:23469221
SP-A binding sites on bovine alveolar macrophages.
Plaga, S; Plattner, H; Schlepper-Schaefer, J
1998-11-25
Surfactant protein A (SP-A) binding to bovine alveolar macrophages was examined in order to characterize SP-A binding proteins on the cell surface and to isolate putative receptors from these cells that could be obtained in large amounts. Human SP-A, unlabeled or labeled with gold particles, was bound to freshly isolated macrophages and analyzed with ELISA or the transmission electron microscope. Binding of SP-A was inhibited by Ca2+ chelation, by an excess of unlabeled SP-A, or by the presence of 20 mg/ml mannan. We conclude that bovine alveolar macrophages expose binding sites for SP-A that are specific and that depend on Ca2+ and on mannose residues. For isolation of SP-A receptors with homologous SP-A as ligand we isolated SP-A from bovine lung lavage. SDS-PAGE analysis of the purified SP-A showed a protein of 32-36 kDa. Functional integrity of the protein was demonstrated. Bovine SP-A bound to Dynabeads was used to isolate SP-A binding proteins. From the fractionated and blotted proteins of the receptor preparation two proteins bound SP-A in a Ca2+-dependent manner, a 40-kDa protein showing mannose dependency and a 210-kDa protein, showing no mannose sensitivity. Copyright 1998 Academic Press.
Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris
2014-06-17
Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.
Compendia and anticancer therapy under Medicare.
Tillman, Katherine; Burton, Brijet; Jacques, Louis B; Phurrough, Steve E
2009-03-03
In 1993, Congress directed the Medicare program to refer to 3 existing published compendia, American Medical Association Drug Evaluations (AMA-DE), United States Pharmacopoeia Drug Information for the Health Professional (USP-DI), and American Hospital Formulary Service Drug Information (AHFS-DI), to identify unlabeled but medically accepted uses of drugs and biologicals in anticancer chemotherapy regimens. Public discussion during the preceding years had centered on whether to designate unlabeled uses of anticancer treatments as experimental and thus outside the scope of Medicare benefits. American Medical Association Drug Evaluations and USP-DI subsequently ceased publication, and the Medicare program faced increasing calls to revise the list of acceptable compendia, as authorized in the statute. In 2007, the Centers for Medicare & Medicaid Services used its regulatory authority to establish a publicly transparent process to revise the list. The Centers for Medicare & Medicaid Services considered 5 requests in 2008 and added National Comprehensive Cancer Network Drugs and Biologics Compendium, DRUGDEX, and Clinical Pharmacology to the list of compendia. DrugPoints was not added, and AMA-DE was removed. Because of the potential for conflicts of interest to lead to biased judgments, the 2008 Medicare Improvements for Patients and Providers Act has a provision that explicitly prohibits inclusion of compendia that do not have a publicly transparent process for evaluating therapies and identifying potential conflicts of interest.
STUDIES ON THE ORIGIN OF RIBOSOMES IN AMOEBA PROTEUS
Craig, Nessly; Goldstein, Lester
1969-01-01
The origin of cytoplasmic RNA and ribosomes was studied in Amoeba proteus by transplantation of a radioactive nucleus into an unlabeled cell followed by examination of the cytoplasm of the recipient for the presence of label. When a RNA-labeled nucleus was used, label appeared in the ribosomes, ribosomal RNA, and soluble RNA. Since the kinetics of appearance of labeled RNA indicates that the nucleus was not injured during the transfer, and since the transferred nuclear pool of labeled acid-soluble RNA precursors is inadequate to account for the amount of cytoplasmic RNA label, it is concluded that cytoplasmic ribosomal RNA is derived from acid-insoluble nuclear RNA and is probably transported as an intact molecule. Likewise, cytoplasmic soluble RNA probably originated in the nucleus, although labeling by terminal exchange in the cytoplasm is also possible. The results were completely different when a protein-labeled nucleus was grafted into an unlabeled host. In this case, label was found only in soluble proteins in the host cell cytoplasm, and there were no (or very few) radioactive ribosomes. This suggests that the nuclear pool of ribosomal protein and ribosomal protein precursors is relatively small and perhaps nonexistent (and, furthermore, shows that there was no cytoplasmic ribosomal contamination of the transferred nucleus). PMID:5765758
A Comparison of Match-to-Sample and Respondent-Type Training of Equivalence Classes
ERIC Educational Resources Information Center
Clayton, Michael C.; Hayes, Linda J.
2004-01-01
Throughout the 25-year history of research on stimulus equivalence, one feature of the training procedure has remained constant, namely, the requirement of operant responding during the training procedures. The present investigation compared the traditional match-to-sample (MTS) training with a more recent respondent-type (ReT) procedure. Another…
NASA Astrophysics Data System (ADS)
Yan, Yue
2018-03-01
A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on convolutional neural networks (CNN) trained with augmented training samples is proposed. To enhance the robustness of the CNN to various extended operating conditions (EOCs), the original training images are used to generate noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. Then, the generated images together with the original ones are used to train a designed CNN for target recognition. The augmented training samples correspondingly improve the robustness of the trained CNN to the covered EOCs, i.e., noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
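The noise-augmentation step can be sketched as follows (a minimal illustration with additive white Gaussian noise scaled to target SNRs; the multiresolution and occlusion augmentations are analogous, and the function name is an assumption):

```python
import numpy as np

def augment_with_noise(image, snrs_db, rng=None):
    """Sketch of SNR-controlled augmentation: for each target SNR (dB),
    add white Gaussian noise whose power makes signal/noise match it."""
    rng = np.random.default_rng(rng)
    signal_power = np.mean(image ** 2)
    out = [image]                      # keep the original sample too
    for snr in snrs_db:
        noise_power = signal_power / (10 ** (snr / 10))
        noise = rng.normal(0.0, np.sqrt(noise_power), image.shape)
        out.append(image + noise)
    return out
```

Each original image thus yields one extra training sample per listed SNR, which is how the training set grows under the noise-corruption EOC.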
NASA Astrophysics Data System (ADS)
Murasawa, Go; Yeduru, Srinivasa R.; Kohl, Manfred
2016-12-01
This study investigated macroscopic inhomogeneous deformation occurring in single-crystal Ni-Mn-Ga foils under uniaxial tensile loading. Two types of single-crystal Ni-Mn-Ga foil samples were examined: as-received and after thermo-mechanical training. Local strain and the strain field were measured under tensile loading using laser speckle and digital image correlation. The as-received sample showed a strongly inhomogeneous strain field with intermittence under progressive deformation, whereas the trained sample showed a homogeneous strain field throughout the specimen surface. The as-received sample is mainly in a polycrystalline-like state composed of domain structures; it contains many domain boundaries and large domain structures in the body, which would cause nucleation of large local strain bands with intermittence. The trained sample, in contrast, is in a nearly ideal single-crystalline state with a preferential transformation orientation of variants, because almost all domain boundaries and large domain structures vanish during thermo-mechanical training. As a result, macroscopic homogeneous deformation occurs on the trained sample surface during deformation.
Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill
2013-01-01
Background The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. Aim To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting A three-part longitudinal predictive validity study of selection into training for UK general practice. Method In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and applied knowledge examination for licensing at the end of training. Conclusion In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered. PMID:24267856
Truijens, Sophie E M; Banga, Franyke R; Fransen, Annemarie F; Pop, Victor J M; van Runnard Heimel, Pieter J; Oei, S Guid
2015-08-01
This study aimed to explore whether multiprofessional simulation-based obstetric team training improves patient-reported quality of care during pregnancy and childbirth. Multiprofessional teams from a large obstetric collaborative network in the Netherlands were trained in teamwork skills using the principles of crew resource management. Patient-reported quality of care was measured with the validated Pregnancy and Childbirth Questionnaire (PCQ) at 6 weeks postpartum. Before the training, 76 postpartum women (sample I) completed the questionnaire 6 weeks postpartum. Three months after the training, another sample of 68 postpartum women (sample II) completed the questionnaire. In sample II (after the training), the mean (SD) score of 108.9 (10.9) on the PCQ questionnaire was significantly higher than the score of 103.5 (11.6) in sample I (before training) (t = 2.75, P = 0.007). The effect size of the increase in PCQ total score was 0.5. Moreover, the subscales "personal treatment during pregnancy" and "educational information" showed a significant increase after the team training (P < 0.001). Items with the largest increase in mean scores included communication between health care professionals, clear leadership, involvement in planning, and better provision of information. Despite the methodological restrictions of a pilot study, the preliminary results indicate that multiprofessional simulation-based obstetric team training seems to improve patient-reported quality of care. The possibility that this improvement relates to the training is supported by the fact that the items with the largest increase are about the principles of crew resource management, used in the training.
Anomaly detection for machine learning redshifts applied to SDSS galaxies
NASA Astrophysics Data System (ADS)
Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen
2015-10-01
We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantities. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly-removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
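The cleaning step can be sketched with a plain Mahalanobis-distance cutoff (a simplified stand-in for the Elliptical Envelope technique: scikit-learn's EllipticEnvelope additionally fits a robust covariance estimate, which is omitted here; the function name and quantile threshold are assumptions):

```python
import numpy as np

def remove_anomalies(X, quantile=0.95):
    """Sketch of elliptical-envelope-style cleaning: drop points whose
    squared Mahalanobis distance from the sample mean exceeds the given
    quantile of the distances."""
    mu = X.mean(0)
    cov = np.cov(X.T)
    inv = np.linalg.inv(cov)
    # Squared Mahalanobis distance of every row from the mean.
    d = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)
    keep = d <= np.quantile(d, quantile)
    return X[keep], keep
```

Training is then performed on `X[keep]` only, mirroring the contaminated-versus-anomaly-removed comparison in the abstract.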
32 CFR Appendix E to Part 110 - Application of 4-Week Summer Field Training Formula (Sample)
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 1 2014-07-01 2014-07-01 false Application of 4-Week Summer Field Training Formula (Sample) E Appendix E to Part 110 National Defense Department of Defense OFFICE OF THE SECRETARY... Appendix E to Part 110—Application of 4-Week Summer Field Training Formula (Sample) Zone I Zone II Total...
32 CFR Appendix E to Part 110 - Application of 4-Week Summer Field Training Formula (Sample)
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 1 2013-07-01 2013-07-01 false Application of 4-Week Summer Field Training Formula (Sample) E Appendix E to Part 110 National Defense Department of Defense OFFICE OF THE SECRETARY... Appendix E to Part 110—Application of 4-Week Summer Field Training Formula (Sample) Zone I Zone II Total...
32 CFR Appendix E to Part 110 - Application of 4-Week Summer Field Training Formula (Sample)
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 1 2012-07-01 2012-07-01 false Application of 4-Week Summer Field Training Formula (Sample) E Appendix E to Part 110 National Defense Department of Defense OFFICE OF THE SECRETARY... Appendix E to Part 110—Application of 4-Week Summer Field Training Formula (Sample) Zone I Zone II Total...
32 CFR Appendix E to Part 110 - Application of 4-Week Summer Field Training Formula (Sample)
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 1 2011-07-01 2011-07-01 false Application of 4-Week Summer Field Training Formula (Sample) E Appendix E to Part 110 National Defense Department of Defense OFFICE OF THE SECRETARY... Appendix E to Part 110—Application of 4-Week Summer Field Training Formula (Sample) Zone I Zone II Total...
32 CFR Appendix E to Part 110 - Application of 4-Week Summer Field Training Formula (Sample)
Code of Federal Regulations, 2010 CFR
2010-07-01
... Formula (Sample) E Appendix E to Part 110 National Defense Department of Defense OFFICE OF THE SECRETARY... COMMUTATION INSTEAD OF UNIFORMS FOR MEMBERS OF THE SENIOR RESERVE OFFICERS' TRAINING CORPS Pt. 110, App. E Appendix E to Part 110—Application of 4-Week Summer Field Training Formula (Sample) Zone I Zone II Total...
How large a training set is needed to develop a classifier for microarray data?
Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M
2008-01-01
A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
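The three-quantity recipe above can be sketched under textbook assumptions: treat each gene as a two-sample z-test with a Bonferroni-style correction across the array, so the per-class size depends on the standardized fold change, the number of genes, and the desired power. This is a simplified illustration assuming balanced classes, not the authors' model; `samples_per_class` and its defaults are hypothetical.

```python
import math
from statistics import NormalDist

def samples_per_class(std_fold_change, n_genes, alpha=0.05, power=0.95):
    """Per-class training-set size from a two-sample z-test approximation,
    assuming balanced classes, with a Bonferroni-style correction so that a
    gene with the given standardized fold change is detected despite testing
    n_genes features simultaneously."""
    alpha_g = alpha / n_genes                      # per-gene significance level
    z_a = NormalDist().inv_cdf(1 - alpha_g / 2)    # two-sided critical value
    z_b = NormalDist().inv_cdf(power)              # power requirement
    return math.ceil(2 * ((z_a + z_b) / std_fold_change) ** 2)
```

For a standardized fold change of 1 and roughly 22,000 genes, this crude approximation lands in the tens of samples per class; halving the effect size quadruples the requirement.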
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results depends heavily on the accuracy of the statistical parameters in the classifier, which cannot be reliably estimated from only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for determining the minimum training sample size. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would greatly improve classifier reliability in microarray-based cancer outcome prediction. PMID:23861920
Comparative study of feature selection with ensemble learning using SOM variants
NASA Astrophysics Data System (ADS)
Filali, Ameni; Jlassi, Chiraz; Arous, Najet
2017-03-01
Ensemble learning has improved the stability and accuracy of clustering, but its runtime prohibits scaling up to real-world applications. This study addresses the problem of selecting, for every cluster, a subset of the most pertinent features from a dataset. The proposed method is an extension of the Random Forests approach that applies self-organizing map (SOM) variants to unlabeled data and estimates out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We then show that the internal estimates used to measure variable importance in Random Forests are also applicable to feature selection in unsupervised learning. The approach performs dimensionality reduction, visualization, and cluster characterization at the same time. We provide empirical results on nineteen benchmark data sets indicating that RFS can lead to significant improvements in clustering accuracy over several state-of-the-art unsupervised methods while using a very limited subset of features. The approach shows promise for very broad domains.
White-light diffraction phase microscopy at doubled space-bandwidth product.
Shan, Mingguang; Kandel, Mikhail E; Majeed, Hassaan; Nastasa, Viorel; Popescu, Gabriel
2016-12-12
White light diffraction phase microscopy (wDPM) is a quantitative phase imaging method that benefits from both temporal and spatial phase sensitivity, granted, respectively, by the common-path geometry and white light illumination. However, like all off-axis quantitative phase imaging methods, wDPM is characterized by a reduced space-bandwidth product compared to phase-shifting approaches. This happens essentially because the ultimate resolution of the image is governed by the period of the interferogram and not just the diffraction limit. As a result, off-axis techniques generate single-shot, i.e., high time-bandwidth, phase measurements at the expense of either spatial resolution or field of view. Here, we show that by combining phase-shifting and off-axis interferometry, the original space-bandwidth product is preserved. Specifically, we developed phase-shifting diffraction phase microscopy with white light, in which we measure and combine two phase-shifted interferograms. Due to the white light illumination, the phase images are characterized by low spatial noise, i.e., <1 nm pathlength. We illustrate the operation of the instrument with test samples, blood cells, and unlabeled prostate tissue biopsy.
Lin, Yi-Reng; Huang, Mei-Fang; Wu, You-Ying; Liu, Meng-Chieh; Huang, Jing-Heng; Chen, Ziyu; Shiue, Yow-Ling; Wu, Chia-En; Liang, Shih-Shin
2017-09-01
In this work, we synthesized internal standards for four garlic organosulfur compounds (OSCs) by reductive amination with 13C,D2-formaldehyde, and developed an isotope dilution analysis method to quantitate these organosulfur components in garlic samples. Internal standards were synthesized for absolute quantification of S-allylcysteine (SAC), S-allylcysteine sulfoxide (alliin), S-methylcysteine (SMC), and S-ethylcysteine (SEC). We used multiple reaction monitoring (MRM) to detect 13C,D2-formaldehyde-modified OSCs by ultrahigh-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS), obtaining MS spectra showing different ratios of 13C,D2-formaldehyde-modified and H2-formaldehyde-modified compounds. The resulting labeled and unlabeled OSCs exhibited correlation coefficients (R2) ranging from 0.9989 to 0.9994. The average recoveries for the four OSCs at three concentration levels ranged from 89% to 105%. With 13C,D2-formaldehyde and sodium cyanoborohydride, this reductive amination-based method can be used to generate novel internal standards for isotope dilution and to extend quantitative applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
1997-11-01
Contents include: training- and testing-related injuries; pre-tests; training; basic training vs. the experimental program; individual differences in responsiveness to training; injury risk in high-level ...; equipment used for training; and sample workouts, including a sample Monday and Thursday weightlifting and running workout.
Canon, Abbey J; Lauterbach, Nicholas; Bates, Jessica; Skoland, Kristin; Thomas, Paul; Ellingson, Josh; Ruston, Chelsea; Breuer, Mary; Gerardy, Kimberlee; Hershberger, Nicole; Hayman, Kristen; Buckley, Alexis; Holtkamp, Derald; Karriker, Locke
2017-06-15
OBJECTIVE To develop and evaluate a pyramid training method for teaching techniques for collection of diagnostic samples from swine. DESIGN Experimental trial. SAMPLE 45 veterinary students. PROCEDURES Participants went through a preinstruction assessment to determine their familiarity with the equipment needed and techniques used to collect samples of blood, nasal secretions, feces, and oral fluid from pigs. Participants were then shown a series of videos illustrating the correct equipment and techniques for collecting samples and were provided hands-on pyramid-based instruction wherein a single swine veterinarian trained 2 or 3 participants on each of the techniques and each of those participants, in turn, trained additional participants. Additional assessments were performed after the instruction was completed. RESULTS Following the instruction phase, percentages of participants able to collect adequate samples of blood, nasal secretions, feces, and oral fluid increased, as did scores on a written quiz assessing participants' ability to identify the correct equipment, positioning, and procedures for collection of samples. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that the pyramid training method may be a feasible way to rapidly increase diagnostic sampling capacity during an emergency veterinary response to a swine disease outbreak.
Ikeda, Mitsuru
2017-01-01
Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are very useful and much-needed processes. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Although most previous works have addressed these issues by applying semisupervised learning for the former and a word-based approach for the latter, they face complexity in the acquisition of initial labeled data and ignore the structured sequence of natural language. In this study, we propose automatic data labeling by distant supervision, where knowledge bases are exploited to assign an entity-level relation label for each drug-event pair in texts; we then use patterns to characterize the ADR relation. Multiple-instance learning with an expectation-maximization method is employed to estimate model parameters. The method applies transductive learning to iteratively reassign the probability of unknown drug-event pairs at training time. Through experiments on 50,998 discharge summaries, we evaluate our method by varying a large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. In these evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM with F1 score improvements of 11.3%, 9.3%, and 6.5%, respectively. PMID:29090077
Deep learning algorithms for detecting explosive hazards in ground penetrating radar data
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2014-05-01
Buried explosive hazards (BEHs) have been, and continue to be, one of the most deadly threats in modern conflicts. Current handheld sensors rely on a highly trained operator for them to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.
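The pretrain-then-fine-tune recipe described above can be sketched with scikit-learn, using a `BernoulliRBM` as a stand-in for one DBN layer pretrained without labels, followed by a supervised classifier. The toy data and all parameter values are assumptions for illustration; no GPR signals or the paper's actual DBN architecture are modeled.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Toy feature matrix in [0, 1), standing in for preprocessed sensor scans
X = rng.random((200, 64))
y = (X[:, :8].sum(axis=1) > 4.0).astype(int)  # synthetic "threat" label

model = Pipeline([
    # Unsupervised pretraining: the RBM learns a compressed representation
    # of the (unlabeled) inputs, forming the feature detectors.
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    # Supervised stage: fine-tune a predictive model on those features.
    ("clf", LogisticRegression(max_iter=500)),
])
model.fit(X, y)
```

In a full DBN several RBM layers would be stacked and the whole network fine-tuned by backpropagation; the single-layer pipeline only conveys the two-phase training idea.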
Characterization of Bovine Brain ATPase
1988-07-01
Only very small amounts of (3H)-ligand (0.8 fmol/mg protein) were observed to bind to the toxin, as indicated by Experiments E and F. Binding conditions included: (B) synaptic membranes + 3H-ligand + unlabelled ligand (7.6); (C) toxin + synaptic membranes + 3H-ligand (7.5); (D) toxin + synaptic membranes + 3H-ligand. Lectins and their sugar specificities: UEA = Ulex Europaeus Agglutinin, L-fucose; SBA = Soy Bean Agglutinin, D-galactose; LPA = Limulus Polyphemus Agglutinin, N-acetylgalactosamine; Con-A = Concanavalin-A, D-glucose.
Acetylcholinesterase and Acetylcholine Receptor.
1986-01-21
Trichloroethanol (Cl3CCH2OH) binds similarly to its carbon analogue, neopentyl alcohol, and chloral binds better than its carbon analogue, pivalaldehyde. Since the 3H-DFP was obtained in propylene glycol, the stability of DFP in this hydroxylic solvent, and thus its true concentration, was investigated. A solution of unlabeled DFP in propylene glycol was obtained from NEN for use in model experiments and was found to have no inhibiting activity.
Techniques for Exploiting Unlabeled Data
2008-10-01
In the past few decades, for many tasks the supply of information has outpaced our ability to effectively utilize it. This work considers a class of functions that contains kernel functions as a sub-class and shows that effective learning can be done in this framework.
Electrochemical impedance study of the interaction of metal ions with unlabeled PNA.
Gao, Lan; Li, Congjuan; Li, Xiaohong; Kraatz, Heinz-Bernhard
2010-09-14
The interactions of the metal ions Mg(2+), Zn(2+), Ni(2+), and Co(2+) with thin films of peptide nucleic acids (PNAs) were studied by electrochemical impedance spectroscopy (EIS). The results show that Zn(2+), Ni(2+), and Co(2+) interact favorably with the PNA film through both the backbone and the nucleobases, whereas for Mg(2+) the interaction with the backbone appears to be dominant.
Battlefield trauma care then and now: a decade of Tactical Combat Casualty Care.
2012-01-01
Advances include the use of tranexamic acid to help prevent death from noncompressible hemorrhage and the use of the Combat Ready Clamp to control junctional hemorrhage. The use as described of fentanyl lozenges, tranexamic acid, moxifloxacin, ertapenem, and cefotetan is unlabeled use under Food and Drug Administration regulations.
DOT National Transportation Integrated Search
1999-12-01
This manual has been developed as a training guide for field and laboratory technicians responsible for sampling and testing of soils used in roadway construction. Soils training and certification will increase the knowledge of laboratory, production...
[Perceptions about continuous training of Chilean health care teachers].
Pérez V, Cristhian; Fasce H, Eduardo; Coloma N, Katherine; Vaccarezza G, Giulietta; Ortega B, Javiera
2013-06-01
Continuous training of teachers, in discipline and pedagogical topics, is a key step to improve the quality of educational processes. To report the perception of Chilean teachers of undergraduate health care programs, about continuous training activities. Twenty teachers working at different undergraduate health care programs in Chile were interviewed. Maximum variation and theoretical sampling methods were used to select the sample. Data was analyzed by open coding, according to the Grounded Theory guidelines. Nine categories emerged from data analysis: Access to continuous training, meaning of training in discipline, activities of continuous training in discipline, meaning of continuous training in pedagogy, kinds of continuous training in pedagogy, quality of continuous training in pedagogy, ideal of continuous training in pedagogy, outcomes of continuous training in pedagogy and needs for continuous training in pedagogy. Teachers of health care programs prefer to participate in contextualized training activities. Also, they emphasize their need of training in evaluation and teaching strategies.
Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance
Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara
2016-01-01
Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' superior olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and to specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately. PMID:26863620
Hou, Sen; Sun, Lili; Wieczorek, Stefan A; Kalwarczyk, Tomasz; Kaminski, Tomasz S; Holyst, Robert
2014-01-15
Fluorescent double-stranded DNA (dsDNA) molecules labeled at both ends are commonly produced by annealing complementary single-stranded DNA (ssDNA) molecules labeled with fluorescent dyes at the same (3' or 5') end. Because the labeling efficiency of ssDNA is smaller than 100%, the resulting dsDNA molecules have two dyes, one dye, or none. Existing methods are insufficient to measure the percentage of the doubly-labeled dsDNA component in a fluorescent DNA sample; it is even difficult to distinguish the doubly-labeled component from the singly-labeled one. Accurate measurement of the percentage of doubly-labeled dsDNA is a critical prerequisite for quantitative biochemical measurements, one that has puzzled scientists for decades. We established a fluorescence correlation spectroscopy (FCS) system to measure the percentage of doubly labeled dsDNA (PDL) in the total fluorescent dsDNA pool. The method is based on comparative analysis of the given sample and a reference dsDNA sample prepared by adding a certain amount of unlabeled ssDNA to the original ssDNA solution. From the FCS autocorrelation functions, we obtain the number of fluorescent dsDNA molecules in the focal volume of the confocal microscope and PDL. We also calculate the labeling efficiency of the ssDNA. The method requires a minimal amount of material: the samples have DNA concentrations in the nanomolar range and volumes of tens of microliters. We verified our method by using the restriction enzyme Hind III to cleave the fluorescent dsDNA. The kinetics of the reaction depends strongly on PDL, a critical parameter for quantitative biochemical measurements. Copyright © 2013 Elsevier B.V. All rights reserved.
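If the two strands are labeled independently with efficiency p, binomial algebra gives the fraction of doubly-labeled duplexes within the fluorescent pool as p/(2 − p). This is a back-of-the-envelope sketch of the quantity PDL that the FCS method measures, not the authors' estimator; the function names are hypothetical.

```python
def fractions_after_annealing(p):
    """Assuming each strand is labeled independently with efficiency p, the
    annealed duplex carries two, one, or zero dyes with binomial weights."""
    return p * p, 2 * p * (1 - p), (1 - p) ** 2

def pdl(p):
    """Fraction of doubly-labeled dsDNA within the fluorescent pool only
    (dark duplexes are invisible to FCS): p^2 / (p^2 + 2p(1-p)) = p/(2-p)."""
    return p / (2 - p)
```

For example, a seemingly good 50% labeling efficiency yields only one-third doubly-labeled duplexes among the fluorescent ones, which is why measuring PDL directly matters for quantitative work.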
Formation of amino acids and nucleotide bases in a Titan atmosphere simulation experiment.
Hörst, S M; Yelle, R V; Buch, A; Carrasco, N; Cernogora, G; Dutuit, O; Quirico, E; Sciamma-O'Brien, E; Smith, M A; Somogyi, A; Szopa, C; Thissen, R; Vuitton, V
2012-09-01
The discovery of large (>100 u) molecules in Titan's upper atmosphere has heightened astrobiological interest in this unique satellite. In particular, complex organic aerosols produced in atmospheres containing C, N, O, and H, like that of Titan, could be a source of prebiotic molecules. In this work, aerosols produced in a Titan atmosphere simulation experiment with enhanced CO (N(2)/CH(4)/CO gas mixtures of 96.2%/2.0%/1.8% and 93.2%/5.0%/1.8%) were found to contain 18 molecules with molecular formulae that correspond to biological amino acids and nucleotide bases. Very high-resolution mass spectrometry of isotopically labeled samples confirmed that C(4)H(5)N(3)O, C(4)H(4)N(2)O(2), C(5)H(6)N(2)O(2), C(5)H(5)N(5), and C(6)H(9)N(3)O(2) are produced by chemistry in the simulation chamber. Gas chromatography-mass spectrometry (GC-MS) analyses of the non-isotopic samples confirmed the presence of cytosine (C(4)H(5)N(3)O), uracil (C(4)H(4)N(2)O(2)), thymine (C(5)H(6)N(2)O(2)), guanine (C(5)H(5)N(5)O), glycine (C(2)H(5)NO(2)), and alanine (C(3)H(7)NO(2)). Adenine (C(5)H(5)N(5)) was detected by GC-MS in isotopically labeled samples. The remaining prebiotic molecules were detected in unlabeled samples only and may have been affected by contamination in the chamber. These results demonstrate that prebiotic molecules can be formed by high-energy chemistry similar to that which occurs in planetary upper atmospheres, and therefore identify a new source of prebiotic material, potentially increasing the range of planets where life could begin.
Wang, Sha-Sha; Thornton, Keith; Kuhn, Andrew M; Nadeau, James G; Hellyer, Tobin J
2003-10-01
The BD ProbeTec ET System is based on isothermal strand displacement amplification (SDA) of target nucleic acid coupled with homogeneous real-time detection using fluorescent probes. We have developed a novel, rapid method using this platform that incorporates a universal detection format for identification of single-nucleotide polymorphisms (SNPs) and other genotypic variations. The system uses a common pair of fluorescent Detector Probes in conjunction with unlabeled allele-specific Adapter Primers and a universal buffer chemistry to permit analysis of multiple SNP loci under generic assay conditions. We used Detector Probes labeled with different dyes to facilitate differentiation of two alternative alleles in a single reaction with no postamplification manipulation. We analyzed six SNPs within the human beta(2)-adrenergic receptor (beta(2)AR) gene, using whole blood, buccal swabs, and urine samples, and compared results with those obtained by DNA sequencing. Unprocessed whole blood was successfully genotyped with as little as 0.1-1 µL of sample per reaction. All six beta(2)AR assays were able to accommodate ≥20 µL of unprocessed whole blood. For the 14 individuals tested, genotypes determined with the six beta(2)AR assays agreed with DNA sequencing results. SDA-based allelic differentiation on the BD ProbeTec ET System can detect SNPs rapidly, using whole blood, buccal swabs, or urine.
Melamine milk powder and infant formula sold in East Africa.
Schoder, Dagmar
2010-09-01
This is the first study demonstrating the presence of melamine in milk powder and infant formula exported to the African market. A total of 49 milk powder batches were collected in Dar-es-Salaam (Tanzania, East Africa), the center of international trade in East Africa, which serves as a commercial bottleneck and shipment hub for sub-Saharan, Central, and East Africa. Two categories of samples were collected between October and December 2008, immediately after the melamine contamination of Chinese products became public: (i) market brands of all international companies supplying the East African market and (ii) illegally sold products from informal channels. Melamine concentration was determined with the AgraQuant Melamine Sensitive Assay. Despite the national import prohibition of Chinese milk products and unlabeled milk powder in Tanzania, 11% (22 of 200) of inspected microretailers sold milk powder on the local black market. Manufacturers could be identified for only 55% (27) of the 49 investigated batches. Six percent (3 of 49) of all samples and 11% (3 of 27) of all international brand name products tested revealed melamine concentrations up to 5.5 mg/kg of milk powder. This amount represents about twice the tolerable daily intake suggested by the U.S. Food and Drug Administration. Based on our study, we can assume that the number of affected children in Africa is substantial.
NASA Technical Reports Server (NTRS)
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
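The motivation above, that covariance estimates improve as the number of training samples grows relative to the dimensionality, can be illustrated with a generic simulation. This is a numerical illustration of the phenomenon, not the paper's criterion; the helper name and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 10
TRUE_COV = np.eye(DIM)  # assumed ground-truth covariance: identity

def cov_error(n_samples, trials=200):
    """Average Frobenius-norm error of the sample covariance matrix of an
    N(0, I) sample in DIM dimensions, as a function of training-set size."""
    errs = []
    for _ in range(trials):
        x = rng.standard_normal((n_samples, DIM))
        errs.append(np.linalg.norm(np.cov(x, rowvar=False) - TRUE_COV))
    return float(np.mean(errs))
```

Comparing, say, `cov_error(15)` against `cov_error(200)` shows the estimation error shrinking as samples accumulate, which is the quantity a sample-size criterion must trade off against added feature dimensions.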
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when there is a sufficient number of representative training samples. In many real-life applications, such as passport identification, only one well-controlled frontal sample image is available for training. In this situation, the performance of existing algorithms degrades dramatically, or they cannot be applied at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples of lower dimension than the original image, but also account for face detection localization error during training. We then propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method, and results are encouraging.
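The "move each local region in four directions" augmentation can be sketched as follows. `shifted_bunch` is a hypothetical helper that uses circular shifts for simplicity; the paper's local-component extraction and subsequent subspace LDA step are not reproduced here.

```python
import numpy as np

def shifted_bunch(img, d=1):
    """Synthesize extra training samples from a single image by shifting it
    one step in each of four directions (plus the original), emulating the
    idea of generating a component 'bunch' from one training sample while
    accounting for localization error."""
    shifts = [(0, 0), (d, 0), (-d, 0), (0, d), (0, -d)]
    return np.stack([np.roll(img, s, axis=(0, 1)) for s in shifts])
```

Each local feature region would be expanded this way before the per-component discriminant projection is learned, turning one image into five correlated training samples per region.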
ERIC Educational Resources Information Center
LeMaster, W. Dean; Gray, Thomas H.
The purpose of this study was to develop a screening procedure for undergraduate pilot training (UPT). This procedure was based upon the use of ground-based instrument trainers in which UPT candidates, naive to flying, were evaluated in their performance of job sample tasks; i.e., basic instrument flying. Training and testing sessions were…
Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong
2016-01-01
Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to extending the mining targets is the cost of manually constructing labeled data, which are required by state-of-the-art supervised learning systems. Active learning chooses the most informative documents for supervised learning in order to reduce the amount of required manual annotation. Previous works on active learning, however, focused on the tasks of entity recognition and protein-protein interactions, not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue to the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents by informativeness for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativeness estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems, as follows: we first employ an event extraction system to filter potential false negatives among unlabeled documents, i.e., documents from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method to the task of named entity recognition.
We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than such previous methods as entropy and Gibbs error based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method into named entity recognition tasks also improves the document selection for manual annotation of named entities.
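The committee idea, annotating first the documents the committee members disagree on, can be illustrated with a generic vote-entropy ranking. This is a standard query-by-committee heuristic, not the authors' statistical ranking method; the function and committee members here are hypothetical.

```python
import math
from collections import Counter

def vote_entropy_rank(unlabeled, committee):
    """Query-by-committee sketch: rank items by vote entropy, i.e. by how
    much the committee members disagree on the predicted label. Items
    ranked first are the most informative candidates for manual annotation."""
    def entropy(item):
        votes = Counter(member(item) for member in committee)
        total = sum(votes.values())
        return -sum((c / total) * math.log(c / total)
                    for c in votes.values())
    return sorted(unlabeled, key=entropy, reverse=True)
```

A committee member can be any callable returning a label, so the same ranking applies whether the members are event extractors, language models, or entity recognizers, as in the two-system committee described above.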
A comparison of US and Australian men's values and preferences for PSA screening.
Howard, Kirsten; Brenner, Alison T; Lewis, Carmen; Sheridan, Stacey; Crutchfield, Trisha; Hawley, Sarah; Nielsen, Matthew E; Pignone, Michael P
2013-10-05
Patient preferences derived from an assessment of values can help inform the design of screening programs, but how best to do so, and whether such preferences differ cross-nationally, has not been well examined. The objective of this study was to compare the values and preferences of Australian and US men regarding PSA (prostate-specific antigen) screening. We used an internet-based survey of men aged 50-75 with no personal or family history of prostate cancer, recruited from online panels of a survey research organization in the US and Australia. Participants viewed information on prostate cancer and prostate cancer screening with PSA testing, then completed a values clarification task that included information on 4 key attributes: the chance of 1) being diagnosed with prostate cancer, 2) dying from prostate cancer, 3) requiring a biopsy as a result of screening, and 4) developing impotence or incontinence as a result of screening. The outcome measures were the self-reported most important attribute, unlabelled screening test choice, and labelled screening intent, assessed on post-task questionnaires. We enrolled 911 participants (US: 456; AU: 455); mean age was 59.7; 88.0% were white; 36.4% had completed at least a Bachelor's degree; 42.0% reported a PSA test in the past 12 months. Australian men were more likely to be white and to have had recent screening. For both US and Australian men, the most important attribute was the chance of dying from prostate cancer. Unlabelled post-task preference for the PSA-screening-like option was greater among Australian (39.1%) than US (26.3%) participants (adjusted OR 1.68 (1.28-2.22)). Labelled intent for screening was high in both countries: US: 73.7%, AUS: 78.0% (p = 0.308). There was high intent for PSA screening among both US and Australian men; fewer men in each country chose the PSA-like option on the unlabelled question. Australian men were somewhat more likely to prefer PSA screening.
Men in both countries did not view the increased risk of diagnosis as a negative aspect, suggesting more work needs to be done on communicating the concept of overdiagnosis to men facing a PSA screening decision. This trial was registered at ClinicalTrials.gov (NCT01558583).
Microbial transformation of nitroaromatics in surface soils and aquifer materials
Bradley, P.M.; Chapelle, F.H.; Landmeyer, J.E.; Schumacher, J.G.
1994-01-01
Microorganisms indigenous to surface soils and aquifer materials collected at a munitions-contaminated site transformed 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (2,4-DNT), and 2,6-dinitrotoluene (2,6-DNT) to amino-nitro intermediates within 20 to 70 days. Carbon mineralization studies with both unlabeled (TNT, 2,4-DNT, and 2,6-DNT) and radiolabeled ([14C]TNT) substrates indicated that a significant fraction of these source compounds was degraded to CO2.
Methods for Integrating Environmental Awareness Training into Army Programs of Instruction
1993-06-01
[Scanned front matter garbled; recoverable appendix listing:] Appendix E. Training Support Package (E-1-E-19); Appendix F. Sample of Officer Basic Course Instructor's Lesson Plan with Embedded Information (F-1-F-7); Appendix G. Samples of Situational Training Exercises (G-1-G-9); Appendix H. Samples of Pre-Command Course Guest Speaker
Semi-supervised protein subcellular localization.
Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang
2009-01-30
Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate prediction of the subcellular localization of proteins can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. A major drawback of these machine learning-based approaches, however, is that a large amount of data must be labeled in order for the prediction system to learn a classifier with good generalization ability. In real-world cases, it is laborious, expensive and time-consuming to experimentally determine the subcellular localization of a protein and prepare instances of labeled data. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use far fewer labeled instances to construct a high-quality prediction model. We first construct an initial classifier using a small set of labeled examples, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our methods can effectively reduce the workload of labeling data by using unlabeled data. Our method is shown to enhance the state-of-the-art prediction results of SVM classifiers by more than 10%.
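The labeled-then-refine strategy described in this abstract is an instance of self-training. The following is a minimal sketch, not the authors' exact method; the nearest-centroid base classifier, the exponential-distance confidence score, and the 0.9 threshold are illustrative assumptions:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, max_iter=10):
    """Iteratively absorb confidently predicted unlabeled points
    into the training set, then retrain."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(max_iter):
        if len(pool) == 0:
            break
        # nearest-centroid classifier with a softmax-style confidence
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(pool[:, None, :] - centroids[None, :, :], axis=2)
        proba = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf = proba.max(axis=1)
        pred = classes[proba.argmax(axis=1)]
        take = conf >= threshold
        if not take.any():
            break
        # grow the labeled set with confident pseudo-labels
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, pred[take]])
        pool = pool[~take]
    return X, y
```

In the paper's setting the base learner would be an SVM and the features protein-sequence descriptors; the loop structure is the same.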
Zeng, Y; Shabalin, Y; Szumilo, T; Pastuszak, I; Drake, R R; Elbein, A D
1996-07-15
The chemical synthesis and utilization of two photoaffinity analogs, 125I-labeled 5-[3-(p-azidosalicylamido)-1-propenyl]-UDP-GlcNAc and -UDP-GalNAc, is described. Starting with either UDP-GlcNAc or UDP-GalNAc, the synthesis involved the preparation of the 5-mercuri-UDP-HexNAc and then attachment of an allylamine to the 5 position to give 5-(3-amino)allyl-UDP-HexNAc. This was followed by acylation with N-hydroxysuccinimide p-aminosalicylic acid to form the final product, i.e., 5-[3-(p-azidosalicylamido)-1-propenyl]-UDP-GlcNAc or UDP-GalNAc. These products could then be iodinated with chloramine T to give the 125I-derivatives. Both the UDP-GlcNAc and the UDP-GalNAc derivatives reacted in a concentration-dependent manner with a highly purified UDP-HexNAc pyrophosphorylase, and both specifically labeled the subunit(s) of this protein. The labeling of the protein by the UDP-GlcNAc derivative was inhibited in dose-dependent fashion by either unlabeled UDP-GlcNAc or unlabeled UDP-GalNAc. Likewise, labeling with the UDP-GalNAc probe was blocked by either UDP-GlcNAc or UDP-GalNAc. The UDP-GlcNAc probe also specifically labeled a partially purified preparation of GlcNAc transferase I.
Zepeda, Isaac; Sánchez-López, Rosana; Kunkel, Joseph G; Bañuelos, Luis A; Hernández-Barrera, Alejandra; Sánchez, Federico; Quinto, Carmen; Cárdenas, Luis
2014-03-01
Legume plants secrete signaling molecules called flavonoids into the rhizosphere. These molecules activate the transcription of rhizobial nod genes, which encode proteins involved in the synthesis of signaling compounds named Nod factors (NFs). NFs, in turn, trigger changes in plant gene expression, cortical cell dedifferentiation and mitosis, depolarization of the root hair cell membrane potential and rearrangement of the actin cytoskeleton. Actin polymerization plays an important role in apical growth in hyphae and pollen tubes. Using sublethal concentrations of fluorescently labeled cytochalasin D (Cyt-Fl), we visualized the distribution of filamentous actin (F-actin) plus ends in living Phaseolus vulgaris and Arabidopsis root hairs during apical growth. We demonstrated that Cyt-Fl specifically labeled the newly available plus ends of actin microfilaments, which probably represent sites of polymerization. The addition of unlabeled competing cytochalasin reduced the signal, suggesting that the labeled and unlabeled forms of the drug bind to the same site on F-actin. Exposure to Rhizobium etli NFs resulted in a rapid increase in the number of F-actin plus ends in P. vulgaris root hairs and in the re-localization of F-actin plus ends to infection thread initiation sites. These data suggest that NFs promote the formation of F-actin plus ends, which results in actin cytoskeleton rearrangements that facilitate infection thread formation.
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.
Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku
2017-07-01
Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
Louwagie, Mathilde; Kieffer-Jaquinod, Sylvie; Dupierris, Véronique; Couté, Yohann; Bruley, Christophe; Garin, Jérôme; Dupuis, Alain; Jaquinod, Michel; Brun, Virginie
2012-07-06
Accurate quantification of pure peptides and proteins is essential for biotechnology, clinical chemistry, proteomics, and systems biology. The reference method to quantify peptides and proteins is amino acid analysis (AAA). This consists of an acidic hydrolysis followed by chromatographic separation and spectrophotometric detection of amino acids. Although widely used, this method displays some limitations, in particular the need for large amounts of starting material. Driven by the need to quantify isotope-dilution standards used for absolute quantitative proteomics, particularly stable isotope-labeled (SIL) peptides and PSAQ proteins, we developed a new AAA assay (AAA-MS). This method requires neither derivatization nor chromatographic separation of amino acids. It is based on rapid microwave-assisted acidic hydrolysis followed by high-resolution mass spectrometry analysis of amino acids. Quantification is performed by comparing MS signals from labeled amino acids (SIL peptide- and PSAQ-derived) with those of unlabeled amino acids originating from co-hydrolyzed NIST standard reference materials. For both SIL peptides and PSAQ standards, AAA-MS quantification results were consistent with classical AAA measurements. Compared to AAA assay, AAA-MS was much faster and was 100-fold more sensitive for peptide and protein quantification. Finally, thanks to the development of a labeled protein standard, we also extended AAA-MS analysis to the quantification of unlabeled proteins.
Safe semi-supervised learning based on weighted likelihood.
Kawakita, Masanori; Takeuchi, Jun'ichi
2014-05-01
We are interested in developing a safe semi-supervised learning method that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all previous semi-supervised methods require additional assumptions (not only unlabeled data) to improve on supervised learning. If such assumptions are not met, these methods can perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation: classification, discrete covariates, n′ → ∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wider range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′
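The weighted-likelihood idea can be illustrated in the discrete-covariate classification setting the abstract mentions: labeled samples are reweighted by the ratio of covariate frequencies estimated from the (large) unlabeled pool versus the labeled set. This is an illustrative sketch only; the precise weight used by Sokolovska et al. and by this paper differs:

```python
from collections import Counter

def weighted_class_probs(x_lab, y_lab, x_unlab):
    """Weighted ML estimate of class probabilities for discrete covariates."""
    n_l, n_u = len(x_lab), len(x_unlab)
    # empirical covariate frequencies from labeled and unlabeled data
    f_l, f_u = Counter(x_lab), Counter(x_unlab)
    # weight w(x) = p_unlabeled(x) / p_labeled(x); the unlabeled pool gives
    # a better estimate of the covariate distribution when n' >> n
    w = [(f_u[x] / n_u) / (f_l[x] / n_l) for x in x_lab]
    classes = sorted(set(y_lab))
    total = sum(w)
    return {c: sum(wi for wi, yi in zip(w, y_lab) if yi == c) / total
            for c in classes}
```

With weights identically 1 this reduces to the ordinary supervised estimate, which is the sense in which such schemes can be "safe".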
NASA Astrophysics Data System (ADS)
Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun
2017-11-01
The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to produce diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance, with higher accuracy and stability, than the traditional approaches.
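The coarse-grained procedure that produces the multiple scale signals is commonly implemented by averaging non-overlapping windows. A sketch of that step alone, assuming window-mean coarse-graining (the sparse filtering stage that learns features from each view is not shown):

```python
import numpy as np

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`,
    the standard coarse-graining step used to build multiscale views."""
    n = len(signal) // scale
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

def multiscale_views(signal, scales=(1, 2, 4, 8)):
    """One coarse-grained copy of the raw signal per scale."""
    return [coarse_grain(np.asarray(signal, float), s) for s in scales]
```

Each view would then be passed to the unsupervised feature learner, and the per-scale features concatenated before classification.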
Characterization and distribution of natriuretic peptide receptors in the rat uterus.
Dos Reis, A M; Fujio, N; Dam, T V; Mukaddam-Daher, S; Jankowski, M; Tremblay, J; Gutkowska, J
1995-10-01
Atrial natriuretic peptide (ANP) receptors were characterized in rat uterus. The binding of [125I]ANP to uterine membranes was completely competed for by increasing concentrations of unlabeled ANP (Kd = 0.39 nM) and brain natriuretic peptide (Kd = 1.24 nM) and partially by C-type natriuretic peptide (CNP; Kd = 80.4 nM), but not by C-ANF. Also, [125I]Tyr-CNP bound to uterine membranes was completely competed by unlabeled CNP (Kd = 1.12 nM). Cross-linking of [125I]ANP to uterine membranes revealed the presence of one band of 130 kilodaltons, corresponding to the guanylyl cyclase (GC-A and/or GC-B) subtypes of natriuretic peptide receptors. The presence of messenger RNA coding for genes of both GC-A and GC-B receptors was shown by quantitative reverse transcriptase polymerase chain reaction. Furthermore, ANP and, to a lesser degree, CNP stimulated the production of cGMP in rat uterus. Autoradiographic studies localized the highest binding of [125I]ANP in the endometrium, whereas [125I]Tyr-CNP binding was distributed in the endometrium as well as in the myometrium. These results demonstrate that rat uterine ANP receptors are of the guanylyl cyclase-coupled subtypes. The uterus is a target of natriuretic peptides where ANP induces its biological effects through the production of cGMP.
Whole slide imaging of unstained tissue using lensfree microscopy
NASA Astrophysics Data System (ADS)
Morel, Sophie Nhu An; Hervé, Lionel; Bordy, Thomas; Cioni, Olivier; Delon, Antoine; Fromentin, Catherine; Dinten, Jean-Marc; Allier, Cédric
2016-04-01
Pathologist examination of tissue slides provides insightful information about a patient's disease. Traditional analysis of tissue slides is performed under a binocular microscope, which requires staining of the sample and delays the examination. We present a simple, cost-effective lensfree imaging method to record 2-4 μm resolution wide-field (10 mm2 to 6 cm2) images of unstained tissue slides. The sample processing time is reduced as there is no need for staining. A wide field of view (10 mm2) lensfree hologram is recorded in a single shot and the image is reconstructed in 2 s, providing a very fast acquisition chain. The acquisition is multispectral, i.e. multiple holograms are recorded simultaneously at three different wavelengths, and a dedicated holographic reconstruction algorithm is used to retrieve both amplitude and phase. Whole-slide imaging is obtained by recording 130 holograms with X-Y translation stages and by computing the mosaic of a 25 x 25 mm2 reconstructed image. The reconstructed phase provides a phase-contrast-like image of the unstained specimen, revealing structures of healthy and diseased tissue. Slides from various organs can be reconstructed, e.g. lung, colon, ganglion. To our knowledge, our method is the first technique that enables fast wide-field lensfree imaging of such unlabeled dense samples. This technique is much cheaper and more compact than a conventional phase contrast microscope and could be made portable. In sum, we present a new methodology that could quickly provide useful information when a rapid diagnosis is needed, such as tumor margin identification on frozen section biopsies during surgery.
Örbom, Anders; Eriksson, Sophie E; Elgström, Erika; Ohlsson, Tomas; Nilsson, Rune; Tennvall, Jan; Strand, Sven-Erik
2013-08-01
The therapeutic effect of radioimmunotherapy depends on the distribution of the absorbed dose in relation to viable cancer cells within the tumor, which in turn is a function of the activity distribution. The aim of this study was to investigate the distribution of (177)Lu-DOTA-BR96 monoclonal antibodies targeting the Lewis Y antigen over 7 d using a syngeneic rat model of colon carcinoma. Thirty-eight tumor-bearing rats were intravenously given 25 or 50 MBq of (177)Lu-DOTA-BR96 per kilogram of body weight and were sacrificed 2, 8, 24, 48, 72, 96, 120, or 168 h after injection, with activity measured in blood and tumor samples. Adjacent cryosections of each tumor were analyzed in 3 ways: imaging using a silicon-strip detector for digital autoradiography, staining for histologic characterization, or staining to determine the distribution of the antigen, vasculature, and proliferating cells using immunohistochemistry. Absorbed-dose rate distribution images at the moment of sacrifice were calculated using the activity distribution and a point-dose kernel. The correlations between antigen expression and both activity uptake and absorbed-dose rate were calculated for several regions of interest in each tumor. Nine additional animals with tumors were given unlabeled antibody to evaluate possible immunologic effects. At 2-8 h after injection, activity was found in the tumor margins; at 24 h, in viable antigen-expressing areas within the tumor; and at 48 h and later, increasingly in antigen-negative areas of granulation tissue. The correlation between antigen expression and both the mean activity and the absorbed-dose rate in regions of interest changed from positive to negative after 24 h after injection. Antigen-negative areas also increased over time in animals injected with unlabeled BR96, compared with untreated tumors. The results indicate that viable Lewis Y-expressing tumor cells are most efficiently treated during the initial uptake period. 
The activity then seems to remain in these initial uptake regions after the elimination of tumor cells and formation of granulation tissue. Further studies using these techniques could aid in determining the effects of the intratumoral activity distribution on overall therapeutic efficacy.
Short communication: Ability of dogs to detect cows in estrus from sniffing saliva samples.
Fischer-Tenhagen, C; Tenhagen, B-A; Heuwieser, W
2013-02-01
Efficient estrus detection in high-producing dairy cows is a permanent challenge for successful reproductive performance. In former studies, dogs have been trained to identify estrus-specific odor in vaginal fluid, milk, urine, and blood samples under laboratory conditions with an accuracy of more than 80%. For on-farm utilization of estrus-detection dogs it would be beneficial in terms of hygiene and safety if dogs could identify cows from the feed alley. The objective of this proof of concept study was to test if dogs can be trained to detect estrus-specific scent in saliva of cows. Saliva samples were collected from cows in estrus and diestrus. Thirteen dogs of various breeds and both sexes were trained in this study. Five dogs had no experience in scent detection, whereas 8 dogs had been formerly trained for detection of narcotics or cancer. In the training and test situation, dogs had to detect 1 positive out of 4 samples. Dog training was based on positive reinforcement and dogs were rewarded with a clicker and food for indicating saliva samples of cows in estrus. A false indication was ignored and documented in the test situation. Dogs with and without prior training were trained for 1 and 5 d, respectively. For determining the accuracy of detection, the position of the positive sample was unknown to the dog handler, to avoid hidden cues to the dog. The overall percentage of correct positive indications was 57.6% (175/304), with a range from 40 (1 dog) to 75% (3 dogs). To our knowledge, this is the first indication that dogs are able to detect estrus-specific scent in saliva of cows. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Over-Selectivity as a Learned Response
ERIC Educational Resources Information Center
Reed, Phil; Petrina, Neysa; McHugh, Louise
2011-01-01
An experiment investigated the effects of different levels of task complexity in pre-training on over-selectivity in a subsequent match-to-sample (MTS) task. Twenty human participants were divided into two groups; exposed either to a 3-element, or a 9-element, compound stimulus as a sample during MTS training. After the completion of training,…
Dynamic spiking studies using the DNPH sampling train
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steger, J.L.; Knoll, J.E.
1996-12-31
The proposed aldehyde and ketone sampling method using aqueous 2,4-dinitrophenylhydrazine (DNPH) was evaluated in the laboratory and in the field. The sampling trains studied were based on the train described in SW-846 Method 0011. Nine compounds were evaluated: formaldehyde, acetaldehyde, quinone, acrolein, propionaldehyde, methyl isobutyl ketone, methyl ethyl ketone, acetophenone, and isophorone. In the laboratory, the trains were spiked both statically and dynamically. Laboratory studies also investigated potential interferences to the method. Based on their potential to hydrolyze in acid solution to form formaldehyde, dimethylolurea, saligenin, s-trioxane, hexamethylenetetramine, and paraformaldehyde were investigated. Ten runs were performed using quadruplicate sampling trains. Two of the four trains were dynamically spiked with the nine aldehydes and ketones. The test results were evaluated using the EPA Method 301 criteria for method precision (≤ ±50% relative standard deviation) and bias (correction factor of 1.00 ± 0.30).
Wu, Dongrui; Lance, Brent J; Parsons, Thomas D
2013-01-01
Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both k nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.
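The collaborative filtering step can be sketched as follows. This is an illustrative reading of a mean-squared-difference similarity heuristic, comparing subjects via per-feature means, which is an assumption rather than the paper's exact formula:

```python
import numpy as np

def msd_similarity(a, b):
    """Similarity between two subjects' feature matrices based on the
    mean squared difference of their per-feature means (higher = more
    similar); an illustrative choice of comparison."""
    return 1.0 / (1.0 + np.mean((a.mean(axis=0) - b.mean(axis=0)) ** 2))

def build_training_set(target_X, target_y, others, k=2):
    """Augment the target subject's few labeled samples with data from
    the k most similar auxiliary subjects; `others` is a list of
    (features, labels) pairs."""
    ranked = sorted(others, key=lambda o: -msd_similarity(target_X, o[0]))
    Xs = [target_X] + [X for X, _ in ranked[:k]]
    ys = [target_y] + [y for _, y in ranked[:k]]
    return np.vstack(Xs), np.concatenate(ys)
```

The combined set would then be fed to the kNN or SVM classifier, reducing how many user-specific samples must be collected.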
Train the Trainer. Facilitator Guide Sample. Basic Blueprint Reading (Chapter One).
ERIC Educational Resources Information Center
Saint Louis Community Coll., MO.
This publication consists of three sections: facilitator's guide--train the trainer, facilitator's guide sample--Basic Blueprint Reading (Chapter 1), and participant's guide sample--basic blueprint reading (chapter 1). Section I addresses why the trainer should learn new classroom techniques; lecturing versus facilitating; learning styles…
Some approaches to optimal cluster labeling of aerospace imagery
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1980-01-01
Some approaches are presented to the problem of labeling clusters using information from a given set of labeled and unlabeled aerospace imagery patterns. The assignment of class labels to the clusters is formulated as finding the best assignment, over all possible assignments, with respect to some criterion. Cluster labeling is also viewed as maximizing the probability of correct labeling through a likelihood function. Results of the application of these techniques to the processing of remotely sensed multispectral scanner imagery data are presented.
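The "best assignment over all possible ones" can be illustrated by a brute-force search over label-to-cluster assignments that maximizes agreement with the labeled samples. This is one plausible criterion, not necessarily the paper's; for many clusters the Hungarian algorithm would replace the exhaustive search:

```python
import numpy as np
from itertools import permutations

def best_cluster_labels(cluster_ids, true_labels):
    """Exhaustively search assignments of one class label per cluster
    and keep the one agreeing with the most labeled samples.
    Assumes at least as many classes as clusters."""
    clusters = np.unique(cluster_ids)
    classes = np.unique(true_labels)
    best, best_score = None, -1
    for perm in permutations(classes, len(clusters)):
        score = sum(int(np.sum((cluster_ids == k) & (true_labels == c)))
                    for k, c in zip(clusters, perm))
        if score > best_score:
            best, best_score = dict(zip(clusters, perm)), score
    return best
```

Unlabeled patterns in a cluster then inherit that cluster's assigned class label.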
Rapid screening for plasmid DNA.
Hughes, C; Meynell, G G
1977-03-07
A procedure is described for demonstrating plasmid DNA and its molecular weight, based on rate zonal centrifugation of unlabelled DNA in neutral sucrose gradients containing a low concentration of ethidium bromide. Each DNA species is then visualized as a discrete fluorescent band when the centrifuge tube is illuminated with ultra-violet light. Plasmids exist as closed circular and as relaxed circular molecules, which sediment separately, but during preparation of lysates, closed circular molecules are nicked so that each plasmid forms only a single band of relaxed circles within the gradient.
NASA Astrophysics Data System (ADS)
Golfinopoulos, A.; Soupioni, M.; Kanellaki, M.; Koutinas, A. A.
2008-08-01
The effect of initial lactose concentration on the lactose uptake rate by kefir free cells during lactose fermentation was studied in this work. 14C-labelled lactose was used for the investigation, because labeled and unlabeled molecules are fermented in the same way. The results showed that lactose uptake rates are up to twofold higher at lower initial °Bé densities than at higher initial °Bé densities.
Synthesis of (125)I-lamivudine and (125)I-lamivudine-ursodeoxycholic acid codrug.
Motaleb, M A; Abo-Kul, M; Ibrahim, Samy M; Saad, Shokry M; Arafat, Muhammad
2016-09-01
The preparation of (125)I-lamivudine ((125)I-3TC) and (125)I-lamivudine-ursodeoxycholic acid codrug ((125)I-3TC-UDCA), suitable for comparative biodistribution studies, is described. The synthesis of the unlabeled precursor 3TC-UDCA proceeds in an 11.6% yield, and the radiolabelling yields for (125)I-3TC and (125)I-3TC-UDCA were 89 and 92%, respectively. The final products are radiochemically pure (greater than 98%). Copyright © 2016 John Wiley & Sons, Ltd.
Evaluation of a Traffic Sign Detector by Synthetic Image Data for Advanced Driver Assistance Systems
NASA Astrophysics Data System (ADS)
Hanel, A.; Kreuzpaintner, D.; Stilla, U.
2018-05-01
Recently, several synthetic image datasets of street scenes have been published. These datasets contain various traffic signs and can therefore be used to train and test machine learning-based traffic sign detectors. In this contribution, selected datasets are compared regarding their applicability for traffic sign detection. The comparison covers the process used to produce the synthetic images, addressing the virtual worlds needed to produce them and their environmental conditions, as well as variations in the appearance of traffic signs and the labeling strategies used for the datasets. A deep learning traffic sign detector is trained with multiple training datasets with different ratios between synthetic and real training samples to evaluate the synthetic SYNTHIA dataset. A test of the detector on real samples only has shown that an overall accuracy and ROC AUC of more than 95% can be achieved for both a small and a large rate of synthetic samples in the training dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasztor, G.; Schmidt, C.
The behavior of NbTi superconductors under dynamic mechanical stress was investigated. A training effect was found in short-sample tests when the conductor was strained in a magnetic field and with a transport current applied. Possible mechanisms are discussed which were proposed to explain training in short samples and in magnets. A stress-induced microplastic as well as an incomplete pseudoelastic behavior of NbTi was detected by monitoring acoustic emission. The experiments support the hypothesis that microplastic or shape memory effects in NbTi involving dislocation processes are responsible for training. The minimum energy needed to induce a normal transition in short-sample tests is calculated with a computer program, which gives the exact solution of the heat equation. A prestrain treatment of the conductor at room temperature is shown to be a simple method of reducing training of short samples and of magnets. This is direct proof that the same mechanisms are involved in both cases.
Sample selection via angular distance in the space of the arguments of an artificial neural network
NASA Astrophysics Data System (ADS)
Fernández Jaramillo, J. M.; Mayerle, R.
2018-05-01
In the construction of an artificial neural network (ANN), proper splitting of the available samples plays a major role in the training process. This selection of subsets for training, testing and validation affects the generalization ability of the neural network. The number of samples also has an impact on the time required for the design and training of the ANN. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients, while keeping the diversity and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed between samples is smaller than a defined threshold, only one input is accepted for training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the quality of the outputs is poor and depends on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are greatly reduced. A comparison with the K-means clustering algorithm shows that for this application the proposed method produces a more accurate network with a smaller number of samples.
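The acceptance rule described above can be sketched as a greedy filter over the input vectors (a minimal illustration; the 5° threshold and the first-come ordering are assumptions):

```python
import numpy as np

def select_by_angle(samples, threshold_deg=5.0):
    """Greedily accept a sample only if its angle to every already
    accepted sample exceeds the threshold, so near-parallel inputs
    collapse to a single representative."""
    accepted = []
    thr = np.deg2rad(threshold_deg)
    for x in samples:
        x = np.asarray(x, float)
        ok = True
        for a in accepted:
            cos = np.dot(x, a) / (np.linalg.norm(x) * np.linalg.norm(a))
            # clip guards against round-off outside [-1, 1]
            if np.arccos(np.clip(cos, -1.0, 1.0)) < thr:
                ok = False
                break
        if ok:
            accepted.append(x)
    return accepted
```

As the abstract notes, the result depends on which sample is encountered first; iterating over different starting orders reduces that sensitivity.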
ANALYSIS RESULTS FOR BUILDING 241 702-AZ A TRAIN
DOE Office of Scientific and Technical Information (OSTI.GOV)
DUNCAN JB; FRYE JM; COOKE CA
2006-12-13
This report presents the analysis results for three samples obtained under RPP-PLAN-28509, Sampling and Analysis Plan for Building 241 702-AZ A Train. The sampling and analysis were done in response to problem evaluation request number PER-2004-6139, 702-AZ Filter Rooms Need Radiological Cleanup Efforts.
Preston, Tom
2014-01-01
This paper discusses some of the recent improvements in instrumentation used for stable isotope tracer measurements in the context of measuring retinol stores, in vivo. Tracer costs, together with concerns that larger tracer doses may perturb the parameter under study, demand that ever more sensitive mass spectrometric techniques are developed. GCMS is the most widely used technique. It has high sensitivity in terms of sample amount and uses high resolution GC, yet its ability to detect low isotope ratios is limited by background noise. LCMSMS may become more accessible for tracer studies. Its ability to measure low level stable isotope tracers may prove superior to GCMS, but it is isotope ratio MS (IRMS) that has been designed specifically for low level stable isotope analysis through accurate analysis of tracer:tracee ratios (the tracee being the unlabelled species). Compound-specific isotope analysis, where GC is interfaced to IRMS, is gaining popularity. Here, individual 13C-labelled compounds are separated by GC, combusted to CO2 and transferred on-line for ratiometric analysis by IRMS at the ppm level. However, commercially-available 13C-labelled retinol tracers are 2 - 4 times more expensive than deuterated tracers. For 2H-labelled compounds, GC-pyrolysis-IRMS has now become more generally available as an operating mode on the same IRMS instrument. Here, individual compounds are separated by GC and pyrolysed to H2 at high temperature for analysis by IRMS. It is predicted that GC-pyrolysis-IRMS will facilitate low level tracer procedures to measure body retinol stores, as has been accomplished in the case of fatty acids and amino acids. Sample size requirements for GC-P-IRMS may exceed those of GCMS, but this paper discusses sample preparation procedures and predicts improvements, particularly in the efficiency of sample introduction.
Manuilov, Anton V; Radziejewski, Czeslaw H
2011-01-01
Comparability studies lie at the heart of assessments that evaluate differences amongst manufacturing processes and stability studies of protein therapeutics. Low resolution chromatographic and electrophoretic methods facilitate quantitation, but do not always yield detailed insight into the effect of the manufacturing change or environmental stress. Conversely, mass spectrometry (MS) can provide high resolution information on the molecule, but conventional methods are not very quantitative. This gap can be reconciled by use of a stable isotope-tagged reference standard (SITRS), a version of the analyte protein that is uniformly labeled with 13C6-arginine and 13C6-lysine. The SITRS serves as an internal control that is trypsin-digested and analyzed by liquid chromatography (LC)-MS with the analyte sample. The ratio of the ion intensities of each unlabeled and labeled peptide pair is then compared to that of other sample(s). A comparison of these ratios provides a readily accessible way to spot even minute differences among samples. In a study of a monoclonal antibody (mAb) spiked with varying amounts of the same antibody bearing point mutations, peptides containing the mutations were readily identified and quantified at concentrations as low as 2% relative to unmodified peptides. The method was robust, reproducible and produced a linear response for every peptide that was monitored. The method was also successfully used to distinguish between two batches of a mAb that were produced in two different cell lines while two batches produced from the same cell line were found to be highly comparable. Finally, the use of the SITRS method in the comparison of two stressed mAb samples enabled the identification of sites susceptible to deamidation and oxidation, as well as their quantitation. 
The experimental results indicate that use of a SITRS in a peptide mapping experiment with MS detection enables sensitive and quantitative comparability studies of proteins at high resolution. PMID:21654206
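The quantitative comparison described above reduces, per tryptic peptide, to a ratio of ratios: the unlabeled/labeled ion-intensity ratio within each sample, compared across samples. A minimal sketch of that bookkeeping follows; the data layout and peptide names are illustrative, not from the paper.

```python
def sitrs_compare(sample_a, sample_b):
    # For each peptide, compute the unlabeled/labeled ion-intensity
    # ratio in each sample, then the ratio of those ratios.
    # Values near 1.0 indicate comparable samples; deviations flag
    # differences such as mutations, deamidation or oxidation.
    out = {}
    for pep in sample_a:
        ratio_a = sample_a[pep]['unlabeled'] / sample_a[pep]['labeled']
        ratio_b = sample_b[pep]['unlabeled'] / sample_b[pep]['labeled']
        out[pep] = ratio_a / ratio_b
    return out
```

Because the SITRS internal standard is spiked into every sample before digestion, run-to-run intensity variation cancels in the first ratio, which is what makes the cross-sample comparison sensitive to small differences.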
Estimating the circuit delay of FPGA with a transfer learning method
NASA Astrophysics Data System (ADS)
Cui, Xiuhai; Liu, Datong; Peng, Yu; Peng, Xiyuan
2017-10-01
With the increase in FPGA (Field Programmable Gate Array) functionality, the FPGA has become an on-chip system platform. Owing to this increased complexity, estimating the delay of an FPGA is a very challenging task. To address this, we propose a transfer learning estimation delay (TLED) method that simplifies delay estimation across FPGAs of different speed grades. FPGAs of the same family but different speed grades come from the same process and layout, so their delays are correlated. Therefore, one speed grade is chosen as the basic training sample set in this paper. Training samples for other speed grades can be derived from the basic training samples through transfer learning. At the same time, we also select a few target FPGA samples as training samples. A general predictive model is trained on these samples, so a single estimation model can estimate the circuit delay of FPGAs of different speed grades. The TLED framework includes three phases: 1) building a basic circuit delay library of multipliers, adders, shifters, and so on, which are used to train the predictive model; 2) selecting the random forest algorithm to train the predictive model, based on comparative experiments among different algorithms; 3) predicting the target circuit delay with the predictive model. The Artix-7, Kintex-7, and Virtex-7 were selected for the experiments, each including the -1, -2, -2l, and -3 speed grades. The experiments show that the delay estimation accuracy score is more than 92% with the TLED method, indicating that TLED is a feasible, efficient and effective delay assessment method, especially at the high-level synthesis stage of FPGA tools.
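The paper's TLED model is a random forest trained on mixed-grade samples. As a toy illustration of the underlying transfer idea only, and not the authors' method, one can fit a single delay scale factor from a few paired base-grade/target-grade measurements and use it to map base-grade delays onto another speed grade; all names and numbers below are hypothetical.

```python
def transfer_delay_estimates(base_delays, paired_base, paired_target):
    # Fit one scale factor from a few paired delay measurements of
    # the same circuits on the base and target speed grades, then
    # rescale the remaining base-grade delays. (Toy stand-in for the
    # paper's random-forest model.)
    scale = sum(t / b for b, t in zip(paired_base, paired_target)) / len(paired_base)
    return [d * scale for d in base_delays]
```

The appeal of any such transfer scheme is the same as in the paper: most training data is collected once on one speed grade, and only a few target-grade samples are needed.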
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
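The study quantifies classifier performance by the area under the ROC curve, Az. As a reminder of what that metric computes, here is the standard Mann-Whitney estimate of the ROC area from raw classifier scores; this is a generic sketch, not the authors' simulation code.

```python
def auc(scores_pos, scores_neg):
    # Mann-Whitney estimate of the ROC area (Az): the probability
    # that a randomly chosen positive case scores above a randomly
    # chosen negative one, with ties counted as half.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Evaluating this on resubstituted training scores versus held-out scores is exactly the comparison whose bias and variance the simulation study characterizes as a function of sample size.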
Does On-the-Job Training Improve an Employee's Job Performance?
ERIC Educational Resources Information Center
Duff, Juanita
A study examined the link between on-the-job training (OJT) and job performance in a randomly selected sample of 50 skilled maintenance craftpersons employed by the city of Chicago. The sample was identified from the training sheets signed by 160 employees who participated in OJT in a 1-month period. The majority of the employees agreed with…
Optical Microfibre Based Photonic Components and Their Applications in Label-Free Biosensing
Wang, Pengfei; Bo, Lin; Semenova, Yuliya; Farrell, Gerald; Brambilla, Gilberto
2015-01-01
Optical microfibre photonic components offer a variety of enabling properties, including large evanescent fields, flexibility, configurability, high confinement, robustness and compactness. These unique features have been exploited in a range of applications such as telecommunication, sensing, optical manipulation and high Q resonators. Optical microfibre biosensors, as a class of fibre optic biosensors which rely on small geometries to expose the evanescent field to interact with samples, have been widely investigated. Due to their unique properties, such as fast response, functionalization, strong confinement, configurability, flexibility, compact size, low cost, robustness, ease of miniaturization, large evanescent field and label-free operation, optical microfibre-based biosensors seem a promising alternative to traditional immunological methods for biomolecule measurements. Unlabeled DNA and protein targets can be detected by monitoring the changes of various optical transduction mechanisms, such as refractive index, absorption and surface plasmon resonance, since a target molecule is capable of binding to an immobilized optical microfibre. In this review, we critically summarize accomplishments of past optical microfibre label-free biosensors, identify areas for future research and provide a detailed account of the studies conducted to date for biomolecules detection using optical microfibres. PMID:26287252
Internal validity of an anxiety disorder screening instrument across five ethnic groups.
Ritsher, Jennifer Boyd; Struening, Elmer L; Hellman, Fred; Guardino, Mary
2002-08-30
We tested the factor structure of the National Anxiety Disorder Screening Day instrument (n=14860) within five ethnic groups (White, Black, Hispanic, Asian, Native American). Conducted yearly across the US, the screening is meant to detect five common anxiety syndromes. Factor analyses often fail to confirm the validity of assessment tools' structures, and this is especially likely for minority ethnic groups. If symptoms cluster differently across ethnic groups, criteria for conventional DSM-IV disorders are less likely to be met, leaving significant distress unlabeled and under-detected in minority groups. Exploratory and confirmatory factor analyses established that the items clustered into the six expected factors (one for each disorder plus agoraphobia). This six-factor model fit the data very well for Whites and not significantly worse for each other group. However, small areas of the model did not appear to fit as well for some groups. After taking these areas into account, the data still clearly suggest more prevalent PTSD symptoms in the Black, Hispanic and Native American groups in our sample. Additional studies are warranted to examine the model's external validity, generalizability to more culturally distinct groups, and overlap with other culture-specific syndromes.
Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana
2017-10-17
Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
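Laplacian pyramid fusion, introduced above as the more robust alternative to intensity-hue-saturation fusion, decomposes each image into band-pass detail levels plus a coarse residual, fuses level by level, and reconstructs. The following one-dimensional toy shows the mechanics; real SIMS/EM fusion operates on 2-D images with Gaussian filtering, whereas this sketch uses simple pair-averaging for the blur step and max-magnitude selection for the detail levels.

```python
def downsample(x):
    # Blur-and-decimate: average adjacent pairs
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x, n):
    # Nearest-neighbour expansion back to length n
    out = []
    for v in x:
        out += [v, v]
    return out[:n]

def laplacian_pyramid(x, levels):
    # Detail (band-pass) levels plus a coarse residual
    pyr, cur = [], x
    for _ in range(levels):
        small = downsample(cur)
        up = upsample(small, len(cur))
        pyr.append([a - b for a, b in zip(cur, up)])
        cur = small
    pyr.append(cur)
    return pyr

def fuse(xa, xb, levels=2):
    pa = laplacian_pyramid(xa, levels)
    pb = laplacian_pyramid(xb, levels)
    # Keep the stronger detail at each position; average the residuals
    fused = [[a if abs(a) >= abs(b) else b for a, b in zip(la, lb)]
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append([(a + b) / 2 for a, b in zip(pa[-1], pb[-1])])
    # Reconstruct: start coarse, add detail back level by level
    cur = fused[-1]
    for lap in reversed(fused[:-1]):
        cur = [u + d for u, d in zip(upsample(cur, len(lap)), lap)]
    return cur
```

Because fusion happens per frequency band rather than in a colour space, a high-resolution EM detail band can sharpen a low-resolution SIMS band without the hue artifacts that plague intensity-hue-saturation substitution.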
Development of allele-specific multiplex PCR to determine the length of poly-T in intron 8 of CFTR
Prada, Anne E.
2014-01-01
Cystic fibrosis transmembrane conductance regulator (CFTR) gene mutation analysis has been implemented for Cystic Fibrosis (CF) carrier screening and for molecular diagnosis of CF and congenital bilateral absence of the vas deferens (CBAVD). Although poly-T allele analysis in intron 8 of CFTR is required when a patient is positive for R117H, it is not recommended for routine carrier screening. Therefore, commercial kits for CFTR mutation analysis were designed either to mask the poly-T allele results unless a patient is R117H positive, or to offer the poly-T analysis as a standalone reflex test on the same commercial platform. Other standalone assays have been developed to detect poly-T alleles, such as heteroduplex analysis, High Resolution Melting (HRM) curve analysis, allele-specific PCR (AS-PCR) and Sanger sequencing. In this report, we developed a simple and easy-to-implement multiplex AS-PCR assay using unlabeled standard-length primers, which can be used as a reflex or standalone test for CFTR poly-T tract analysis. Across the 115 human gDNA samples tested, results from our new AS-PCR matched the previously known poly-T results or the results from Sanger sequencing. PMID:25071991
Three-dimensional nanoscale imaging by plasmonic Brownian microscopy
NASA Astrophysics Data System (ADS)
Labno, Anna; Gladden, Christopher; Kim, Jeongmin; Lu, Dylan; Yin, Xiaobo; Wang, Yuan; Liu, Zhaowei; Zhang, Xiang
2017-12-01
Three-dimensional (3D) imaging at the nanoscale is key to understanding nanomaterials and complex systems. While scanning probe microscopy (SPM) has been the workhorse of nanoscale metrology, its slow scanning speed with a single probe tip can limit its application to wide-field imaging of complex 3D nanostructures. Electron microscopy and optical tomography both allow 3D imaging, but the former is restricted to vacuum environments because of electron scattering, and the latter to micron-scale optical resolution. Here we demonstrate plasmonic Brownian microscopy (PBM) as a way to improve the imaging speed of SPM. Unlike photonic force microscopy, where a single trapped particle is used for serial scanning, PBM utilizes a massive number of plasmonic nanoparticles (NPs) under Brownian diffusion in solution to scan in parallel around the unlabeled sample object. The motion of the NPs under an evanescent field is three-dimensionally localized to reconstruct the super-resolution topology of 3D dielectric objects. Our method allows high-throughput imaging of complex 3D structures over a large field of view, even with internal structures such as cavities that cannot be accessed by conventional mechanical tips in SPM.
Ford, C H; Tsaltas, G C; Osborne, P A; Addetia, K
1996-03-01
A flow cytometric method of studying the internalization of a monoclonal antibody (Mab) directed against carcinoembryonic antigen (CEA) has been compared with Western blotting, using three human colonic cancer cell lines which express varying amounts of the target antigen. Cell samples incubated for increasing time intervals with fluoresceinated or unlabelled Mab were analyzed using flow cytometry or polyacrylamide gel electrophoresis and Western blotting. SDS/PAGE analysis of cytosolic and membrane components of solubilized cells from the cell lines provided evidence of non-degraded internalized anti-CEA Mab throughout seven half hour intervals, starting at 5 min. Internalized anti-CEA was detected in the case of high CEA expressing cell lines (LS174T, SKCO1). Very similar results were obtained with an anti-fluorescein flow cytometric assay. Given that these two methods consistently provided comparable results, use of flow cytometry for the detection of internalized antibody is suggested as a rapid alternative to most currently used methods for assessing antibody internalization. The question of the endocytic route followed by CEA-anti-CEA complexes was addressed by using hypertonic medium to block clathrin mediated endocytosis.
NASA Astrophysics Data System (ADS)
Lee, Moosung; Lee, Eeksung; Jung, JaeHwang; Yu, Hyeonseung; Kim, Kyoohyun; Yoon, Jonghee; Lee, Shinhwa; Jeong, Yong; Park, YongKeun
2017-02-01
Imaging brain tissues is an essential part of neuroscience because understanding brain structure provides relevant information about brain functions and alterations associated with diseases. Magnetic resonance imaging and positron emission tomography exemplify conventional brain imaging tools, but these techniques suffer from low spatial resolution around 100 μm. As a complementary method, histopathology has been utilized with the development of optical microscopy. The traditional method provides the structural information about biological tissues to cellular scales, but relies on labor-intensive staining procedures. With the advances of illumination sources, label-free imaging techniques based on nonlinear interactions, such as multiphoton excitations and Raman scattering, have been applied to molecule-specific histopathology. Nevertheless, these techniques provide limited qualitative information and require a pulsed laser, which is difficult to use for pathologists with no laser training. Here, we present a label-free optical imaging of mouse brain tissues for addressing structural alteration in Alzheimer's disease. To achieve the mesoscopic, unlabeled tissue images with high contrast and sub-micrometer lateral resolution, we employed holographic microscopy and an automated scanning platform. From the acquired hologram of the brain tissues, we could retrieve scattering coefficients and anisotropies according to the modified scattering-phase theorem. This label-free imaging technique enabled direct access to structural information throughout the tissues with a sub-micrometer lateral resolution and presented a unique means to investigate the structural changes in the optical properties of biological tissues.
Clustering-based Feature Learning on Variable Stars
NASA Astrophysics Data System (ADS)
Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos
2016-04-01
The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.
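The pipeline this abstract describes (extract lightcurve subsequences, cluster them to find common local patterns, then re-represent each lightcurve in terms of those patterns) can be sketched compactly. This is a minimal stand-in under simplifying assumptions: plain k-means in place of the paper's clustering, and a histogram of nearest-pattern counts as the new representation.

```python
import random

def subsequences(lc, w):
    # Sliding windows of length w over one lightcurve
    return [lc[i:i + w] for i in range(len(lc) - w + 1)]

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20, seed=0):
    # Plain k-means over the pooled subsequences; the centroids play
    # the role of the "representatives of common local patterns"
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sqdist(p, centers[c]))
            groups[j].append(p)
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

def encode(lc, centers, w):
    # New feature vector: histogram of nearest-pattern counts
    hist = [0] * len(centers)
    for s in subsequences(lc, w):
        hist[min(range(len(centers)), key=lambda c: sqdist(s, centers[c]))] += 1
    return hist
```

Since the centroids can be learned from both labeled and unlabeled lightcurves, while only the encoded labeled set is fed to the classifier, the sketch mirrors how the method sidesteps the bias of using labeled data alone.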
Yang, Yang; Saleemi, Imran; Shah, Mubarak
2013-07-01
This paper proposes a novel representation of articulated human actions and gestures and facial expressions. The main goals of the proposed approach are: 1) to enable recognition using very few examples, i.e., one- or k-shot learning, and 2) to enable meaningful organization of unlabeled datasets by unsupervised clustering. Our proposed representation is obtained by automatically discovering high-level subactions or motion primitives through hierarchical clustering of observed optical flow in four-dimensional spatial and motion-flow space. The completely unsupervised proposed method, in contrast to state-of-the-art representations like bag of video words, provides a meaningful representation conducive to visual interpretation and textual labeling. Each primitive action depicts an atomic subaction, like directional motion of a limb or torso, and is represented by a mixture of four-dimensional Gaussian distributions. For one-shot and k-shot learning, the sequence of primitive labels discovered in a test video is labeled using KL divergence, and can then be represented as a string and matched against similar strings of training videos. The same sequence can also be collapsed into a histogram of primitives or be used to learn a hidden Markov model to represent classes. We have performed extensive experiments on recognition by one- and k-shot learning as well as unsupervised action clustering on six human action and gesture datasets, a composite dataset, and a database of facial expressions. These experiments confirm the validity and discriminative nature of the proposed representation.
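Once a video is reduced to a string of primitive labels, matching against training strings needs a string metric. The abstract does not say which one the authors use, so as one illustrative choice here is nearest-neighbour classification under Levenshtein edit distance.

```python
def edit_distance(s, t):
    # Levenshtein distance between two primitive-label strings,
    # computed with a single rolling row of the DP table
    dp = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev = dp[0]
        dp[0] = i
        for j, ct in enumerate(t, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # deletion
                        dp[j - 1] + 1,        # insertion
                        prev + (0 if cs == ct else 1))  # substitution
            prev = cur
    return dp[-1]

def nearest_class(query, training):
    # training: {class_label: [primitive strings]} - illustrative layout
    best, best_d = None, None
    for label, strings in training.items():
        for s in strings:
            d = edit_distance(query, s)
            if best_d is None or d < best_d:
                best, best_d = label, d
    return best
```

The attraction for one-shot learning is that a single training string per class already defines the metric's reference points, with no statistical model to fit.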
Kraschnewski, Jennifer L; Sciamanna, Christopher N; Ciccolo, Joseph T; Rovniak, Liza S; Lehman, Erik B; Candotti, Carolina; Ballentine, Noel H
2014-09-01
To determine the association between meeting strength training guidelines (≥2 times per week) and the presence of functional limitations among older adults. This cross-sectional study used data from older adult participants (N=6763) of the National Health Interview Survey conducted in 2011 in the United States. Overall, 16.1% of older adults reported meeting strength training guidelines. For each of nine functional limitations, those with the limitation were less likely to meet strength training recommendations than those without the limitation. For example, 20.0% of those who reported no difficulty walking one-quarter mile met strength training guidelines, versus only 10.1% of those who reported difficulty (p<.001). In sum, 21.7% of those with no limitations (33.7% of sample) met strength training guidelines, versus only 15.9% of those reporting 1-4 limitations (38.5% of sample) and 9.8% of those reporting 5-9 limitations (27.8% of sample) (p<.001). Strength training is uncommon among older adults and even less common among those who need it the most. The potential for strength training to improve the public's health is therefore substantial, as those who have the most to gain from strength training participate the least. Copyright © 2014 Elsevier Inc. All rights reserved.
[Assessment of laparoscopic training based on eye tracker and electroencephalograph].
Liu, Yun; Wang, Shuyi; Zhang, Yangun; Xu, Mingzhe; Ye, Shasha; Wang, Peng
2017-02-01
The aim of this study is to evaluate the effect of laparoscopic simulation training at different levels of attention. Attention was appraised using the sample entropy and the θ/β value, calculated from electroencephalograph (EEG) signals collected with a Brain Link device. The effect of laparoscopic simulation training was evaluated using the completion time, number of errors and number of fixations, calculated from eye movement signals collected with a Tobii eye tracker. Twenty volunteers were recruited in this study. Those with sample entropy lower than 0.77 were classified into group A and those higher than 0.77 into group B. The results showed that the sample entropy of group A was lower than that of group B, and its fluctuations were steadier. The sample entropy of group B fluctuated steadily in the first five training sessions and then fluctuated relatively dramatically in the later five sessions. Compared with that of group B, the θ/β value of group A was smaller and fluctuated steadily. Group A had a shorter completion time, fewer errors and a faster decrease in the number of fixations. This study therefore concluded that the attention of the trainees affects the training effect: members of group A, who had higher attention, trained faster and more efficiently, while members of group B, although their skills improved, needed a longer time to reach a plateau.
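Sample entropy, the regularity statistic used above to split the trainees into groups, is defined as SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within tolerance r and A the same for length m+1. A minimal sketch follows; the default m and r values are illustrative, not the study's settings.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    # SampEn(m, r) = -ln(A/B): B counts template pairs of length m
    # within Chebyshev distance r, A the same for length m+1.
    # Both counts use the same n-m templates so a regular series
    # yields 0; if no matches exist the entropy is infinite.
    n = len(series)

    def matches(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) < r:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float('inf')
```

A perfectly regular signal scores 0, while an irregular one scores higher, which is why a low EEG-derived sample entropy served as the marker of sustained attention.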
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Izbicki, R.; Lee, A. B.
2017-07-01
Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies selected for spectroscopic observation do not have properties that match those of the (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10⁶ galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
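Importance weights of the kind described (ratios of unlabelled to labelled covariate densities) can be estimated several ways; one common sketch trains a probabilistic classifier to separate labelled from unlabelled samples, whose odds give the density ratio. The one-dimensional "bright versus dim" toy distributions below are illustrative assumptions, not the paper's data or its risk-based estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy selection bias: labelled ("bright") covariates are drawn from a
# shifted distribution relative to the unlabelled ("dim") population.
x_labeled = rng.normal(loc=0.0, scale=1.0, size=(2000, 1))
x_unlabeled = rng.normal(loc=1.0, scale=1.2, size=(2000, 1))

# A labelled-vs-unlabelled classifier's odds estimate the density ratio
# f_unlabelled(x) / f_labelled(x), i.e. the importance weight.
X = np.vstack([x_labeled, x_unlabeled])
y = np.concatenate([np.zeros(len(x_labeled)), np.ones(len(x_unlabeled))])
clf = LogisticRegression().fit(X, y)

p = clf.predict_proba(x_labeled)[:, 1]
weights = (p / (1 - p)) * (len(x_labeled) / len(x_unlabeled))

# The weighted mean of the labelled covariates should move from the
# labelled mean (~0.0) toward the unlabelled population mean (~1.0).
print(np.average(x_labeled[:, 0], weights=weights))
```

Reweighting the labelled sample this way is what lets an estimator tuned on spectroscopic galaxies behave as if it had been tuned on the photometric population.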
Darville, Nicolas; Saarinen, Jukka; Isomäki, Antti; Khriachtchev, Leonid; Cleeren, Dirk; Sterkens, Patrick; van Heerden, Marjolein; Annaert, Pieter; Peltonen, Leena; Santos, Hélder A; Strachan, Clare J; Van den Mooter, Guy
2015-10-01
Drug nano-/microcrystals are being used for sustained parenteral drug release, but safety and efficacy concerns persist as knowledge of the in vivo fate of long-lived particulates is limited. There is a need for techniques enabling the visualization of drug nano-/microcrystals in biological matrices. The aim of this work was to explore the potential of coherent anti-Stokes Raman scattering (CARS) microscopy, supported by other non-linear optical methods, as an emerging tool for investigating the cellular and tissue interactions of unlabeled and non-fluorescent nano-/microcrystals. Raman and CARS spectra of the prodrug paliperidone palmitate (PP), paliperidone (PAL) and several suspension stabilizers were recorded. PP nano-/microcrystals were incubated with RAW 264.7 macrophages in vitro and their cellular disposition was investigated using a fully integrated multimodal non-linear optical imaging platform. Suitable anti-Stokes shifts (CH stretching) were identified for selective CARS imaging. CARS microscopy was successfully applied to the selective three-dimensional, non-perturbative and real-time imaging of unlabeled PP nano-/microcrystals, with dimensions larger than the optical lateral resolution of approximately 400 nm, in relation to the cellular framework in cell cultures and ex vivo in histological sections. In conclusion, CARS microscopy enables the non-invasive and label-free imaging of (sub)micron-sized (pro-)drug crystals in complex biological matrices and could in the future provide vital information on poorly understood nano-/microcrystal-cell interactions. Copyright © 2015 Elsevier B.V. All rights reserved.
Guo, Dong; Mulder-Krieger, Thea; IJzerman, Adriaan P; Heitman, Laura H
2012-01-01
BACKGROUND AND PURPOSE The adenosine A2A receptor belongs to the superfamily of GPCRs and is a promising therapeutic target. Traditionally, the discovery of novel agents for the A2A receptor has been guided by their affinity for the receptor. This parameter is determined under equilibrium conditions, largely ignoring the kinetic aspects of the ligand-receptor interaction. The aim of this study was to assess the binding kinetics of A2A receptor agonists and explore a possible relationship with their functional efficacy. EXPERIMENTAL APPROACH We set up, validated and optimized a kinetic radioligand binding assay (a so-called competition association assay) at the A2A receptor from which the binding kinetics of unlabelled ligands were determined. Subsequently, functional efficacies of A2A receptor agonists were determined in two different assays: a novel label-free impedance-based assay and a more traditional cAMP determination. KEY RESULTS A simplified competition association assay yielded an accurate determination of the association and dissociation rates of unlabelled A2A receptor ligands at their receptor. A correlation was observed between the receptor residence time of A2A receptor agonists and their intrinsic efficacies in both functional assays. The affinity of A2A receptor agonists was not correlated to their functional efficacy. CONCLUSIONS AND IMPLICATIONS This study indicates that the molecular basis of different agonist efficacies at the A2A receptor lies within their different residence times at this receptor. PMID:22324512
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landefeld, T.D.; Byrne, M.D.; Campbell, K.L.
1981-12-01
The alpha- and beta-subunits of hCG were radioiodinated and recombined with unlabeled complementary subunits. The resultant recombined hormones, selectively labeled in either the alpha- or beta-subunit, were separated from unrecombined subunit by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, extracted with Triton X-100, and characterized by binding analysis. The estimates of maximum binding (active fraction) of the two resultant selectively labeled, recombined hCG preparations, determined with excess receptor, were 0.41 and 0.59. These values are similar to those obtained when hCG is labeled as an intact molecule. The specific activities of the recombined preparations were estimated by four different methods, and the resulting values were used in combination with the active fraction estimates to determine the concentrations of active free and bound hormone. Binding analyses were run using varying concentrations of both labeled and unlabeled hormone. Estimates of the equilibrium dissociation binding constant (Kd) and receptor capacity were calculated in three different ways. The mean estimates of capacity (52.6 and 52.7 fmol/mg tissue) and Kd (66.6 and 65.7 pM) for the two preparations were indistinguishable. Additionally, these values were similar to values reported previously for hCG radioiodinated as an intact molecule. The availability of well characterized, selectively labeled hCG preparations provides new tools for studying the mechanism of action and the target cell processing of the subunits of this hormone.
Lo, Kai-Yin; Sun, Yung-Shin; Landry, James P.; Zhu, Xiangdong; Deng, Wenbin
2012-01-01
Conventional fluorescence microscopy is routinely used to detect cell surface markers through fluorophore-conjugated antibodies. However, fluorophore conjugation alters binding properties of the antibody, such as strength and specificity, in ways that are often uncharacterized, so the binding between antibody and antigen may no longer reflect the native situation. Here, we present an oblique-incidence reflectivity difference (OI-RD) microscope as an effective method for label-free, real-time detection of cell surface markers and apply the technique to the analysis of Stage-Specific Embryonic Antigen 1 (SSEA1) on stem cells. Mouse stem cells express SSEA1 on their surfaces, and the level of SSEA1 decreases when the cells start to differentiate. In this study, we immobilized mouse stem cells and non-stem cells (control) on a glass surface as a microarray and reacted the cell microarray with unlabeled SSEA1 antibodies. By monitoring the reaction with an OI-RD microscope in real time, we confirmed that the SSEA1 antibodies bind only to the surface of the stem cells and not to the surface of non-stem cells. From the binding curves, we determined the equilibrium dissociation constant (Kd) of the antibody with the SSEA1 markers on the stem cell surface. The results show that the OI-RD microscope can be used to detect binding affinities between cell surface markers and unlabeled antibodies bound to cells. This information could serve as an additional indicator of cell stage. PMID:21781038
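The Kd reported from binding curves in studies like this one is conventionally obtained by fitting a one-site saturation binding model. A minimal sketch with simulated data follows; the Bmax and Kd values and the nM units are illustrative, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(conc, bmax, kd):
    """Specific binding for a single class of sites: B = Bmax*[L]/(Kd+[L])."""
    return bmax * conc / (kd + conc)

# Simulated equilibrium binding data (ligand concentrations in nM).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
true_bmax, true_kd = 50.0, 1.2
rng = np.random.default_rng(7)
bound = one_site_binding(conc, true_bmax, true_kd) + rng.normal(0, 0.5, conc.size)

popt, _ = curve_fit(one_site_binding, conc, bound, p0=[40.0, 1.0])
bmax_hat, kd_hat = popt
print(f"Bmax ~ {bmax_hat:.1f}, Kd ~ {kd_hat:.2f} nM")
```

The same nonlinear least-squares pattern applies whether the response is radioligand binding, an OI-RD signal, or any other saturable readout.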
Binding of [35S]saccharin to a protein fraction of rat tongue epithelia.
Shimazaki, K; Sato, M; Takegami, T
1981-11-05
The binding of [35S]saccharin to ammonium sulfate fractions from homogenates of rat tongue epithelia was measured by equilibrium dialysis. The 40–60% saturated ammonium sulfate fraction from the buffer-soluble fraction had the highest saccharin-binding activity. Binding of [35S]saccharin to the 40–60% ammonium sulfate fraction was inhibited by unlabeled saccharin sodium salt. The inhibition increased with increasing unlabeled saccharin concentration and was nearly complete above 10 mM. [35S]Saccharin binding to the 40–60% ammonium sulfate fraction extracted from the tongue epithelia was inhibited by glucose, lactose and sucrose, while binding to similar fractions from tongue muscle was not affected by these sugars. The inhibition of binding of labeled saccharin to the epithelial fraction increased with increasing glucose concentrations. About 35% of the binding was inhibited by 1 M glucose. No significant difference in the amount of inhibition was seen among the three sugars at 0.1 M. The 40–60% ammonium sulfate fraction from tongue epithelium devoid of taste buds bound much less [35S]saccharin than did a similar fraction from epithelium with taste buds. Binding of [35S]saccharin by the preparation from epithelium devoid of taste buds was not inhibited by glucose. The results provide evidence that the 40–60% ammonium sulfate fraction from tongue epithelia with taste buds contains a protein which binds saccharin and sugars. We hypothesize that it is a sweet taste receptor protein.
Yoshikawa, Miho; Zhang, Ming; Kurisu, Futoshi; Toyota, Koki
2017-01-01
Most bioremediation studies on volatile organic compounds (VOCs) have focused on a single contaminant or its derived compounds, and degraders have been identified under single-contaminant conditions. Bioremediation of multiple contaminants remains a challenging issue. To identify a bacterial consortium that degrades multiple VOCs (dichloromethane (DCM), benzene, and toluene), we applied DNA stable-isotope probing. For the individual tests, we combined one 13C-labeled VOC with the other two unlabeled VOCs, and prepared a mixture of the three unlabeled VOCs as a reference. Over 11 days, DNA was periodically extracted from the consortia, and the bacterial community was evaluated by next-generation sequencing of bacterial 16S rRNA gene amplicons. Density gradient fractions of the DNA extracts were amplified with universal bacterial primers for the 16S rRNA gene, and the amplicons were analyzed by terminal restriction fragment length polymorphism (T-RFLP) using the restriction enzymes HhaI and MspI. The T-RFLP fragments were identified by 16S rRNA gene cloning and sequencing. Under all test conditions, the consortia were dominated by Rhodanobacter, Bradyrhizobium/Afipia, Rhizobium, and Hyphomicrobium. DNA derived from Hyphomicrobium and Propioniferax shifted toward heavier fractions under the conditions amended with 13C-DCM and 13C-benzene, respectively, compared with the reference, but no shift was induced by 13C-toluene addition. This implies that Hyphomicrobium and Propioniferax were the main DCM and benzene degraders, respectively, under the coexisting conditions. The known benzene degrader Pseudomonas sp. was present but not actively involved in the degradation.
Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary
2014-11-01
Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is important for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSAs are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, with many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
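The "time series as document, segments as words" treatment can be sketched with standard tools: sliding-window segments are quantized against a learned codebook, each series becomes a bag-of-words count vector, and a latent topic space is clustered. The sketch below substitutes truncated SVD (an LSA analogue) for the paper's hierarchical pLSA, and uses invented single-channel toy signals, purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)

def make_series(kind, n=400):
    """Toy signals: two shape families sharing the same period."""
    t = np.arange(n)
    base = np.sin(2 * np.pi * t / 25)
    wave = base if kind == "sine" else np.sign(base)
    return wave + 0.1 * rng.standard_normal(n)

series = [make_series("sine") for _ in range(10)] + \
         [make_series("square") for _ in range(10)]
labels_true = np.array([0] * 10 + [1] * 10)

# "Words": overlapping local segments quantized against a codebook.
win, step, n_words = 20, 10, 12
segments = np.vstack([[s[i:i + win] for i in range(0, len(s) - win, step)]
                      for s in series])
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(segments)

# One bag-of-words "document" per series.
word_ids = codebook.labels_.reshape(len(series), -1)
docs = np.zeros((len(series), n_words))
for d, row in enumerate(word_ids):
    np.add.at(docs[d], row, 1)

# LSA-style latent space (stand-in for pLSA), then cluster the documents.
topics = TruncatedSVD(n_components=2, random_state=0).fit_transform(docs)
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(topics)
acc = (pred == labels_true).mean()
print(max(acc, 1 - acc))  # cluster agreement up to label permutation
```

The paper's two-layer H-pLSA extends this idea by learning per-channel topic mixtures first and feeding them into a global topic model.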
Xenobiotic interaction with and alteration of channel catfish estrogen receptor.
Nimrod, A C; Benson, W H
1997-12-01
In teleostean in vivo studies, the vitellogenin response to environmental estrogens is not completely predicted by mammalian literature. One possible explanation for differences is heterogeneity of the estrogen receptor (ER) structure between species. Therefore, ER from channel catfish (Ictalurus punctatus) hepatic tissue was characterized by binding affinity for several compounds. Affinity was indirectly measured as potency of the chemical for inhibiting binding of radiolabeled estradiol (E2) to specific binding sites. The order of potency among therapeutic chemicals was ethinylestradiol > unlabeled E2 = diethylstilbestrol > mestranol > tamoxifen > testosterone. Unlabeled E2 had an IC50 of 2.2 nM. Several environmentally relevant chemicals were evaluated in a similar manner and the order of potency established was the o-demethylated metabolite of methoxychlor (MXC) > nonylphenol (NP) > chlordecone > MXC > o,p'-DDT > o,p'-DDE > beta-hexachlorocyclohexane. Demethylated MXC had an IC50 1000-fold greater than that of E2. Of the most potent inhibitors, NP appeared to be a competitive inhibitor for the same binding site as E2, while o-demethylated MXC had a more complex interaction with the receptor protein. ER from nonvitellogenic females was determined to have a Kd value of 1.0 to 1.3 nM. Because E2 has been reported to up-regulate teleostean ER, the hepatic ER population following in vivo xenobiotic exposure was assessed. NP significantly increased ER per milligram hepatic protein almost to the same extent as E2, but did not increase Kd to the same extent as E2.
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enables both homology-based and composition-based approaches to further study the functional core (i.e., the microbial core and the gene core, respectively). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and to study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Estimating the generative topic model for taxon abundance data therefore uncovers the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). This model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the “N-mer” features; the existence of core genomes can thus be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
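The sample-as-document view of taxon abundance can be illustrated with standard LDA on a synthetic sample-by-taxon count table. The two "functional groups" and the 8-taxon vocabulary below are invented for the example, not drawn from the paper:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Two hypothetical functional groups (latent topics), each a weighted
# mixture of 8 taxa; rows sum to 1.
topic_taxa = np.array([
    [0.40, 0.30, 0.20, 0.05, 0.02, 0.01, 0.01, 0.01],  # group 1
    [0.01, 0.01, 0.02, 0.05, 0.20, 0.30, 0.30, 0.11],  # group 2
])

# Each "document" (sample) mixes the two groups, then draws taxon counts.
counts = np.zeros((30, 8), dtype=int)
for i in range(30):
    mix = rng.dirichlet([0.5, 0.5])        # per-sample topic mixture
    counts[i] = rng.multinomial(2000, mix @ topic_taxa)

lda = LatentDirichletAllocation(n_components=2, max_iter=100,
                                random_state=0).fit(counts)
# Normalised rows of components_ recover each group's taxon distribution.
groups = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
print(np.round(groups, 2))
```

Estimating the model thus uncovers both the per-sample distribution over latent functions (via the document-topic posterior) and the taxon makeup of each function, which is the inference the abstract describes.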
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cocuron, Jean-Christophe; Tsogtbaatar, Enkhtuul; Alonso, Ana P.
2017-02-16
Accurate assessment of mass isotopomer distributions (MIDs) of intracellular metabolites, such as free amino acids (AAs), is crucial for quantifying in vivo fluxes. To date, the majority of studies that measured AA MIDs have relied on the analysis of proteinogenic rather than free AAs by: i) GC–MS, which involves a cumbersome derivatization process, or ii) NMR, which requires large quantities of biological sample. In this work, the development and validation of a high-throughput LC–MS/MS method allowing the quantification of the levels and labeling of free AAs is described. Sensitivity on the order of femtomoles was achieved using multiple reaction monitoring (MRM) mode. The MIDs of all free AAs were assessed without the need for derivatization, and were validated (except for Trp) on a mixture of unlabeled AA standards. Finally, this method was applied to the determination of the 13C-labeling abundance in free AAs extracted from maize embryos cultured with 13C-glutamine or 13C-glucose. Although Cys was below the limit of detection in these biological samples, the MIDs of a total of 18 free AAs were successfully determined. Due to the increased application of tandem mass spectrometry for 13C-metabolic flux analysis, this novel method will enable the assessment of more complete and accurate labeling information for intracellular AAs, and therefore a better definition of the fluxes.
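Computing an MID and a mean ¹³C enrichment from isotopologue peak areas is a simple normalization. The peak areas below are hypothetical values for a 3-carbon amino acid, and natural-abundance correction, which a real analysis would also apply, is omitted for brevity:

```python
import numpy as np

def mid(intensities):
    """Mass isotopomer distribution: fractional abundance of M+0..M+n."""
    intensities = np.asarray(intensities, dtype=float)
    return intensities / intensities.sum()

def mean_enrichment(m):
    """Average fractional 13C labeling: sum_i (i * M+i) / n carbons."""
    i = np.arange(len(m))
    return float((i * m).sum() / (len(m) - 1))

# Hypothetical MRM peak areas for isotopologues M+0..M+3 of a
# 3-carbon amino acid (illustrative numbers only).
areas = [60000, 15000, 20000, 5000]
m = mid(areas)
print(np.round(m, 3), round(mean_enrichment(m), 3))
```

For these areas the MID is 0.60/0.15/0.20/0.05 and the mean enrichment is about 0.233, i.e. roughly 23% of the carbon positions carry ¹³C.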
White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E
2018-05-01
Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprised of two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider how sample size relates to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
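A broken-stick trajectory and its change-point can be simulated and fitted directly. The sketch below uses a simple least-squares fit rather than the paper's Bayesian MCMC treatment, and the MMSE-like numbers (baseline 28, change-point at year 6) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(t, b0, b1, b2, tc):
    """Two joined linear segments: slope b1 before tc, b1 + b2 after."""
    return b0 + b1 * t + b2 * np.maximum(t - tc, 0.0)

# Simulate one "change class" trajectory: stable, then accelerated
# decline after the change-point (values loosely MMSE-like).
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 60)
y = broken_stick(t, 28.0, -0.1, -1.5, 6.0) + rng.normal(0, 0.4, t.size)

popt, _ = curve_fit(broken_stick, t, y, p0=[28, 0, -1, 5])
b0, b1, b2, tc = popt
print(f"change-point ~ {tc:.2f}, post-change slope ~ {b1 + b2:.2f}")
```

The study's harder question is how reliably tc and the class labels can be recovered when observations are sparse, noisy, and truncated by dropout, which is what the missingness model addresses.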
Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.
Park, Chihyun; Ahn, Jaegyoon; Kim, Hyunjin; Park, Sanghyun
2014-01-01
The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem, but there have been few attempts, based on manifold assumptions, to reveal the detailed roles of identified cancer genes in recurrence. In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. The average improvement in accuracy across three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning and identified that those networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was implemented in standard C++ using the STL and is available for Linux and MS Windows. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.
Patterson, Fiona; Cousans, Fran; Coyne, Iain; Jones, Jo; Macleod, Sheona; Zibarras, Lara
2017-05-15
Treating patients is complex, and research shows that there are differences in cognitive resources between physicians who experience difficulties and those who do not. It is possible that differences in some cognitive resources could explain the difficulties faced by some physicians. In this study, we explore differences in cognitive resources between different groups of physicians (between native (UK) physicians and International Medical Graduates (IMGs), and between those who continued with training and those who were subsequently removed from the training programme), and also between physicians experiencing difficulties and the general population. A secondary evaluation was conducted on an anonymised dataset provided by the East Midlands Professional Support Unit (PSU). One hundred and twenty-one postgraduate trainee physicians took part in an Educational Psychology assessment through the PSU. Referrals to the PSU were mainly on the basis of problems with exam progression and difficulties in communication skills, organisation and confidence. Cognitive resources were assessed using the Wechsler Adult Intelligence Scale (WAIS-IV). Physicians were categorised into three PSU outcomes: 'Continued in training', 'Removed from training' and 'Active' (currently accessing the PSU). Using a one-sample Z test, we compared the referred physician sample to a UK general population sample on the WAIS-IV and found the referred sample significantly higher in Verbal Comprehension (VCI; z = 8.78) and significantly lower in Working Memory (WMI; z = -4.59). In addition, the native sample scored significantly higher in Verbal Comprehension than the UK general population sample (VCI; native physicians: z = 9.95, p < .001, d = 1.25), whilst there was a smaller effect for the difference between the IMG sample and the UK general population (z = 2.13, p = .03, d = 0.29).
Findings also showed a significant difference in VCI scores between those physicians who were 'Removed from training' and those who 'Continued in training'. Our results suggest it is important to understand the cognitive resources of physicians to provide a more focussed explanation of those who experience difficulties in training. This will help to implement more targeted interventions to help physicians develop compensatory strategies.
CTEPP STANDARD OPERATING PROCEDURE FOR CONDUCTING STAFF AND PARTICIPANT TRAINING (SOP-2.27)
This SOP describes the method to train project staff and participants to collect various field samples and questionnaire data for the study. The training plan consists of two separate components: project staff training and participant training. Before project activities begin,...
ERIC Educational Resources Information Center
Oliveira, Marileide; Goyos, Celso; Pear, Joseph
2012-01-01
Matching-to-sample (MTS) training consists of presenting a stimulus as a sample followed by stimuli called comparisons from which a subject makes a choice. This study presents results of a pilot investigation comparing two packages for teaching university students to conduct MTS training. Two groups--control and experimental--with 2 participants…
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always reflect the difference between the test sample and each class well. In this paper, we propose a novel representation-based classification method for face recognition that integrates the conventional and an inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
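The conventional representation step that CIRLRC builds on, linear regression classification (LRC), can be sketched as class-wise least-squares reconstruction: the test vector is projected onto the span of each class's training samples and assigned to the class with the smallest residual. The toy subspace data below are invented for illustration:

```python
import numpy as np

def lrc_predict(x, class_samples):
    """Linear regression classification: reconstruct the test vector from
    each class's training samples and pick the class with the smallest
    reconstruction residual."""
    residuals = []
    for X_c in class_samples:           # X_c: (dim, n_samples_in_class)
        beta, *_ = np.linalg.lstsq(X_c, x, rcond=None)
        residuals.append(np.linalg.norm(x - X_c @ beta))
    return int(np.argmin(residuals))

rng = np.random.default_rng(5)
# Two toy classes living near different low-dimensional subspaces,
# mimicking the subspace assumption behind LRC for face images.
basis0 = rng.standard_normal((50, 3))
basis1 = rng.standard_normal((50, 3))
class0 = basis0 @ rng.standard_normal((3, 8))   # 8 training samples each
class1 = basis1 @ rng.standard_normal((3, 8))

test = basis1 @ rng.standard_normal(3) + 0.01 * rng.standard_normal(50)
print(lrc_predict(test, [class0, class1]))  # → 1
```

CIRLRC adds the inverse direction, also scoring how well each training sample can be reconstructed with the test sample included, and fuses the two scores.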
ERIC Educational Resources Information Center
Pascarella, Christina Bechle
2012-01-01
This study examined play therapy training across the nation among school psychology, social work, and school counseling graduate training programs. It also compared current training to previous training among school psychology and school counseling programs. A random sample of trainers was selected from lists of graduate programs provided by…
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
New method for detection of gastric cancer by hyperspectral imaging: a pilot study
NASA Astrophysics Data System (ADS)
Kiyotoki, Shu; Nishikawa, Jun; Okamoto, Takeshi; Hamabe, Kouichi; Saito, Mari; Goto, Atsushi; Fujita, Yusuke; Hamamoto, Yoshihiko; Takeuchi, Yusuke; Satori, Shin; Sakaida, Isao
2013-02-01
We developed a new, easy, and objective method to detect gastric cancer using hyperspectral imaging (HSI) technology, which combines spectroscopy and imaging. A total of 16 gastroduodenal tumors removed by endoscopic resection or surgery from 14 patients at Yamaguchi University Hospital, Japan, were recorded using a hyperspectral camera (HSC) equipped with HSI technology. Corrected spectral reflectance was obtained from 10 samples of normal mucosa and 10 samples of tumors for each case. The 16 cases were divided into eight training cases (160 training samples) and eight test cases (160 test samples). We established a diagnostic algorithm with the training samples and evaluated it with the test samples. The diagnostic capability of the algorithm for each tumor was validated, and enhancement of tumors by image processing using the HSC was evaluated. The diagnostic algorithm used the 726-nm wavelength, with a cutoff point established from the training samples. The sensitivity, specificity, and accuracy rates of the algorithm's diagnostic capability on the test samples were 78.8% (63/80), 92.5% (74/80), and 85.6% (137/160), respectively. Tumors in HSC images of 13 (81.3%) cases were well enhanced by image processing. Differences in spectral reflectance between tumors and normal mucosa suggest that tumors can be clearly distinguished from background mucosa with HSI technology.
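A single-wavelength cutoff classifier of the kind described, with its sensitivity/specificity evaluation on held-out samples, can be sketched as follows. The reflectance values, class means, and the assumption that tumors reflect less at the chosen band are all synthetic, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative reflectance at a single band (e.g. around 726 nm), with
# tumors assumed lower than normal mucosa (synthetic numbers only).
tumor = rng.normal(0.35, 0.05, 80)
normal = rng.normal(0.55, 0.05, 80)

# Fix the cutoff from "training" halves, then evaluate sensitivity and
# specificity on the held-out "test" halves.
cutoff = (tumor[:40].mean() + normal[:40].mean()) / 2
pred_tumor = tumor[40:] < cutoff      # test tumors called tumor
pred_normal = normal[40:] >= cutoff   # test normals called normal

sensitivity = pred_tumor.mean()
specificity = pred_normal.mean()
accuracy = (pred_tumor.sum() + pred_normal.sum()) / 80
print(f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f}")
```

The study's figures of 78.8% sensitivity and 92.5% specificity come from exactly this kind of train/test split, with the cutoff chosen on the training cases only.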
Li, Xiangpeng; Brooks, Jessica C; Hu, Juan; Ford, Katarena I; Easley, Christopher J
2017-01-17
A fully automated, 16-channel microfluidic input/output multiplexer (μMUX) has been developed for interfacing to primary cells and to improve understanding of the dynamics of endocrine tissue function. The device utilizes pressure driven push-up valves for precise manipulation of nutrient input and hormone output dynamics, allowing time resolved interrogation of the cells. The ability to alternate any of the 16 channels from input to output, and vice versa, provides for high experimental flexibility without the need to alter microchannel designs. 3D-printed interface templates were custom designed to sculpt the above-channel polydimethylsiloxane (PDMS) in microdevices, creating millimeter scale reservoirs and confinement chambers to interface primary murine islets and adipose tissue explants to the μMUX sampling channels. This μMUX device and control system was first programmed for dynamic studies of pancreatic islet function to collect ∼90 minute insulin secretion profiles from groups of ∼10 islets. The automated system was also operated in temporal stimulation and cell imaging mode. Adipose tissue explants were exposed to a temporal mimic of post-prandial insulin and glucose levels, while simultaneous switching between labeled and unlabeled free fatty acid permitted fluorescent imaging of fatty acid uptake dynamics in real time over a ∼2.5 hour period. Application with varying stimulation and sampling modes on multiple murine tissue types highlights the inherent flexibility of this novel, 3D-templated μMUX device. The tissue culture reservoirs and μMUX control components presented herein should be adaptable as individual modules in other microfluidic systems, such as organ-on-a-chip devices, and should be translatable to different tissues such as liver, heart, skeletal muscle, and others.
Scior, Katrina; Hamid, Aseel; Mahfoudhi, Abdessatar; Abdalla, Fauzia
2013-11-01
Evidence on lay beliefs and stigma associated with intellectual disability in an Arab context is almost non-existent. This study examined awareness of intellectual disability, causal and intervention beliefs, and social distance in Kuwait. These were compared to a UK sample to examine differences in lay conceptions across cultures. 537 university students in Kuwait and 571 students in the UK completed a web-based survey asking them to respond to a diagnostically unlabelled vignette of a man presenting with symptoms of mild intellectual disability. They rated their agreement with 22 causal items as possible causes for the difficulties depicted in the vignette, the perceived helpfulness of 22 interventions, and four social distance items, using a 7-point Likert scale. Only 8% of students in Kuwait, compared with 33% of students in the UK, identified possible intellectual disability in the vignette. Medium to large differences between the two samples were observed on seven of the causal items and 10 of the intervention items. Against predictions, social distance did not differ. Causal beliefs mediated the relationship between recognition of intellectual disability and social distance, but their mediating role differed by sample. The findings are discussed in relation to cultural practices and values, and in relation to attribution theory. In view of the apparent positive effect of awareness of the symptoms of intellectual disability on social distance, both directly and through the mediating effects of causal beliefs, promoting increased awareness of intellectual disability and inclusive practices should be a priority, particularly in countries such as Kuwait where awareness appears to be low. Copyright © 2013 Elsevier Ltd. All rights reserved.
Zibara, Kazem; Zein, Nabil El; Sabra, Mirna; Hneino, Mohammad; Harati, Hayat; Mohamed, Wael; Kobeissy, Firas H.; Kassem, Nouhad
2017-01-01
Thyroxine (T4) enters the brain either directly across the blood–brain barrier (BBB) or indirectly via the choroid plexus (CP), which forms the blood–cerebrospinal fluid barrier (B-CSF-B). In this study, using isolated perfused sheep CP with single-circulation paired tracer and steady-state techniques, T4 transport mechanisms from blood into the lateral ventricle CP were characterized as the first step in transfer across the B-CSF-B. After removal of the sheep brain, the CPs were perfused with 125I-T4 and 14C-mannitol. Unlabeled T4 was applied during the single tracer technique to assess the mode of maximum uptake (Umax) and the net uptake (Unet) on the blood side of the CP. In order to characterize T4 protein transporters, steady-state extraction of 125I-T4 was measured in the presence of different inhibitors such as probenecid, verapamil, BCH, or indomethacin. Increasing the concentration of unlabeled T4 resulted in a significant reduction in Umax%, reflected by a complete inhibition of T4 uptake into the CP; the obtained Unet% decreased as the concentration of unlabeled T4 increased. The addition of probenecid caused a significant inhibition of T4 transport in comparison to control, reflecting the presence of a carrier-mediated process at the basolateral side of the CP and the involvement of multidrug resistance-associated proteins (MRPs: MRP1 and MRP4) and organic anion transporting polypeptides (Oatp1, Oatp2, and Oatp14). Moreover, verapamil, the P-glycoprotein (P-gp) substrate, resulted in a ~34% decrease in the net extraction of T4, indicating that MDR1 contributes to T4 entry into the CSF. Finally, the inhibition in the net extraction of T4 caused by BCH or indomethacin suggests, respectively, a role for the amino acid "L" system and MRP1/Oatp1 in mediating T4 transfer.
The presence of a carrier-mediated transport mechanism for cellular uptake on the basolateral membrane of the CP, mainly P-gp and Oatp2, would account for the efficient T4 transport from blood to CSF. The current study highlights a carrier-mediated transport mechanism for T4 movement from blood to brain at the basolateral side of B-CSF-B/CP, as an alternative route to BBB. PMID:28588548
A visual training tool for the Photoload sampling technique
Violet J. Holley; Robert E. Keane
2010-01-01
This visual training aid is designed to provide Photoload users a tool to increase the accuracy of fuel loading estimations when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...
ERIC Educational Resources Information Center
Bakar, Ab Rahim; Mohamed, Shamsiah; Hamzah, Ramlah
2013-01-01
This study was performed to identify the employability skills of technical students from the Industrial Training Institutes (ITI) and Indigenous People's Trust Council (MARA) Skills Training Institutes (IKM) in Malaysia. The study sample consisted of 850 final year trainees of IKM and ITI. The sample was chosen by a random sampling procedure from…
Gao, Xiao; Jackson, Todd; Chen, Hong; Liu, Yanmei; Wang, Ruiqiang; Qian, Mingyi; Huang, Xiting
2010-04-01
This nationwide survey of professional training for mental health practitioners (i.e., psychiatrists, psychiatric nurses, clinical psychologists, and counselors working in industry, prisons, and schools) investigated sociodemographic characteristics, training experiences, and training perceptions of mental health service providers in China. Participants included service providers recruited from hospitals, universities, high/middle schools, private mental health service organizations, and counseling centers operated by government, prisons, or corporations from 25 provinces and four cities directly under the Central Government in China. In order to obtain a broad and representative sample, stratified multi-stage sampling procedures were utilized. From a total of 2000 questionnaire packets distributed via regular mail, the final sample comprised 1391 respondents (525 men, 866 women). About 70% of the sample had a bachelor's level education or lower degree, only 36.4% had majored in psychology, and nearly 60% were employed part time. Fewer than half of the participants were certified, and nearly 40% reported no affiliation with any 'professional' association. Training and continuing education programs were reported to be primarily short term and theory-based, with limited assessment and follow-up. A high proportion of respondents reported having received no supervision or opportunities for case conferences or consultations. With respect to perceptions of and satisfaction with training, many agreed that training had been very helpful to their work, but the quality of supervision and the capability of supervisors were common issues of concern. In light of these findings, three general recommendations were made to improve the quality of training among mental health service providers in China. First, increased input from professional organizations of the various disciplines involved in mental health service provision is needed to guide training and shape policy.
Second, universities and colleges should have a more vital role in developing accredited professional training programs. Finally, on-the-job supervision and continuing education should be mandated within discipline-specific training programs. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Zinke, Katharina; Zeintl, Melanie; Rose, Nathan S.; Putzmann, Julia; Pydde, Andrea; Kliegel, Matthias
2014-01-01
Recent studies suggest that working memory training may benefit older adults; however, findings regarding training and transfer effects are mixed. The current study aimed to investigate the effects of a process-based training intervention in a diverse sample of older adults and explored possible moderators of training and transfer effects. For…
Short-Term Effects of Different Loading Schemes in Fitness-Related Resistance Training.
Eifler, Christoph
2016-07-01
Eifler, C. Short-term effects of different loading schemes in fitness-related resistance training. J Strength Cond Res 30(7): 1880-1889, 2016-The purpose of this investigation was to analyze the short-term effects of different loading schemes in fitness-related resistance training and to identify the most effective loading method for advanced recreational athletes. The investigation was designed as a longitudinal field-test study. Two hundred healthy mature subjects with at least 12 months' experience in resistance training were randomized into 4 samples of 50 subjects each. Gender distribution was homogeneous across all samples. Training effects were quantified by 10 repetition maximum (10RM) and 1 repetition maximum (1RM) testing (pre-post-test design). Over a period of 6 weeks, a standardized resistance training protocol with 3 training sessions per week was carried out. Testing and training included 8 resistance training exercises in a standardized order. The following loading schemes were randomly matched to the samples: constant load (CL) with constant volume of repetitions, increasing load (IL) with decreasing volume of repetitions, decreasing load (DL) with increasing volume of repetitions, and daily changing load (DCL) with daily changing volume of repetitions. For all loading schemes, significant strength gains (p < 0.001) were noted for all resistance training exercises and both dependent variables (10RM, 1RM). In all cases, DCL produced significantly higher strength gains (p < 0.001) than CL, IL, and DL. There were no significant differences in strength gains between CL, IL, and DL. The present data indicate that resistance training following DCL is more effective for advanced recreational athletes than CL, IL, or DL. Considering that DCL is widely unknown in fitness-related resistance training, the present data indicate that there is potential for improving resistance training in commercial fitness clubs.
NASA Astrophysics Data System (ADS)
Guo, Yiqing; Jia, Xiuping; Paull, David
2018-06-01
The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required for classifier training on an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies than two state-of-the-art model transfer algorithms. When training data were insufficient, the overall classification accuracy of the incoming image improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
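The warm-start idea behind this kind of sequential training can be sketched with a plain logistic-regression classifier standing in for the SVM (a simplification of the paper's method, not the authors' algorithm; all data, names, and parameter values below are invented):

```python
import numpy as np

def train_logreg(X, y, w0=None, lr=0.1, steps=500):
    """Batch gradient descent on logistic loss. Passing w0 warm-starts the
    classifier from a previous (or trend-predicted) solution, so only a
    handful of new labeled samples are needed to fine-tune it -- a
    simplified stand-in for the SCT idea."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient step
    return w

rng = np.random.default_rng(0)

# image at time t: two classes, plenty of labels
Xt = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
yt = np.array([0] * 50 + [1] * 50)
w_prev = train_logreg(Xt, yt)

# image at time t+1: slight spectral drift, only 6 labeled samples;
# fine-tune from the previous classifier instead of training from scratch
Xn = np.vstack([rng.normal(-0.8, 0.3, (3, 2)), rng.normal(1.2, 0.3, (3, 2))])
yn = np.array([0, 0, 0, 1, 1, 1])
w_new = train_logreg(Xn, yn, w0=w_prev, steps=100)
```

The fine-tuned `w_new` still separates the drifted classes despite the tiny new training set, which is the benefit the abstract reports for the incoming image.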
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steger, J.L.; Bursey, J.T.; Merrill, R.G.
1999-03-01
This report presents the results of laboratory studies to develop and evaluate a method for the sampling and analysis of phosgene from stationary sources of air emissions, using diethylamine (DEA) in toluene as the collection medium. The method extracts stack gas from emission sources and stabilizes the reactive gas for subsequent analysis. DEA was evaluated both in a benchtop study and in a laboratory train spiking study, and this report includes results for both. Benchtop studies to evaluate the suitability of DEA for collecting and analyzing phosgene investigated five variables: storage time, DEA concentration, moisture/pH, phosgene concentration, and sample storage temperature. Prototype sampling train studies were performed to determine whether the benchtop chemical studies were transferable to a Modified Method 5 sampling train collecting phosgene in the presence of clean air mixed with typical stack gas components. Four conditions, which varied the moisture and phosgene spike, were evaluated in triplicate. In addition to research results, the report includes a detailed draft method for sampling and analysis of phosgene from stationary source emissions.
1999-08-01
of 111In-DTPA-hEGF in a compartment and a rate of elimination corresponding to the radioactive decay of the radionuclide, indium-111. = A0/A, where I...on the growth rate of MDA-MB-468 or MCF-7 (1.5 × 10⁴ EGFR/cell) cells was determined following treatment in vitro with 111In-DTPA-hEGF, unlabelled...the nucleus within 24 hours. Chromatin contained 10% of internalized radioactivity. The growth rate of MDA-MB-468 cells was decreased 3-fold by
CCR 20th anniversary commentary: Radioactive Drones for B-cell lymphoma.
Knox, Susan J; Levy, Ronald
2015-02-01
In a study published in the March 1, 1996, issue of Clinical Cancer Research, Knox and colleagues (1) demonstrated the safety and efficacy of Yttrium-90 ((90)Y)-anti-CD20 monoclonal antibody therapy, as well as the benefit of preinfusion of unlabeled antibody on radiolabeled antibody biodistribution. Subsequent clinical trials with this radiolabeled antibody led to regulatory approval of this treatment for B-cell lymphoma. See related article by Knox et al., Clin Cancer Res 1996;2(3) Mar 1996; 457-70. ©2015 American Association for Cancer Research.
Reiffsteck, A; Dehennin, L; Scholler, R
1982-11-01
Estrone, 2-methoxyestrone, and estradiol-17β have been definitively identified in the seminal plasma of man, bull, boar, and stallion by high-resolution gas chromatography with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues, monitoring the molecular ions of the trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported together with a statistical evaluation of accuracy and precision.
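At its core, isotope-dilution quantitation reduces to a simple proportion: the amount of endogenous analyte equals the spiked amount of deuterated internal standard multiplied by the measured unlabeled-to-labeled ion-intensity ratio. A minimal sketch with invented values (real GC-MS work also corrects for isotopic purity and spectral overlap, which this sketch omits):

```python
def isotope_dilution_amount(spike_ng, ratio_unlabeled_to_labeled):
    """Amount of endogenous analyte (ng) from the measured intensity ratio
    of the unlabeled molecular ion to that of the deuterated internal
    standard. Simplified: assumes no spectral overlap between the labeled
    and unlabeled ions and an isotopically pure standard."""
    return spike_ng * ratio_unlabeled_to_labeled

# e.g. 10 ng of deuterated estrone spiked; measured ion ratio 0.45
amount = isotope_dilution_amount(10.0, 0.45)  # -> 4.5 ng of endogenous estrone
```

Because analyte and standard co-elute and ionize nearly identically, the ratio is insensitive to losses during workup, which is why the abstract can report a statistical evaluation of accuracy and precision.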
Photoacoustic microscopy of single cells employing an intensity-modulated diode laser
NASA Astrophysics Data System (ADS)
Langer, Gregor; Buchegger, Bianca; Jacak, Jaroslaw; Dasa, Manoj Kumar; Klar, Thomas A.; Berer, Thomas
2018-02-01
In this work, we employ frequency-domain photoacoustic microscopy to obtain photoacoustic images of labeled and unlabeled cells. The photoacoustic microscope is based on an intensity-modulated diode laser in combination with a focused piezo-composite transducer and allows imaging of labeled cells without severe photo-bleaching. We demonstrate that frequency-domain photoacoustic microscopy realized with a diode laser is capable of recording photoacoustic images of single cells with sub-µm resolution. As examples, we present images of undyed human red blood cells, stained human epithelial cells, and stained yeast cells.
1983-11-23
during their replication in B. subtilis cells. IV. CLONING THE 0.3 GENE OF COLIPHAGE T7 IN SPP1. We are currently investigating the fidelity of synthesis...of the product of the 0.3 gene of coliphage T7 in infected cells of E. coli by determining the frequency of misincorporation of 35S-cysteine into this...using a sealed hydrolysis chamber. HCl was evaporated under vacuum and the hydrolysate resuspended in water. Unlabeled cysteine (12 ug) and methionine
New trends and applications in carboxylation for isotope chemistry.
Bragg, Ryan A; Sardana, Malvika; Artelsmair, Markus; Elmore, Charles S
2018-05-08
Carboxylations are an important method for the incorporation of isotopically labeled ¹⁴CO₂ into molecules. This manuscript reviews labeled carboxylations since 2010 and presents a perspective on the potential of recent unlabeled methodology for labeled carboxylations. The perspective portion of the manuscript is broken into three major sections based on product type: aryl carboxylic acids, benzyl carboxylic acids, and alkyl carboxylic acids; each of those sections is further subdivided by substrate. © 2018 AstraZeneca. Journal of Labelled Compounds and Radiopharmaceuticals Published by John Wiley & Sons, Ltd.