Sample records for large training set

  1. LVQ and backpropagation neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

    Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural-network-based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can make the time required to train the network very long, and the network may also fail to generalize well. To reduce the size of the training sets, the SSME test-firing data are reduced using a learning vector quantization (LVQ)-based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using the complete data. The LVQ can also be used as a function approximator itself; its performance as a function approximator using reduced training sets is presented and compared with that of the backpropagation network.
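
    The LVQ-based training-set reduction described above can be sketched in a few lines. This is a minimal, generic LVQ1 illustration, not the authors' exact procedure; the function name, the per-class prototype count, and the decaying learning rate are all assumptions made for the sketch.

```python
import random

def lvq_reduce(data, labels, protos_per_class, epochs=30, lr=0.1, seed=0):
    """Compress a labeled training set into a small LVQ1 codebook."""
    rng = random.Random(seed)
    protos, proto_labels = [], []
    for c in sorted(set(labels)):
        members = [x for x, y in zip(data, labels) if y == c]
        for m in rng.sample(members, protos_per_class):
            protos.append(list(m))
            proto_labels.append(c)
    for epoch in range(epochs):
        step = lr * (1 - epoch / epochs)  # decaying learning rate
        for x, y in zip(data, labels):
            # nearest prototype in squared Euclidean distance
            j = min(range(len(protos)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(x, protos[k])))
            sign = 1.0 if proto_labels[j] == y else -1.0  # attract same class, repel other
            protos[j] = [p + sign * step * (a - p) for p, a in zip(protos[j], x)]
    return protos, proto_labels
```

    The handful of prototypes then stands in for the full training set, e.g. via a nearest-prototype classification rule.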

  2. Training set extension for SVM ensemble in P300-speller with familiar face paradigm.

    PubMed

    Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou

    2018-03-27

    P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a collected small training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.
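
    The superpose-and-average extension step lends itself to a compact sketch. The version below is a hypothetical element-wise simplification, assuming each sequence is a list of equally sized epochs; the study itself averages corresponding P300 trials across two stimulation sequences.

```python
def extend_training_set(seq_a, seq_b):
    """Extend a small training set: keep the original epochs and add the
    element-wise average of corresponding epochs from the two sequences."""
    extended = [list(e) for e in seq_a] + [list(e) for e in seq_b]
    for ea, eb in zip(seq_a, seq_b):
        extended.append([(a + b) / 2.0 for a, b in zip(ea, eb)])
    return extended
```

    Averaging pairs of trials also attenuates uncorrelated noise, which is part of why the averaged epochs make usable additional training examples.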

  3. Optimizing support vector machine learning for semi-arid vegetation mapping by using clustering analysis

    NASA Astrophysics Data System (ADS)

    Su, Lihong

    In remote sensing communities, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM algorithms, the SVM classification decision function is fully determined by the support vectors, which form a subset of the training set. In this regard, a solution to optimize SVM learning is to efficiently reduce training sets. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% of its size using the proposed approach. Maximum likelihood classification (MLC) was also applied to the reduced training sets. The results show that MLC can likewise maintain the classification accuracy. This implies that the most informative data instances are retained by this approach.
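
    A toy version of clustering-based reduction might look as follows. Centroid linkage and the greedy merge loop are assumptions for illustration only; a real pipeline would cluster within each class and feed the resulting centroids to the SVM trainer.

```python
def cluster_reduce(points, n_keep):
    """Greedy agglomerative reduction with centroid linkage: repeatedly
    merge the two closest clusters until n_keep centroids remain."""
    clusters = [(list(p), 1) for p in points]  # (centroid, member count)
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    while len(clusters) > n_keep:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                dist = d2(clusters[i][0], clusters[j][0])
                if best is None or dist < best[0]:
                    best = (dist, i, j)
        _, i, j = best
        (ci, ni), (cj, nj) = clusters[i], clusters[j]
        # size-weighted centroid of the merged pair
        merged = ([(a * ni + b * nj) / (ni + nj) for a, b in zip(ci, cj)], ni + nj)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return [c for c, _ in clusters]
```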

  4. A Mine of Information: Can Sports Analytics Provide Wisdom From Your Data?

    PubMed

    Passfield, Louis; Hopker, James G

    2017-08-01

    This paper explores the notion that the availability and analysis of large data sets have the capacity to improve practice and change the nature of science in the sport and exercise setting. The increasing use of data and information technology in sport is giving rise to this change. Web sites hold large data repositories, and the development of wearable technology, mobile phone applications, and related instruments for monitoring physical activity, training, and competition provide large data sets of extensive and detailed measurements. Innovative approaches conceived to more fully exploit these large data sets could provide a basis for more objective evaluation of coaching strategies and new approaches to how science is conducted. An emerging discipline, sports analytics, could help overcome some of the challenges involved in obtaining knowledge and wisdom from these large data sets. Examples of where large data sets have been analyzed, to evaluate the career development of elite cyclists and to characterize and optimize the training load of well-trained runners, are discussed. Careful verification of large data sets is time consuming and imperative before useful conclusions can be drawn. Consequently, it is recommended that prospective studies be preferred over retrospective analyses of data. It is concluded that rigorous analysis of large data sets could enhance our knowledge in the sport and exercise sciences, inform competitive strategies, and allow innovative new research and findings.

  5. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
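
    The partitioned evaluation of the error function and gradient can be sketched without any PVM machinery. The schematic below sums per-partition gradients as a master would; the function names and the 1-D least-squares example are illustrative assumptions, and the list comprehension stands in for work that would run on separate processors.

```python
def distributed_gradient(partitions, grad_fn, params):
    """Data-parallel gradient evaluation: each (conceptual) worker computes
    the gradient over its own partition; the master sums the partial results."""
    partial = [grad_fn(part, params) for part in partitions]  # would run in parallel
    return [sum(g[i] for g in partial) for i in range(len(params))]

def least_squares_grad(part, params):
    """Gradient of sum (w*x - y)^2 for a 1-D linear model (illustrative)."""
    w = params[0]
    return [sum(2.0 * (w * x - y) * x for x, y in part)]
```

    Because the total gradient is a sum over training examples, splitting the set across machines and summing partial gradients is exact, which is what gives the approach its large granularity and low synchronization.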

  6. MATE: Machine Learning for Adaptive Calibration Template Detection

    PubMed Central

    Donné, Simon; De Vylder, Jonas; Goossens, Bart; Philips, Wilfried

    2016-01-01

    The problem of camera calibration is two-fold. On the one hand, the parameters are estimated from known correspondences between the captured image and the real world. On the other, these correspondences themselves—typically in the form of chessboard corners—need to be found. Many distinct approaches for this feature template extraction are available, often of large computational and/or implementation complexity. We exploit the generalized nature of deep learning networks to detect checkerboard corners: our proposed method is a convolutional neural network (CNN) trained on a large set of example chessboard images, which generalizes several existing solutions. The network is trained explicitly against noisy inputs, as well as inputs with large degrees of lens distortion. The trained network that we evaluate is as accurate as existing techniques while offering improved execution time and increased adaptability to specific situations with little effort. The proposed method is not only robust against the types of degradation present in the training set (lens distortions, and large amounts of sensor noise), but also to perspective deformations, e.g., resulting from multi-camera set-ups. PMID:27827920

  7. Prediction of protein tertiary structure from sequences using a very large back-propagation neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X.; Wilcox, G.L.

    1993-12-31

    We have implemented large-scale back-propagation neural networks on a 544-node Connection Machine, CM-5, using the C language in MIMD mode. The program running on 512 processors performs backpropagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained, given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins) converge to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.

  8. Data Programming: Creating Large Training Sets, Quickly.

    PubMed

    Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher

    2016-12-01

    Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
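
    The labeling-function idea is easy to sketch. The paper denoises the functions' votes with a learned generative model; the unweighted vote below is a deliberate simplification, and the example functions are hypothetical (a vote of 0 means the function abstains).

```python
def label_with_functions(examples, labeling_functions):
    """Combine noisy labeling functions (+1, -1, or 0 = abstain) by
    unweighted vote; 0 is returned when votes cancel or all abstain."""
    labels = []
    for x in examples:
        total = sum(lf(x) for lf in labeling_functions)
        labels.append(1 if total > 0 else (-1 if total < 0 else 0))
    return labels
```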

  9. Data Programming: Creating Large Training Sets, Quickly

    PubMed Central

    Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher

    2018-01-01

    Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can “denoise” the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable. PMID:29872252

  10. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA introduces some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (the proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar trend was observed for pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification with a small training set ratio; by contrast, a large training set ratio is required if classification is performed with a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  11. Comparative Initial and Sustained Engagement in Web-based Training by Behavioral Healthcare Providers in New York State.

    PubMed

    Talley, Rachel; Chiang, I-Chin; Covell, Nancy H; Dixon, Lisa

    2018-06-01

    Improved dissemination is critical to implementation of evidence-based practice in community behavioral healthcare settings. Web-based training modalities are a promising strategy for dissemination of evidence-based practice in community behavioral health settings. Initial and sustained engagement of these modalities in large, multidisciplinary community provider samples is not well understood. This study evaluates comparative engagement and user preferences by provider type in a web-based training platform in a large, multidisciplinary community sample of behavioral health staff in New York State. Workforce make-up among platform registrants was compared to the general NYS behavioral health workforce. Training completion by functional job type was compared to characterize user engagement and preferences. Frequently completed modules were classified by credit and requirement incentives. High initial training engagement across professional role was demonstrated, with significant differences in initial and sustained engagement by professional role. The most frequently completed modules across functional job types contained credit or requirement incentives. The analysis demonstrated that high engagement of a web-based training in a multidisciplinary provider audience can be achieved without tailoring content to specific professional roles. Overlap between frequently completed modules and incentives suggests a role for incentives in promoting engagement of web-based training. These findings further the understanding of strategies to promote large-scale dissemination of evidence-based practice in community behavioral health settings.

  12. Linear Vector Quantisation and Uniform Circular Arrays based decoupled two-dimensional angle of arrival estimation

    NASA Astrophysics Data System (ADS)

    Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.

    2017-05-01

    Artificial neural network (ANN)-based models are efficient ways of performing source localisation. However, very large training sets are needed to precisely estimate two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Linear Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.

  13. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    PubMed

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposes a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of these five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments with different training set sizes were carried out on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one third of the gestures of the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for 110 words respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50-60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  14. Jig For Stereoscopic Photography

    NASA Technical Reports Server (NTRS)

    Nielsen, David J.

    1990-01-01

    Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely, distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.

  15. Sample Selection for Training Cascade Detectors.

    PubMed

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples, while the negative set must represent anything except the object of interest; as a result, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average a better partial AUC and a smaller standard deviation than the other compared cascade detectors.
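
    The between-stage selection can be sketched as hard-negative mining, assuming a stage classifier that returns a real-valued score with positives above zero. The ranking-by-score criterion here is an illustrative stand-in for the paper's "most informative" measure, and the names are hypothetical.

```python
def select_hard_negatives(stage_score, negatives, n_select):
    """From true negatives, keep those the current stage wrongly scores as
    positive (score > 0), ranked by how confident the mistake is."""
    false_positives = [(stage_score(x), x) for x in negatives if stage_score(x) > 0]
    false_positives.sort(key=lambda t: t[0], reverse=True)  # worst mistakes first
    return [x for _, x in false_positives[:n_select]]
```

    Feeding only these confidently misclassified negatives to the next stage keeps the training set balanced while concentrating effort where the cascade currently fails.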

  16. Machine learning of molecular properties: Locality and active learning

    NASA Astrophysics Data System (ADS)

    Gubaev, Konstantin; Podryabinkin, Evgeny V.; Shapeev, Alexander V.

    2018-06-01

    In recent years, machine learning techniques have shown great potential in various problems from a multitude of disciplines, including materials design and drug discovery. Their high computational speed on the one hand, and accuracy comparable to that of density functional theory on the other, make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach chemical accuracy and also show large errors for the so-called outliers—the out-of-sample molecules not well represented in the training set. In the present paper, we propose a new machine learning algorithm for predicting molecular properties that addresses these two issues: it is based on a local model of interatomic interactions, providing high accuracy when trained on relatively small training sets, and an active learning algorithm that optimally chooses the training set and significantly reduces the errors for the outliers. We compare our model to other state-of-the-art algorithms from the literature on widely used benchmark tests.
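
    Optimal training-set choice can be approximated by simple geometric heuristics. The greedy max-min (farthest-point) selection below is only a stand-in for the authors' active learning criterion, shown to illustrate the general idea of picking maximally informative, mutually distant samples; the names and the distance callback are assumptions.

```python
def farthest_point_selection(pool, k, dist):
    """Greedy max-min selection: repeatedly add the pool point farthest
    from the already chosen set."""
    chosen = [pool[0]]
    while len(chosen) < k:
        nxt = max((p for p in pool if p not in chosen),
                  key=lambda p: min(dist(p, c) for c in chosen))
        chosen.append(nxt)
    return chosen
```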

  17. Biostatistical and medical statistics graduate education

    PubMed Central

    2014-01-01

    The development of graduate education in biostatistics and medical statistics is discussed in the context of training within a medical center setting. The need for medical researchers to employ a wide variety of statistical designs in clinical, genetic, basic science, and translational settings justifies the ongoing integration of biostatistical training into medical center educational settings and informs its content. The integration of large-data issues is a challenge. PMID:24472088

  18. Selection of appropriate training and validation set chemicals for modelling dermal permeability by U-optimal design.

    PubMed

    Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E

    2013-01-01

    Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way, and solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].

  19. Multilingual Twitter Sentiment Classification: The Role of Human Annotators

    PubMed Central

    Mozetič, Igor; Grčar, Miha; Smailović, Jasmina

    2016-01-01

    What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered. PMID:27149621
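
    Annotator agreement measures like those used to quantify training-data quality are straightforward to compute. Cohen's kappa for a pair of annotators is a representative example (the study also monitors self-agreement and uses other measures):

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two annotators, corrected for chance.
    a and b are parallel lists of labels over the same items."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    cats = set(a) | set(b)
    # chance agreement from each annotator's marginal label frequencies
    p_chance = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return 1.0 if p_chance == 1 else (p_observed - p_chance) / (1 - p_chance)
```

    A kappa near the inter-annotator ceiling signals that a larger training set, rather than a different model, is what would improve the classifier.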

  20. Online learning from input versus offline memory evolution in adult word learning: effects of neighborhood density and phonologically related practice.

    PubMed

    Storkel, Holly L; Bontempo, Daniel E; Pak, Natalie S

    2014-10-01

    In this study, the authors investigated adult word learning to determine how neighborhood density and practice across phonologically related training sets influence online learning from input during training versus offline memory evolution during no-training gaps. Sixty-one adults were randomly assigned to learn low- or high-density nonwords. Within each density condition, participants were trained on one set of words and then were trained on a second set of words, consisting of phonological neighbors of the first set. Learning was measured in a picture-naming test. Data were analyzed using multilevel modeling and spline regression. Steep learning during input was observed, with new words from dense neighborhoods and new words that were neighbors of recently learned words (i.e., second-set words) being learned better than other words. In terms of memory evolution, large and significant forgetting was observed during 1-week gaps in training. Effects of density and practice during memory evolution were opposite of those during input. Specifically, forgetting was greater for high-density and second-set words than for low-density and first-set words. High phonological similarity, regardless of source (i.e., known words or recent training), appears to facilitate online learning from input but seems to impede offline memory evolution.

  1. Deep neural nets as a method for quantitative structure-activity relationships.

    PubMed

    Ma, Junshui; Sheridan, Robert P; Liaw, Andy; Dahl, George E; Svetnik, Vladimir

    2015-02-23

    Neural networks were widely used for quantitative structure-activity relationships (QSAR) in the 1990s. Because of various practical issues (e.g., slowness on large problems, difficulty of training, proneness to overfitting), they were superseded by more robust methods like support vector machines (SVM) and random forests (RF), which arose in the early 2000s. The last 10 years have witnessed a revival of neural networks in the machine learning community thanks to new methods for preventing overfitting, more efficient training algorithms, and advances in computer hardware. In particular, deep neural nets (DNNs), i.e., neural nets with more than one hidden layer, have found great success in many applications, such as computer vision and natural language processing. Here we show that DNNs can routinely make better prospective predictions than RF on a set of large, diverse QSAR data sets taken from Merck's drug discovery effort. The number of adjustable parameters needed for DNNs is fairly large, but our results show that it is not necessary to optimize them for individual data sets: a single set of recommended parameters can achieve better performance than RF for most of the data sets we studied. The usefulness of the parameters is demonstrated on additional data sets not used in the calibration. Although training DNNs is still computationally intensive, using graphics processing units (GPUs) makes this issue manageable.

  2. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. 
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
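    The spatial-overlap metric reported above, the Dice coefficient, is straightforward to compute from two binary segmentation masks. A minimal numpy sketch (an illustration of the metric only, not of SegAdapter itself):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: a 4-voxel "manual" mask vs. a 6-voxel "automated" mask
# that share 4 voxels, so Dice = 2*4 / (4 + 6) = 0.8.
manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 1:3] = True
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:4] = True
print(dice_coefficient(manual, auto))  # 0.8
```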

  4. Missed Opportunities for Improving Nutrition Through Institutional Food: The Case for Food Worker Training

    PubMed Central

    Deutsch, Jonathan; Patinella, Stefania; Freudenberg, Nicholas

    2013-01-01

    The institutional food sector—including food served in schools, child care settings, hospitals, and senior centers—is a largely untapped resource for public health that may help to arrest increasing rates of obesity and diet-related health problems. To make this case, we estimated the reach of a diverse institutional food sector in 1 large municipality, New York City, in 2012, and explored the potential for improving institutional food by building the skills and nutritional knowledge of foodservice workers through training. Drawing on the research literature and preliminary data collected in New York City, we discuss the dynamics of nutritional decision-making in these settings. Finally, we identify opportunities and challenges associated with training the institutional food workforce to enhance nutrition and health. PMID:23865653

  5. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. Such classifiers are developed on a training set, the set of samples used to fit the classifier. How many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is a challenging question. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is also adapted to determine, ex post facto, whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
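    The paper's model-based formula is not reproduced here, but the qualitative behaviour it captures (classifier accuracy rising with training-set size toward a ceiling set by the effect size and feature count) can be illustrated with a small simulation; the nearest-centroid classifier and all parameters below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(n_train, delta=1.0, n_genes=20, n_test=2000):
    """Test accuracy of a nearest-centroid classifier trained on n_train
    samples per class; delta is the standardized mean difference on every
    gene. Illustrative only, not the authors' model-based calculation."""
    mu = np.full(n_genes, delta)  # class 1 mean; class 0 mean is zero
    x0 = rng.normal(0.0, 1.0, (n_train, n_genes))
    x1 = rng.normal(mu, 1.0, (n_train, n_genes))
    c0, c1 = x0.mean(axis=0), x1.mean(axis=0)  # estimated centroids
    t0 = rng.normal(0.0, 1.0, (n_test, n_genes))
    t1 = rng.normal(mu, 1.0, (n_test, n_genes))
    pred0 = np.linalg.norm(t0 - c0, axis=1) < np.linalg.norm(t0 - c1, axis=1)
    pred1 = np.linalg.norm(t1 - c1, axis=1) < np.linalg.norm(t1 - c0, axis=1)
    return (pred0.mean() + pred1.mean()) / 2

# Accuracy grows with training-set size, then saturates.
for n in (3, 10, 50):
    print(n, round(simulate_accuracy(n), 3))
```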

  6. The efficacy of a whole body sprint-interval training intervention in an office setting: A feasibility study.

    PubMed

    Gurd, Brendon J; Patel, Jugal; Edgett, Brittany A; Scribbans, Trisha D; Quadrilatero, Joe; Fischer, Steven L

    2018-05-28

    Whole body sprint-interval training (WB-SIT) represents a mode of exercise training that is both time-efficient and does not require access to an exercise facility. The current study examined the feasibility of implementing a WB-SIT intervention in a workplace setting. A total of 747 employees from a large office building were invited to participate, with 31 individuals enrolled in the study. Anthropometrics, aerobic fitness, core and upper body strength, and lower body mobility were assessed before and after a 12-week exercise intervention consisting of 2-4 training sessions per week. Each training session required participants to complete eight 20-second intervals (separated by 10 seconds of rest) of whole body exercise. The participation rate was 4.2%, while the response rate was 35% (11/31 participants completed post-training testing). In responders, compliance with prescribed training was 83±17%, and significant (p < 0.05) improvements were observed in aerobic fitness, push-up performance, and lower body mobility. These results demonstrate the efficacy of WB-SIT for improving fitness and mobility in an office setting, but highlight the difficulties of achieving high rates of participation and response in this setting.

  7. Using National Education Longitudinal Data Sets in School Counseling Research

    ERIC Educational Resources Information Center

    Bryan, Julia A.; Day-Vines, Norma L.; Holcomb-McCoy, Cheryl; Moore-Thomas, Cheryl

    2010-01-01

    National longitudinal databases hold much promise for school counseling researchers. Several of the more frequently used data sets, possible professional implications, and strategies for acquiring training in the use of large-scale national data sets are described. A 6-step process for conducting research with the data sets is explicated:…

  8. Object Classification With Joint Projection and Low-Rank Dictionary Learning.

    PubMed

    Foroughi, Homa; Ray, Nilanjan; Hong Zhang

    2018-02-01

    For an object classification system, the most critical obstacles to real-world application are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature fail when the training samples are heavily occluded, corrupted, or subject to significant illumination or viewpoint variations. Besides, most existing methods, and especially deep learning-based methods, need large training sets to achieve satisfactory recognition performance. Although pre-training a network on a generic large-scale data set and fine-tuning it on the small target data set is a widely used technique, this does not help when the contents of the base and target data sets are very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on the sub-dictionaries to reduce the redundancy among them, and also to make them robust to variations and outliers. To preserve the intrinsic structure of the data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification on small-sized data sets that contain a considerable amount of variation and may have high-dimensional feature vectors.

  9. Creation of a Unified Set of Core-Collapse Supernovae for Training of Photometric Classifiers

    NASA Astrophysics Data System (ADS)

    D'Arcy Kenworthy, William; Scolnic, Daniel; Kessler, Richard

    2017-01-01

    One of the key tasks for future supernova cosmology analyses is to photometrically distinguish type Ia supernovae (SNe) from their core-collapse (CC) counterparts. Training classification programs for this purpose requires a large number of core-collapse SNe, yet only a handful are used in current programs. We plan to use the large number of CC lightcurves available in the Open Supernova Catalog (OSC). Because these data are scraped from many different surveys, they are given in a number of photometric systems with different calibrations and filters. We therefore created a program to fit smooth lightcurves (as a function of time) to photometric observations of arbitrary SNe. The Supercal method is then used to translate the smoothed lightcurves to a single photometric system. We can thus compile a training set of 782 supernovae, of which 127 are not type Ia. These smoothed lightcurves are also being contributed upstream to the OSC as derived data.
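    The core fitting step above (a smooth lightcurve as a function of time, fit to sparse photometric observations) can be sketched in a few lines. The low-order polynomial and the schematic lightcurve shape below are illustrative stand-ins, not the authors' actual smoothing model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse, noisy photometry of one supernova in one band (days, magnitudes).
t_obs = np.sort(rng.uniform(0, 60, 25))
true_mag = 19.0 + 0.002 * (t_obs - 20.0) ** 2  # schematic lightcurve shape
mag_obs = true_mag + rng.normal(0, 0.05, t_obs.size)

# Fit a smooth curve vs. time and evaluate it on a uniform grid, so that
# observations from different surveys could be compared on common epochs.
coeffs = np.polyfit(t_obs, mag_obs, deg=3)
t_grid = np.linspace(0, 60, 121)
mag_smooth = np.polyval(coeffs, t_grid)

peak_day = t_grid[np.argmin(mag_smooth)]  # brightest epoch = minimum magnitude
print(round(float(peak_day), 1))          # should recover roughly day 20
```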

  10. Training to raise staff awareness about safeguarding children.

    PubMed

    Fleming, Jane

    2015-04-01

    To improve outcomes for children and young people, health organisations are required to train all staff in children's safeguarding. This creates difficulties for large, complex organisations where most staff provide services to the adult population. Heart of England NHS Foundation Trust is a large acute and community trust that had difficulties in engaging staff in children's safeguarding training. Compliance rates for clinical staff trained in children's safeguarding were low and needed to be addressed. This article sets out why safeguarding training is important for all staff and how the trust achieved staff engagement and improved compliance rates. To evaluate, maintain and further develop safeguarding knowledge, understanding, skills, attitudes and behaviour, additional resources are planned to allow access to learning materials in a variety of formats.

  11. Automatic Earthquake Detection by Active Learning

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
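    The pool-based, human-in-the-loop idea described above can be reduced to a minimal sketch: query the unlabeled sample the current model is least certain about, obtain its label, and retrain. The 1-D "waveform feature", the threshold classifier, and the oracle function standing in for the human expert are all illustrative assumptions, not the authors' detector:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unlabeled pool: 1-D features; class 1 ("earthquake") iff feature > 0.
pool = rng.uniform(-1, 1, 200)
oracle = lambda x: int(x > 0)  # stands in for the human expert's label

# Tiny labeled seed set, one example per class.
labeled_x = [-0.9, 0.9]
labeled_y = [0, 1]

def decision_threshold(xs, ys):
    """Midpoint between class means: a minimal stand-in classifier."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    return (xs[ys == 0].mean() + xs[ys == 1].mean()) / 2

for _ in range(20):
    t = decision_threshold(labeled_x, labeled_y)
    # Uncertainty sampling: query the pool point closest to the boundary.
    i = int(np.argmin(np.abs(pool - t)))
    labeled_x.append(pool[i])
    labeled_y.append(oracle(pool[i]))
    pool = np.delete(pool, i)

t = decision_threshold(labeled_x, labeled_y)
print(abs(t))  # the learned boundary should end up near the true one at 0
```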

  12. HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.

    PubMed

    Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo

    2016-03-01

    Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  13. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

    Deep learning refers to a large set of neural network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth.
We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
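    The region-narrowing idea (each trained stage identifies a tighter sub-region containing the organ) can be caricatured with a two-stage cascade on a synthetic scan. Everything below, including the brightest-pixel heuristic standing in for a trained layer, is a toy illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 2-D "scan": a small bright organ inside a large noisy image.
img = rng.normal(0.0, 0.2, (64, 64))
img[40:48, 20:30] += 1.0  # the organ of interest

def narrow(image, top, left, margin):
    """One cascade stage: locate the brightest pixel and return a tighter
    window around it (a stand-in for a trained sub-region predictor),
    tracking the window's offset in the original image."""
    r, c = np.unravel_index(np.argmax(image), image.shape)
    r0, c0 = max(r - margin, 0), max(c - margin, 0)
    r1 = min(r + margin + 1, image.shape[0])
    c1 = min(c + margin + 1, image.shape[1])
    return image[r0:r1, c0:c1], top + r0, left + c0

# Stack two stages with shrinking windows, then threshold the final window.
win, top, left = narrow(img, 0, 0, margin=16)
win, top, left = narrow(win, top, left, margin=6)
rows, cols = np.nonzero(win > 0.5)
centre = (top + rows.mean(), left + cols.mean())
print(centre)  # should land inside the organ (rows 40-47, cols 20-29)
```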

  14. Counselor Training Manual for Resident Environmental Education Camp.

    ERIC Educational Resources Information Center

    Fortman, Kathleen J.

    Designed for use with junior and senior high school students, this manual outlines procedures for recruiting and training counselors to work with upper elementary students in a resident outdoor education setting. Counselors can provide the additional leadership necessary when caring for large groups of students 24 hours a day. However, their…

  15. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  16. [Current status on management and needs related to education and training programs set for new employees at the provincial Centers for Disease Control and Prevention, in China].

    PubMed

    Ma, J; Meng, X D; Luo, H M; Zhou, H C; Qu, S L; Liu, X T; Dai, Z

    2016-06-01

    In order to understand the current status of education and training management, and the training needs of new employees at provincial-level CDCs in China during 2012-2014, and so provide a basis for setting up related programs at all CDC levels, data were gathered through questionnaire surveys run by the CDCs of 32 provinces and 5 specifically-designated cities, and Microsoft Excel was used to analyze the current status of education and training management for new employees. There were 156 management staff members working on education and training programs in the 36 CDCs, 70% of whom had received intermediate or higher levels of education. Large regional differences were seen in the provision of training hardware. In 2014 there were 1 214 teaching staff, 66 percent of them in public health or related professional areas. From 2012 to 2014, 5 084 new employees attended pre- and post-employment training programs, with total funding of 750 thousand RMB. 99.5% of the new employees expressed a need for further training, 74% expected a 2-5 day training program, and 79% considered practice the most appropriate training method. Institutional arrangements for education and training at the CDCs need to be clarified and management teams organized. It is also important to provide more financial support for the hardware, software and human resources related to training programs for new staff members at all levels of CDCs.

  17. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?

    PubMed

    Tajbakhsh, Nima; Shin, Jae Y; Gurudu, Suryakanth R; Hurst, R Todd; Kendall, Christopher B; Gotway, Michael B; Jianming Liang

    2016-05-01

    Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
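    The notion of fine-tuning depth (updating only the upper layers while lower, pre-trained layers stay frozen) can be reduced to a toy numpy example. The two-layer network below illustrates "shallow tuning" of a head on top of frozen features; it is an assumption-laden sketch, not the paper's CNNs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Layer 1 plays the role of a pre-trained feature extractor (frozen);
# layer 2 is the task-specific head, tuned on the new data set.
W1 = rng.normal(0, 1, (4, 8))
W2 = rng.normal(0, 0.1, (8, 1))

def forward(X):
    h = np.tanh(X @ W1)  # frozen features
    return h, h @ W2

# Synthetic "target task" whose labels depend on the frozen features.
X = rng.normal(0, 1, (64, 4))
h, _ = forward(X)
y = h @ rng.normal(0, 1, (8, 1))

W1_before = W1.copy()
_, pred = forward(X)
loss_before = float(np.mean((pred - y) ** 2))

for _ in range(200):  # "shallow tuning": gradient steps on W2 only
    h, pred = forward(X)
    W2 -= 0.05 * h.T @ (pred - y) / len(X)

_, pred = forward(X)
loss_after = float(np.mean((pred - y) ** 2))
print(np.array_equal(W1, W1_before), loss_after < loss_before)  # True True
```

Deep tuning would simply extend the update to W1 as well, which is where the size of the available training set starts to matter.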

  18. Toward automated biochemotype annotation for large compound libraries.

    PubMed

    Chen, Xian; Liang, Yizeng; Xu, Jun

    2006-08-01

    Combinatorial chemistry allows scientists to probe a large synthetically accessible chemical space. However, identifying the sub-space that is selectively associated with a biological target of interest is crucial to drug discovery and the life sciences. This paper describes a process to automatically annotate the biochemotypes of compounds in a library and thus to identify bioactivity-related chemotypes (biochemotypes) in a large library of compounds. The process consists of two steps: (1) predicting all possible bioactivities for each compound in a library, and (2) deriving possible biochemotypes based on the predictions. The Prediction of Activity Spectra for Substances (PASS) program was used in the first step. In the second step, structural similarity and scaffold-hopping technologies are employed to derive biochemotypes from the bioactivity predictions and the corresponding annotated biochemotypes from the MDL Drug Data Report (MDDR) database. A library of approximately one million (982,889) commercially available compounds (CACL) has been tested using this process. This paper demonstrates the feasibility of automatically annotating biochemotypes for large libraries of compounds. Nevertheless, some issues need to be addressed in order to improve the process. First, the prediction accuracy of the PASS program has no significant correlation with the number of compounds in a training set. Larger training sets do not necessarily increase the maximal error of prediction (MEP), nor do they increase the structural diversity of hits; smaller training sets do not necessarily decrease the MEP, nor do they decrease the structural diversity of hits. Second, the success of systematic bioactivity prediction relies on the modeling, the training data, and the definition of bioactivities (the biochemotype ontology). Unfortunately, the biochemotype ontology is not well developed in the PASS program; consequently, 'ill-defined' bioactivities can reduce the quality of predictions.
This paper suggests ways in which systematic bioactivity prediction programs could be improved.

  19. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
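    The augmentation idea (synthesizing video training clips from annotated still images) can be sketched by translating an image and its saliency mask together over successive frames, which yields coherent apparent motion with a label for every frame. A toy version under that assumption, not the authors' exact scheme:

```python
import numpy as np

def synthesize_clip(image, mask, n_frames=8, dx=2, dy=1):
    """Simulate a short video from one annotated still image by shifting
    the image and its saliency mask together, frame by frame.
    (Illustrative stand-in for the paper's augmentation technique.)"""
    frames, masks = [], []
    for t in range(n_frames):
        frames.append(np.roll(image, (t * dy, t * dx), axis=(0, 1)))
        masks.append(np.roll(mask, (t * dy, t * dx), axis=(0, 1)))
    return np.stack(frames), np.stack(masks)

# One annotated "image": a bright 4x4 salient patch with its ground truth.
img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0
msk = img > 0

clip, clip_masks = synthesize_clip(img, msk)
print(clip.shape, int(clip_masks[3].sum()))  # mask area preserved per frame
```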

  20. European consensus on a competency-based virtual reality training program for basic endoscopic surgical psychomotor skills.

    PubMed

    van Dongen, Koen W; Ahlberg, Gunnar; Bonavina, Luigi; Carter, Fiona J; Grantcharov, Teodor P; Hyltander, Anders; Schijven, Marlies P; Stefani, Alessandro; van der Zee, David C; Broeders, Ivo A M J

    2011-01-01

    Virtual reality (VR) simulators have been demonstrated to improve basic psychomotor skills in endoscopic surgery. The exercise configuration settings used for validation in studies published so far are default settings or are based on the personal choice of the tutors. The purpose of this study was to establish consensus on exercise configurations and on a validated training program for a virtual reality simulator, based on the experience of international experts to set criterion levels to construct a proficiency-based training program. A consensus meeting was held with eight European teams, all extensively experienced in using the VR simulator. Construct validity of the training program was tested by 20 experts and 60 novices. The data were analyzed by using the t test for equality of means. Consensus was achieved on training designs, exercise configuration, and examination. Almost all exercises (7/8) showed construct validity. In total, 50 of 94 parameters (53%) showed significant difference. A European, multicenter, validated, training program was constructed according to the general consensus of a large international team with extended experience in virtual reality simulation. Therefore, a proficiency-based training program can be offered to training centers that use this simulator for training in basic psychomotor skills in endoscopic surgery.

  1. Cascade Back-Propagation Learning in Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2003-01-01

    The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks; one for training and one for validation. Both networks would share the same synaptic weights.
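    The role of the cross-validation set described above (halting training once the learned input-to-output transformation starts to over-learn) is the classic early-stopping recipe. A generic sketch of that recipe on a deliberately over-parameterized model, unrelated to the CBP hardware itself:

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy 1-D regression task; a degree-9 polynomial can over-fit it.
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.3, 40)
x_train, y_train = x[:20], y[:20]  # training set
x_val, y_val = x[20:], y[20:]      # cross-validation set

def val_error(w):
    return float(np.mean((np.polyval(w, x_val) - y_val) ** 2))

deg = 9
w = np.zeros(deg + 1)
lr = 0.05
best_w, best_err, patience = w.copy(), val_error(w), 0
for step in range(500):
    pred = np.polyval(w, x_train)
    # gradient of training MSE with respect to each polynomial coefficient
    grad = np.array([np.mean(2 * (pred - y_train) * x_train ** (deg - i))
                     for i in range(deg + 1)])
    w -= lr * grad
    err = val_error(w)
    if err < best_err:
        best_w, best_err, patience = w.copy(), err, 0
    else:
        patience += 1
        if patience >= 20:  # validation error stopped improving: early stop
            break
print(best_err <= val_error(w))  # best checkpoint never worse than final: True
```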

  2. Validity and validation of expert (Q)SAR systems.

    PubMed

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and to validate their results. These principles include a mechanistic basis, the availability of a training set, and validation. ECOSAR, BIOWIN and DEREK for Windows each have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed from a training set of positive and negative toxicity data. However, only a limited number of the chemicals in the training set are presented to the user to support a prediction. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  3. Query-based learning for aerospace applications.

    PubMed

    Saad, E W; Choi, J J; Vian, J L; Wunsch, D C II

    2003-01-01

    Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. To overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving the overall learning/generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL and introduces an original heuristic to select the inversion target values for the continuous network inversion method. Efficiency and generalization were further enhanced by employing node-decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.
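The continuous network inversion step mentioned above can be sketched in miniature: hold a trained network's weights fixed and gradient-descend on the *input* until the output matches a chosen target; that input is then submitted to the oracle for labeling. The tiny network and its weights below are stand-ins, not the paper's models.

```python
# Toy continuous network inversion: invert a fixed one-hidden-layer tanh
# network to find an input producing a target output value.
import numpy as np

W1 = np.eye(2)      # stand-in "trained" hidden-layer weights (frozen)
w2 = np.ones(2)     # stand-in output weights (frozen)

def forward(x):
    h = np.tanh(W1 @ x)
    return w2 @ h, h

def invert(target, steps=500, lr=0.1):
    """Gradient descent on the input so the fixed network outputs `target`."""
    x = np.zeros(2)
    for _ in range(steps):
        y, h = forward(x)
        dy_dx = W1.T @ (w2 * (1.0 - h ** 2))   # chain rule through tanh
        x -= lr * (y - target) * dy_dx         # minimise 0.5 * (y - target)**2
    return x

x_query = invert(0.5)   # candidate input to send to the oracle for a label
```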

  4. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance.

    PubMed

    Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo

    2017-01-01

    We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems where the visual object of interest presents large shape and appearance variations but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training data, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge.

  5. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks.

    PubMed

    Cheng, Phillip M; Tejura, Tapas K; Tran, Khoa N; Whang, Gilbert

    2018-05-01

    The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity - 1), the sensitivity of the system for small bowel obstruction is 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
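The operating point quoted above is chosen by maximizing the Youden index J = sensitivity + specificity - 1 over candidate score thresholds. A minimal sketch with invented scores and labels (not the study's data):

```python
# Pick the classification threshold that maximizes the Youden index.
def youden_threshold(scores, labels):
    """Return (best_threshold, best_J) scanning thresholds at observed scores."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0    # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9]   # made-up network outputs
labels = [0,   0,   0,    1,   0,   1,   1]      # made-up ground truth
t, j = youden_threshold(scores, labels)
```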

  6. Evaluating International Research Ethics Capacity Development: An Empirical Approach

    PubMed Central

    Ali, Joseph; Kass, Nancy E.; Sewankambo, Nelson K.; White, Tara D.; Hyder, Adnan A.

    2014-01-01

    The US National Institutes of health, Fogarty International Center (NIH-FIC) has, for the past 13 years, been a leading funder of international research ethics education for resource-limited settings. Nearly half of the NIH-FIC funding in this area has gone to training programs that train individuals from sub-Saharan Africa. Identifying the impact of training investments, as well as the potential predictors of post-training success, can support curricular decision-making, help establish funding priorities, and recognize the ultimate outcomes of trainees and training programs. Comprehensive evaluation frameworks and targeted evaluation tools for bioethics training programs generally, and for international research ethics programs in particular, are largely absent from published literature. This paper shares an original conceptual framework, data collection tool, and detailed methods for evaluating the inputs, processes, outputs, and outcomes of research ethics training programs serving individuals in resource-limited settings. This paper is part of a collection of papers analyzing the Fogarty International Center’s International Research Ethics Education and Curriculum Development program. PMID:24782071

  7. Mining big data sets of plankton images: a zero-shot learning approach to retrieve labels without training data

    NASA Astrophysics Data System (ADS)

    Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.

    2016-02-01

    Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed, taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. 
This capability will allow ecologists to identify trends in the distribution of difficult-to-sample organisms in their data.
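The two-stage zero-shot idea described above can be reduced to a skeleton: stage one predicts an attribute vector for a sample, stage two assigns the class whose known attribute signature is nearest, so classes absent from the training images remain reachable. The class names and attribute tables below are invented placeholders, not the SPCS/IFCB taxonomy.

```python
# Second-stage zero-shot classifier: nearest class in attribute space.
import numpy as np

# Known attribute signatures per class; "rare_taxon" has no training images.
class_attributes = {
    "diatom":     np.array([1.0, 0.0, 1.0]),
    "copepod":    np.array([0.0, 1.0, 0.0]),
    "rare_taxon": np.array([1.0, 1.0, 0.0]),   # reachable despite no images
}

def zero_shot_classify(predicted_attributes):
    """Return the class whose attribute signature is closest (Euclidean)."""
    return min(class_attributes,
               key=lambda c: np.linalg.norm(class_attributes[c]
                                            - predicted_attributes))

# Pretend a CNN (stage one) predicted this attribute vector for a sample:
label = zero_shot_classify(np.array([0.9, 0.8, 0.1]))
```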

  8. Effects of Strength Training Sessions Performed with Different Exercise Orders and Intervals on Blood Pressure and Heart Rate Variability.

    PubMed

    Lemos, Sandro; Figueiredo, Tiago; Marques, Silvio; Leite, Thalita; Cardozo, Diogo; Willardson, Jeffrey M; Simão, Roberto

    2018-01-01

    This study compared the effect of a strength training session performed with different exercise orders and rest intervals on blood pressure and heart rate variability (HRV). Fifteen trained men performed different upper body exercise sequences [large to small muscle mass (SEQA) and small to large muscle mass (SEQB)] in randomized order, with rest intervals between sets and exercises of 40 or 90 seconds. Fifteen-repetition maximum loads were tested to control the training intensity and the total volume load. The results showed significant post-exercise reductions in systolic blood pressure (SBP) for all sequences compared to baseline: SEQA90 at 20, 30, 40, 50 and 60 minutes; SEQA40 and SEQB40 at 20 minutes; and SEQB90 at 10, 20, 30, 40, 50 and 60 minutes. For diastolic blood pressure (DBP), significant post-exercise reductions were found for three sequences compared to baseline: SEQA90 and SEQA40 at 50 and 60 minutes; SEQB40 at 10, 30 and 60 minutes. For HRV, there were significant differences in the frequency domain for all sequences compared to baseline. In conclusion, when performing upper body strength training sessions, 90-second rest intervals between sets and exercises are suggested to promote a post-exercise hypotensive response in SBP. The 40-second rest interval between sets and exercises was associated with greater cardiac stress and might be contraindicated when working with individuals who exhibit symptoms of cardiovascular disease.

  9. Segmentation of knee cartilage by using a hierarchical active shape model based on multi-resolution transforms in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    León, Madeleine; Escalante-Ramirez, Boris

    2013-11-01

    Knee osteoarthritis (OA) is characterized by the morphological degeneration of cartilage. Efficient segmentation of cartilage is important for cartilage damage diagnosis and to support assessment of therapeutic responses. We present a method for knee cartilage segmentation in magnetic resonance images (MRI). Our method incorporates the Hermite transform to obtain a hierarchical decomposition of contours which describe knee cartilage shapes. Then, we compute a statistical model of the contour of interest from a set of training images. Thereby, our Hierarchical Active Shape Model (HASM) captures a large range of shape variability even from a small group of training samples, improving segmentation accuracy. The method was trained with a set of 16 knee MRI scans and tested with the leave-one-out method.
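The statistical shape model at the heart of an Active Shape Model can be sketched in a few lines: landmark shapes are flattened to vectors, and the mean shape plus the top principal modes of variation are extracted. The three toy "shapes" below are invented; a real model would use the Hermite-transform contour hierarchy the abstract describes.

```python
# Bare-bones statistical shape model: mean shape + PCA modes via SVD.
import numpy as np

def build_shape_model(shapes, n_modes=1):
    """shapes: list of flattened landmark vectors. Returns (mean, modes)."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]          # rows of vt = modes of variation

def reconstruct(mean, modes, coeffs):
    """New plausible shape = mean + linear combination of modes."""
    return mean + coeffs @ modes

shapes = [[0, 0, 1.0, 0, 1.0, 1],      # three toy 3-landmark (x, y) shapes
          [0, 0, 1.2, 0, 1.2, 1],
          [0, 0, 0.8, 0, 0.8, 1]]
mean, modes = build_shape_model(shapes)
```

During segmentation, the model is fitted by restricting the mode coefficients to a plausible range, which is how such models stay robust with small training sets.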

  10. Research training of Developmental-Behavioral Pediatrics fellows: a survey of fellowship directors by Developmental-Behavioral Pediatrics Research Network.

    PubMed

    Wiley, Susan; Schonfeld, David J; Fredstrom, Bridget; Huffman, Lynne

    2013-01-01

    To describe research training in Developmental-Behavioral Pediatrics (DBP) fellowship programs. Thirty-five US-accredited DBP fellowships were contacted through the Developmental-Behavioral Pediatrics Research Network to complete an online survey on scholarly work and research training. With an 83% response rate, responding programs represented 110 (87 filled) fellowship positions. External funding for fellowship positions was minimal (11 positions fully funded, 13 funded above 50% of cost). Structured research training included didactic lectures, web-based training, university courses, direct mentoring, journal clubs, and required reading. Of the 159 fellows described, spanning a 5-year training period, the majority chose projects relying on their own data collection (57%) rather than joining an existing research study, and focused on clinical research (86%). Among 96 fellows with completed scholarly work, 29% conducted observational/epidemiological studies, 22% secondary analyses of large data sets, 16% community-based research, and 15% survey design. A limited number of fellows pursued basic science, meta-analysis/critical appraisal of the literature, or analysis of public policy. Barriers to successful fellow research were: lack of time and money; challenges in balancing clinical demands and protected faculty research time; limited faculty research opportunities, time, or expertise; and a lack of infrastructure for fellow research mentoring. The scholarly work of fellows in DBP fellowship programs has primarily focused on clinical research using observational/epidemiological methods and secondary analysis of large data sets. The barriers noted lay largely in faculty time and expertise for research mentoring, and in inadequate funding for programs that have high clinical demands and few resources for research efforts.

  11. Deep learning with non-medical training used for chest pathology identification

    NASA Astrophysics Data System (ADS)

    Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit

    2015-03-01

    In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
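The AUC figures quoted in these records can be computed directly from classifier scores as the Mann-Whitney statistic: the fraction of (abnormal, normal) score pairs ranked correctly. A sketch with made-up scores, not the study's data:

```python
# AUC as the probability that a random positive outscores a random negative
# (ties count half), i.e. the rank-based Mann-Whitney estimator.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

value = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3, 0.1])   # illustrative scores
```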

  12. Assessing Soldier Individual Differences to Enable Tailored Training

    DTIC Science & Technology

    2010-04-01

    upon effective and efficient training. However, there is ample evidence that learning-related individual differences exist (Thorndike, 1985; Jensen...in both civilian and military settings (Schmidt, Hunter, & Outerbridge, 1986; Thorndike, 1985). Prior knowledge or knowledge of facts and...predictive power (Thorndike, 1985; Jensen, 1998). Further, there is a good deal of evidence that general mental ability impacts performance largely

  13. iLab 20M: A Large-scale Controlled Object Dataset to Investigate Deep Learning

    DTIC Science & Technology

    2016-07-01

    and train) and annotate them with rotation labels. AlexNet is fine-tuned on the training set. We set the learning rate for all the layers to 0.001...Azizpour, A. Razavian, J. Sullivan, A. Maki, and S. Carlsson. From generic to specific deep representations for visual recognition. In CVPR...113–120. IEEE, 2014. 2 [5] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using

  14. Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.

    PubMed

    Ionescu, Catalin; Papava, Dragos; Olaru, Vlad; Sminchisescu, Cristian

    2014-07-01

    We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.

  15. A Model-independent Photometric Redshift Estimator for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Wang, Yun

    2007-01-01

    The use of Type Ia supernovae (SNe Ia) as cosmological standard candles is fundamental in modern observational cosmology. In this Letter, we derive a simple empirical photometric redshift estimator for SNe Ia using a training set of SNe Ia with multiband (griz) light curves and spectroscopic redshifts obtained by the Supernova Legacy Survey (SNLS). This estimator is analytical and model-independent; it does not use spectral templates. We use all the available SNe Ia from SNLS with near-maximum photometry in griz (a total of 40 SNe Ia) to train and test our photometric redshift estimator. The difference between the estimated redshifts zphot and the spectroscopic redshifts zspec, (zphot-zspec)/(1+zspec), has rms dispersions of 0.031 for the 20 SNe Ia used in the training set, and 0.050 for the 20 SNe Ia not used in the training set. The dispersion is of the same order of magnitude as the flux uncertainties at peak brightness for the SNe Ia. There are no outliers. This photometric redshift estimator should significantly enhance the ability of observers to accurately target high-redshift SNe Ia for spectroscopy in ongoing surveys. It will also dramatically boost the cosmological impact of very large future supernova surveys, such as those planned for the Advanced Liquid-mirror Probe for Astrophysics, Cosmology, and Asteroids (ALPACA) and the Large Synoptic Survey Telescope (LSST).
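The dispersion statistic quoted above, the rms of (zphot - zspec)/(1 + zspec), is straightforward to compute. The redshift pairs below are invented for illustration, not SNLS measurements:

```python
# rms dispersion of the normalized photometric-redshift residuals.
import math

def photoz_rms(z_phot, z_spec):
    resid = [(p - s) / (1.0 + s) for p, s in zip(z_phot, z_spec)]
    return math.sqrt(sum(r * r for r in resid) / len(resid))

rms = photoz_rms([0.52, 0.31, 0.80], [0.50, 0.30, 0.78])   # made-up pairs
```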

  16. Comparison of the applicability domain of a quantitative structure-activity relationship for estrogenicity with a large chemical inventory.

    PubMed

    Netzeva, Tatiana I; Gallegos Saliner, Ana; Worth, Andrew P

    2006-05-01

    The aim of the present study was to illustrate that it is possible and relatively straightforward to compare the domain of applicability of a quantitative structure-activity relationship (QSAR) model in terms of its physicochemical descriptors with a large inventory of chemicals. A training set of 105 chemicals with data for relative estrogenic gene activation, obtained in a recombinant yeast assay, was used to develop the QSAR. A binary classification model for predicting active versus inactive chemicals was developed using classification tree analysis and two descriptors with a clear physicochemical meaning (octanol-water partition coefficient, or log Kow, and the number of hydrogen bond donors, or n(Hdon)). The model demonstrated a high overall accuracy (90.5%), with a sensitivity of 95.9% and a specificity of 78.1%. The robustness of the model was evaluated using the leave-many-out cross-validation technique, whereas the predictivity was assessed using an artificial external test set composed of 12 compounds. The domain of the QSAR training set was compared with the chemical space covered by the European Inventory of Existing Commercial Chemical Substances (EINECS), as incorporated in the CDB-EC software, in the log Kow / n(Hdon) plane. The results showed that the training set and, therefore, the applicability domain of the QSAR model covers a small part of the physicochemical domain of the inventory, even though a simple method for defining the applicability domain (ranges in the descriptor space) was used. However, a large number of compounds are located within the narrow descriptor window.
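The applicability-domain definition used above ("ranges in the descriptor space") is simple enough to sketch directly: a query chemical is in-domain only if each descriptor lies inside the training-set range. The descriptor values below are invented, not the study's 105-chemical training set:

```python
# Range-based applicability domain check over (log Kow, n_Hdon) descriptors.
def in_domain(query, training_descriptors):
    """query and rows of training_descriptors are (log_kow, n_hdon) tuples."""
    for i, value in enumerate(query):
        column = [row[i] for row in training_descriptors]
        if not (min(column) <= value <= max(column)):
            return False
    return True

train = [(1.2, 0), (3.5, 2), (5.1, 1), (2.0, 3)]   # made-up training set
```

Inventory screening then reduces to calling `in_domain` on each inventory chemical's descriptor pair, which is why the abstract calls the comparison "relatively straightforward".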

  17. Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.

    PubMed

    Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just

    2016-01-01

    Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines tested at two locations, each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families.
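The leave-one-out cross-validation scheme above can be sketched with a ridge regression standing in for the genomic prediction model (the marker matrix and effects below are synthetic, not the barley data):

```python
# Leave-one-out CV: each line is predicted from a model trained on all others;
# "accuracy" is the correlation between LOO predictions and phenotypes.
import numpy as np

def ridge_fit_predict(X_train, y_train, X_test, lam=1.0):
    d = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d),
                           X_train.T @ y_train)
    return X_test @ beta

def loo_predictions(X, y, lam=1.0):
    preds = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out line i
        preds.append(ridge_fit_predict(X[mask], y[mask], X[i:i + 1], lam)[0])
    return np.array(preds)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))                   # 40 lines x 5 markers (toy)
y = X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.1 * rng.normal(size=40)
acc = np.corrcoef(loo_predictions(X, y), y)[0, 1]
```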

  19. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

    Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections suffice to avoid markedly suboptimal performance.
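The "extremely simple training step" of an ELM is worth seeing concretely: the hidden-layer weights are random and never trained, and only the output weights are solved in closed form by least squares. The regression data below is synthetic, not a hyperspectral scene:

```python
# Minimal extreme learning machine for regression.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))    # random, fixed hidden weights
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoidal hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # the only "training" step
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2             # smooth synthetic target
model = elm_train(X, y)
err = float(np.mean(np.abs(elm_predict(model, X) - y)))
```

The paper's point is that `n_hidden` matters: the proposed linear rule estimates it from two small training sets instead of searching it on the full data.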

  20. Downlink Training Techniques for FDD Massive MIMO Systems: Open-Loop and Closed-Loop Training With Memory

    NASA Astrophysics Data System (ADS)

    Choi, Junil; Love, David J.; Bidigare, Patrick

    2014-10-01

    The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO be used for training signal transmissions and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be mostly beneficial for massive MIMO systems.

  1. Analysis of Decision Making Skills for Large Scale Disaster Response

    DTIC Science & Technology

    2015-08-21

    Capability to influence and collaborate Compassion Teamwork Communication Leadership Provide vision of outcome / set priorities Confidence, courage to make...project evaluates the viability of expanding the use of serious games to augment classroom training, tabletop and full-scale exercise, and actual...training, evaluation, analysis, and technology exploration. Those techniques have found successful niches, but their wider applicability faces

  2. Examination of Individual Differences in Outcomes from a Randomized Controlled Clinical Trial Comparing Formal and Informal Individual Auditory Training Programs

    ERIC Educational Resources Information Center

    Smith, Sherri L.; Saunders, Gabrielle H.; Chisolm, Theresa H.; Frederick, Melissa; Bailey, Beth A.

    2016-01-01

    Purpose: The purpose of this study was to determine if patient characteristics or clinical variables could predict who benefits from individual auditory training. Method: A retrospective series of analyses were performed using a data set from a large, multisite, randomized controlled clinical trial that compared the treatment effects of at-home…

  3. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
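The "one method and material" compilation rule that worked best above can be sketched as a filter over activity records: keep, for a given target, only records from a single assay type, and drop compounds whose replicate values disagree badly. The record tuples and spread threshold below are invented, not ChEMBL or Integrity data:

```python
# Compile an assay-consistent training set from heterogeneous activity records.
from collections import defaultdict

def compile_training_set(records, target, assay, max_spread=1.0):
    """records: (compound, target, assay, pIC50) tuples -> {compound: mean}."""
    by_compound = defaultdict(list)
    for compound, tgt, asy, value in records:
        if tgt == target and asy == assay:       # single method and material
            by_compound[compound].append(value)
    out = {}
    for compound, values in by_compound.items():
        if max(values) - min(values) <= max_spread:   # consistent replicates
            out[compound] = sum(values) / len(values)
    return out

records = [
    ("cmpd1", "HIV-1 RT", "enzymatic", 7.1),
    ("cmpd1", "HIV-1 RT", "enzymatic", 7.3),
    ("cmpd2", "HIV-1 RT", "enzymatic", 5.0),
    ("cmpd2", "HIV-1 RT", "enzymatic", 8.0),   # inconsistent -> dropped
    ("cmpd3", "HIV-1 RT", "cell-based", 6.5),  # different assay -> excluded
]
train = compile_training_set(records, "HIV-1 RT", "enzymatic")
```

Aggregating by target only, as the abstract notes, would pool the enzymatic and cell-based values above and is exactly what yielded poorly predictive models.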

  4. Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography.

    PubMed

    Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A; Wei, Jun; Cha, Kenny

    2016-12-01

    Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volumes using a deep convolutional neural network (DCNN) with transfer learning from mammograms. A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as the reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by the authors' previously developed CAD systems. After data augmentation, a total of 45 072 mammographic ROIs and 37 450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across the heterogeneous data were achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolutional layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage.
For the feature-based CAD system, 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a linear discriminant classifier to score the detected masses. For the DCNN-based CAD system, ROIs from five consecutive slices centered at each candidate were passed through the trained DCNN and a mass likelihood score was generated. The performances of the CAD systems were evaluated using free-response ROC curves and the performance difference was analyzed using a non-parametric method. Before transfer learning, the DCNN trained only on mammograms with an AUC of 0.99 classified DBT masses with an AUC of 0.81 in the DBT training set. After transfer learning with DBT, the AUC improved to 0.90. For breast-based CAD detection in the test set, the sensitivity for the feature-based and the DCNN-based CAD systems was 83% and 91%, respectively, at 1 FP/DBT volume. The difference between the performances for the two systems was statistically significant (p-value < 0.05). The image patterns learned from the mammograms were transferred to the mass detection on DBT slices through the DCNN. This study demonstrated that large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality.
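The frozen-layer transfer step described above can be sketched with a toy numpy model. The two-layer linear network, data sizes, and learning rate below are invented stand-ins for the authors' DCNN; only the idea is faithful: early weights pretrained on one modality are frozen, and only the last layer is re-initialized and trained on the new modality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DCNN: W1 plays the role of the frozen early
# convolutional layers, W2 the re-initialized final layers trained on
# the new modality (DBT).
X = rng.normal(size=(32, 8))        # stand-in features for DBT ROIs
y = rng.normal(size=(32, 1))

W1 = rng.normal(size=(8, 4))        # "pretrained on mammography", then frozen
W2 = rng.normal(size=(4, 1)) * 0.1  # randomly re-initialized

W1_frozen = W1.copy()
lr = 0.01
for _ in range(100):
    H = X @ W1                      # frozen feature extractor
    pred = H @ W2
    grad_W2 = H.T @ (pred - y) / len(X)
    W2 -= lr * grad_W2              # only the last layer is updated

assert np.array_equal(W1, W1_frozen)  # early weights remain untouched
```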

  5. The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.

    PubMed

    Nowak, Markus; Castellini, Claudio

    2016-01-01

    Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need of explicitly training multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
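The LET augmentation step can be sketched in a few lines of numpy. The feature dimensions, intent labels, and the use of the mean as the linear combination are illustrative assumptions, not the paper's exact weighting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-DOF sEMG training samples: each row is a feature
# vector recorded while activating one degree of freedom at a time.
emg_dof1 = rng.uniform(size=(20, 10))   # e.g. wrist flexion only
emg_dof2 = rng.uniform(size=(20, 10))   # e.g. index flexion only
y_dof1 = np.tile([1.0, 0.0], (20, 1))   # intent labels (DOF activations)
y_dof2 = np.tile([0.0, 1.0], (20, 1))

# LET-style augmentation: model a multi-DOF activation as a linear
# combination (here, the mean) of two single-DOF signals, and give it
# the combined intent label.
emg_combo = 0.5 * (emg_dof1 + emg_dof2)
y_combo = np.tile([1.0, 1.0], (20, 1))

# The augmented set feeds any machine-learning-based intent detector.
X_train = np.vstack([emg_dof1, emg_dof2, emg_combo])
y_train = np.vstack([y_dof1, y_dof2, y_combo])
```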

  6. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
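The flavor of output-neuron noise injection can be illustrated with a toy logistic-regression model. This sketch adds small zero-mean Gaussian noise to the output targets during gradient descent; it does not implement the paper's hyperplane (positivity) condition that decides which noise helps on average, and all sizes and rates are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy separable classification problem.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
lr, noise_std = 0.1, 0.05

p0 = sigmoid(X @ w)
loss0 = -np.mean(y * np.log(p0) + (1 - y) * np.log(1 - p0))

for _ in range(200):
    p = sigmoid(X @ w)
    # Noise injected at the output neurons' targets during training.
    target = y + rng.normal(scale=noise_std, size=y.shape)
    w -= lr * X.T @ (p - target) / len(X)

p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

With zero-mean noise the expected gradient is unchanged, so training still converges; the Noisy-CNN result is that suitably chosen (non-zero-mean, hyperplane-selected) noise converges faster on average.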

  7. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919

  8. A method of genotyping by pedigree-based training-set for identification of QTLs associated with cucumber fruit size

    USDA-ARS?s Scientific Manuscript database

    Large sets of genomic data are becoming available for cucumber (Cucumis sativus), yet there is no tool for whole genome genotyping. Creation of saturated genetic maps depends on development of good markers. The present cucumber genetic maps are based on several hundreds of markers. However, they are ...

  9. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have greater power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and a code set of varying size in an efficient way, and together they robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training samples. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
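The prototype-coding idea can be sketched as follows: learn prototypes (here with a few plain k-means steps, not ABQ's joint alternating optimization), give each prototype a short unique binary code, encode every point by its nearest prototype's code, and search in Hamming space. All sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(200, 2))
k = 4
codes = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # unique 2-bit codes

# Plain k-means as a stand-in for learning the prototype set.
prototypes = X[rng.choice(len(X), k, replace=False)]
for _ in range(10):
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    for j in range(k):
        if np.any(assign == j):
            prototypes[j] = X[assign == j].mean(axis=0)

binary = codes[assign]               # each point -> its prototype's code

def hamming(a, b):
    return np.count_nonzero(a != b)

# Approximate NN search: encode the query, then compare in Hamming space.
q_code = codes[np.linalg.norm(prototypes - X[0], axis=1).argmin()]
dists = [hamming(q_code, c) for c in binary]
```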

  10. Online self-administered training for post-traumatic stress disorder treatment providers: design and methods for a randomized, prospective intervention study

    PubMed Central

    2012-01-01

    This paper presents the rationale and methods for a randomized controlled evaluation of web-based training in motivational interviewing, goal setting, and behavioral task assignment. Web-based training may be a practical and cost-effective way to address the need for large-scale mental health training in evidence-based practice; however, there is a dearth of well-controlled outcome studies of these approaches. For the current trial, 168 mental health providers treating post-traumatic stress disorder (PTSD) were assigned to web-based training plus supervision, web-based training, or training-as-usual (control). A novel standardized patient (SP) assessment was developed and implemented for objective measurement of changes in clinical skills, while on-line self-report measures were used for assessing changes in knowledge, perceived self-efficacy, and practice related to cognitive behavioral therapy (CBT) techniques. Eligible participants were all actively involved in mental health treatment of veterans with PTSD. Study methodology illustrates ways of developing training content, recruiting participants, and assessing knowledge, perceived self-efficacy, and competency-based outcomes, and demonstrates the feasibility of conducting prospective studies of training efficacy or effectiveness in large healthcare systems. PMID:22583520

  11. Building an evidence-base for the training of evidence-based treatments in community settings: Use of an expert-informed approach.

    PubMed

    Scudder, Ashley; Herschell, Amy D

    2015-08-01

    In order to make evidence-based treatments (EBTs) available to a large number of children and families, developers and expert therapists have used their experience and expertise to train community-based therapists in EBTs. Understanding current training practices of treatment experts may be one method for establishing best practices for training community-based therapists prior to comprehensive empirical examinations of training practices. A qualitative study was conducted using surveys and phone interviews to identify the specific procedures used by treatment experts to train and implement an evidence-based treatment in community settings. Twenty-three doctoral-level, clinical psychologists were identified to participate because of their expertise in conducting and training Parent-Child Interaction Therapy. Semi-structured qualitative interviews were completed by phone, later transcribed verbatim, and analyzed using thematic coding. The de-identified data were coded by two independent qualitative data researchers and then compared for consistency of interpretation. The themes that emerged following the final coding were used to construct a training protocol to be empirically tested. The goal of this paper is not only to understand the current state of training practices for training therapists in a particular EBT, Parent-Child Interaction Therapy, but also to illustrate the use of expert opinion as the best available evidence in preparation for empirical evaluation.

  12. From genus to phylum: large-subunit and internal transcribed spacer rRNA operon regions show similar classification accuracies influenced by database composition.

    PubMed

    Porras-Alfaro, Andrea; Liu, Kuan-Liang; Kuske, Cheryl R; Xie, Gary

    2014-02-01

    We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5' section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 had very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets.
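A naive Bayesian classifier with a bootstrap cutoff, in the RDP style used here, can be sketched as below. The k-mer profiles, the three "genera", and the query are random stand-ins; the confidence is the fraction of bootstrap resamples of the query's k-mers that agree with the full-query genus call:

```python
import numpy as np

rng = np.random.default_rng(4)

n_kmers = 64
probs = rng.dirichlet(np.ones(n_kmers), size=3)   # k-mer profile per genus
log_probs = np.log(probs)

# A query drawn from genus 0's profile (made up for illustration).
query_kmers = rng.choice(n_kmers, size=40, p=probs[0])

def classify(kmers):
    # Naive Bayes: sum per-genus k-mer log-likelihoods, pick the best.
    return log_probs[:, kmers].sum(axis=1).argmax()

best = classify(query_kmers)
hits = sum(
    classify(rng.choice(query_kmers, size=len(query_kmers))) == best
    for _ in range(100)               # 100 bootstrap resamples
)
confidence = hits / 100               # report genus only if, say, >= 0.5
```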

  13. From Genus to Phylum: Large-Subunit and Internal Transcribed Spacer rRNA Operon Regions Show Similar Classification Accuracies Influenced by Database Composition

    PubMed Central

    Liu, Kuan-Liang; Kuske, Cheryl R.

    2014-01-01

    We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5′ section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 had very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets. PMID:24242255

  14. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogen and Organometal Molecules

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters, using all training sets that included the relevant elements, and confirmed that the calculated parameters provide accurate charges.
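Once parameterized, EEM charges come from a small linear solve: every atom's effective electronegativity A_i + B_i*q_i + k*sum_{j!=i} q_j/R_ij is equalized to a common value chi, subject to the total-charge constraint. The parameter values and the three-atom geometry below are made-up illustrative numbers, not a published EEM parameter set:

```python
import numpy as np

# Made-up EEM parameters for a toy three-atom "molecule".
A = np.array([5.0, 7.0, 7.0])        # per-element electronegativity terms
B = np.array([9.0, 11.0, 11.0])      # per-element hardness terms
k = 0.5                              # distance-coupling constant
R = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.6],
              [1.0, 1.6, 0.0]])      # interatomic distances
Q_total = 0.0                        # net molecular charge

# Unknowns: charges q_1..q_n and the common electronegativity chi.
n = len(A)
M = np.zeros((n + 1, n + 1))
rhs = np.zeros(n + 1)
for i in range(n):
    M[i, i] = B[i]
    for j in range(n):
        if j != i:
            M[i, j] = k / R[i, j]
    M[i, n] = -1.0                   # ... - chi = -A_i
    rhs[i] = -A[i]
M[n, :n] = 1.0                       # sum of charges = Q_total
rhs[n] = Q_total

sol = np.linalg.solve(M, rhs)
charges, chi = sol[:n], sol[n]
```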

  15. Interrater Reliability in Large-Scale Assessments--Can Teachers Score National Tests Reliably without External Controls?

    ERIC Educational Resources Information Center

    Pantzare, Anna Lind

    2015-01-01

    In most large-scale assessment systems a set of rather expensive external quality controls are implemented in order to guarantee the quality of interrater reliability. This study empirically examines if teachers' ratings of national tests in mathematics can be reliable without using monitoring, training, or other methods of external quality…

  16. Testicular Self Examination--Knowledge of Men Attending a Large Genito Urinary Medicine Clinic

    ERIC Educational Resources Information Center

    Handy, Pauline; Sankar, K. Nathan

    2008-01-01

    Objective: To elicit the level of knowledge, training and preferences of men in relation to Testicular Self Examination (TSE). Setting: The Genito Urinary Medicine (GUM) department of a large teaching hospital in the North East of England. The open access clinic serves patients from Newcastle upon Tyne, Northumberland, Gateshead and surrounding…

  17. Convolutional neural networks for an automatic classification of prostate tissue slides with high-grade Gleason score

    NASA Astrophysics Data System (ADS)

    Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning

    2017-03-01

    The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model such as size and number of patches as well as the inclusion or not of data augmentation are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology report from the publicly available TCGA data set were used. An accuracy of 78% was obtained in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. 
Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks on big data sets and for guiding the visual inspection of these images.
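Training on patches from automatically generated regions of interest can be sketched as follows. The threshold-based "ROI detector", patch size, and coverage cutoff are illustrative assumptions, not the settings used in the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy grayscale "slide" and a trivial automatic ROI mask (threshold).
slide = rng.uniform(size=(256, 256))
mask = slide > 0.5                    # stand-in for an automatic ROI detector
patch, stride, min_cover = 64, 64, 0.4

# Keep only fixed-size patches whose ROI coverage passes the cutoff;
# these would be the training patches fed to the CNN.
patches = []
for r in range(0, slide.shape[0] - patch + 1, stride):
    for c in range(0, slide.shape[1] - patch + 1, stride):
        if mask[r:r + patch, c:c + patch].mean() >= min_cover:
            patches.append(slide[r:r + patch, c:c + patch])

patches = np.stack(patches)
```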

  18. Automated Probabilistic Reconstruction of White-Matter Pathways in Health and Disease Using an Atlas of the Underlying Anatomy

    PubMed Central

    Yendiki, Anastasia; Panneck, Patricia; Srinivasan, Priti; Stevens, Allison; Zöllei, Lilla; Augustinack, Jean; Wang, Ruopeng; Salat, David; Ehrlich, Stefan; Behrens, Tim; Jbabdi, Saad; Gollub, Randy; Fischl, Bruce

    2011-01-01

    We have developed a method for automated probabilistic reconstruction of a set of major white-matter pathways from diffusion-weighted MR images. Our method is called TRACULA (TRActs Constrained by UnderLying Anatomy) and utilizes prior information on the anatomy of the pathways from a set of training subjects. By incorporating this prior knowledge in the reconstruction procedure, our method obviates the need for manual interaction with the tract solutions at a later stage and thus facilitates the application of tractography to large studies. In this paper we illustrate the application of the method on data from a schizophrenia study and investigate whether the inclusion of both patients and healthy subjects in the training set affects our ability to reconstruct the pathways reliably. We show that, since our method does not constrain the exact spatial location or shape of the pathways but only their trajectory relative to the surrounding anatomical structures, a set of healthy training subjects can be used to reconstruct the pathways accurately in patients as well as in controls. PMID:22016733

  19. Dismantling the Active Ingredients of an Intervention for Children with Autism.

    PubMed

    Pellecchia, Melanie; Connell, James E; Beidas, Rinad S; Xie, Ming; Marcus, Steven C; Mandell, David S

    2015-09-01

    This study evaluated the association of fidelity to each of the components of the Strategies for Teaching based on Autism Research (STAR) program, a comprehensive treatment package for children with autism that includes discrete trial training, pivotal response training, and teaching in functional routines, on outcomes for 191 students ages 5-8 years in a large public school district. Fidelity to all components was relatively low, despite considerable training and support, suggesting the need to develop new implementation strategies. Fidelity to pivotal response training, but not discrete trial training or functional routines, was positively associated with gains in cognitive ability despite low levels of fidelity, and may be an effective intervention choice in under-resourced settings.

  20. Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography

    PubMed Central

    Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Wei, Jun; Cha, Kenny

    2016-01-01

    Purpose: Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volume using a deep convolutional neural network (DCNN) with transfer learning from mammograms. Methods: A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by their previously developed CAD systems. After data augmentation, a total of 45 072 mammographic ROIs and 37 450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across the heterogeneous data were achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolution layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage. 
For the feature-based CAD system, 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a linear discriminant classifier to score the detected masses. For the DCNN-based CAD system, ROIs from five consecutive slices centered at each candidate were passed through the trained DCNN and a mass likelihood score was generated. The performances of the CAD systems were evaluated using free-response ROC curves and the performance difference was analyzed using a non-parametric method. Results: Before transfer learning, the DCNN trained only on mammograms with an AUC of 0.99 classified DBT masses with an AUC of 0.81 in the DBT training set. After transfer learning with DBT, the AUC improved to 0.90. For breast-based CAD detection in the test set, the sensitivity for the feature-based and the DCNN-based CAD systems was 83% and 91%, respectively, at 1 FP/DBT volume. The difference between the performances for the two systems was statistically significant (p-value < 0.05). Conclusions: The image patterns learned from the mammograms were transferred to the mass detection on DBT slices through the DCNN. This study demonstrated that large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality. PMID:27908154

  1. Individualised training to address variability of radiologists' performance

    NASA Astrophysics Data System (ADS)

    Sun, Shanghua; Taylor, Paul; Wilkinson, Louise; Khoo, Lisanne

    2008-03-01

    Computer-based tools are increasingly used for training and the continuing professional development of radiologists. We propose an adaptive training system to support individualised learning in mammography, based on a set of real cases annotated with educational content by experienced breast radiologists. The system has knowledge of the strengths and weaknesses of each radiologist's performance: each radiologist is assessed to compute a profile showing how they perform on different sets of cases, classified by type of abnormality, breast density, and perceptual difficulty. We also assess variability in cognitive aspects of image perception, classifying errors made by radiologists as errors of search, recognition, or decision; this is a novel element in our approach. The profile is used to select cases to present to the radiologist, and the intelligent and flexible presentation of these cases distinguishes our system from existing training tools. The training cases are organised and indexed by an ontology we have developed for breast radiologist training, which is consistent with the radiologists' profile. Hence, the training system is able to select appropriate cases to compose an individualised training path that addresses the variability of radiologists' performance. The ontology, a substantial part of the system, has been evaluated on a large number of cases, and the training system is under implementation for further evaluation.
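Profile-driven case selection of the kind described can be sketched simply: score performance per case category and draw the next training cases preferentially from the weakest categories. The category names, scores, and weighting scheme below are invented for illustration, not the system's actual ontology or selection logic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical radiologist profile: per-category performance in [0, 1].
profile = {"calcification": 0.9, "mass": 0.55, "distortion": 0.4}
cases = {cat: [f"{cat}-case-{i}" for i in range(5)] for cat in profile}

# Weight each category by (1 - score): weaker areas yield more cases.
cats = list(profile)
weights = np.array([1.0 - profile[c] for c in cats])
weights /= weights.sum()

# Compose a short individualised training path of four cases.
picks = [cases[rng.choice(cats, p=weights)][0] for _ in range(4)]
```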

  2. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    NASA Astrophysics Data System (ADS)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all information required for the development of volcanic ash detection algorithms based on artificial neural networks, urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To get realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the methods used is given, together with an overview of the resulting data sets.

  3. Unsupervised classification of variable stars

    NASA Astrophysics Data System (ADS)

    Valenzuela, Lucas; Pichara, Karim

    2018-03-01

    During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a great deal of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating training sets that are insufficient compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific to light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
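The query-based idea can be sketched in brute-force form: resample every light curve onto a common grid and rank by distance to the query. The toy sinusoidal curves, the interpolation grid, and plain Euclidean distance are illustrative assumptions, not the paper's specialised similarity function or indexing structure:

```python
import numpy as np

rng = np.random.default_rng(6)

grid = np.linspace(0.0, 1.0, 50)

def resample(t, m):
    # Align irregularly sampled curves onto one common grid.
    return np.interp(grid, t, m)

def make_curve(freq, noise):
    t = np.sort(rng.uniform(size=80))
    return resample(t, np.sin(2 * np.pi * freq * t) + noise * rng.normal(size=80))

# Small unlabelled "data set"; the first curve doubles as the query.
curves = [make_curve(f, 0.1) for f in (1, 1, 5, 9)]
query = curves[0]

dists = [np.linalg.norm(query - c) for c in curves]
ranking = np.argsort(dists)          # most similar unlabelled curves first
```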

  4. [Behaviour therapy and child welfare - results of an approach to improve mental health care of aggressive children].

    PubMed

    Nitkowski, Dennis; Petermann, Franz; Büttner, Peter; Krause-Leipoldt, Carsten; Petermann, Ulrike

    2009-09-01

    The Training with Aggressive Children (TAK; Petermann & Petermann, 2008) was integrated into the setting of a child welfare service. This study examined whether mental health care for aggressive children in child welfare settings can be improved, comparing the six-month effectiveness of combining the training with the child welfare intervention against the effects of the TAK alone. Twenty-five children with conduct problems (24 boys, one girl) aged 7;6 to 13;0 years participated in the study. A pretest-follow-up comparison of parent ratings on the Child Behavior Checklist (CBCL) documented a large reduction of aggressive-delinquent behaviour and social problems in the training and child welfare group. Furthermore, conduct and peer relationship problems decreased substantially on the Strengths and Difficulties Questionnaire (SDQ). By reducing conduct, attention and social problems, as well as delinquent behaviour, the therapeutic outcome of the training and child welfare group was clearly superior to that of the training-only group. Compared with the training alone, the combination of child welfare and training appeared to reduce a wider range of behavioural problems more effectively. This indicates that combined intervention programs can optimize mental health care for aggressive children.

  5. PDA usage and training: targeting curriculum for residents and faculty.

    PubMed

    Morris, Carl G; Church, Lili; Vincent, Chris; Rao, Ashwin

    2007-06-01

    Utilization of personal digital assistants (PDAs) in residency education is common, but information about their use and how residents are trained to use them is limited. Better understanding of resident and faculty PDA use and training is needed. We used a cross-sectional survey of 598 residents and faculty from the WWAMI (Washington, Wyoming, Alaska, Montana, and Idaho) Family Medicine Residency Network regarding PDA usage and training. Use of PDAs is common among residents (94%) and faculty (79%). Ninety-six percent of faculty and residents report stable or increasing frequency of use over time. The common barriers to PDA use relate to lack of time, knowledge, and formal education. Approximately half of PDA users (52%) have received some formal training; however, the majority of users report being self-taught. Faculty and residents prefer either small-group or one-on-one settings with hands-on, self-directed, interactive formats for PDA training. Large-group settings in lecture, written, or computer program formats were considered less helpful or desirable. PDAs have become a commonly used clinical tool. Lack of time and adequate training present a barrier to optimal application of PDAs in family medicine residency education.

  6. Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies

    PubMed Central

    2010-01-01

    Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set composed of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. 
The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general. PMID:20144194
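    The leave-one-out protocol behind the accuracy figures above is generic and easy to sketch: train on all samples but one, test on the held-out sample, and average over all choices. The stand-in nearest-centroid classifier and toy data below are hypothetical, not the paper's Bayesian classifier:

```python
def nearest_centroid_predict(train, point):
    """Stand-in classifier: assign the label of the closest class centroid."""
    groups = {}
    for features, label in train:
        groups.setdefault(label, []).append(features)
    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(groups, key=lambda lab: dist2(centroid(groups[lab]), point))

def loo_accuracy(data, predict):
    """Leave-one-out CV: train on n-1 samples, test on the held-out one."""
    hits = 0
    for i, (features, label) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += predict(train, features) == label
    return hits / len(data)

# Toy 1-D data set: class "AM" clusters near 0, "non-AM" near 10
data = [([0.0], "AM"), ([1.0], "AM"), ([0.5], "AM"),
        ([10.0], "non-AM"), ([9.0], "non-AM"), ([11.0], "non-AM")]
acc = loo_accuracy(data, nearest_centroid_predict)
```

    LOO is attractive for small data sets like the one in this study because every sample is used for both training and testing, at the cost of n training runs.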

  7. Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.

    PubMed

    David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A

    2010-02-08

    All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set composed of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. 
The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.

  8. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in a supervised manner. The images in the data sets are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The result confirms the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
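    The two augmentation schemes named in this abstract are standard and can be sketched as follows; the array shape and noise levels are illustrative assumptions, and images are taken to be floats in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=0.05):
    """Additive zero-mean Gaussian noise, clipped back to the valid [0, 1] range."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.05):
    """Set a random fraction of pixels to 0 (pepper) or 1 (salt);
    all other pixels are left untouched."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0                       # pepper
    noisy[(mask >= amount / 2) & (mask < amount)] = 1.0  # salt
    return noisy

image = rng.random((32, 32))  # stand-in for a camera frame
augmented = [add_gaussian_noise(image), add_salt_and_pepper(image)]
```

    In practice each training image would yield several noisy copies, enlarging the data set and discouraging the network from memorizing clean frames.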

  9. Using operant conditioning and desensitization to facilitate veterinary care with captive reptiles.

    PubMed

    Hellmuth, Heidi; Augustine, Lauren; Watkins, Barbara; Hope, Katharine

    2012-09-01

    In addition to being a large component of most zoological collections, reptile species are becoming more popular as family pets. Reptiles have the cognitive ability to be trained to facilitate daily husbandry and veterinary care. Desensitization and operant conditioning can alleviate some of the behavioral and physiological challenges of treating these species. A survey of reptile training programs at zoos in the United States and worldwide reveals that there are many successful training programs to facilitate veterinary care and minimize stress to the animal. Many of the techniques being used to train reptiles in zoological settings are transferable to the exotic pet clinician. Published by Elsevier Inc.

  10. Ranking the whole MEDLINE database according to a large training set using text indexing.

    PubMed

    Suomela, Brian P; Andrade, Miguel A

    2005-03-24

    The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users that need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest. This report describes a method that does this ranking using the differences in word content between MEDLINE entries related to a topic and the whole of MEDLINE, in a computational time appropriate for an article search query engine. We tested the capabilities of our system to retrieve MEDLINE references which are relevant to the subject of stem cells. We took advantage of the existing annotation of references with terms from the MeSH hierarchical vocabulary (Medical Subject Headings, developed at the National Library of Medicine). A training set of 81,416 references was constructed by selecting entries annotated with the MeSH term "stem cells" or some child in its subtree. Frequencies of all nouns, verbs, and adjectives in the training set were computed and the ratios of word frequencies in the training set to those in the entire MEDLINE were used to score references. Self-consistency of the algorithm, benchmarked with a test set containing the training set and an equal number of references randomly selected from MEDLINE, was better using nouns (79%) than adjectives (73%) or verbs (70%). The evaluation of the system with 6,923 references not used for training, containing 204 articles relevant to stem cells according to a human expert, indicated a recall of 65% for a precision of 65%. This strategy appears to be useful for predicting the relevance of MEDLINE references to a given concept. The method is simple and can be used with any user-defined training set. 
Choice of the part of speech of the words used for classification has important effects on performance. Lists of words, scripts, and additional information are available from the web address http://www.ogic.ca/projects/ks2004/.
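    The scoring scheme described above (ratios of word frequencies in the training set to those in the whole database) can be sketched in a few lines; the tiny corpus below is made up for illustration, and real use would restrict words by part of speech and smooth unseen counts:

```python
from collections import Counter
import math

def ratio_scores(training_docs, corpus_docs):
    """Log-ratio of each training-set word's frequency in the topic set
    versus its frequency in the whole corpus (training set included)."""
    train = Counter(w for d in training_docs for w in d.split())
    corpus = Counter(w for d in corpus_docs for w in d.split())
    n_train, n_corpus = sum(train.values()), sum(corpus.values())
    return {w: math.log((train[w] / n_train) / (corpus[w] / n_corpus))
            for w in train}

def score(doc, weights):
    """Score a reference by summing the log-ratio weights of its known words."""
    return sum(weights.get(w, 0.0) for w in doc.split())

corpus = ["stem cells differentiate", "protein folding dynamics",
          "stem cells renew", "galaxy cluster survey"]
training = ["stem cells differentiate", "stem cells renew"]
weights = ratio_scores(training, corpus)
relevant = score("stem cells", weights)
irrelevant = score("galaxy survey", weights)
```

    Words over-represented in the topic set get positive weights, so references rich in such words rank highest; words absent from the training set contribute nothing.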

  11. Snorkel: Rapid Training Data Creation with Weak Supervision.

    PubMed

    Ratner, Alexander; Bach, Stephen H; Ehrenberg, Henry; Fries, Jason; Wu, Sen; Ré, Christopher

    2017-11-01

    Labeling training data is increasingly the largest bottleneck in deploying machine learning systems. We present Snorkel, a first-of-its-kind system that enables users to train state-of-the-art models without hand labeling any training data. Instead, users write labeling functions that express arbitrary heuristics, which can have unknown accuracies and correlations. Snorkel denoises their outputs without access to ground truth by incorporating the first end-to-end implementation of our recently proposed machine learning paradigm, data programming. We present a flexible interface layer for writing labeling functions based on our experience over the past year collaborating with companies, agencies, and research labs. In a user study, subject matter experts build models 2.8× faster and increase predictive performance by an average of 45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in this new setting and propose an optimizer for automating tradeoff decisions that gives up to a 1.8× speedup per pipeline execution. In two collaborations, with the U.S. Department of Veterans Affairs and the U.S. Food and Drug Administration, and on four open-source text and image data sets representative of other deployments, Snorkel provides a 132% average improvement to predictive performance over prior heuristic approaches and comes within an average 3.60% of the predictive performance of large hand-curated training sets.
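    A rough sketch of the labeling-function idea: several noisy heuristics each vote on an example or abstain, and their outputs are combined. Note that Snorkel denoises labeling functions with a learned generative model, not the simple majority vote used here; the functions and the spam-detection task are hypothetical:

```python
ABSTAIN, NEG, POS = 0, -1, 1

# Hypothetical labeling functions for a spam-detection task; each may abstain.
def lf_keyword(text):
    return POS if "free money" in text else ABSTAIN

def lf_exclaim(text):
    return POS if text.count("!") >= 3 else ABSTAIN

def lf_greeting(text):
    return NEG if text.startswith("hi ") else ABSTAIN

def vote(text, lfs):
    """Combine noisy labeling functions by majority vote over non-abstentions.
    (Snorkel instead learns each function's accuracy without ground truth.)"""
    votes = [lf(text) for lf in lfs]
    pos, neg = votes.count(POS), votes.count(NEG)
    if pos == neg:
        return ABSTAIN
    return POS if pos > neg else NEG

lfs = [lf_keyword, lf_exclaim, lf_greeting]
label_spam = vote("free money now!!!", lfs)
label_ham = vote("hi team, meeting at noon", lfs)
```

    The resulting probabilistic or voted labels then train a discriminative model, so no example is ever labeled by hand.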

  12. Semi-automatic ground truth generation using unsupervised clustering and limited manual labeling: Application to handwritten character recognition

    PubMed Central

    Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert

    2015-01-01

    For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large-scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster’s center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we have compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set. Considering a k-NN classifier, we show that manually labeling only 1.3% (MNIST) and 3.2% (Lampung) of the training data provides the same range of performance as a completely labeled data set would. PMID:25870463
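    The label-propagation scheme above (cluster, let the expert label only the cluster centers, then let each point inherit its center's label) can be sketched with a tiny one-dimensional k-means; real use would cluster images in several feature spaces and combine the resulting labels by vote:

```python
def kmeans_1d(points, centers, iters=20):
    """Tiny 1-D k-means; returns final centers and each point's cluster index."""
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(len(centers)), key=lambda j: abs(p - centers[j]))
                  for p in points]
        for j in range(len(centers)):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, assign

# Unlabelled "images" reduced to one feature; two obvious groups
points = [0.1, 0.2, 0.0, 9.8, 10.1, 10.0]
centers, assign = kmeans_1d(points, [0.5, 9.0])

# The expert labels only the two cluster centers ...
expert_labels = {0: "zero", 1: "one"}
# ... and every point inherits the label of its cluster's center
labels = [expert_labels[a] for a in assign]
```

    The expert's workload is fixed by the number of clusters, not the number of images, which is why so little manual labeling suffices.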

  13. Classification of breast MRI lesions using small-size training sets: comparison of deep learning approaches

    NASA Astrophysics Data System (ADS)

    Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell

    2017-03-01

    Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
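    One plausible construction of such a multi-channel representation (the abstract does not specify the exact channel definitions, so these are assumptions) is to stack a post-contrast frame with wash-in and wash-out difference images, so morphology and kinetics share one array:

```python
import numpy as np

def to_multichannel(pre, early, delayed):
    """Combine morphology and kinetics in one multi-channel image:
    channel 0: early post-contrast frame (morphology),
    channel 1: early enhancement (early - pre, wash-in),
    channel 2: late change (delayed - early, wash-out or plateau)."""
    return np.stack([early, early - pre, delayed - early], axis=-1)

# Toy 4x4 DCE-MRI frames with hypothetical intensities
pre = np.zeros((4, 4))
early = np.full((4, 4), 2.0)
delayed = np.full((4, 4), 1.5)
multi = to_multichannel(pre, early, delayed)
```

    A convolutional network can then consume the lesion as an ordinary multi-channel image, analogous to RGB input.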

  14. Using Functional or Structural Magnetic Resonance Images and Personal Characteristic Data to Identify ADHD and Autism

    PubMed Central

    Ghiassian, Sina; Greiner, Russell; Jin, Ping; Brown, Matthew R. G.

    2016-01-01

    A clinical tool that can diagnose psychiatric illness using functional or structural magnetic resonance (MR) brain images has the potential to greatly assist physicians and improve treatment efficacy. Working toward the goal of automated diagnosis, we propose an approach for automated classification of ADHD and autism based on histogram of oriented gradients (HOG) features extracted from MR brain images, as well as personal characteristic data features. We describe a learning algorithm that can produce effective classifiers for ADHD and autism when run on two large public datasets. The algorithm is able to distinguish ADHD from control with hold-out accuracy of 69.6% (over baseline 55.0%) using personal characteristics and structural brain scan features when trained on the ADHD-200 dataset (769 participants in training set, 171 in test set). It is able to distinguish autism from control with hold-out accuracy of 65.0% (over baseline 51.6%) using functional images with personal characteristic data when trained on the Autism Brain Imaging Data Exchange (ABIDE) dataset (889 participants in training set, 222 in test set). These results outperform all previously presented methods on both datasets. To our knowledge, this is the first demonstration of a single automated learning process that can produce classifiers for distinguishing patients vs. controls from brain imaging data with above-chance accuracy on large datasets for two different psychiatric illnesses (ADHD and autism). Working toward clinical applications requires robustness against real-world conditions, including the substantial variability that often exists among data collected at different institutions. It is therefore important that our algorithm was successful with the large ADHD-200 and ABIDE datasets, which include data from hundreds of participants collected at multiple institutions. 
While the resulting classifiers are not yet clinically relevant, this work shows that there is a signal in the (f)MRI data that a learning algorithm is able to find. We anticipate this will lead to yet more accurate classifiers, over these and other psychiatric disorders, working toward the goal of a clinical tool for high accuracy differential diagnosis. PMID:28030565
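    A minimal sketch of combining image features with personal characteristic data, as described above: the single global orientation histogram below is a drastic simplification of HOG (which uses local cells and block normalization), and the personal features are hypothetical placeholders:

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Minimal HOG-style descriptor: one histogram of gradient orientations
    over the whole image, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total else hist

def feature_vector(img, personal):
    """Concatenate image features with personal characteristic data
    (e.g. age, sex) to form the combined classifier input."""
    return np.concatenate([orientation_histogram(img), np.asarray(personal, float)])

img = np.tile(np.arange(8.0), (8, 1))  # horizontal intensity ramp
vec = feature_vector(img, personal=[12.0, 1.0])
```

    The combined vector would then feed an ordinary classifier; on the ramp image all gradient energy falls in the first orientation bin, which the test below checks.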

  15. Improvement of Repeated-Sprint Ability and Horizontal-Jumping Performance in Elite Young Basketball Players With Low-Volume Repeated-Maximal-Power Training.

    PubMed

    Gonzalo-Skok, Oliver; Tous-Fajardo, Julio; Arjol-Serrano, José Luis; Suarez-Arrones, Luis; Casajús, José Antonio; Mendez-Villanueva, Alberto

    2016-05-01

    To examine the effects of a low-volume repeated-power-ability (RPA) training program on repeated-sprint and change-of-direction (COD) ability and functional jumping performance. Twenty-two male elite young basketball players (age 16.2 ± 1.2 y, height 190.0 ± 10.0 cm, body mass 82.9 ± 10.1 kg) were randomly assigned either to an RPA-training group (n = 11) or a control group (n = 11). RPA training consisted of leg-press exercise, twice a week for 6 wk, of 1 or 2 blocks of 5 sets × 5 repetitions with 20 s of passive recovery between sets and 3 min between blocks, with the load that maximized power output. Before and after training, performance was assessed by a repeated-sprint-ability (RSA) test, a repeated-COD-ability test, a hop for distance, and a drop jump followed by tests of a double unilateral hop with the right and left legs. Within-group and between-groups differences showed substantial improvements in slowest (RSAs) and mean time (RSAm) on RSA; best, slowest, and mean times on repeated-COD ability; and unilateral right and left hop in the RPA group in comparison with control. While best time on RSA showed no improvement in either group, there was a large relationship (r = .68, 90% CI .43; .84) between the relative decrement in RSAm and RSAs, suggesting better sprint maintenance with RPA training. The relative improvements in best and mean repeated-COD ability were very largely correlated (r = .89, 90% CI .77; .94). Six weeks of low-volume (4-14 min/wk) RPA training improved several physical-fitness tests in basketball players.
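    The sprint decrement referred to above is commonly computed as the percentage by which total sprint time exceeds an ideal repetition of the best sprint (0% meaning no fatigue). A sketch with hypothetical times, assuming this standard formula rather than the study's exact calculation:

```python
def decrement_score(times):
    """Percentage sprint decrement: how far the total time falls short of
    an ideal in which every sprint equalled the best one."""
    best = min(times)
    total = sum(times)
    return 100.0 * (total / (best * len(times)) - 1.0)

# Hypothetical 6 x 30-m repeated-sprint times in seconds, pre and post training
pre = [4.20, 4.35, 4.40, 4.50, 4.55, 4.60]
post = [4.20, 4.25, 4.28, 4.30, 4.32, 4.35]
improvement = decrement_score(pre) - decrement_score(post)
```

    A smaller decrement after training, with the best time unchanged, is exactly the "better sprint maintenance" pattern described in the abstract.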

  16. Development and Two-Year Follow-Up Evaluation of a Training Workshop for the Large Preventive Positive Psychology Happy Family Kitchen Project in Hong Kong

    PubMed Central

    Lai, Agnes Y.; Mui, Moses W.; Wan, Alice; Stewart, Sunita M.; Yew, Carol; Lam, Tai-hing; Chan, Sophia S.

    2016-01-01

    Evidence-based practice and capacity-building approaches are essential for large-scale health promotion interventions. However, there are few models in the literature to guide and evaluate training of social service workers in community settings. This paper presents the development and evaluation of the “train-the-trainer” workshop (TTT) for the first large-scale, community-based, family intervention projects, entitled “Happy Family Kitchen Project” (HFK) under the FAMILY project, a Hong Kong Jockey Club Initiative for a Harmonious Society. The workshop aimed to enhance social workers’ competence and performance in applying positive psychology constructs in their family interventions under HFK to improve family well-being of the community they served. The two-day TTT was developed by a multidisciplinary team in partnership with community agencies and delivered to 50 social workers (64% women). It focused on the enhancement of knowledge, attitude, and practice of five specific positive psychology themes, which were the basis for the subsequent development of the 23 family interventions for 1419 participants. Acceptability and applicability were enhanced by completing a needs assessment prior to the training. The TTT was evaluated by trainees’ reactions to the training content and design, changes in learners (trainees) and benefits to the service organizations. Focus group interviews to evaluate the workshop were conducted three months after the training, and questionnaire surveys were conducted at pre-training, immediately after, and six months, one year and two years after training. There were statistically significant increases, with large to moderate effect sizes, in perceived knowledge, self-efficacy and practice after training, which were sustained at the 2-year follow-up. Furthermore, there were statistically significant improvements in family communication and well-being of the participants in the HFK interventions they implemented after training. 
This paper offers a practical example of development, implementation and model-based evaluation of training programs, which may be helpful to others seeking to develop such programs in diverse communities. PMID:26808541

  17. Development and Two-Year Follow-Up Evaluation of a Training Workshop for the Large Preventive Positive Psychology Happy Family Kitchen Project in Hong Kong.

    PubMed

    Lai, Agnes Y; Mui, Moses W; Wan, Alice; Stewart, Sunita M; Yew, Carol; Lam, Tai-Hing; Chan, Sophia S

    2016-01-01

    Evidence-based practice and capacity-building approaches are essential for large-scale health promotion interventions. However, there are few models in the literature to guide and evaluate training of social service workers in community settings. This paper presents the development and evaluation of the "train-the-trainer" workshop (TTT) for the first large-scale, community-based, family intervention projects, entitled "Happy Family Kitchen Project" (HFK) under the FAMILY project, a Hong Kong Jockey Club Initiative for a Harmonious Society. The workshop aimed to enhance social workers' competence and performance in applying positive psychology constructs in their family interventions under HFK to improve family well-being of the community they served. The two-day TTT was developed by a multidisciplinary team in partnership with community agencies and delivered to 50 social workers (64% women). It focused on the enhancement of knowledge, attitude, and practice of five specific positive psychology themes, which were the basis for the subsequent development of the 23 family interventions for 1419 participants. Acceptability and applicability were enhanced by completing a needs assessment prior to the training. The TTT was evaluated by trainees' reactions to the training content and design, changes in learners (trainees) and benefits to the service organizations. Focus group interviews to evaluate the workshop were conducted three months after the training, and questionnaire surveys were conducted at pre-training, immediately after, and six months, one year and two years after training. There were statistically significant increases, with large to moderate effect sizes, in perceived knowledge, self-efficacy and practice after training, which were sustained at the 2-year follow-up. Furthermore, there were statistically significant improvements in family communication and well-being of the participants in the HFK interventions they implemented after training. 
This paper offers a practical example of development, implementation and model-based evaluation of training programs, which may be helpful to others seeking to develop such programs in diverse communities.

  18. The National Resuscitation Council, Singapore, and 34 years of resuscitation training: 1983 to 2017.

    PubMed

    Anantharaman, Venkataraman

    2017-07-01

    Training in the modern form of cardiopulmonary resuscitation (CPR) started in Singapore in 1983. For the first 15 years, the expansion of training programmes was mainly owing to the interest of a few individuals. Public training in the skill was minimal. In an area of medical care where the greatest opportunity for benefit lies in employing core resuscitation skills in the prehospital environment, very little was being done to address such a need. In 1998, a group of physicians, working together with the Ministry of Health, set up the National Resuscitation Council (NRC). Over the years, the NRC has created national guidelines on resuscitation and reviewed them at five-yearly intervals. Provider training manuals are now available for most programmes. The NRC has set up an active accreditation system for monitoring and maintaining standards of life support training. This has led to a large increase in the number of training centres, as well as recognition and adoption of the council's guidelines in the country. The NRC has also actively promoted the use of bystander CPR through community-based programmes, resulting in a rise in the number of certified providers. Improving the chain of survival, through active community-based training programmes, will likely lead to more lives being saved from sudden cardiac arrest. Copyright: © Singapore Medical Association.

  19. The National Resuscitation Council, Singapore, and 34 years of resuscitation training: 1983 to 2017

    PubMed Central

    Anantharaman, Venkataraman

    2017-01-01

    Training in the modern form of cardiopulmonary resuscitation (CPR) started in Singapore in 1983. For the first 15 years, the expansion of training programmes was mainly owing to the interest of a few individuals. Public training in the skill was minimal. In an area of medical care where the greatest opportunity for benefit lies in employing core resuscitation skills in the prehospital environment, very little was being done to address such a need. In 1998, a group of physicians, working together with the Ministry of Health, set up the National Resuscitation Council (NRC). Over the years, the NRC has created national guidelines on resuscitation and reviewed them at five-yearly intervals. Provider training manuals are now available for most programmes. The NRC has set up an active accreditation system for monitoring and maintaining standards of life support training. This has led to a large increase in the number of training centres, as well as recognition and adoption of the council’s guidelines in the country. The NRC has also actively promoted the use of bystander CPR through community-based programmes, resulting in a rise in the number of certified providers. Improving the chain of survival, through active community-based training programmes, will likely lead to more lives being saved from sudden cardiac arrest. PMID:28741008

  20. Peer-Assisted Learning in the Athletic Training Clinical Setting

    PubMed Central

    Henning, Jolene M; Weidner, Thomas G; Jones, James

    2006-01-01

    Context: Athletic training educators often anecdotally suggest that athletic training students enhance their learning by teaching their peers. However, peer-assisted learning (PAL) has not been examined within athletic training education in order to provide evidence for its current use or as a pedagogic tool. Objective: To describe the prevalence of PAL in athletic training clinical education and to identify students' perceptions of PAL. Design: Descriptive. Setting: “The Athletic Training Student Seminar” at the National Athletic Trainers' Association 2002 Annual Meeting and Clinical Symposia. Patients or Other Participants: A convenience sample of 138 entry-level male and female athletic training students. Main Outcome Measure(s): Students' perceptions regarding the prevalence and benefits of and preferences for PAL were measured using the Athletic Training Peer-Assisted Learning Assessment Survey. The Survey is a self-report tool with 4 items regarding the prevalence of PAL and 7 items regarding perceived benefits and preferences. Results: A total of 66% of participants practiced a moderate to large amount of their clinical skills with other athletic training students. Sixty percent of students reported feeling less anxious when performing clinical skills on patients in front of other athletic training students than in front of their clinical instructors. Chi-square analysis revealed that 91% of students enrolled in Commission on Accreditation of Allied Health Education Programs–accredited athletic training education programs learned a minimal to small amount of clinical skills from their peers compared with 65% of students in Joint Review Committee on Educational Programs in Athletic Training–candidacy schools (χ2(3) = 14.57, P < .01). Multiple analysis of variance revealed significant interactions between sex and academic level on several items regarding benefits and preferences. 
Conclusions: According to athletic training students, PAL is occurring in the athletic training clinical setting. Entry-level students are utilizing their peers as resources for practicing clinical skills and report benefiting from the collaboration. Educators should consider deliberately integrating PAL into athletic training education programs to enhance student learning and collaboration. PMID:16619102

  1. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    NASA Astrophysics Data System (ADS)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply a multiple-point statistics (MPS) algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce the natural variability of the experimental depositional system fairly accurately.
Furthermore, the selected training images provide process information. They fall into three basic patterns: a channelized end member, a sheet flow end member, and one intermediate case. These represent the continuum between autogenic bypass or erosion, and net deposition.
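The distance-based choice of training images can be sketched as a greedy farthest-point pick over a precomputed pairwise dissimilarity matrix. This is an illustrative assumption: the study's actual dissimilarity measure and selection rule are not specified in detail here.

```python
import numpy as np

def select_training_images(dist, k):
    """Greedily pick k mutually dissimilar images from a pairwise
    dissimilarity matrix (farthest-point selection)."""
    n = dist.shape[0]
    # start from one endpoint of the most dissimilar pair
    first = int(np.argmax(dist) // n)
    chosen = [first]
    while len(chosen) < k:
        # each candidate's distance to its nearest already-chosen image
        d_to_set = dist[:, chosen].min(axis=1)
        d_to_set[chosen] = -1.0          # never re-pick a chosen image
        chosen.append(int(np.argmax(d_to_set)))
    return chosen

# toy example: 5 "snapshots" on a line; 0 and 4 are the most dissimilar
pos = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
dist = np.abs(pos[:, None] - pos[None, :])
print(select_training_images(dist, 3))  # → [0, 4, 2]
```

With real snapshots, `dist` would hold the paper's image-to-image dissimilarity distances rather than these toy 1-D gaps.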

  2. Application of Deep Learning in GLOBELAND30-2010 Product Refinement

    NASA Astrophysics Data System (ADS)

    Liu, T.; Chen, X.

    2018-04-01

GlobeLand30, one of the best Global Land Cover (GLC) products at 30-m resolution, has been widely used in many research fields. Due to the significant spectral confusion among different land cover types and the limited textural information of Landsat data, the overall accuracy of GlobeLand30 is about 80%. Although such accuracy is much higher than that of most other global land cover products, it cannot satisfy various applications. There is still a great need for an effective method to improve the quality of GlobeLand30. The explosive growth of high-resolution satellite imagery and the remarkable performance of deep learning on image classification provide a new opportunity to refine GlobeLand30. However, the performance of deep learning depends on the quality and quantity of training samples as well as the model training strategy. Therefore, this paper 1) proposed an automatic training sample generation method via Google Earth to build a large training sample set; and 2) explored the best training strategy for land cover classification using GoogleNet (Inception V3), one of the most widely used deep learning networks. The results show that fine-tuning from the first layer of Inception V3 using the rough large sample set is the best strategy. The retrained network was then applied to one selected area of Xi'an city as a case study of GlobeLand30 refinement. The experimental results indicate that the proposed approach, combining deep learning and Google Earth imagery, is a promising solution for further improving the accuracy of GlobeLand30.
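The strategy comparison (fine-tuning from the first layer versus retraining only the classifier head) can be illustrated on a toy two-layer linear model in NumPy. This is emphatically not Inception V3 — just a minimal sketch, under fabricated data, of why updating all layers can reach a lower error than updating the head alone:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10)                # ground-truth linear target

def train(tune_all, steps=2000, lr=0.02):
    r = np.random.default_rng(1)           # same init for both strategies
    W1 = r.normal(size=(10, 5)) * 0.3      # "pretrained" feature layer
    w2 = np.zeros(5)                       # fresh task-specific head
    for _ in range(steps):
        h = X @ W1
        err = h @ w2 - y
        if tune_all:                       # fine-tune from the first layer
            W1 -= lr * np.outer(X.T @ err / len(y), w2)
        w2 -= lr * h.T @ err / len(y)      # the head is always trained
    return np.mean((X @ W1 @ w2 - y) ** 2)

mse_head_only = train(tune_all=False)
mse_all_layers = train(tune_all=True)      # reaches a lower final error
```

The head-only model is stuck in the fixed feature subspace, while full fine-tuning can rotate the feature layer toward the target — the same intuition behind fine-tuning all layers of a pretrained network.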

  3. Psychological training of NASA astronauts for extended missions

    NASA Technical Reports Server (NTRS)

    Holland, A. W.

    1992-01-01

The success of operational teams working in remote and hostile environments rests in large part on adequate preparation of those teams prior to emplacement in field settings. Psychological training, directed at the maintenance of crew health and performance, becomes increasingly important as space missions grow in duration and complexity. Methods: Topics to be discussed include: the conceptual framework of psychological training; needs analysis; content and delivery options; methods of assessing training efficacy; use of testbeds and analog environments; and the relationship of training to crew selection and real-time support activities. Results and Conclusions: This paper will discuss the psychological training approach being developed at the NASA/JSC Behavior and Performance Laboratory. This approach will be compared and contrasted with those underway in the U.S. Department of Defense and in other space agencies.

  4. Association of high proliferation marker Ki-67 expression with DCEMR imaging features of breast: a large scale evaluation

    NASA Astrophysics Data System (ADS)

    Saha, Ashirbani; Harowicz, Michael R.; Grimm, Lars J.; Kim, Connie E.; Ghate, Sujata V.; Walsh, Ruth; Mazurowski, Maciej A.

    2018-02-01

One of the methods widely used to measure the proliferative activity of cells in breast cancer patients is the immunohistochemical (IHC) measurement of the percentage of cells stained for nuclear antigen Ki-67. Use of Ki-67 expression as a prognostic marker is still under investigation; however, numerous clinical studies have reported an association between high Ki-67 and overall survival (OS) and disease-free survival (DFS). To offer a non-invasive alternative for determining Ki-67 expression, researchers have made recent attempts to study the association of Ki-67 expression with magnetic resonance (MR) imaging features of breast cancer in small cohorts (<30). Here, we present a large-scale evaluation of the relationship between imaging features and Ki-67 score: (a) we used a set of 450 invasive breast cancer patients; (b) we extracted a set of 529 imaging features of shape and enhancement from the breast, tumor, and fibroglandular tissue of the patients; (c) we used a subset of patients as the training set to select features and train a multivariate logistic regression model to predict high versus low Ki-67 values; and (d) we validated the performance of the trained model in an independent test set using the area under the receiver operating characteristic (ROC) curve (AUC) of the predicted values. Our model was able to predict high versus low Ki-67 in the test set with an AUC of 0.67 (95% CI: 0.58-0.75, p<1.1e-04). Thus, a moderate strength of association between Ki-67 values and MR-extracted imaging features was demonstrated in our experiments.
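The pipeline in steps (c) and (d) — select features on the training subset only, fit a logistic model, then validate with AUC on a held-out set — can be sketched with scikit-learn. The synthetic matrix below merely borrows the abstract's dimensions (450 patients, 529 features); the feature selector and all other choices are illustrative assumptions, not the authors' exact method:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# synthetic stand-in: 450 "patients" x 529 imaging features, binary high/low Ki-67
X, y = make_classification(n_samples=450, n_features=529, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# feature selection happens inside the pipeline, i.e. on training data only,
# so the held-out AUC is not optimistically biased
model = make_pipeline(SelectKBest(f_classif, k=30),
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Keeping selection and fitting inside one pipeline is the key design point: selecting features on the full data first would leak test information into the model.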

  5. Virtual agents in a simulated virtual training environment

    NASA Technical Reports Server (NTRS)

    Achorn, Brett; Badler, Norman L.

    1993-01-01

    A drawback to live-action training simulations is the need to gather a large group of participants in order to train a few individuals. One solution to this difficulty is the use of computer-controlled agents in a virtual training environment. This allows a human participant to be replaced by a virtual, or simulated, agent when only limited responses are needed. Each agent possesses a specified set of behaviors and is capable of limited autonomous action in response to its environment or the direction of a human trainee. The paper describes these agents in the context of a simulated hostage rescue training session, involving two human rescuers assisted by three virtual (computer-controlled) agents and opposed by three other virtual agents.

  6. The Importance of Curriculum-Based Training and Assessment in Interventional Radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belli, Anna-Maria, E-mail: anna.belli@stgeorges.nhs.uk; Reekers, Jim A., E-mail: j.a.reekers@amc.uva.nl; Lee, Michael, E-mail: mlee@rcsi.ie

Physician performance and outcomes are being scrutinised by health care providers to improve patient safety and cost efficiency. Patients are best served by physicians who have undergone appropriate specialist training and assessment and perform large numbers of cases to maintain their skills. The Cardiovascular and Interventional Radiological Society of Europe has put into place a curriculum for training in interventional radiology (IR) and a syllabus with an examination, the European Board of Interventional Radiology, providing evidence of attainment of an appropriate and satisfactory skill set for the safe practice of IR. This curriculum is appropriate for IR, where there is a high volume of image-guided procedures in vascular and nonvascular organ systems with cross-use of minimally invasive techniques in patients with a variety of disease processes. Other specialties may require different, longer, and more focused training if their experience is “diluted” by the need to master a different skill set.

  7. Development and Validation of Decision Forest Model for Estrogen Receptor Binding Prediction of Chemicals Using Large Data Sets.

    PubMed

    Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2015-12-21

    Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.
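A forest-of-trees workflow of this kind can be sketched with scikit-learn's RandomForestClassifier on synthetic descriptors. Note the authors' decision forest is a distinct algorithm, so this is only an analogous ensemble-of-trees illustration, including the cross-validation and the "informative descriptor" inspection the abstract mentions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for chemical descriptors and ER-binding labels
X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()   # internal cross-validation

# descriptor importances indicate which features drive the prediction,
# analogous to the paper's informative descriptor analysis
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
```

An external validation, as in the abstract, would score the fitted model on a data set from an entirely different source rather than on held-out folds.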

  8. Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.

    PubMed

    Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X

    2015-01-01

    Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
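The core idea — a faster learner can consume more training samples within a fixed wall-clock budget, and only an effective one converts that into higher test accuracy — can be sketched with an incremental classifier and a deadline. All names and numbers below are illustrative, not the paper's protocol:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, y_train = X[:15000], y[:15000]
X_test, y_test = X[15000:], y[15000:]

def learn_within_budget(clf, budget_s, batch=500):
    """Feed the classifier training batches until the time budget expires,
    then score it on the unseen test set."""
    deadline = time.perf_counter() + budget_s
    classes = np.unique(y_train)
    i = 0
    while time.perf_counter() < deadline and i < len(X_train):
        clf.partial_fit(X_train[i:i + batch], y_train[i:i + batch],
                        classes=classes)
        i += batch
    return i, clf.score(X_test, y_test)

seen, acc = learn_within_budget(SGDClassifier(random_state=0), budget_s=0.5)
```

Running several classifiers through the same function with the same budget yields the comparison the methodology describes: samples consumed versus accuracy achieved per unit of learning time.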

  9. Increases in lower-body strength transfer positively to sprint performance: a systematic review with meta-analysis.

    PubMed

    Seitz, Laurent B; Reyes, Alvaro; Tran, Tai T; Saez de Villarreal, Eduardo; Haff, G Gregory

    2014-12-01

Although lower-body strength is correlated with sprint performance, whether increases in lower-body strength transfer positively to sprint performance remains unclear. This meta-analysis determined whether increases in lower-body strength (measured with the free-weight back squat exercise) transfer positively to sprint performance, and identified the effects of various subject characteristics and resistance-training variables on the magnitude of sprint improvement. A computerized search was conducted in ADONIS, ERIC, SPORTDiscus, EBSCOhost, Google Scholar, MEDLINE and PubMed databases, and references of original studies and reviews were searched for further relevant studies. The analysis comprised 510 subjects and 85 effect sizes (ESs), nested within 26 experimental and 11 control groups across 15 studies. There is a transfer between increases in lower-body strength and sprint performance as indicated by a very large significant correlation (r = -0.77; p = 0.0001) between squat strength ES and sprint ES. Additionally, the magnitude of sprint improvement is affected by the level of practice (p = 0.03) and body mass (r = 0.35; p = 0.011) of the subject, the frequency of resistance-training sessions per week (r = 0.50; p = 0.001) and the rest interval between sets of resistance-training exercises (r = -0.47; p ≤ 0.001). Conversely, the magnitude of sprint improvement is not affected by the athlete's age (p = 0.86) and height (p = 0.08), the resistance-training methods used throughout the training intervention (p = 0.06), average load intensity [% of 1 repetition maximum (RM)] used during the resistance-training sessions (p = 0.34), training program duration (p = 0.16), number of exercises per session (p = 0.16), number of sets per exercise (p = 0.06) and number of repetitions per set (p = 0.48). Increases in lower-body strength transfer positively to sprint performance.
The magnitude of sprint improvement is affected by numerous subject characteristics and resistance-training variables, but the large difference in number of ESs available should be taken into consideration. Overall, the reported improvement in sprint performance (sprint ES = -0.87, mean sprint improvement = 3.11 %) resulting from resistance training is of practical relevance for coaches and athletes in sport activities requiring high levels of speed.

  10. Cultural competency training of GP Registrars-exploring the views of GP Supervisors.

    PubMed

    Watt, Kelly; Abbott, Penny; Reath, Jenny

    2015-10-06

    An equitable multicultural society requires General Practitioners (GPs) to be proficient in providing health care to patients from diverse backgrounds. This requires a certain set of attitudes, knowledge and skills known as cultural competence. While training in cultural competence is an important part of the Australian GP Registrar training curriculum, it is unclear who provides this training apart from in Aboriginal and Torres Strait Islander training posts. The majority of Australian GP Registrar training takes place in a workplace setting facilitated by the GP Supervisor. In view of the central role of GP Supervisors, their views on culturally competent practice, and their role in its development in Registrars, are important to ascertain. We conducted 14 semi-structured interviews with GP Supervisors. These were audiotaped, transcribed verbatim and thematically analyzed using an iterative approach. The Supervisors interviewed frequently viewed cultural competence as adequately covered by using patient-centered approaches. The Supervisor role in promoting cultural competence of Registrars was affirmed, though training was noted to occur opportunistically and focused largely on patient-centered care rather than health disparities. Formal training for both Registrars and Supervisors may be beneficial not only to develop a deeper understanding of cultural competence and its relevance to practice but also to promote more consistency in training from Supervisors in the area, particularly with respect to self-reflection, non-conscious bias and utilizing appropriate cultural knowledge without stereotyping and assumption-making.

  11. Specific physical trainability in elite young soccer players: efficiency over 6 weeks’ in-season training

    PubMed Central

    Rouissi, M; Haddad, M; Chtara, H; Chaalali, A; Owen, A; Chamari, K

    2017-01-01

    The aim of the present study was to compare the effects of 3 training protocols (plyometric [PLYO], agility [AG], or repeated shuttle sprints [RS]) on physical performance in the same population of young soccer players. Forty-two youth-level male players (13.6±0.3-years; 1.65±0.07 m; 54.1±6.5 kg; body fat: 12.8±2.6%) participated in a short-term (6-week) randomized parallel fully controlled training study (pre-to-post measurements): PLYO group, n=10; AG group, n=10; RS group, n=12; and control group [CON] n=10. PLYO training = 9 lower limb exercises (2-3 sets of 8-12 repetitions). The AG group performed planned AG drills and direction changes. RS training consisted of 2-4 sets of 5-6x 20 to 30 m shuttle sprints (20 seconds recovery in between). Progressive overload principles were incorporated into the programme by increasing the number of foot contacts and varying the complexity of the exercises. Pre/post-training tests were: bilateral standing horizontal jump, and unilateral horizontal jumps, sprint (30 m with 10 m lap time), agility (20 m zigzag), and repeated sprint ability (RSA) (i.e. 6x30 m shuttle sprints: 2x15 m with 180° turns). Significant main effects for time (i.e. training application) and group (training type) were detected. Improvements in horizontal jumping were higher (p<0.01: ES=large) in PLYO. The RS group improved significantly more (p<0.01; ES=large) than other groups: 30 m sprint, RSAbest and RSAmean performances. Significantly greater increases in 20 m zigzag performance were observed following AG and RS training (4.0 and 3.8%, respectively) compared with PLYO (2.0%) and CON training (0.8%). No significant differences were reported in the RSAdec between groups. Elite young male soccer players’ physical performances can be significantly and specifically improved either using PLYO or AG or RSA training over short-term in-season training. PMID:28566807

  12. Specific physical trainability in elite young soccer players: efficiency over 6 weeks' in-season training.

    PubMed

    Chtara, M; Rouissi, M; Haddad, M; Chtara, H; Chaalali, A; Owen, A; Chamari, K

    2017-06-01

    The aim of the present study was to compare the effects of 3 training protocols (plyometric [PLYO], agility [AG], or repeated shuttle sprints [RS]) on physical performance in the same population of young soccer players. Forty-two youth-level male players (13.6±0.3-years; 1.65±0.07 m; 54.1±6.5 kg; body fat: 12.8±2.6%) participated in a short-term (6-week) randomized parallel fully controlled training study (pre-to-post measurements): PLYO group, n=10; AG group, n=10; RS group, n=12; and control group [CON] n=10. PLYO training = 9 lower limb exercises (2-3 sets of 8-12 repetitions). The AG group performed planned AG drills and direction changes. RS training consisted of 2-4 sets of 5-6x 20 to 30 m shuttle sprints (20 seconds recovery in between). Progressive overload principles were incorporated into the programme by increasing the number of foot contacts and varying the complexity of the exercises. Pre/post-training tests were: bilateral standing horizontal jump, and unilateral horizontal jumps, sprint (30 m with 10 m lap time), agility (20 m zigzag), and repeated sprint ability (RSA) (i.e. 6x30 m shuttle sprints: 2x15 m with 180° turns). Significant main effects for time (i.e. training application) and group (training type) were detected. Improvements in horizontal jumping were higher (p<0.01: ES=large) in PLYO. The RS group improved significantly more (p<0.01; ES=large) than other groups: 30 m sprint, RSA best and RSA mean performances. Significantly greater increases in 20 m zigzag performance were observed following AG and RS training (4.0 and 3.8%, respectively) compared with PLYO (2.0%) and CON training (0.8%). No significant differences were reported in the RSA dec between groups. Elite young male soccer players' physical performances can be significantly and specifically improved either using PLYO or AG or RSA training over short-term in-season training.

  13. The effects of exercise on muscle strength, body composition, physical functioning and the inflammatory profile of older adults: a systematic review.

    PubMed

    Liberman, Keliane; Forti, Louis N; Beyer, Ingo; Bautmans, Ivan

    2017-01-01

This systematic review reports the most recent literature regarding the effects of physical exercise on muscle strength, body composition, physical functioning and inflammation in older adults. All articles were assessed for methodological quality and, where possible, effect size was calculated. Thirty-four articles were included: four involving frail older adults, 24 involving healthy older adults, and five involving older adults with a specific disease; one reported on both frail and nonfrail patients. Several types of exercise were used: resistance training, aerobic training, combined resistance and aerobic training, and others. In frail older persons, moderate-to-large beneficial exercise effects were noted on inflammation, muscle strength and physical functioning. In healthy older persons, effects of resistance training (most frequently investigated) on inflammation or muscle strength can be influenced by the exercise modalities (intensity and rest interval between sets). Muscle strength seemed the most frequently used outcome measure, with moderate-to-large effects obtained regardless of the exercise intervention studied. Similar effects were found in patients with specific diseases. Exercise has moderate-to-large effects on muscle strength, body composition, physical functioning and inflammation in older adults. Future studies should focus on the influence of specific exercise modalities and target the frail population more.

  14. Training set selection for the prediction of essential genes.

    PubMed

    Cheng, Jian; Xu, Zhao; Wu, Wenwu; Zhao, Li; Li, Xiangchen; Liu, Yanlin; Tao, Shiheng

    2014-01-01

    Various computational models have been developed to transfer annotations of gene essentiality between organisms. However, despite the increasing number of microorganisms with well-characterized sets of essential genes, selection of appropriate training sets for predicting the essential genes of poorly-studied or newly sequenced organisms remains challenging. In this study, a machine learning approach was applied reciprocally to predict the essential genes in 21 microorganisms. Results showed that training set selection greatly influenced predictive accuracy. We determined four criteria for training set selection: (1) essential genes in the selected training set should be reliable; (2) the growth conditions in which essential genes are defined should be consistent in training and prediction sets; (3) species used as training set should be closely related to the target organism; and (4) organisms used as training and prediction sets should exhibit similar phenotypes or lifestyles. We then analyzed the performance of an incomplete training set and an integrated training set with multiple organisms. We found that the size of the training set should be at least 10% of the total genes to yield accurate predictions. Additionally, the integrated training sets exhibited remarkable increase in stability and accuracy compared with single sets. Finally, we compared the performance of the integrated training sets with the four criteria and with random selection. The results revealed that a rational selection of training sets based on our criteria yields better performance than random selection. Thus, our results provide empirical guidance on training set selection for the identification of essential genes on a genome-wide scale.
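The four selection criteria plus the 10%-of-genes size threshold can be encoded as a simple filter over candidate training sets. The dictionary fields below are hypothetical stand-ins for the paper's criteria, not an actual bioinformatics pipeline:

```python
def eligible_training_sets(candidates, target, total_genes):
    """Keep candidate training organisms that satisfy the four criteria
    (reliability, matching growth conditions, phylogenetic closeness,
    similar lifestyle) plus the >=10%-of-genes size threshold."""
    keep = []
    for c in candidates:
        if (c["reliable"]
                and c["growth_condition"] == target["growth_condition"]
                and c["closely_related"]
                and c["lifestyle"] == target["lifestyle"]
                and c["n_essential_genes"] >= 0.1 * total_genes):
            keep.append(c["name"])
    return keep

target = {"growth_condition": "rich_medium", "lifestyle": "free_living"}
candidates = [
    {"name": "org_A", "reliable": True, "growth_condition": "rich_medium",
     "closely_related": True, "lifestyle": "free_living",
     "n_essential_genes": 450},
    {"name": "org_B", "reliable": True, "growth_condition": "minimal_medium",
     "closely_related": True, "lifestyle": "free_living",
     "n_essential_genes": 500},
]
print(eligible_training_sets(candidates, target, total_genes=4000))  # → ['org_A']
```

In the paper's integrated setting, several organisms passing this filter would be pooled into one training set, which the authors found more stable and accurate than any single set.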

  15. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    PubMed

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As the server covers MHC class I prediction only, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free of use and available at: http://www.cbs.dtu.dk/services/NetMHC.
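The PSSM half of such a predictor reduces to summing position-specific log-odds scores over the peptide. A minimal sketch with a made-up matrix over a toy 3-letter alphabet (real PSSMs cover the 20 amino acids and are derived from binding data):

```python
# toy PSSM for peptides of length 4 over a 3-letter alphabet;
# pssm[pos][residue] is a log-odds score (hypothetical numbers)
pssm = [
    {"A": 1.2, "C": -0.5, "D": 0.1},
    {"A": 0.3, "C": 0.8, "D": -1.0},
    {"A": -0.2, "C": 0.4, "D": 0.9},
    {"A": 0.5, "C": -0.3, "D": 0.2},
]

def pssm_score(peptide):
    """Sum the position-specific log-odds scores, as in PSSM-based
    peptide-binding prediction."""
    return sum(pssm[i][res] for i, res in enumerate(peptide))

print(round(pssm_score("ACDA"), 2))  # 1.2 + 0.8 + 0.9 + 0.5 → 3.4
```

The neural-network half of the server instead outputs a predicted IC(50) affinity, which is why the two score types in the abstract are on different scales.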

  16. The effects of low-volume resistance training with and without advanced techniques in trained subjects.

    PubMed

Gießing, Jürgen; Fisher, James; Steele, James; Rothe, Frank; Raubold, Kristin; Eichmann, Björn

    2016-03-01

This study examined low-volume resistance training (RT) in trained participants with and without advanced training methods. Trained participants (RT experience 4±3 years) were randomised to groups performing single-set RT: ssRM (N.=21) performing repetitions to self-determined repetition maximum (RM), ssMMF (N.=30) performing repetitions to momentary muscular failure (MMF), and ssRP (N.=28) performing repetitions to self-determined RM using a rest-pause (RP) method. Each performed supervised RT twice/week for 10 weeks. Outcomes included maximal isometric strength and body composition using bioelectrical impedance analysis. The ssRM group did not significantly improve in any outcome. The ssMMF and ssRP groups both significantly improved strength (P<0.05). Magnitude of changes using effect size (ES) was examined between groups. Strength ESs were considered large for ssMMF (0.91 to 1.57) and ranged from small to large for ssRP (0.42 to 1.06). Body composition data revealed significant improvements (P<0.05) in muscle and fat mass and percentages for whole body, upper limbs and trunk for ssMMF, but only upper limbs for ssRP. Body composition ESs ranged from moderate to large for ssMMF (0.56 to 1.27) and from small to moderate for ssRP (0.28 to 0.52). ssMMF also significantly improved (P<0.05) total abdominal fat and increased intracellular water with moderate ESs (-0.62 and 0.56, respectively). Training to self-determined RM is not efficacious for trained participants. Training to MMF produces the greatest improvements in strength and body composition; however, RP-style training does offer some benefit.

  17. A decentralized training algorithm for Echo State Networks in distributed big data applications.

    PubMed

    Scardapane, Simone; Wang, Dianhui; Panella, Massimo

    2016-06-01

The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy.
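A minimal centralized Echo State Network — fixed random reservoir, ridge-regression readout — is the baseline such a decentralized scheme is compared against; in the ADMM variant, each agent would solve a local version of the same ridge problem on its own data and exchange only its weight estimate with neighbors. A NumPy sketch with illustrative sizes (not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- reservoir: fixed random weights, scaled for the echo-state property ---
n_in, n_res = 1, 50
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collecting its states."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t:t + 1] + W @ x)
        states.append(x.copy())
    return np.array(states)

# task: one-step-ahead prediction of a sine wave
u = np.sin(np.linspace(0, 8 * np.pi, 400))
H = run_reservoir(u[:-1])
y = u[1:]

# --- readout: ridge regression, the only trained part of an ESN ---
lam = 1e-6
w_out = np.linalg.solve(H.T @ H + lam * np.eye(n_res), H.T @ y)
mse = np.mean((H @ w_out - y) ** 2)
```

Because only this linear readout is trained, the decentralized problem reduces to a distributed ridge regression, which is exactly what ADMM handles well without a coordinating node.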

  18. ReactionMap: an efficient atom-mapping algorithm for chemical reactions.

    PubMed

    Fooshee, David; Andronico, Alessio; Baldi, Pierre

    2013-11-25

    Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu .
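The "minimization of an assignment cost function" step can be illustrated with the Hungarian algorithm (SciPy's `linear_sum_assignment`) on a made-up reactant-to-product atom cost matrix; ReactionMap's actual cost function is learned empirically from training data, so the numbers here are purely hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# hypothetical cost of mapping each reactant atom (row) to each
# product atom (column); lower = more chemically plausible
cost = np.array([
    [0.1, 2.0, 3.0],
    [2.5, 0.2, 1.8],
    [3.1, 1.9, 0.3],
])

rows, cols = linear_sum_assignment(cost)          # minimum-cost assignment
pairs = [(int(r), int(c)) for r, c in zip(rows, cols)]
total = round(float(cost[rows, cols].sum()), 2)
print(pairs, total)  # → [(0, 0), (1, 1), (2, 2)] 0.6
```

In the full algorithm, the maximum common chemical subgraph search first fixes the easy correspondences, and an assignment step like this resolves the remaining atoms.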

  19. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    PubMed

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. 
It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
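The selection loop described above (bootstrap, candidate search, BIC scoring, repeat, keep the most frequent winning model) can be sketched in simplified form. The sketch below is illustrative only: exhaustive search over tiny feature subsets stands in for the genetic algorithm, a plain binary logistic fit stands in for ordinal logistic regression, the multicollinearity step is omitted, and all data and settings are made up.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def logistic_bic(X, y, iters=500, lr=0.1):
    """BIC of a simple binary logistic fit via gradient ascent.

    Simplified stand-in for the paper's ordinal logistic regression.
    """
    n, k = X.shape
    Xb = np.column_stack([np.ones(n), X])
    w = np.zeros(k + 1)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / n          # gradient of the log-likelihood
    p = np.clip(1.0 / (1.0 + np.exp(-Xb @ w)), 1e-9, 1 - 1e-9)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return (k + 1) * np.log(n) - 2 * loglik   # parameters incl. intercept

def select_model(X, y, n_boot=20, max_feats=2, rng=None):
    """Bootstrap + exhaustive subset search (toy GA replacement) + BIC.

    The most frequent winning feature subset across bootstraps is returned,
    mirroring the 'most frequent model' selection step.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    wins = Counter()
    subsets = [c for r in range(1, max_feats + 1)
               for c in combinations(range(d), r)]
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # bootstrap sample
        best = min(subsets,
                   key=lambda c: logistic_bic(X[idx][:, list(c)], y[idx]))
        wins[best] += 1
    return wins.most_common(1)[0][0]

rng = np.random.default_rng(7)
X = rng.standard_normal((150, 3))             # 3 candidate predictors
y = (X[:, 0] + 0.5 * rng.standard_normal(150) > 0).astype(float)
print(select_model(X, y, rng=rng))            # feature 0 drives the outcome
```

With only the first column truly predictive, the BIC penalty usually steers the most frequent winner toward the single informative feature.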

  20. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    NASA Technical Reports Server (NTRS)

    Bradley, James R.

    2012-01-01

This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required to develop that model is then determined; we conclude that six hours of training are required to teach those skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We therefore argue that industry workers with the same technical skill set as students who have completed one year of an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.

  1. Prevention and early intervention for behaviour problems in children with developmental disabilities.

    PubMed

    Einfeld, Stewart L; Tonge, Bruce J; Clarke, Kristina S

    2013-05-01

    To review the recent evidence regarding early intervention and prevention studies for children with developmental disabilities and behaviour problems from 2011 to 2013. Recent advances in the field are discussed and important areas for future research are highlighted. Recent reviews and studies highlight the utility of antecedent interventions and skills training interventions for reducing behaviour problems. There is preliminary evidence for the effectiveness of parent training interventions when delivered in minimally sufficient formats or in clinical settings. Two recent studies have demonstrated the utility of behavioural interventions for children with genetic causes of disability. Various forms of behavioural and parent training interventions are effective at reducing the behaviour problems in children with developmental disabilities. However, research on prevention and early intervention continues to be relatively scarce. Further large-scale dissemination studies and effectiveness studies in clinical or applied settings are needed.

  2. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
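One of the remedies mentioned, importance weighting, can be illustrated with a toy one-dimensional version: training objects are re-weighted by an estimated density ratio p_test(x)/p_train(x). The histogram estimator and the Gaussian "magnitude" distributions below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def importance_weights(train_x, test_x, bins=10):
    """Estimate p_test(x)/p_train(x) per training sample via shared histograms.

    A crude 1-D density-ratio estimate; a real survey application would use
    a proper density-ratio estimator over multiple features.
    """
    lo = min(train_x.min(), test_x.min())
    hi = max(train_x.max(), test_x.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_train, _ = np.histogram(train_x, bins=edges, density=True)
    p_test, _ = np.histogram(test_x, bins=edges, density=True)
    idx = np.clip(np.digitize(train_x, edges) - 1, 0, bins - 1)
    return p_test[idx] / np.maximum(p_train[idx], 1e-12)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 2000)   # brighter, more nearby training objects
test = rng.normal(1.0, 1.0, 2000)    # fainter, deeper survey targets
w = importance_weights(train, test)
# Training samples lying where the test density is high get up-weighted.
print(w[train > 1].mean() > w[train < -1].mean())
```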

  3. Training and Assessment of Hysteroscopic Skills: A Systematic Review.

    PubMed

    Savran, Mona Meral; Sørensen, Stine Maya Dreier; Konge, Lars; Tolsgaard, Martin G; Bjerrum, Flemming

    2016-01-01

    The aim of this systematic review was to identify studies on hysteroscopic training and assessment. PubMed, Excerpta Medica, the Cochrane Library, and Web of Science were searched in January 2015. Manual screening of references and citation tracking were also performed. Studies on hysteroscopic educational interventions were selected without restrictions on study design, populations, language, or publication year. A qualitative data synthesis including the setting, study participants, training model, training characteristics, hysteroscopic skills, assessment parameters, and study outcomes was performed by 2 authors working independently. Effect sizes were calculated when possible. Overall, 2 raters independently evaluated sources of validity evidence supporting the outcomes of the hysteroscopy assessment tools. A total of 25 studies on hysteroscopy training were identified, of which 23 were performed in simulated settings. Overall, 10 studies used virtual-reality simulators and reported effect sizes for technical skills ranging from 0.31 to 2.65; 12 used inanimate models and reported effect sizes for technical skills ranging from 0.35 to 3.19. One study involved live animal models; 2 studies were performed in clinical settings. The validity evidence supporting the assessment tools used was low. Consensus between the 2 raters on the reported validity evidence was high (94%). This systematic review demonstrated large variations in the effect of different tools for hysteroscopy training. The validity evidence supporting the assessment of hysteroscopic skills was limited. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  4. Unsupervised progressive elastic band exercises for frail geriatric inpatients objectively monitored by new exercise-integrated technology-a feasibility trial with an embedded qualitative study.

    PubMed

    Rathleff, C R; Bandholm, T; Spaich, E G; Jorgensen, M; Andreasen, J

    2017-01-01

Frailty is a serious condition frequently present in geriatric inpatients that potentially causes serious adverse events. Strength training is acknowledged as a means of preventing or delaying frailty and loss of function in these patients. However, limited hospital resources challenge the amount of supervised training, and unsupervised training could possibly supplement supervised training, thereby increasing the total exercise dose during admission. A new valid and reliable technology, the BandCizer, objectively measures the exact training dosage performed. The purpose was to investigate feasibility and acceptability of an unsupervised progressive strength training intervention monitored by BandCizer for frail geriatric inpatients. This feasibility trial included 15 frail inpatients at a geriatric ward. At hospitalization, the patients were prescribed two elastic band exercises to be performed unsupervised once daily. A BandCizer Datalogger enabling measurement of the number of sets, repetitions, and time-under-tension was attached to the elastic band. The patients were instructed in performing strength training: 3 sets of 10 repetitions (10-12 repetition maximum (RM)) with a separation of 2-min pauses and a time-under-tension of 8 s. The feasibility criterion for the unsupervised progressive exercises was that 33% of the recommended number of sets would be performed by at least 30% of patients. In addition, patients and staff were interviewed about their experiences with the intervention. Four (27%) out of 15 patients completed 33% of the recommended number of sets. For the total sample, the average percentage of performed sets was 23%, and for those who actually trained (n = 12), 26%. Patients and staff expressed a generally positive attitude towards the unsupervised training as an addition to the supervised training sessions. However, barriers were also described, especially constant interruptions. 
Based on the predefined criterion for feasibility, the unsupervised training was not feasible, although the criterion was almost met. The patients and staff mainly expressed positive attitudes towards the unsupervised training. As even a small training dosage has been shown to improve the physical performance of geriatric inpatients, the proposed intervention might be relevant if the interruptions are decreased in future large-scale trials and if the adherence is increased. ClinicalTrials.gov: NCT02702557, February 29, 2016. Data Protection Agency: 2016-42, February 25, 2016. Ethics Committee: No registration needed, December 8, 2015 (e-mail correspondence).

  5. A Systematic Review of Biopsychosocial Training Programs for the Self-Management of Emotional Stress: Potential Applications for the Military

    PubMed Central

    Clausen, Shawn S.; Jonas, Wayne B.; Walter, Joan A. G.

    2013-01-01

Combat-exposed troops and their family members are at risk for stress reactions and related disorders. Multimodal biopsychosocial training programs incorporating complementary and alternative self-management techniques have the potential to reduce stress-related symptoms and dysfunction. Such training can preempt or attenuate the posttraumatic stress response and may be effectively incorporated into the training cycle for deploying and redeploying troops and their families. A large systematic review was conducted to survey the literature on multimodal training programs for the self-management of emotional stress. This report is an overview of the randomized controlled trials (RCTs) identified in this systematic review. Select programs such as Mindfulness-Based Stress Reduction, Cognitive Behavioral Stress Management, Autogenic Training, Relaxation Response Training, and other meditation and mind-body skills practices are highlighted, and the feasibility of their implementation within military settings is addressed. PMID:24174982

  6. A systematic review of biopsychosocial training programs for the self-management of emotional stress: potential applications for the military.

    PubMed

    Crawford, Cindy; Wallerstedt, Dawn B; Khorsan, Raheleh; Clausen, Shawn S; Jonas, Wayne B; Walter, Joan A G

    2013-01-01

Combat-exposed troops and their family members are at risk for stress reactions and related disorders. Multimodal biopsychosocial training programs incorporating complementary and alternative self-management techniques have the potential to reduce stress-related symptoms and dysfunction. Such training can preempt or attenuate the posttraumatic stress response and may be effectively incorporated into the training cycle for deploying and redeploying troops and their families. A large systematic review was conducted to survey the literature on multimodal training programs for the self-management of emotional stress. This report is an overview of the randomized controlled trials (RCTs) identified in this systematic review. Select programs such as Mindfulness-Based Stress Reduction, Cognitive Behavioral Stress Management, Autogenic Training, Relaxation Response Training, and other meditation and mind-body skills practices are highlighted, and the feasibility of their implementation within military settings is addressed.

  7. Snorkel: Rapid Training Data Creation with Weak Supervision

    PubMed Central

    Ratner, Alexander; Bach, Stephen H.; Ehrenberg, Henry; Fries, Jason; Wu, Sen; Ré, Christopher

    2018-01-01

Labeling training data is increasingly the largest bottleneck in deploying machine learning systems. We present Snorkel, a first-of-its-kind system that enables users to train state-of-the-art models without hand labeling any training data. Instead, users write labeling functions that express arbitrary heuristics, which can have unknown accuracies and correlations. Snorkel denoises their outputs without access to ground truth by incorporating the first end-to-end implementation of our recently proposed machine learning paradigm, data programming. We present a flexible interface layer for writing labeling functions based on our experience over the past year collaborating with companies, agencies, and research labs. In a user study, subject matter experts build models 2.8× faster and increase predictive performance an average 45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in this new setting and propose an optimizer for automating tradeoff decisions that gives up to 1.8× speedup per pipeline execution. In two collaborations, with the U.S. Department of Veterans Affairs and the U.S. Food and Drug Administration, and on four open-source text and image data sets representative of other deployments, Snorkel provides 132% average improvements to predictive performance over prior heuristic approaches and comes within an average 3.60% of the predictive performance of large hand-curated training sets. PMID:29770249
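A toy version of the labeling-function idea is easy to sketch. Note this is not Snorkel's actual API: the spam heuristics below are hypothetical, and the naive majority vote stands in for data programming, which learns each function's unknown accuracy and correlations from the votes alone.

```python
ABSTAIN, NEG, POS = -1, 0, 1

# Hypothetical heuristics for a toy spam task; real labeling functions
# encode domain knowledge and may conflict with each other or abstain.
def lf_keyword(text):
    return POS if "free money" in text else ABSTAIN

def lf_length(text):
    return NEG if len(text.split()) > 8 else ABSTAIN

def lf_exclaim(text):
    return POS if text.count("!") >= 2 else ABSTAIN

def weak_label(text, lfs=(lf_keyword, lf_length, lf_exclaim)):
    """Majority vote over non-abstaining labeling functions.

    Snorkel itself learns each function's accuracy and correlations
    (data programming) instead of this naive unweighted vote.
    """
    votes = [lf(text) for lf in lfs if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return POS if sum(v == POS for v in votes) >= len(votes) / 2 else NEG

print(weak_label("claim your free money now!!"))  # → 1 (POS)
```

The weak labels produced this way would then train a downstream discriminative model, which is where the generalization beyond the heuristics comes from.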

  8. The effect of a workplace violence training program for generalist nurses in the acute hospital setting: A quasi-experimental study.

    PubMed

    Lamont, Scott; Brunero, Scott

    2018-05-19

    Workplace violence prevalence has attracted significant attention within the international nursing literature. Little attention to non-mental health settings and a lack of evaluation rigor have been identified within review literature. To examine the effects of a workplace violence training program in relation to risk assessment and management practices, de-escalation skills, breakaway techniques, and confidence levels, within an acute hospital setting. A quasi-experimental study of nurses using pretest-posttest measurements of educational objectives and confidence levels, with two week follow-up. A 440 bed metropolitan tertiary referral hospital in Sydney, Australia. Nurses working in specialties identified as a 'high risk' for violence. A pre-post-test design was used with participants attending a one day workshop. The workshop evaluation comprised the use of two validated questionnaires: the Continuing Professional Development Reaction questionnaire, and the Confidence in Coping with Patient Aggression Instrument. Descriptive and inferential statistics were calculated. The paired t-test was used to assess the statistical significance of changes in the clinical behaviour intention and confidence scores from pre- to post-intervention. Cohen's d effect sizes were calculated to determine the extent of the significant results. Seventy-eight participants completed both pre- and post-workshop evaluation questionnaires. Statistically significant increases in behaviour intention scores were found in fourteen of the fifteen constructs relating to the three broad workshop objectives, and confidence ratings, with medium to large effect sizes observed in some constructs. A significant increase in overall confidence in coping with patient aggression was also found post-test with large effect size. Positive results were observed from the workplace violence training. 
Training needs to be complemented by a multi-faceted organisational approach which includes governance, quality and review processes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. A new biologic prognostic model based on immunohistochemistry predicts survival in patients with diffuse large B-cell lymphoma.

    PubMed

    Perry, Anamarija M; Cardesa-Salzmann, Teresa M; Meyer, Paul N; Colomo, Luis; Smith, Lynette M; Fu, Kai; Greiner, Timothy C; Delabie, Jan; Gascoyne, Randy D; Rimsza, Lisa; Jaffe, Elaine S; Ott, German; Rosenwald, Andreas; Braziel, Rita M; Tubbs, Raymond; Cook, James R; Staudt, Louis M; Connors, Joseph M; Sehn, Laurie H; Vose, Julie M; López-Guillermo, Armando; Campo, Elias; Chan, Wing C; Weisenburger, Dennis D

    2012-09-13

    Biologic factors that predict the survival of patients with a diffuse large B-cell lymphoma, such as cell of origin and stromal signatures, have been discovered by gene expression profiling. We attempted to simulate these gene expression profiling findings and create a new biologic prognostic model based on immunohistochemistry. We studied 199 patients (125 in the training set, 74 in the validation set) with de novo diffuse large B-cell lymphoma treated with rituximab and CHOP (cyclophosphamide, doxorubicin, vincristine, and prednisone) or CHOP-like therapies, and immunohistochemical stains were performed on paraffin-embedded tissue microarrays. In the model, 1 point was awarded for each adverse prognostic factor: nongerminal center B cell-like subtype, SPARC (secreted protein, acidic, and rich in cysteine) < 5%, and microvascular density quartile 4. The model using these 3 biologic markers was highly predictive of overall survival and event-free survival in multivariate analysis after adjusting for the International Prognostic Index in both the training and validation sets. This new model delineates 2 groups of patients, 1 with a low biologic score (0-1) and good survival and the other with a high score (2-3) and poor survival. This new biologic prognostic model could be used with the International Prognostic Index to stratify patients for novel or risk-adapted therapies.

  10. Investigation into the efficacy of generating synthetic pathological oscillations for domain adaptation

    NASA Astrophysics Data System (ADS)

    Lewis, Rory; Ellenberger, James; Williams, Colton; White, Andrew M.

    2013-11-01

In the ongoing investigation of integrating Knowledge Discovery in Databases (KDD) into neuroscience, we present a paper that facilitates overcoming the two challenges preventing this integration. Pathological oscillations found in the human brain are difficult to evaluate because 1) in the fatally sick, there is often no time to learn and train on data from the same distribution, and 2) the sinusoidal signals found in the human brain are complex and transient in nature, requiring large data sets that are costly, and often prohibitively expensive or impossible, to acquire. Overcoming these challenges in today's neuro-intensive-care unit (ICU) requires insurmountable resources. For these reasons, optimizing KDD for pathological oscillations so that machine learning systems can predict neuropathological states would be of immense value. Domain adaptation, which allows prediction on a data set separate from the training data, can theoretically overcome the first challenge. However, acquiring large data sets that show whether domain adaptation is a good candidate to test in a live neuro ICU remains a challenge. To resolve this conundrum, we present a methodology for generating synthesized neuropathological oscillations for domain adaptation.
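As a rough illustration of what a synthesized transient oscillation might look like, the toy generator below emits a frequency-jittered sinusoidal burst under a Gaussian envelope with additive noise. The 3 Hz base frequency, envelope shape, and noise level are illustrative choices, not the authors' generative model.

```python
import numpy as np

def synth_oscillation(fs=256, dur=2.0, base_hz=3.0, jitter=0.3, rng=None):
    """Synthesize a transient, oscillation-like burst (toy model).

    base_hz, jitter, and the Gaussian envelope are illustrative, not the
    paper's actual model for pathological EEG.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(0, dur, 1.0 / fs)
    freq = base_hz + jitter * rng.standard_normal()      # per-burst drift
    phase = rng.uniform(0, 2 * np.pi)
    envelope = np.exp(-((t - dur / 2) ** 2) / (2 * (dur / 6) ** 2))
    signal = envelope * np.sin(2 * np.pi * freq * t + phase)
    return signal + 0.05 * rng.standard_normal(t.size)   # measurement noise

x = synth_oscillation(rng=np.random.default_rng(6))
print(x.shape)  # (512,)
```

Varying the jitter, envelope, and noise parameters yields an arbitrarily large labelled corpus on which a domain-adaptation candidate can be stress-tested before any clinical data are touched.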

  11. Toward accelerating landslide mapping with interactive machine learning techniques

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne

    2013-04-01

Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, by contrast, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images, with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than scattered points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China, and demonstrated balanced user's and producer's accuracies between 74% and 80%. 
The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
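The core pointwise query step underlying such approaches (rank unlabelled samples by classifier uncertainty, ask the user to label the top few) can be sketched as follows. The region-based spatial batching that distinguishes the authors' heuristic is omitted, and the Dirichlet-sampled probabilities merely stand in for real classifier outputs.

```python
import numpy as np

def margin_uncertainty(probs):
    """Smaller margin between the top-two class probabilities = more uncertain."""
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def query_batch(probs, k=5):
    """Indices of the k most uncertain unlabelled samples (pointwise query).

    The paper's heuristic would additionally group queries into compact
    spatial batches; this sketch shows only the uncertainty ranking.
    """
    return np.argsort(margin_uncertainty(probs))[:k]

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=20)  # stand-in classifier outputs
picked = query_batch(probs, k=5)
print(picked)
```

Each active learning iteration would label the queried samples, retrain, and repeat until the accuracy curve flattens.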

  12. Influence relevance voting: an accurate and interpretable virtual high throughput screening method.

    PubMed

    Swamidass, S Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Shiou-Chuan; Baldi, Pierre

    2009-04-01

    Given activity training data from high-throughput screening (HTS) experiments, virtual high-throughput screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by nonlinearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also nonlinearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open, competitions, and its performance compared to the performance of other participating methods, as well as of an in-house support vector machine (SVM) method. On these benchmark data sets, IRV achieves state-of-the-art results, comparable to the SVM in one case, and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have a probabilistic semantic; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS.
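A stripped-down sketch of the influence-relevance idea: each neighbour's influence is a relevance term (here, a fixed decay with distance) times a signed vote. The real IRV learns both components with backpropagation; the fixed weights and clustered toy data below are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def irv_score(x, train_X, train_y, k=5, w_rel=1.0, w_vote=1.0):
    """Toy influence-relevance voting score with untrained weights.

    The actual IRV learns the relevance and vote sub-networks by gradient
    descent; here relevance simply decays with distance and the vote is +/-1.
    """
    d = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(d)[:k]
    relevance = sigmoid(-w_rel * d[nn])       # closer neighbours matter more
    vote = np.where(train_y[nn] == 1, w_vote, -w_vote)
    return sigmoid(np.sum(relevance * vote))  # probability-like output

rng = np.random.default_rng(4)
pos = rng.normal([1.0, 1.0], 0.2, size=(20, 2))
neg = rng.normal([-1.0, -1.0], 0.2, size=(20, 2))
train_X = np.vstack([pos, neg])
train_y = np.array([1] * 20 + [0] * 20)
print(irv_score(np.array([1.0, 1.0]), train_X, train_y))
```

Because the output passes through a sigmoid of summed influences, it retains the probabilistic flavour the abstract highlights, and each neighbour's contribution is directly inspectable.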

  13. JELC-LITE: Unconventional Instructional Design for Special Operations Training

    NASA Technical Reports Server (NTRS)

    Friedman, Mark

    2012-01-01

    Current special operations staff training is based on the Joint Event Life Cycle (JELC). It addresses operational level tasks in multi-week, live military exercises which are planned over a 12 to 18 month timeframe. As the military experiences changing global mission sets, shorter training events using distributed technologies will increasingly be needed to augment traditional training. JELC-Lite is a new approach for providing relevant training between large scale exercises. This new streamlined, responsive training model uses distributed and virtualized training technologies to establish simulated scenarios. It keeps proficiency levels closer to optimal levels -- thereby reducing the performance degradation inherent in periodic training. It can be delivered to military as well as under-reached interagency groups to facilitate agile, repetitive training events. JELC-Lite is described by four phases paralleling the JELC, differing mostly in scope and scale. It has been successfully used with a Theater Special Operations Command and fits well within the current environment of reduced personnel and financial resources.

  14. Preparing Experienced Elementary Teachers as Mathematics Specialists

    ERIC Educational Resources Information Center

    Nickerson, Susan D.

    2010-01-01

    High quality teaching is critical to student learning, yet takes considerable time to develop in particular content areas. Students in high-poverty, urban settings are less likely to encounter experienced and trained teachers. Administrators from a large school district and university mathematics education faculty partnered and attempted to…

  15. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

    PubMed Central

    Wen, Hui; Xie, Weixin; Pei, Jihong

    2016-01-01

This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: first, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network is used for adaptive nonlinear mapping of the sample space. Then, according to the number of adaptively generated RBF hidden nodes, the number of subsequent BP input nodes can be determined, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. In particular, on most low-dimensional data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms that of other SLFN training algorithms. PMID:27792737

  16. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
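The automatic feature selection step can be approximated by greedy forward selection scored with leave-one-out kNN accuracy. This is a generic stand-in that assumes nothing about the paper's actual multi-scale filter bank or selection criterion; the data below are synthetic, with one noise column among two informative ones.

```python
import numpy as np

def knn_accuracy(X, y, feats, k=3):
    """Leave-one-out kNN accuracy using only the given feature columns."""
    Xf = X[:, feats]
    d = np.linalg.norm(Xf[:, None, :] - Xf[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude the sample itself
    nn = np.argsort(d, axis=1)[:, :k]
    pred = (y[nn].mean(axis=1) > 0.5).astype(int)
    return (pred == y).mean()

def forward_select(X, y, n_feats=2, k=3):
    """Greedy sequential forward selection of features for kNN.

    A simple stand-in for automatic selection over a large bank of local
    appearance features at several scales.
    """
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(n_feats):
        best = max(remaining, key=lambda f: knn_accuracy(X, y, chosen + [f], k))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 30)
X = np.column_stack([y + 0.3 * rng.standard_normal(60),  # informative
                     rng.random(60),                      # pure noise
                     y + 0.3 * rng.standard_normal(60)])  # informative
print(forward_select(X, y, n_feats=2))
```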

  17. Parameter calibration for synthesizing realistic-looking variability in offline handwriting

    NASA Astrophysics Data System (ADS)

    Cheng, Wen; Lopresti, Dan

    2011-01-01

Motivated by the widely accepted principle that the more training data, the better a recognition system performs, we conducted experiments asking human subjects to evaluate a mixture of real English handwritten text lines and text lines altered from existing handwriting with various degrees of distortion. The idea of generating synthetic handwriting is based on a perturbation method by T. Varga and H. Bunke that distorts an entire text line. Our experiments had two purposes. First, we wanted to calibrate distortion parameter settings for Varga and Bunke's perturbation model. Second, we intended to compare the effects of parameter settings on different writing styles: block, cursive and mixed. From the preliminary experimental results, we determined appropriate ranges for the amplitude parameter, and found that parameter settings should be altered for different handwriting styles. With the proper parameter settings, it should be possible to generate large amounts of training and testing data for building better off-line handwriting recognition systems.

  18. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration requires a relatively large current sample set. The aim of this study was to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with that of three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. 
By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
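
    The abstract does not give the PDA transform itself, but the idea of projecting both domains onto principal subspaces and re-expressing the small current set in the historical basis can be sketched as follows (all function and parameter names are ours, and the alignment step is a generic subspace-alignment-style assumption, not necessarily the paper's exact method):

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions (rows) of centered data X (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)
    # SVD of centered data: rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def pda_align(X_hist, X_curr, k):
    """Map a small current sample set into the historical principal subspace.

    Mirrors the abstract's idea of 'closing the gap' between domains:
    project current trials onto their own top-k subspace, then re-express
    them in the historical basis via the basis-to-basis transform M.
    """
    B_h = pca_basis(X_hist, k)      # (k, d) historical basis
    B_c = pca_basis(X_curr, k)      # (k, d) current basis
    M = B_c @ B_h.T                 # align current basis to historical basis
    Xc = X_curr - X_curr.mean(axis=0)
    return (Xc @ B_c.T) @ M @ B_h   # back to feature space, aligned
```

    A decoder fitted on the historical features could then be applied to the aligned current trials without collecting a full recalibration set.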

  19. Toward open set recognition.

    PubMed

    Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E

    2013-07-01

    To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
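
    The "1-vs-set machine" is described as sculpting a decision space from the marginal distances of a linear SVM; one way to picture this is a second hyperplane that bounds the positive region from the far side, so samples scoring beyond it are rejected as unknown. A minimal sketch of that decision rule (parameter names are illustrative; the paper's formulation and risk minimization are richer):

```python
import numpy as np

def one_vs_set_predict(w, b_near, b_far, X):
    """Label points as the known class only inside a 'slab' of the linear score.

    A standard linear SVM accepts everything with score > b_near; the
    1-vs-set idea adds a second plane so points scoring beyond b_far
    (far from all training data) are rejected as unknown.
    Returns +1 for the known class, -1 for unknown/rejected.
    """
    scores = X @ w
    inside = (scores > b_near) & (scores < b_far)
    return np.where(inside, 1, -1)
```

    The open-set twist is the upper bound: a conventional binary SVM would happily accept the point scoring 5.0 below, however unlike anything seen in training.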

  20. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which required large training samples of identical size. The input images are shifted and cropped to generate sub-images of equal size. Dropout is then applied to the generated sub-images, increasing the diversity of samples and preventing overfitting. Proper subsets are randomly selected from the sub-image set such that all subsets contain the same number of elements but no two subsets are identical. These subsets serve as input layers for the convolutional neural network. Through the convolution layers, pooling, fully connected layer, and output layer, the classification loss rates of the test set and training set were obtained. In a classification experiment on red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy reached 97% or higher.
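
    The move-and-crop step that turns variable-size inputs into equal-size sub-images can be sketched with a sliding window (the window size and stride are our illustrative parameters; the paper does not specify them):

```python
import numpy as np

def crop_subimages(img, size, stride):
    """Slide a size x size window over img (H, W) and return all crops.

    Every crop has the same shape, so a CNN with a fixed input layer can
    consume images of arbitrary original size.
    """
    H, W = img.shape
    crops = []
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            crops.append(img[y:y + size, x:x + size])
    return np.stack(crops)
```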

  1. Discriminative motif optimization based on perceptron training

    PubMed Central

    Patel, Ronak Y.; Stormo, Gary D.

    2014-01-01

    Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amounts of data. To overcome this limitation, tools have been developed that trade accuracy for speed by using heuristic discrete search strategies or limited optimization of identified seed motifs. However, such strategies may not fully use the information in input sequences to generate motifs. Such motifs often form good seeds and can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use the area under the receiver-operating characteristic curve (AUC) as a measure of the discriminating power of motifs and a strategy based on perceptron training that maximizes AUC rapidly in a discriminative manner. Using DiMO, on a large test set of 87 TFs from human, Drosophila and yeast, we show that it is possible to significantly improve motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on test sets, and the magnitude of the increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152
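
    Perceptron-style AUC maximization can be illustrated with a pairwise update: AUC is the probability that a random positive outscores a random negative, so updating on every mis-ordered (positive, negative) pair pushes it up directly. The sketch below is a generic ranking perceptron in that spirit, not DiMO's exact update rule or motif scoring:

```python
import numpy as np

def rank_perceptron(X_pos, X_neg, epochs=50, lr=0.1):
    """Pairwise perceptron that pushes positive examples above negatives.

    Whenever a negative example scores at least as high as a positive one,
    nudge the weights toward the positive and away from the negative.
    """
    w = np.zeros(X_pos.shape[1])
    for _ in range(epochs):
        for xp in X_pos:
            for xn in X_neg:
                if w @ xn >= w @ xp:      # mis-ordered pair
                    w += lr * (xp - xn)
    return w

def auc(w, X_pos, X_neg):
    """Fraction of (positive, negative) pairs ranked correctly by w."""
    sp, sn = X_pos @ w, X_neg @ w
    return np.mean([p > n for p in sp for n in sn])
```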

  2. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling.

    PubMed

    Mutasa, Simukayi; Chang, Peter D; Ruzal-Shapiro, Carrie; Ayyala, Rama

    2018-02-05

    Bone age assessment (BAA) is a commonly performed diagnostic study in pediatric radiology to assess skeletal maturity. The most commonly utilized method for assessment of BAA is the Greulich and Pyle method (Pediatr Radiol 46.9:1269-1274, 2016; Arch Dis Child 81.2:172-173, 1999) atlas. The evaluation of BAA can be a tedious and time-consuming process for the radiologist. As such, several computer-assisted detection/diagnosis (CAD) methods have been proposed for automation of BAA. Classical CAD tools have traditionally relied on hard-coded algorithmic features for BAA which suffer from a variety of drawbacks. Recently, the advent and proliferation of convolutional neural networks (CNNs) has shown promise in a variety of medical imaging applications. There have been at least two published applications of using deep learning for evaluation of bone age (Med Image Anal 36:41-51, 2017; JDI 1-5, 2017). However, current implementations are limited by a combination of both architecture design and relatively small datasets. The purpose of this study is to demonstrate the benefits of a customized neural network algorithm carefully calibrated to the evaluation of bone age utilizing a relatively large institutional dataset. In doing so, this study will aim to show that advanced architectures can be successfully trained from scratch in the medical imaging domain and can generate results that outperform any existing proposed algorithm. The training data consisted of 10,289 images of different skeletal age examinations, 8909 from the hospital Picture Archiving and Communication System at our institution and 1383 from the public Digital Hand Atlas Database. The data was separated into four cohorts, one each for male and female children above the age of 8, and one each for male and female children below the age of 10. The testing set consisted of 20 radiographs of each 1-year-age cohort from 0 to 1 years to 14-15+ years, half male and half female. 
The testing set included left-hand radiographs done for bone age assessment, trauma evaluation without significant findings, and skeletal surveys. A 14-hidden-layer customized neural network was designed for this study. The network included several state-of-the-art techniques, including residual-style connections, inception layers, and spatial transformer layers. Data augmentation was applied to the network inputs to prevent overfitting. A linear regression output was utilized. Mean square error was used as the network loss function and mean absolute error (MAE) was utilized as the primary performance metric. MAE accuracies on the validation and test sets for young females were 0.654 and 0.561 respectively. For older females, validation and test accuracies were 0.662 and 0.497 respectively. For young males, validation and test accuracies were 0.649 and 0.585 respectively. Finally, for older males, validation and test set accuracies were 0.581 and 0.501 respectively. The female cohorts were trained for 900 epochs each and the male cohorts were trained for 600 epochs. An eightfold cross-validation set was employed for hyperparameter tuning. Test error was obtained after training on a full data set with the selected hyperparameters. Using our proposed customized neural network architecture on our large available data, we achieved aggregate validation and test set mean absolute errors of 0.637 and 0.536 respectively. To date, this is the best published performance on utilizing deep learning for bone age assessment. Our results support our initial hypothesis that customized, purpose-built neural networks provide improved performance over networks derived from pre-trained imaging data sets. We build on that initial work by showing that the addition of state-of-the-art techniques such as residual connections and inception architecture further improves prediction accuracy. 
This is important because the current assumption for use of residual and/or inception architectures is that a large pre-trained network is required for successful implementation given the relatively small datasets in medical imaging. Instead we show that a small, customized architecture incorporating advanced CNN strategies can indeed be trained from scratch, yielding significant improvements in algorithm accuracy. It should be noted that for all four cohorts, testing error outperformed validation error. One reason for this is that our ground truth for our test set was obtained by averaging two pediatric radiologist reads compared to our training data for which only a single read was used. This suggests that despite relatively noisy training data, the algorithm could successfully model the variation between observers and generate estimates that are close to the expected ground truth.

  3. Less is more: Patient-level meta-analysis reveals paradoxical dose-response effects of a computer-based social anxiety intervention targeting attentional bias.

    PubMed

    Price, Rebecca B; Kuckertz, Jennie M; Amir, Nader; Bar-Haim, Yair; Carlbring, Per; Wallace, Meredith L

    2017-12-01

    The past decade of research has seen considerable interest in computer-based approaches designed to directly target cognitive mechanisms of anxiety, such as attention bias modification (ABM). By pooling patient-level datasets from randomized controlled trials of ABM that utilized a dot-probe training procedure, we assessed the impact of training "dose" on relevant outcomes among a pooled sample of 693 socially anxious adults. A paradoxical effect of the number of training trials administered was observed for both posttraining social anxiety symptoms and behavioral attentional bias (AB) toward threat (the target mechanism of ABM). Studies administering a large (>1,280) number of training trials showed no benefit of ABM over control conditions, while those administering fewer training trials showed significant benefit for ABM in reducing social anxiety (P = .02). These moderating effects of dose were not better explained by other examined variables and previously identified moderators, including patient age, training setting (laboratory vs. home), or type of anxiety assessment (clinician vs. self-report). Findings inform the optimal dosing for future dot-probe style ABM applications in both research and clinical settings, and suggest several novel avenues for further research. © 2017 Wiley Periodicals, Inc.

  4. Improving communication in general practice when mental health issues appear: piloting a set of six evidence-based skills.

    PubMed

    Stensrud, Tonje Lauritzen; Gulbrandsen, Pål; Mjaaland, Trond Arne; Skretting, Sidsel; Finset, Arnstein

    2014-04-01

    To test a communication skills training program teaching general practitioners (GPs) a set of six evidence-based mental health related skills. A training program was developed and tested in a pilot test-retest study with 21 GPs. Consultations were videotaped and actors used as patients. A coding scheme was created to assess the effect of training on GP behavior. Relevant utterances were categorized as examples of each of the six specified skills. The GPs' self-perceived learning needs and self-efficacy were measured with questionnaires. The mean number of GP utterances related to the six skills increased from 13.3 (SD 6.2) utterances before to 23.6 (SD 7.2) utterances after training; an increase of 77.4% (P<0.001). Effect sizes varied from 0.23 to 1.37. Skills exploring emotions, cognitions and resources, and the skill Promote coping, increased significantly. Self-perceived learning needs and self-efficacy did not change significantly. The results from this pilot test are encouraging. GPs significantly enhanced their use of four out of six mental health related communication skills, and the effects were medium to large. This appears to be an efficacious approach to mental health related communication skills training in general practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Coarse-coded higher-order neural networks for PSRI object recognition. [position, scale, and rotation invariant

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1993-01-01

    A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
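
    Coarse coding can be illustrated as several small, mutually offset coarse grids: each fine-resolution pixel activates one cell per grid, and the combination of active cells locates the pixel far more precisely than any single grid alone. A toy sketch (the parameter names and one-pixel offset scheme are our assumptions; the paper's encoding details may differ):

```python
def coarse_code(x, y, field_size, n_fields):
    """Coarse-code a fine pixel (x, y) as one active cell per offset coarse grid.

    Each of n_fields coarse grids has cells of width field_size and is
    shifted by one fine pixel relative to the previous grid, so each grid
    stays small while the set of active cells narrows down the position.
    """
    cells = []
    for f in range(n_fields):
        cx = (x + f) // field_size
        cy = (y + f) // field_size
        cells.append((cx, cy))
    return cells
```

    The memory saving is the point: n_fields small grids have far fewer cells, and hence far fewer HONN interconnections, than one fine grid covering the same field.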

  6. Self-Estimation of Blood Alcohol Concentration: A Review

    PubMed Central

    Aston, Elizabeth R.; Liguori, Anthony

    2013-01-01

    This article reviews the history of blood alcohol concentration (BAC) estimation training, which trains drinkers to discriminate distinct BAC levels and thus avoid excessive alcohol consumption. BAC estimation training typically combines education concerning alcohol metabolism with attention to subjective internal cues associated with specific concentrations. Estimation training was originally conceived as a component of controlled drinking programs. However, dependent drinkers were unsuccessful in BAC estimation, likely due to extreme tolerance. In contrast, moderate drinkers successfully acquired this ability. A subsequent line of research translated laboratory estimation studies to naturalistic settings by studying large samples of drinkers in their preferred drinking environments. Thus far, naturalistic studies have provided mixed results regarding the most effective form of BAC feedback. BAC estimation training is important because it imparts an ability to perceive individualized impairment that may be present below the legal limit for driving. Consequently, the training can be a useful component for moderate drinkers in drunk driving prevention programs. PMID:23380489

  7. Evaluation of an avatar-based training program to promote suicide prevention awareness in a college setting.

    PubMed

    Rein, Benjamin A; McNeil, Daniel W; Hayes, Allison R; Hawkins, T Anne; Ng, H Mei; Yura, Catherine A

    2018-07-01

    Training programs exist that prepare college students, faculty, and staff to identify and support students potentially at risk for suicide. Kognito is an online program that trains users through simulated interactions with virtual humans. This study evaluated Kognito's effectiveness in preparing users to intervene with at-risk students. Training was completed by 2,727 university students, faculty, and staff from April, 2014 through September, 2015. Voluntary and mandatory participants at a land-grant university completed Kognito modules designed for higher education, along with pre- and post-assessments. All modules produced significant gains in reported Preparedness, Likelihood, and Self-Efficacy in intervening with troubled students. Despite initial disparities in reported abilities, after training participants reported being similarly capable of assisting at-risk students, including LGBTQ and veteran students. Kognito training appears to be effective, on a large scale, in educating users to act in a facilitative role for at-risk college students.

  8. Accurate, Rapid Taxonomic Classification of Fungal Large-Subunit rRNA Genes

    PubMed Central

    Liu, Kuan-Liang; Porras-Alfaro, Andrea; Eichorst, Stephanie A.

    2012-01-01

    Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project (http://rdp.cme.msu.edu/classifier/classifier.jsp). PMID:22194300
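
    A word-based naïve Bayesian classifier of this kind scores a query sequence by summing per-taxon log-probabilities of its k-mers. A minimal sketch in that spirit (the real RDP-style tool's feature set, training corpus, and bootstrap confidence estimation are far richer than this):

```python
import numpy as np
from collections import Counter

def kmer_counts(seq, k=3):
    """Multiset of overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def train_nb(seqs_by_taxon, k=3):
    """Per-taxon log-probabilities of k-mers with add-one smoothing."""
    vocab, counts = set(), {}
    for taxon, seqs in seqs_by_taxon.items():
        c = Counter()
        for s in seqs:
            c.update(kmer_counts(s, k))
        counts[taxon] = c
        vocab |= set(c)
    V = len(vocab)
    logp = {t: {w: np.log((c[w] + 1) / (sum(c.values()) + V)) for w in vocab}
            for t, c in counts.items()}
    default = {t: np.log(1 / (sum(c.values()) + V)) for t, c in counts.items()}
    return logp, default

def classify(seq, logp, default, k=3):
    """Assign the taxon with the highest summed k-mer log-likelihood."""
    scores = {t: sum(p.get(w, default[t]) * n
                     for w, n in kmer_counts(seq, k).items())
              for t, p in logp.items()}
    return max(scores, key=scores.get)
```

    Because scoring is just a table lookup and sum per k-mer, the speed advantage over per-query BLASTN searches reported in the abstract is plausible even in this toy form.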

  9. English for Driving--Student Workbook.

    ERIC Educational Resources Information Center

    Anderson, R. Bryan

    Intended for use in conjunction with an accompanying teacher's guide and set of visuals, this workbook is in large part a picture dictionary of driving vocabulary with practice exercises to help prepare non-native speakers of English for driver training class. Topics covered in the workbook are automobiles, directions in an automobile, signals,…

  10. Effect of missing data on multitask prediction methods.

    PubMed

    de la Vega de León, Antonio; Chen, Beining; Gillet, Valerie J

    2018-05-22

    There has been a growing interest in multitask prediction in chemoinformatics, helped by the increasing use of deep neural networks in this field. This technique is applied to multitarget data sets, where compounds have been tested against different targets, with the aim of developing models to predict a profile of biological activities for a given compound. However, multitarget data sets tend to be sparse; i.e., not all compound-target combinations have experimental values. There has been little research on the effect of missing data on the performance of multitask methods. We have used two complete data sets to simulate sparseness by removing data from the training set. Different models to remove the data were compared. These sparse sets were used to train two different multitask methods, deep neural networks and Macau, which is a Bayesian probabilistic matrix factorization technique. Results from both methods were remarkably similar and showed that the performance decrease caused by missing data is at first small, accelerating only after large amounts of data are removed. This work provides a first approximation to assess how much data are required to produce good performance in multitask prediction exercises.
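
    Simulating sparseness from a complete matrix amounts to randomly blanking a fraction of the compound-by-target values so the multitask learner can mask them out of its loss. A minimal sketch (the study compared several removal models, of which the uniform random removal shown here is only the simplest):

```python
import numpy as np

def make_sparse(Y, frac_missing, rng):
    """Randomly blank out a fraction of a complete multitask label matrix.

    Missing entries become NaN; a multitask model then trains only on the
    entries where np.isnan(Y_sparse) is False.
    """
    Y_sparse = Y.astype(float).copy()
    mask = rng.random(Y.shape) < frac_missing
    Y_sparse[mask] = np.nan
    return Y_sparse, mask
```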

  11. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    PubMed

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Designing and evaluating Brain Powered Games for cognitive training and rehabilitation in at-risk African children.

    PubMed

    Giordani, B; Novak, B; Sikorskii, A; Bangirana, P; Nakasujja, N; Winn, B M; Boivin, M J

    2015-01-01

    Valid, reliable, accessible, and cost-effective computer-training approaches can be important components in scaling up educational support across resource-poor settings, such as sub-Saharan Africa. The goal of the current study was to develop a computer-based training platform, the Michigan State University Games for Entertainment and Learning laboratory's Brain Powered Games (BPG) package that would be suitable for use with at-risk children within a rural Ugandan context and then complete an initial field trial of that package. After game development was completed with the use of local stimuli and sounds to match the context of the games as closely as possible to the rural Ugandan setting, an initial field study was completed with 33 children (mean age = 8.55 ± 2.29 years, range 6-12 years of age) with HIV in rural Uganda. The Test of Variables of Attention (TOVA), CogState computer battery, and the Non-Verbal Index from the Kaufman Assessment Battery for Children, 2nd edition (KABC-II) were chosen as the outcome measures for pre- and post-intervention testing. The children received approximately 45 min of BPG training several days per week for 2 months (24 sessions). Although some improvements in test scores were evident prior to BPG training, following training, children demonstrated clinically significant changes (significant repeated-measures outcomes with moderate to large effect sizes) on specific TOVA and CogState measures reflecting processing speed, attention, visual-motor coordination, maze learning, and problem solving. Results provide preliminary support for the acceptability, feasibility, and neurocognitive benefit of BPG and its utility as a model platform for computerized cognitive training in cross-cultural low-resource settings.

  13. Lactate response to different volume patterns of power clean.

    PubMed

    Date, Anand S; Simonson, Shawn R; Ransdell, Lynda B; Gao, Yong

    2013-03-01

    The ability to metabolize or tolerate lactate and produce power simultaneously can be an important determinant of performance. Current training practices for improving lactate use include high-intensity aerobic activities or a combination of aerobic and resistance training. Excessive aerobic training may have undesired physiological adaptations (e.g., muscle loss, change in fiber types). The role of explosive power training in lactate production and use needs further clarification. We hypothesized that high-volume explosive power movements such as Olympic lifts can increase lactate production and overload lactate clearance. Hence, the purpose of this study was to assess lactate accumulation after the completion of 3 different volume patterns of power cleans. Ten male recreational athletes (age 24.22 ± 1.39 years) volunteered. Volume patterns consisted of 3 sets × 3 repetition maximum (3RM) (low volume [LV]), 3 sets × 6 reps at 80-85% of 3RM (midvolume [MV]), and 3 sets × 9 reps at 70-75% of 3RM (high volume [HV]). Rest period was identical at 2 minutes. Blood samples were collected immediately before and after each volume pattern. The HV resulted in the greatest lactate accumulation (7.43 ± 2.94 mmol·L(-1)) vs. (5.27 ± 2.48 and 4.03 ± 1.78 mmol·L(-1) in MV and LV, respectively). Mean relative increase in lactate was the highest in HV (356.34%). The findings indicate that lactate production in power cleans is largely associated with volume, determined by number of repetitions, load, and rest interval. High-volume explosive training may impose greater metabolic demands than low-volume explosive training and may improve ability to produce power in the presence of lactate. The role of explosive power training in overloading the lactate clearance mechanism should be examined further, especially for athletes of intermittent sport.

  14. Resistance training interventions across the cancer control continuum: a systematic review of the implementation of resistance training principles.

    PubMed

    Fairman, C M; Hyde, P N; Focht, B C

    2017-04-01

    The primary purpose of this systematic review is to examine the extant resistance training (RT) cancer research to evaluate the proportion of RT interventions that: (1) implemented key RT training principles (specificity, progression, overload) and (2) explicitly reported relevant RT prescription components (frequency, intensity, sets, reps). A qualitative systematic review was performed by two reviewers (CMF and PNH) who inspected the titles and abstracts to determine eligibility for this systematic review. Identified papers were obtained in full and further reviewed. Data were extracted to evaluate the application of principles of training, along with specific RT components. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, PEDro, PsychInfo, Cancer Lit, Sport Discus, AMED, Cochrane Central Register of Controlled Trials) and reference lists of included articles from inception to May 2016. 37 studies were included. The principle of specificity was used appropriately in all of the studies, progression in 65% and overload in 76% of the studies. The most common exercise prescription (∼50%) implemented in the studies included in this review were 2-3 days/week, focusing on large muscle groups, 60-70% 1 repetition maximum (RM), 1-3 sets of 8-12 repetitions. Reporting of RT principles in an oncology setting varies greatly, with often vague or non-existent references to the principles of training and how the RT prescription was designed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  15. Using automatic alignment to analyze endangered language data: Testing the viability of untrained alignment

    PubMed Central

    DiCanio, Christian; Nam, Hosung; Whalen, Douglas H.; Timothy Bunnell, H.; Amith, Jonathan D.; García, Rey Castillo

    2013-01-01

    While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones. PMID:23967953

  16. Using automatic alignment to analyze endangered language data: testing the viability of untrained alignment.

    PubMed

    DiCanio, Christian; Nam, Hosung; Whalen, Douglas H; Bunnell, H Timothy; Amith, Jonathan D; García, Rey Castillo

    2013-09-01

    While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones.

  17. Outcomes of three part-time faculty development fellowship programs.

    PubMed

    Anderson, W A; Stritter, F T; Mygdal, W K; Arndt, J E; Reid, A

    1997-03-01

Part-time faculty development fellowship programs have trained large numbers of new physician faculty for family medicine education programs. This study reviews data from three part-time fellowship programs to determine how well the programs train new faculty and the academic success of fellowship graduates. Part-time fellowship programs at Michigan State University, the University of North Carolina, and the Faculty Development Center in Waco, Tex, sent written surveys to graduates as part of routine follow-up studies. Graduates were asked to report their current status in academic medicine, how they spend their time, measures of academic productivity, and assessments of how well their training prepared them for their current academic positions. Data were compiled at each institution and sent to Michigan State University for analysis. The majority of graduates (76%) have remained in their academic positions, and half (49%) teach in medically underserved settings. Graduates report high levels of satisfaction with the training they received. Thirty-two percent of graduates have published peer-reviewed articles, and almost 50% have presented at peer-reviewed meetings. Part-time fellowship programs have been successful at training and retaining large numbers of new faculty for family medicine.

  18. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
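The key idea behind an RBackprop-style method — obtaining the Hessian-vector product H·v without ever forming the Hessian — can be illustrated with a central finite difference of the gradient, Hv ≈ (g(w + εv) − g(w − εv)) / (2ε); Pearlmutter's R-operator computes the same quantity exactly via an extra backward pass. The quadratic test function below is invented for illustration.

```python
# Minimal sketch of a Hessian-vector product without forming the
# Hessian, via a central finite difference of the gradient.

def grad(w):
    # gradient of f(w) = 0.5*(3*w0^2 + 2*w0*w1 + 4*w1^2),
    # so the Hessian is H = [[3, 1], [1, 4]]
    return [3.0 * w[0] + 1.0 * w[1], 1.0 * w[0] + 4.0 * w[1]]

def hessian_vector_product(grad_fn, w, v, eps=1e-5):
    wp = [wi + eps * vi for wi, vi in zip(w, v)]
    wm = [wi - eps * vi for wi, vi in zip(w, v)]
    gp, gm = grad_fn(wp), grad_fn(wm)
    return [(a - b) / (2.0 * eps) for a, b in zip(gp, gm)]

w = [0.7, -1.2]
v = [1.0, 2.0]
print(hessian_vector_product(grad, w, v))  # ~[5.0, 9.0] = H @ v
```

Because H·v costs only two gradient evaluations (or one R-pass), it can drive curvature-aware step-size choices without the O(n²) storage a full Hessian would need.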

  19. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
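The core idea of the priority search k-means tree described above can be sketched with a one-level k-means partition: visit clusters in order of centroid distance and stop after a fixed budget of point comparisons. The centers and points below are fixed by hand for brevity; FLANN builds the full tree recursively and tunes the budget automatically.

```python
# Toy priority search over a one-level k-means partition
# (approximate nearest neighbor with a comparison budget).

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_partition(points, centers):
    clusters = [[] for _ in centers]
    for p in points:
        best = min(range(len(centers)), key=lambda i: dist2(p, centers[i]))
        clusters[best].append(p)
    return clusters

def priority_search(query, centers, clusters, budget):
    # visit clusters nearest-centroid first; stop once `budget`
    # candidate points have been examined
    order = sorted(range(len(centers)), key=lambda i: dist2(query, centers[i]))
    best, best_d, checked = None, float("inf"), 0
    for i in order:
        for p in clusters[i]:
            d = dist2(query, p)
            if d < best_d:
                best, best_d = p, d
            checked += 1
            if checked >= budget:
                return best
    return best

points = [(0.0, 0.0), (1.0, 1.0), (0.2, 0.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
centers = [(0.4, 0.4), (5.0, 5.0)]
clusters = build_partition(points, centers)
print(priority_search((0.15, 0.05), centers, clusters, budget=3))  # (0.2, 0.1)
```

Raising the budget trades speed for accuracy — the central tuning knob in approximate nearest neighbor search.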

  20. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    NASA Astrophysics Data System (ADS)

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-01

A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
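The selection loop described above — fit each candidate functional form on the training set, score it on a held-out test set, and rank forms by combined error — can be sketched generically. The one-parameter "forms" and data points below are invented; the actual search compared on the order of 10^10 forms against 1095 training and 1153 test points.

```python
# Sketch of ranking candidate model forms by combined
# training + test error after a least-squares fit.

def fit_scale(xs, ys):
    # least-squares scale c minimizing sum (c*x - y)^2 -> c = <x,y>/<x,x>
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def mae(c, form, xs, ys):
    return sum(abs(c * form(x) - y) for x, y in zip(xs, ys)) / len(xs)

forms = {"linear": lambda x: x, "quadratic": lambda x: x * x}
train_x, train_y = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]   # roughly y = 2x
test_x, test_y = [4.0, 5.0], [8.0, 10.1]

ranked = []
for name, form in forms.items():
    c = fit_scale([form(x) for x in train_x], train_y)
    score = mae(c, form, train_x, train_y) + mae(c, form, test_x, test_y)
    ranked.append((score, name))
ranked.sort()
print(ranked[0][1])  # "linear" fits best on train + test combined
```

Scoring on the test set as well as the training set penalizes forms that merely memorize the training data, mirroring the "minimum empiricism" criterion.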

  1. Building Capacity for Complementary and Integrative Medicine Through a Large, Cross-Agency, Acupuncture Training Program: Lessons Learned from a Military Health System and Veterans Health Administration Joint Initiative Project.

    PubMed

    Niemtzow, Richard; Baxter, John; Gallagher, Rollin M; Pock, Arnyce; Calabria, Kathryn; Drake, David; Galloway, Kevin; Walter, Joan; Petri, Richard; Piazza, Thomas; Burns, Stephen; Hofmann, Lew; Biery, John; Buckenmaier, Chester

    2018-03-26

    Complementary and integrative medicine (CIM) use in the USA continues to expand, including within the Military Health System (MHS) and Veterans Health Administration (VHA). To mitigate the opioid crisis and provide additional non-pharmacological pain management options, a large cross-agency collaborative project sought to develop and implement a systems-wide curriculum, entitled Acupuncture Training Across Clinical Settings (ATACS). ATACS curriculum content and structure were created and refined over the course of the project in response to consultations with Subject Matter Experts and provider feedback. Course content was developed to be applicable to the MHS and VHA environments and training was open to many types of providers. Training included a 4-hr didactic and "hands on" clinical training program focused on a single auricular acupuncture protocol, Battlefield Acupuncture. Trainee learning and skills proficiency were evaluated by trainer-observation and written examination. Immediately following training, providers completed an evaluation survey on their ATACS experience. One month later, they were asked to complete another survey regarding their auricular acupuncture use and barriers to use. The present evaluation describes the ATACS curriculum, faculty and trainee characteristics, as well as trainee and program developer perspectives. Over the course of a 19-mo period, 2,712 providers completed the in-person, 4-hr didactic and hands-on clinical training session. Due to the increasing requests for training, additional ATACS faculty were trained. Overall, 113 providers were approved to be training faculty. Responses from the trainee surveys indicated high satisfaction with the ATACS training program and illuminated several challenges to using auricular acupuncture with patients. The most common reported barrier to using auricular acupuncture was the lack of obtaining privileges to administer auricular acupuncture within clinical practice. 
The ATACS program provided a foundational template to increase CIM across the MHS and VHA. The lessons learned in the program's implementation will aid future CIM training programs and improve program evaluations. Future work is needed to determine the most efficient means of improving CIM credentialing and privileging procedures, standardizing and adopting uniform CIM EHR codes and documentation, and examining the effectiveness of CIM techniques in real-world settings.

  2. Artificial intelligence in fracture detection: transfer learning from deep convolutional neural networks.

    PubMed

    Kim, D H; MacKinnon, T

    2018-05-01

    To identify the extent to which transfer learning from deep convolutional neural networks (CNNs), pre-trained on non-medical images, can be used for automated fracture detection on plain radiographs. The top layer of the Inception v3 network was re-trained using lateral wrist radiographs to produce a model for the classification of new studies as either "fracture" or "no fracture". The model was trained on a total of 11,112 images, after an eightfold data augmentation technique, from an initial set of 1,389 radiographs (695 "fracture" and 694 "no fracture"). The training data set was split 80:10:10 into training, validation, and test groups, respectively. An additional 100 wrist radiographs, comprising 50 "fracture" and 50 "no fracture" images, were used for final testing and statistical analysis. The area under the receiver operator characteristic curve (AUC) for this test was 0.954. Setting the diagnostic cut-off at a threshold designed to maximise both sensitivity and specificity resulted in values of 0.9 and 0.88, respectively. The AUC scores for this test were comparable to state-of-the-art providing proof of concept for transfer learning from CNNs in fracture detection on plain radiographs. This was achieved using only a moderate sample size. This technique is largely transferable, and therefore, has many potential applications in medical imaging, which may lead to significant improvements in workflow productivity and in clinical risk reduction. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
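The evaluation metrics quoted above — the area under the ROC curve, plus sensitivity and specificity at a chosen threshold — can be computed directly from classifier scores. The labels and scores below are invented; the study itself used 100 held-out wrist radiographs.

```python
# Stdlib computation of ROC AUC (Mann-Whitney formulation) and of
# sensitivity/specificity at a fixed decision threshold.

def roc_auc(labels, scores):
    # AUC equals the probability that a random positive outscores a
    # random negative (ties count 0.5).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
print(roc_auc(labels, scores))          # 0.9375
print(sens_spec(labels, scores, 0.5))   # (0.75, 0.75)
```

Sweeping the threshold and picking the point maximizing sensitivity plus specificity reproduces the cut-off strategy the abstract describes.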

  3. Feasibility of task-specific brain-machine interface training for upper-extremity paralysis in patients with chronic hemiparetic stroke.

    PubMed

    Nishimoto, Atsuko; Kawakami, Michiyuki; Fujiwara, Toshiyuki; Hiramoto, Miho; Honaga, Kaoru; Abe, Kaoru; Mizuno, Katsuhiro; Ushiba, Junichi; Liu, Meigen

    2018-01-10

Brain-machine interface training was developed for upper-extremity rehabilitation for patients with severe hemiparesis. Its clinical application, however, has been limited because of its lack of feasibility in real-world rehabilitation settings. We developed a new compact task-specific brain-machine interface system that enables task-specific training, including reach-and-grasp tasks, and studied its clinical feasibility and effectiveness for upper-extremity motor paralysis in patients with stroke. Prospective before-after study. Twenty-six patients with severe chronic hemiparetic stroke. Participants were trained with the brain-machine interface system to pick up and release pegs during 40-min sessions and 40 min of standard occupational therapy per day for 10 days. Fugl-Meyer upper-extremity motor (FMA) and Motor Activity Log-14 amount of use (MAL-AOU) scores were assessed before and after the intervention. To test its feasibility, 4 occupational therapists who operated the system for the first time assessed it with the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0. FMA and MAL-AOU scores improved significantly after brain-machine interface training, with the effect sizes being medium and large, respectively (p<0.01, d=0.55; p<0.01, d=0.88). QUEST effectiveness and safety scores showed feasibility and satisfaction in the clinical setting. Our newly developed compact brain-machine interface system is feasible for use in real-world clinical settings.
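The effect-size statistic quoted above (Cohen's d) is computed here for paired pre/post scores as the mean change divided by the standard deviation of the change scores; conventions for paired d vary across studies, and the FMA-like numbers below are invented.

```python
# A paired-samples Cohen's d: mean of the pre-to-post differences
# over the sample standard deviation of those differences.
import math

def cohens_d_paired(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var)

pre = [20, 25, 18, 30, 22]
post = [24, 28, 23, 33, 27]
print(round(cohens_d_paired(pre, post), 2))  # 4.0 for this toy data
```

By the usual rough benchmarks, d around 0.5 is a medium effect and d around 0.8 or above a large one, matching the labels in the abstract.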

  4. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    NASA Astrophysics Data System (ADS)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. 
We present a practical workflow to build the training image and effectively handle different types of input information to perform large-scale geostatistical modelling.
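A deliberately tiny, 1-D illustration of multiple-point statistics simulation: each new cell is drawn from the values that follow matching patterns in the training image, so simulated sequences reproduce the training image's multi-cell patterns. Real MPS codes, like the 3-D workflow above, do this over spatial templates and honor conditioning data; the binary "sand/clay" training image here is invented.

```python
# 1-D direct-sampling sketch of multiple-point statistics (MPS):
# sample each new value from the successors of matching patterns
# in the training image (TI).
import random

def mps_simulate_1d(training, pattern_len, out_len, seed=0):
    rng = random.Random(seed)
    out = list(training[:pattern_len])  # seed with a pattern from the TI
    while len(out) < out_len:
        pat = out[-pattern_len:]
        # collect every value that follows this pattern in the TI
        candidates = [training[i + pattern_len]
                      for i in range(len(training) - pattern_len)
                      if training[i:i + pattern_len] == pat]
        if not candidates:  # unseen pattern: fall back to a marginal draw
            candidates = training
        out.append(rng.choice(candidates))
    return out

ti = list("SSSCCSSSCCSSSCC")  # "S" = sand, "C" = clay
sim = mps_simulate_1d(ti, pattern_len=3, out_len=12)
print("".join(sim))  # reproduces the TI's SSSCC motif
```

This also shows why training-image design matters so much: the simulation can only ever emit patterns (or marginal fallbacks) present in the training image.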

  5. Utility of the Conconi's heart rate deflection to monitor the intensity of aerobic training.

    PubMed

    Passelergue, Philippe A; Cormery, Bruno; Lac, Gérard; Léger, Luc A

    2006-02-01

The Conconi's heart-rate deflection point (HRd) in the heart rate (HR)/speed curve is often used to set aerobic training loads. Training could either be set in percentage running speed or HR at HRd. In order to establish the limits and usefulness of various aerobic-training modalities for an intermediate athletic level (physical-education students), acute responses were analyzed while running for a typical 40-minute training session. Speed, HR, lactate, and cortisol were thus recorded during training at 90 and 100% of running speed (RS: n = 14) and HR (HR: n = 16) at HRd (90% running speed [RS90], 100% running speed [RS100], 90% HR [HR90], and 100% HR [HR100]). During constant HR training, RS decreases, while HR drifts upward during constant RS training. Half of the subjects cannot finish the 40-minute RS100 session. For HR90, RS90, HR100, and RS100, average intensities are 67, 69, 74.9, and 77% of maximal aerobic speed (multistage test), respectively. This study indicates that (1) training at HR100 and RS100 is more appropriate to improve high-intensity metabolic capacities (increased cortisol and lactate), while RS100 is too difficult to be maintained for 40 minutes, at least for subjects at this level, (2) training at HR90, however, is better to improve endurance and the capacity to do a large amount of work, considering cortisol and lactate homeostasis, and (3) training at a constant HR using a HR monitor is a good method to control the intensity of training with subjects not used to pacing themselves with the split-time approach.

  6. Limited Effects of Set Shifting Training in Healthy Older Adults

    PubMed Central

    Grönholm-Nyman, Petra; Soveri, Anna; Rinne, Juha O.; Ek, Emilia; Nyholm, Alexandra; Stigsdotter Neely, Anna; Laine, Matti

    2017-01-01

    Our ability to flexibly shift between tasks or task sets declines in older age. As this decline may have adverse effects on everyday life of elderly people, it is of interest to study whether set shifting ability can be trained, and if training effects generalize to other cognitive tasks. Here, we report a randomized controlled trial where healthy older adults trained set shifting with three different set shifting tasks. The training group (n = 17) performed adaptive set shifting training for 5 weeks with three training sessions a week (45 min/session), while the active control group (n = 16) played three different computer games for the same period. Both groups underwent extensive pre- and post-testing and a 1-year follow-up. Compared to the controls, the training group showed significant improvements on the trained tasks. Evidence for near transfer in the training group was very limited, as it was seen only on overall accuracy on an untrained computerized set shifting task. No far transfer to other cognitive functions was observed. One year later, the training group was still better on the trained tasks but the single near transfer effect had vanished. The results suggest that computerized set shifting training in the elderly shows long-lasting effects on the trained tasks but very little benefit in terms of generalization. PMID:28386226

  7. Effects of strongman training on salivary testosterone levels in a sample of trained men.

    PubMed

    Ghigiarelli, Jamie J; Sell, Katie M; Raddock, Jessica M; Taveras, Kurt

    2013-03-01

Strongman exercises consist of multi-joint movements that incorporate large muscle mass groups and impose a substantial amount of neuromuscular stress. The purpose of this study was to examine salivary testosterone responses from 2 novel strongman training (ST) protocols in comparison with an established hypertrophic (H) protocol reported to acutely elevate testosterone levels. Sixteen men (24 ± 4.4 years, 181.2 ± 6.8 cm, and 95.3 ± 20.3 kg) volunteered to participate in this study. Subjects completed 3 protocols designed to ensure equal total volume (sets and repetitions), rest period, and intensity between the groups. Exercise sets were performed to failure. Exercise selection and intensity (3 sets × 10 repetitions at 75% 1 repetition maximum) were chosen as they reflected commonly prescribed resistance exercise protocols recognized to elicit a large acute hormonal response. In each of the protocols, subjects were required to perform 3 sets to muscle failure of 5 different exercises (tire flip, chain drag, farmers walk, keg carry, and atlas stone lift) with a 2-minute rest interval between sets and a 3-minute rest interval between exercises. Saliva samples were collected pre-exercise (PRE), immediately postexercise (PST), and 30 minutes postexercise (30PST). Delta scores indicated a significant difference between PRE and PST testosterone level within each group (p ≤ 0.05), with no significant difference between the groups. Testosterone levels spiked 136% (225.23 ± 148.01 pg·ml(-1)) for the H group, 74% (132.04 ± 98.09 pg·ml(-1)) for the ST group, and 54% (122.10 ± 140.67 pg·ml(-1)) for the mixed strongman/hypertrophy (XST) group. A significant difference for testosterone level occurred over time (PST to 30PST) for the H group (p ≤ 0.05). In conclusion, ST elicits an acute endocrine response similar to a recognized H protocol when equated for duration and exercise intensity.

  8. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    PubMed

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α=0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees ( t (98)=4.88, p<0.001) and was sensitive to change following training in acute and mental health settings, across professional groups (p<0.001). Confirmatory factor analysis revealed an adequate model fit (RMSEA=0.066). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.
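The internal-consistency statistic reported above (Cronbach's α = 0.92) is k/(k−1) × (1 − Σ item variances / variance of the total score). A stdlib sketch, with an invented 4-item response matrix (the instrument itself has 12 items):

```python
# Cronbach's alpha from an item-by-respondent score matrix.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, one per item, each holding n respondent scores."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]   # total score per respondent
    item_var = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

items = [
    [3, 4, 3, 5, 4],  # item 1 scores for five respondents
    [2, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [3, 4, 2, 5, 5],
]
print(round(cronbach_alpha(items), 2))  # 0.91 for this toy matrix
```

Values above roughly 0.9 are conventionally read as excellent internal consistency, which is the claim the α = 0.92 figure supports.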

  9. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    PubMed

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
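The top-2 accuracy quoted above counts a prediction as correct when the true class is among the model's two highest-scoring classes. A small stdlib sketch, with invented class scores for three ultrasound-like categories:

```python
# Top-k accuracy from per-image class-score dictionaries.

def top_k_accuracy(score_rows, truths, k=2):
    hits = 0
    for scores, truth in zip(score_rows, truths):
        ranked = sorted(scores, key=scores.get, reverse=True)[:k]
        hits += truth in ranked
    return hits / len(truths)

rows = [
    {"liver": 0.6, "kidney": 0.3, "spleen": 0.1},
    {"liver": 0.2, "kidney": 0.5, "spleen": 0.3},
    {"liver": 0.1, "kidney": 0.2, "spleen": 0.7},
]
truths = ["liver", "spleen", "liver"]
print(top_k_accuracy(rows, truths, k=1))  # 1/3 correct at top-1
print(top_k_accuracy(rows, truths, k=2))  # 2/3 correct at top-2
```

Top-2 accuracy is always at least as high as top-1, which is why the 90.4% and 89.7% figures sit above the corresponding 77.3% and 77.9% accuracies.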

  10. The effects of an office ergonomics training and chair intervention on worker knowledge, behavior and musculoskeletal risk.

    PubMed

    Robertson, Michelle; Amick, Benjamin C; DeRango, Kelly; Rooney, Ted; Bazzani, Lianna; Harrist, Ron; Moore, Anne

    2009-01-01

    A large-scale field intervention study was undertaken to examine the effects of office ergonomics training coupled with a highly adjustable chair on office workers' knowledge and musculoskeletal risks. Office workers were assigned to one of three study groups: a group receiving the training and adjustable chair (n=96), a training-only group (n=63), and a control group (n=57). The office ergonomics training program was created using an instructional systems design model. A pre/post-training knowledge test was administered to all those who attended the training. Body postures and workstation set-ups were observed before and after the intervention. Perceived control over the physical work environment was higher for both intervention groups as compared to workers in the control group. A significant increase in overall ergonomic knowledge was observed for the intervention groups. Both intervention groups exhibited higher level behavioral translation and had lower musculoskeletal risk than the control group.

  11. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Communication skills for extended duties dental nurses: the childsmile perspective.

    PubMed

    O'Keefe, Emma

    2015-02-01

    Good communication and influencing skills are key competency areas for dental nurses and are highly relevant when working with children and their families/carers in Childsmile, a national oral health improvement programme for children in Scotland. The General Dental Council (GDC) identifies communication skills as one of the nine principles for registrants; a large number of complaints seen by the GDC relate to allegations around communication and patient expectations not being fully met. Much time and investment has been spent in researching the role of the Extended Duties Dental Nurse (EDDN) and ensuring appropriate training is provided. While there is specific training for EDDNs delivering the Childsmile programme, the programme appreciates that good communication skills are a core component of all training programmes for dental nurses. This paper sets out to explore the role of EDDNs in Childsmile and specifically looks at the importance of good communication skills and how it facilitates and impacts on the delivery of the Childsmile programme in a variety of settings.

  13. Sleep restores loss of generalized but not rote learning of synthetic speech.

    PubMed

    Fenn, Kimberly M; Margoliash, Daniel; Nusbaum, Howard C

    2013-09-01

    Sleep-dependent consolidation has been demonstrated for declarative and procedural memory but few theories of consolidation distinguish between rote and generalized learning, suggesting similar consolidation should occur for both. However, studies using rote and generalized learning have suggested different patterns of consolidation may occur, although different tasks have been used across studies. Here we directly compared consolidation of rote and generalized learning using a single speech identification task. Training on a large set of novel stimuli resulted in substantial generalized learning, and sleep restored performance that had degraded after 12 waking hours. Training on a small set of repeated stimuli primarily resulted in rote learning and performance also degraded after 12 waking hours but was not restored by sleep. Moreover performance was significantly worse 24-h after rote training. Our results suggest a functional dissociation between the mechanisms of consolidation for rote and generalized learning which has broad implications for memory models. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. A real world dissemination and implementation of Transdiagnostic Behavior Therapy (TBT) for veterans with affective disorders.

    PubMed

    Gros, Daniel F; Szafranski, Derek D; Shead, Sarah D

    2017-03-01

    Dissemination and implementation of evidence-based psychotherapies is challenging in real world clinical settings. Transdiagnostic Behavior Therapy (TBT) for affective disorders was developed with dissemination and implementation in clinical settings in mind. The present study investigated a voluntary local dissemination and implementation effort, involving 28 providers participating in a four-hour training on TBT. Providers completed immediate (n=22) and six-month follow-up (n=12) training assessments and were encouraged to collect data on their TBT patients (delivery fidelity was not investigated). Findings demonstrated that providers endorsed learning of and interest in using TBT after the training. At six-months, 50% of providers reported using TBT with their patients and their perceived effectiveness of TBT to be very good to excellent. Submitted patient outcome data evidenced medium to large effect sizes. Together, these findings provide preliminary support for the effectiveness of a real world dissemination and implementation of TBT. Published by Elsevier Ltd.

  15. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    NASA Astrophysics Data System (ADS)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
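The cross-validation MAE used above to score the surrogate model can be sketched as: hold out each fold in turn, fit on the rest, and average the absolute errors on the held-out points. The "model" below is a trivial mean predictor on invented formation energies, standing in for the decision-tree models in the paper.

```python
# k-fold cross-validated mean absolute error (MAE) with a
# mean-predictor stand-in for the actual regression model.

def kfold_mae(ys, k=5):
    folds = [ys[i::k] for i in range(k)]
    errs = []
    for i in range(k):
        train = [y for j, f in enumerate(folds) if j != i for y in f]
        pred = sum(train) / len(train)      # "fit": predict the training mean
        errs.extend(abs(y - pred) for y in folds[i])
    return sum(errs) / len(errs)

# invented formation energies (eV/atom)
energies = [-0.5, -0.3, -0.8, -0.6, -0.4, -0.7, -0.2, -0.9, -0.5, -0.6]
print(round(kfold_mae(energies, k=5), 4))  # 0.2 for this toy data
```

Because every point is held out exactly once, the resulting MAE estimates error on unseen compositions rather than fit quality on the training set.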

  16. Evaluation of CNN as anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging as surrogates of human observers in task-based image quality evaluation, frequently toward optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We compare the CNN MO to alternative MOs currently proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of the CNN, and other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.

  17. Effects of Goal Setting on Performance and Job Satisfaction

    ERIC Educational Resources Information Center

    Ivancevich, John M.

    1976-01-01

    Studied the effect of goal-setting training on the performance and job satisfaction of sales personnel. One group was trained in participative goal setting; one group was trained in assigned goal setting; and one group received no training. Both trained groups showed temporary improvements in performance and job satisfaction. For availability see…

  18. Inadequacy and Indebtedness

    PubMed Central

    Geistwhite, Robert

    2000-01-01

    The nature of the fee arrangement has significant influence on the psychotherapeutic process even when there is no fee. Given the large number of psychiatrists who receive at least some part of their training in the public system, understanding the no-fee arrangement is vital to the psychodynamic training of future psychiatrists. Following a brief overview of the meaning of money and the fee arrangement, various scenarios are considered under the headings of “inadequacy” and “indebtedness.” Although similar dynamics may be present in other public and private settings, attention is given to the county training program, with the intent to assist psychiatry residents and supervisors in their awareness and understanding of the psychodynamics of psychotherapy without fee. PMID:10896739

  19. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
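
    A minimal numerical sketch of the hybrid idea, assuming a single-hidden-layer MLP with illustrative sizes and learning rate: at each iteration the linear output weights are solved exactly by SVD-based least squares (`np.linalg.pinv` uses the SVD), while only the nonlinear hidden weights take a gradient step. This is a generic reading of the scheme, not the authors' exact routine:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))            # inputs
t = np.sin(X[:, 0]) + 0.1 * X[:, 1]      # targets

W = 0.5 * rng.normal(size=(3, 8))        # nonlinear hidden-layer weights

def forward(W):
    H = np.tanh(X @ W)                   # hidden-layer outputs
    w_out = np.linalg.pinv(H) @ t        # linear weights via SVD least squares
    return H, w_out

H, w_out = forward(W)
initial_mse = np.mean((H @ w_out - t) ** 2)

for _ in range(200):
    H, w_out = forward(W)                # SVD step: optimal linear weights
    resid = H @ w_out - t
    # Gradient step on the nonlinear (hidden) weights only
    dH = np.outer(resid, w_out) * (1.0 - H ** 2)
    W -= 0.01 * (X.T @ dH) / len(X)

H, w_out = forward(W)
final_mse = np.mean((H @ w_out - t) ** 2)
```

    Solving the linear weights exactly at every step shrinks the effective search space to the nonlinear parameters, which is why the scheme pays off most when the linear-to-nonlinear parameter ratio is large, as in the LMN.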

  20. Whole body vibration training--improving balance control and muscle endurance.

    PubMed

    Ritzmann, Ramona; Kramer, Andreas; Bernhardt, Sascha; Gollhofer, Albert

    2014-01-01

    Exercise combined with whole body vibration (WBV) is becoming increasingly popular, although additional effects of WBV in comparison to conventional exercises are still discussed controversially in literature. Heterogeneous findings are attributed to large differences in the training designs between WBV and "control" groups in regard to training volume, load and type. In order to separate the additional effects of WBV from the overall adaptations due to the intervention, in this study, a four-week WBV training setup was compared to a matched intervention program with identical training parameters in both training settings except for the exposure to WBV. In a repeated-measures matched-subject design, 38 participants were assigned to either the WBV group (VIB) or the equivalent training group (CON). Training duration, number of sets, rest periods and task-specific instructions were matched between the groups. Balance, jump height and local static muscle endurance were assessed before and after the training period. The statistical analysis revealed significant interaction effects of group×time for balance and local static muscle endurance (p<0.05). Hence, WBV caused an additional effect on balance control (pre vs. post VIB +13%, p<0.05 and CON +6%, p = 0.33) and local static muscle endurance (pre vs. post VIB +36%, p<0.05 and CON +11%, p = 0.49). The effect on jump height remained insignificant (pre vs. post VIB +3%, p = 0.25 and CON ±0%, p = 0.82). This study provides evidence for the additional effects of WBV above conventional exercise alone. As far as balance and muscle endurance of the lower leg are concerned, a training program that includes WBV can provide supplementary benefits in young and well-trained adults compared to an equivalent program that does not include WBV.

  2. Video Self-Modeling: A Job Skills Intervention with Individuals with Intellectual Disabilities in Employment Settings

    ERIC Educational Resources Information Center

    Goh, Ailsa E.

    2010-01-01

    A large majority of adults with intellectual disabilities are unemployed. Unemployment of adults with intellectual disabilities is a complex multidimensional issue. Some barriers to employment of individuals with intellectual disabilities are the lack of job experience and skills training. In recent years, video-based interventions, such as video…

  3. TrackTable Trajectory Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew T.

    Tracktable is designed for analysis and rendering of the trajectories of moving objects such as planes, trains, automobiles and ships. Its purpose is to operate on large sets of trajectories (millions) to help a user detect, analyze and display patterns. It will also be used to disseminate trajectory research results from Sandia's PANTHER Grand Challenge LDRD.

  4. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  5. Acute Responses to Resistance and High-Intensity Interval Training in Early Adolescents.

    PubMed

    Harris, Nigel K; Dulson, Deborah K; Logan, Greig R M; Warbrick, Isaac B; Merien, Fabrice L R; Lubans, David R

    2017-05-01

    Harris, NK, Dulson, DK, Logan, GRM, Warbrick, IB, Merien, FLR, and Lubans, DR. Acute responses to resistance and high-intensity interval training in early adolescents. J Strength Cond Res 31(5): 1177-1186, 2017-The purpose of this study was to compare the acute physiological responses within and between resistance training (RT) and high-intensity interval training (HIIT) matched for time and with comparable effort, in a school setting. Seventeen early adolescents (12.9 ± 0.3 years) performed both RT (2-5 repetitions perceived short of failure at the end of each set) and HIIT (90% of age-predicted maximum heart rate), equated for total work set and recovery period durations, comprising 12 "sets" of 30-second work followed by 30-second recovery (total session time 12 minutes). Variables of interest included oxygen consumption, set and session heart rate (HR), and rate of perceived exertion, and change in salivary cortisol (SC), salivary alpha amylase, and blood lactate (BL) from presession to postsession. Analyses were conducted to determine responses within and between the 2 different protocols. For both RT and HIIT, there were very large increases pretrial to posttrial for SC and BL, and only BL increased more in HIIT (9.1 ± 2.6 mmol·L⁻¹) than RT (6.8 ± 3.3 mmol·L⁻¹). Mean set HR for both RT (170 ± 9.1 b·min⁻¹) and HIIT (179 ± 5.6 b·min⁻¹) was at least 85% of HRmax. V̇O2 over all 12 sets was greater for HIIT (33.8 ± 5.21 ml·kg⁻¹·min⁻¹) than RT (24.9 ± 3.23 ml·kg⁻¹·min⁻¹). Brief, repetitive, intermittent forays into high but not supramaximal intensity exercise using RT or HIIT seemed to be a potent physiological stimulus in adolescents.

  6. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of the triggers, and the LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0's (non-occurrence) and 1's (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1's and 0's to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, namely non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1's and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses, with parameters similar to NNS. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for model performance.
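
    The training set construction problem can be illustrated with a hedged sketch: a balanced random draw of occurrences and non-occurrences feeding a logistic regression. The data below are synthetic stand-ins for a landslide inventory and its influencing factors, and the equal-count sampling is one simple reading of PRS, not the paper's exact weighting scheme:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Hypothetical inventory: 1 = landslide occurrence, 0 = non-occurrence,
# with many more 0-cells than 1-cells, as in a typical susceptibility grid.
X = rng.normal(size=(5000, 4))           # stand-ins for LIF (slope, lithology, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

def proportional_random_sample(X, y, n_per_class, rng):
    """Draw a balanced random sample of occurrences and non-occurrences."""
    idx = []
    for label in (0, 1):
        pool = np.flatnonzero(y == label)
        idx.append(rng.choice(pool, size=n_per_class, replace=False))
    sel = np.concatenate(idx)
    return X[sel], y[sel]

Xs, ys = proportional_random_sample(X, y, n_per_class=200, rng=rng)
model = LogisticRegression().fit(Xs, ys)
accuracy = model.score(X, y)             # evaluated on the full inventory
```

    The NNS and SNS strategies would replace the uniform `rng.choice` with draws that include spatial neighbors of the selected sites, which is exactly where the spatial correlation the abstract warns about enters the training set.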

  7. Large scale analysis of protein-binding cavities using self-organizing maps and wavelet-based surface patches to describe functional properties, selectivity discrimination, and putative cross-reactivity.

    PubMed

    Kupas, Katrin; Ultsch, Alfred; Klebe, Gerhard

    2008-05-15

    A new method to discover similar substructures in protein binding pockets, independently of sequence and folding patterns or secondary structure elements, is introduced. The solvent-accessible surface of a binding pocket, automatically detected as a depression on the protein surface, is divided into a set of surface patches. Each surface patch is characterized by its shape as well as by its physicochemical characteristics. Wavelets defined on surfaces are used for the description of the shape, as they have the great advantage of allowing a comparison at different resolutions. The number of coefficients to describe the wavelets can be chosen with respect to the size of the considered data set. The physicochemical characteristics of the patches are described by the assignment of the exposed amino acid residues to one or more of five different properties determinant for molecular recognition. A self-organizing neural network is used to project the high-dimensional feature vectors onto a two-dimensional layer of neurons, called a map. To find similarities between the binding pockets, in both geometrical and physicochemical features, a clustering of the projected feature vector is performed using an automatic distance- and density-based clustering algorithm. The method was validated with a small training data set of 109 binding cavities originating from a set of enzymes covering 12 different EC numbers. A second test data set of 1378 binding cavities, extracted from enzymes of 13 different EC numbers, was then used to prove the discriminating power of the algorithm and to demonstrate its applicability to large scale analyses. In all cases, members of the data set with the same EC number were placed into coherent regions on the map, with small distances between them. Different EC numbers are separated by large distances between the feature vectors. 
A third data set comprising three subfamilies of endopeptidases is used to demonstrate the ability of the algorithm to detect similar substructures between functionally related active sites. The algorithm can also be used to predict the function of novel proteins not considered in the training data set. © 2007 Wiley-Liss, Inc.

  8. Identification of Technology Terms in Patents (Open Access, Published Version)

    DTIC Science & Technology

    2014-05-31

    large set of human annotated examples of the target class(es) along with their textual contexts to serve as training examples for generating a machine...perform the equivalent function in German and Chinese. 2.2. Manual annotation of terms. Supervised learning requires a gold set of manually anno...(Npr, prev Jpr, prev J). These were intended to capture, for example, the verb (and any prepositions/articles) for which the term is the object. prev

  9. An Oracle-based co-training framework for writer identification in offline handwriting

    NASA Astrophysics Data System (ADS)

    Porwal, Utkarsh; Rajan, Sreeranga; Govindaraju, Venu

    2012-01-01

    State-of-the-art techniques for writer identification have centered primarily on enhancing system performance. Machine learning algorithms have been used extensively to improve the accuracy of such systems, assuming a sufficient amount of data is available for training. Little attention has been paid to the prospect of harnessing the information contained in large amounts of un-annotated data. This paper focuses on a co-training based framework that can be used for iterative labeling of the unlabeled data set, exploiting the independence between the multiple views (features) of the data. This paradigm relaxes the assumption of data sufficiency and tries to generate labeled data from the unlabeled data set while also improving the accuracy of the system. However, the performance of a co-training based framework depends on the effectiveness of the algorithm used for selecting the data points to be added to the labeled set. We propose an Oracle-based approach for data selection that learns the patterns in the score distribution of classes for labeled data points and then predicts the labels (writers) of the unlabeled data points. This method statistically learns the class distribution and predicts the most probable class, unlike traditional selection algorithms based on heuristic approaches. We conducted experiments on the publicly available IAM dataset and illustrate the efficacy of the proposed approach.
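
    The co-training loop can be sketched as follows. The two synthetic "views" stand in for independent handwriting feature sets, a simple confidence-based selector stands in for the paper's Oracle (which instead learns the score distributions of the classes), and all sizes are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 600
# Two hypothetical independent "views" (feature sets) of each writing sample
view1 = rng.normal(size=(n, 5))
view2 = rng.normal(size=(n, 5))
y = (view1[:, 0] + view2[:, 0] > 0).astype(int)  # true writer labels

idx_known = np.arange(50)                 # small initial labeled pool
y_known = y[:50].copy()
unlabeled = np.arange(50, n)

clf1, clf2 = LogisticRegression(), LogisticRegression()
for _ in range(5):                        # co-training rounds
    clf1.fit(view1[idx_known], y_known)
    clf2.fit(view2[idx_known], y_known)
    # Confidence-based selection of points to pseudo-label; the Oracle in the
    # paper instead predicts labels from learned class score distributions.
    conf1 = np.abs(clf1.decision_function(view1[unlabeled]))
    conf2 = np.abs(clf2.decision_function(view2[unlabeled]))
    pick = unlabeled[np.argsort(-(conf1 + conf2))[:20]]
    pseudo = clf1.predict(view1[pick])    # pseudo-labels, not ground truth
    idx_known = np.concatenate([idx_known, pick])
    y_known = np.concatenate([y_known, pseudo])
    unlabeled = np.setdiff1d(unlabeled, pick)

accuracy = clf1.score(view1, y)
```

    The selection step is the sensitive part: pseudo-labeling low-confidence points injects label noise, which is the failure mode the Oracle-based selector is designed to avoid.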

  10. Computational Short-cutting the Big Data Classification Bottleneck: Using the MODIS Land Cover Product to Derive a Consistent 30 m Landsat Land Cover Product of the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Roy, D. P.

    2016-12-01

    Classification is a fundamental process in remote sensing used to relate pixel values to land cover classes present on the surface. The state of the practice for large area land cover classification is to classify satellite time series metrics with a supervised (i.e., training data dependent) non-parametric classifier. Classification accuracy generally increases with training set size. However, training data collection is expensive and the optimal training distribution over large areas is unknown. The MODIS 500 m land cover product is available globally on an annual basis and so provides a potentially very large source of land cover training data. A novel methodology to classify large volume Landsat data using high quality training data derived automatically from the MODIS land cover product is demonstrated for all of the Conterminous United States (CONUS). The known misclassification rate of the MODIS land cover product and the scale difference between the 500 m MODIS and 30 m Landsat data are accommodated by a novel MODIS product filtering, Landsat pixel selection, and iterative training approach that balances the proportion of local and CONUS training data used. Three years of global Web-enabled Landsat data (WELD) for all of the CONUS are classified using a random forest classifier and the results assessed using random forest 'out-of-bag' training samples. The global WELD data are corrected to surface nadir BRDF-adjusted reflectance and are defined in 158 × 158 km tiles in the same projection as, and nested to, the MODIS land cover products. This reduces the need to pre-process the considerable Landsat data volume (more than 14,000 Landsat 5 and 7 scenes per year over the CONUS, covering 11,000 million 30 m pixels). The methodology is implemented in a parallel manner on a WELD tile-by-tile basis but provides a wall-to-wall seamless 30 m land cover product.
Detailed tile and CONUS results are presented, and the potential for global production using the recently available global WELD products is discussed.
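
    The classifier-plus-assessment core of the workflow can be sketched with scikit-learn. The features and labels below are synthetic stand-ins for Landsat time series metrics and (noisy) MODIS-derived class labels; the `oob_score` option implements the same "out-of-bag" assessment idea named in the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
# Hypothetical stand-ins: reflectance time-series metrics per pixel, with
# labels as a coarser-resolution land cover product would supply them.
n_pixels, n_metrics, n_classes = 3000, 6, 4
X = rng.normal(size=(n_pixels, n_metrics))
true_class = rng.integers(0, n_classes, size=n_pixels)
X[np.arange(n_pixels), true_class] += 2.0      # make classes separable
labels = true_class.copy()
noisy = rng.random(n_pixels) < 0.1             # ~10% label noise, standing in
labels[noisy] = rng.integers(0, n_classes, size=noisy.sum())  # for product error

# Out-of-bag samples (left out of each tree's bootstrap) act as a built-in
# hold-out set, so no extra validation data need be withheld.
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, labels)
oob_accuracy = rf.oob_score_
```

    In the real pipeline, the filtering and pixel-selection steps described above would be applied before training, so that the MODIS-derived labels reaching the forest are as clean as possible.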

  11. An ensemble heterogeneous classification methodology for discovering health-related knowledge in social media messages.

    PubMed

    Tuarob, Suppawong; Tucker, Conrad S; Salathe, Marcel; Ram, Nilam

    2014-06-01

    The role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0. Multiple studies have illustrated the use of information in social media to discover biomedical and health-related knowledge. Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words. These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data, where sparsity and noise are the norm. This paper aims to address the limitations posed by traditional bag-of-words-based methods and proposes the use of heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data. Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data. Social media data are characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically. The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise. We first identify the limitations of the traditional methods, which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data. The parameter analysis for tuning each classifier is also reported.
Three data sets are used in this research. The first data set comprises approximately 5000 hand-labeled tweets and is used for cross validation of the classification models in the small scale experiment and for training the classifiers in the real-world large scale experiment. The second data set is a random sample of real-world Twitter data in the US. The third data set is a random sample of real-world Facebook Timeline posts. Two sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations. The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models and to compare with the state-of-the-art method. The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media. The small scale experiment reveals that the proposed method is able to mitigate the limitations of the well-established techniques existing in the literature, resulting in a performance improvement of 18.61% (F-measure). The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperforms the baseline by 46.62% (F-measure) on average. Copyright © 2014 Elsevier Inc. All rights reserved.
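
    The "heterogeneous classifiers, each learning a different aspect of the text" idea can be sketched with a soft-voting ensemble. The tiny corpus below is a hypothetical stand-in for the hand-labeled tweets, and the two members (word n-grams vs. character n-grams, which are robust to the non-standard spelling of social text) are generic examples, not the paper's exact feature set:

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus standing in for hand-labeled tweets
texts = [
    "got the flu feeling awful", "fever and cough all week",
    "headache wont go away", "allergies acting up again",
    "great concert last night", "new phone arrived today",
    "traffic was terrible", "watching the game tonight",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]        # 1 = health-related

# Heterogeneous members: word n-grams and character n-grams capture
# semantically different aspects of noisy, non-standard social text.
word_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
char_clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                         MultinomialNB())
ensemble = VotingClassifier([("word", word_clf), ("char", char_clf)], voting="soft")
ensemble.fit(texts, labels)
pred = ensemble.predict(["sore throat and chills", "missed the bus this morning"])
```

    Soft voting averages the members' class probabilities, so a member that is confidently right on a message can outvote one misled by that message's noise.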

  12. Active learning for solving the incomplete data problem in facial age classification by the furthest nearest-neighbor criterion.

    PubMed

    Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun

    2011-07-01

    Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and obtain the ground truth. The frugal selection of unlabeled data for labeling, so as to quickly reach high classification performance with minimal labeling effort, is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to incrementally improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set, for age categorization problems show that the proposed approach can achieve results comparable to, or even better than, a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It can also achieve comparable results with active SVM, but is much faster than active SVM in terms of training because kernel methods are not needed.
The results on the face recognition database and the palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed the IB2DLDA-FNN, the FNN being our novel idea, as a generic online or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
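
    One plausible reading of an FNN-style criterion (a loose sketch on raw features, not the paper's exact formulation in the IB2DLDA subspace): score each unlabeled point by the distance to its nearest labeled neighbor, and query the point whose nearest neighbor is furthest away, i.e. the point the current labeled set explains worst:

```python
import numpy as np

def furthest_nearest_neighbor_scores(unlabeled, labeled, labels):
    """FNN-style score: distance from each unlabeled point to its nearest
    labeled neighbor, taken per class and then minimized over classes."""
    scores = []
    for u in unlabeled:
        nearest_per_class = [
            np.min(np.linalg.norm(labeled[labels == c] - u, axis=1))
            for c in np.unique(labels)
        ]
        # A large minimum distance means no labeled point is nearby,
        # i.e. the point is informative to label next.
        scores.append(min(nearest_per_class))
    return np.array(scores)

rng = np.random.default_rng(5)
labeled = rng.normal(size=(30, 2))
labels = (labeled[:, 0] > 0).astype(int)
unlabeled = np.vstack([rng.normal(size=(10, 2)), [[6.0, 6.0]]])  # one clear outlier
scores = furthest_nearest_neighbor_scores(unlabeled, labeled, labels)
query = np.argmax(scores)                 # point selected for labeling
```

    Because the score needs only nearest-neighbor distances, it stays cheap as the number of classes grows, which matches the efficiency claim in the abstract.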

  13. Evaluation of mental health first aid training in a diverse community setting.

    PubMed

    Morawska, Alina; Fletcher, Renee; Pope, Susan; Heathwood, Ellen; Anderson, Emily; McAuliffe, Christine

    2013-02-01

    Mental health first aid (MHFA) training has been disseminated in the community and has yielded positive outcomes in terms of increasing help-seeking behaviour and mental health literacy. However, there has been limited research investigating the effectiveness of this programme in multicultural communities. Given the increasing levels of multiculturalism in many countries, as well as the large number of barriers these groups face when trying to seek help for mental illnesses, the present study aimed to investigate the effectiveness of MHFA in these settings. A total of 458 participants, who were recruited from multicultural organizations, took part in a series of MHFA training courses. Participants completed questionnaires before and after the training course, and 6-month follow-up interviews were conducted with a subsample of participants. Findings suggested that MHFA training increased participants' recognition of mental illnesses, concordance with primary care physicians about treatments, confidence in providing first aid, and actual help provided to others, and reduced stigmatizing attitudes. The 6-month follow-up also indicated positive long-term effects of MHFA. The results have implications for further dissemination and the use of MHFA in diverse communities. In addition, the results highlight the need for mental health training among health-care service providers. © 2012 The Authors. International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.

  14. Deep learning-based fine-grained car make/model classification for visual surveillance

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    Fine-grained object recognition is a challenging computer vision problem that has recently been addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, there exists relatively little annotated data for a real-world application, such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual surveillance scenarios with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website which includes fine-grained car models. Using these labels, a state-of-the-art CNN model is trained on the constructed dataset. The second domain is the set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized image region is cropped from the region of interest provided by the detected license plate location. These sets of images and their labels, covering more than 30 classes, are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large-scale dataset with available annotations. Our experimental results on both the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.

  15. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
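
    The modeling core, supervised stochastic gradient boosting regression of a quantitative variant effect, can be sketched as follows. The features here are random stand-ins for per-variant predictors (e.g., conservation or biophysical amino-acid properties), not Envision's actual feature set; `subsample < 1.0` is what makes scikit-learn's gradient boosting "stochastic":

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
# Hypothetical stand-ins for per-variant features
n = 1000
X = rng.normal(size=(n, 8))
# Synthetic quantitative "variant effect" target
effect = X[:, 0] * 0.7 - X[:, 3] * 0.4 + rng.normal(scale=0.2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, effect, test_size=0.2, random_state=0)
# subsample=0.7: each boosting stage fits on a random 70% of the training
# rows (Friedman's stochastic gradient boosting variant)
model = GradientBoostingRegressor(n_estimators=200, subsample=0.7, random_state=0)
model.fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)             # held-out R^2
```

    A held-out split like `X_te` plays the role that the independent TP53 dataset plays in the abstract: data never touched during training or hyperparameter tuning.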

  16. Examining the Environmental Effects of Athletic Training: Perceptions of Waste and the Use of Green Techniques.

    PubMed

    Potteiger, Kelly; Pitney, William A; Cappaert, Thomas A; Wolfe, Angela

    2017-12-01

      Environmental sustainability is a critical concern in health care. Similar to other professions, the practice of athletic training necessitates the use of a large quantity of natural and manufactured resources.   To examine the perceptions of the waste produced by the practice of athletic training and the green practices currently used by athletic trainers (ATs) to combat this waste.   Mixed-methods study.   Field setting.   A total of 442 ATs completed the study. Sixteen individuals participated in the qualitative portion.   Data from sections 2 and 3 of the Athletic Training Environmental Impact Survey were analyzed. Focus groups and individual interviews were used to determine participants' views of waste and the efforts used to combat waste. Descriptive statistics were used to examine types of waste. Independent t tests, χ 2 tests, and 1-way analyses of variance were calculated to identify any differences between the knowledge and use of green techniques. Interviews and focus groups were transcribed verbatim and analyzed inductively.   Participants reported moderate knowledge of green techniques (3.18 ± 0.53 on a 5-point Likert scale). Fifty-eight percent (n = 260) of survey participants perceived that a substantial amount of waste was produced by the practice of athletic training. Ninety-two percent (n = 408) admitted they thought about the waste produced in their daily practice. The types of waste reported most frequently were plastics (n = 111, 29%), water (n = 88, 23%), and paper for administrative use (n = 81, 21%). Fifty-two percent (n = 234) agreed this waste directly affected the environment. The qualitative aspect of the study reinforced recognition of the large amount of waste produced by the practice of athletic training. Types of conservation practices used by ATs were also explored.   Participants reported concern regarding the waste produced by athletic training. The amount of waste varies depending on practice size and setting. 
Future researchers should use direct measures to determine the amount of waste created by the practice of athletic training.

  17. Using Sentiment Analysis to Observe How Science is Communicated

    NASA Astrophysics Data System (ADS)

    Topping, David; Illingworth, Sam

    2016-04-01

    'Citizen Science' and 'Big data' are terms that are currently ubiquitous in the field of science communication. Whilst opinions differ as to what exactly constitutes a 'citizen', and how much information is needed in order for a data set to be considered truly 'big', what is apparent is that both of these fields have the potential to help revolutionise not just the way that science is communicated, but also the way that it is conducted. However, both the generation of sufficient data, and the efficiency of analysing that data once it has been collected, need to be taken into account. Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral. The process of sentiment analysis can be automated, providing that an adequate training set has been used, and that the nuances that are associated with a particular topic have been accounted for. Given the large amounts of data that are generated by social media posts, and the often-opinionated nature of these posts, they present an ideal source of data to both train with and then scrutinize using sentiment analysis. In this work we will demonstrate how sentiment analysis can be used to examine a large number of Twitter posts, and how a training set can be established to ensure consistency and accuracy in the automation. Following an explanation of the process, we will demonstrate how automated sentiment analysis can be used to categorise opinions in relation to a large-scale science festival, and will discuss if sentiment analysis can be used to tell us if there is a bias in these communications. We will also investigate if sentiment analysis can be used to replace more traditional, and invasive evaluation strategies, and how this approach can then be adopted to investigate other topics, both within scientific communication and in the wider scientific context.
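
    A minimal sketch of the automated step: once a labelled training set has established which words carry positive or negative sentiment for the topic, each post can be scored by lexicon counts. The word lists here are invented for the example, not a real trained lexicon:

```python
POSITIVE = {"great", "amazing", "love", "excellent", "fun"}
NEGATIVE = {"boring", "awful", "hate", "terrible", "waste"}

def sentiment(text):
    """Classify a post as positive/negative/neutral by counting
    lexicon hits (a deliberately simple bag-of-words scorer)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Real systems additionally handle negation, sarcasm, and topic-specific nuance, which is exactly why the abstract stresses the quality of the training set.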

  18. Exploring provision of Innovative Community Education Placements (ICEPs) for junior doctors in training: a qualitative study.

    PubMed

    Griffin, Ann; Jones, Melvyn M; Khan, Nada; Park, Sophie; Rosenthal, Joe; Chrysikou, Vasiliki

    2016-02-09

    Medical education in community settings is an essential ingredient of doctors' training and a key factor in recruiting general practitioners (GP). Health Education England's report 'Broadening the Foundation' recommends foundation doctors complete 4-month community placements. While Foundation GP schemes exist, other community settings are not yet used for postgraduate training. The objective of this study was to explore how community-based training of junior doctors might be expanded into possible 'innovative community education placements' (ICEPs), examining opportunities and barriers to these developments. A qualitative study where semistructured interviews were undertaken and themes were generated deductively from the research questions, and iteratively from transcripts. UK community healthcare. Stakeholders from UK community healthcare providers and undergraduate GP and community educators. Nine participants were interviewed; those experienced in delivering community-based undergraduate education, and others working in community settings that had not previously trained doctors. Themes identified were practicalities such as 'finance and governance', 'communication and interaction', 'delivery of training' and 'perceptions of community'. ICEPs were willing to train Foundation doctors. However, concerns were raised that large numbers and inadequate resources could undermine the quality of educational opportunities, and even cause reputational damage. Organisation was seen as a challenge, which might be best met by placing some responsibility with trainees to manage their placements. ICEP providers agreed that defined service contribution by trainees was required to make placements sustainable, and enhance learning. ICEPs stated the need for positive articulation of the learning value of placements to learners and stakeholders. 
This study highlighted the opportunities for foundation doctors to gain specialist and generalist knowledge in ICEPs from diverse clinical teams and patients. In conclusion, we recommend ways of dealing with some of the perceived barriers to training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. Fourier spatial frequency analysis for image classification: training the training set

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart

    2016-04-01

    The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the size of the training set increases. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.
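
    The core comparison (the magnitude of the Fourier spectrum as a pattern signature, plus a similarity measure between spectra) can be sketched in one dimension; the 2-D directional version used in the paper works analogously. This is an illustrative analogue, not the authors' DFSF pipeline:

```python
import cmath, math

def dft_magnitude(signal):
    """Magnitude spectrum of a 1-D signal (naive O(N^2) DFT).
    Magnitudes are phase-invariant, so shifted patterns with the
    same spatial frequency content produce the same signature."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def cosine_similarity(a, b):
    """Similarity between two spectra, in [0, 1] for magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

n = 64
sine3a = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]        # 3 cycles
sine3b = [math.sin(2 * math.pi * 3 * t / n + 1.0) for t in range(n)]  # same frequency, shifted
sine9  = [math.sin(2 * math.pi * 9 * t / n) for t in range(n)]        # different frequency
```

Signals sharing spatial frequency content score near 1 even when shifted, while signals at other frequencies score near 0, which is the property the SVM training-set selection relies on.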

  20. Deep learning for galaxy surface brightness profile fitting

    NASA Astrophysics Data System (ADS)

    Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.; Domínguez Sánchez, H.; Dimauro, P.

    2018-03-01

    Numerous ongoing and future large area surveys (e.g. Dark Energy Survey, EUCLID, Large Synoptic Survey Telescope, Wide Field Infrared Survey Telescope) will increase by several orders of magnitude the volume of data that can be exploited for galaxy morphology studies. The full potential of these surveys can be unlocked only with the development of automated, fast, and reliable analysis methods. In this paper, we present DeepLeGATo, a new method for 2-D photometric galaxy profile modelling, based on convolutional neural networks. Our code is trained and validated on analytic profiles (HST/CANDELS F160W filter) and it is able to retrieve the full set of parameters of one-component Sérsic models: total magnitude, effective radius, Sérsic index, and axis ratio. We show detailed comparisons between our code and GALFIT. On simulated data, our method is more accurate than GALFIT and ~3000 times faster on GPU (~50 times faster when running on the same CPU). On real data, DeepLeGATo trained on simulations behaves similarly to GALFIT on isolated galaxies. With a fast domain adaptation step made with only 0.1-0.8 per cent of the size of the training set, our code is easily capable of reproducing the results obtained with GALFIT even on crowded regions. DeepLeGATo does not require any human intervention beyond the training step, rendering it far more automated than traditional profiling methods. The development of this method for more complex models (two-component galaxies, variable point spread function, dense sky regions) could constitute a fundamental tool in the era of big data in astronomy.
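
    The one-component Sérsic model whose parameters DeepLeGATo recovers has a simple closed form. A sketch, using the common approximation b_n ≈ 2n − 1/3 for the normalization constant (reasonable for n above roughly 0.5; the paper itself does not specify this approximation):

```python
import math

def sersic_intensity(r, i_e, r_e, n):
    """Surface brightness I(r) of a Sersic profile with effective
    radius r_e, intensity i_e at r_e, and Sersic index n:
        I(r) = i_e * exp(-b_n * ((r / r_e)**(1/n) - 1))
    with b_n approximated as 2n - 1/3."""
    b_n = 2.0 * n - 1.0 / 3.0
    return i_e * math.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# n = 1 gives an exponential disc, n = 4 a de Vaucouleurs bulge
disc_centre = sersic_intensity(0.0, 1.0, 1.0, 1.0)
```

By construction I(r_e) = i_e for any index, and brightness falls off monotonically with radius.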

  1. What IAPT CBT High-Intensity Trainees Do After Training.

    PubMed

    Liness, Sheena; Lea, Susan; Nestler, Steffen; Parker, Hannah; Clark, David M

    2017-01-01

    The UK Department of Health Improving Access to Psychological Therapies (IAPT) initiative set out to train a large number of therapists in cognitive behaviour therapies (CBT) for depression and anxiety disorders. Little is currently known about the retention of IAPT CBT trainees, or the use of CBT skills acquired on the course in the workplace after training has finished. This study set out to conduct a follow-up survey of past CBT trainees on the IAPT High Intensity CBT Course at the Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King's College London (KCL), one of the largest IAPT High Intensity courses in the UK. Past trainees (n = 212) across 6 cohorts (2008-2014 intakes) were contacted and invited to participate in a follow-up survey. A response rate of 92.5% (n = 196) was achieved. The vast majority of IAPT trainees continue to work in IAPT services posttraining (79%) and to practise CBT as their main therapy modality (94%); 61% have become CBT supervisors. A minority (23%) have progressed to other senior roles in the services. Shortcomings are reported in the use of out-of-office CBT interventions, the use of disorder-specific outcome measures and therapy recordings to inform therapy and supervision. Past trainees stay working in IAPT services and continue to use CBT methods taught on the course. Some NICE recommended treatment procedures that are likely to facilitate patients' recovery are not being routinely implemented across IAPT services. The results have implications for the continued roll out of the IAPT programme, and other future large scale training initiatives.

  2. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners.

    PubMed

    Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E

    2006-06-01

    The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance within a background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker can generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words of the familiar talker was then compared with performance on those same words spoken by three novel talkers, and with performance on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list in which they were trained versus a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. 
The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listener's greater focus on lexical memorization of the words rather than a focus on talker-specific acoustic characteristics. On return in 6 mo, listeners performed significantly better on the trained words relative to their initial baseline performance. Although the listeners performed significantly better on trained versus untrained words in isolation, once the trained words were embedded in sentences, no improvement in recognition over untrained words within the same sentences was shown. Older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker and to the same degree as young normal-hearing listeners. The improved performance was maintained across talkers and across time. This might imply that training a listener using a standardized list and talker may still provide benefit when these same words are presented by novel talkers outside the clinic. However, training on isolated words was not sufficient to transfer to fluent speech for the specific sentence materials used within this study. Further investigation is needed regarding approaches to improve a hearing aid user's speech understanding in everyday communication situations.

  3. Efficiency of multi-breed genomic selection for dairy cattle breeds with different sizes of reference population.

    PubMed

    Hozé, C; Fritz, S; Phocas, F; Boichard, D; Ducrocq, V; Croiseau, P

    2014-01-01

    Single-breed genomic selection (GS) based on medium single nucleotide polymorphism (SNP) density (~50,000; 50K) is now routinely implemented in several large cattle breeds. However, building large enough reference populations remains a challenge for many medium or small breeds. The high-density BovineHD BeadChip (HD chip; Illumina Inc., San Diego, CA) containing 777,609 SNP developed in 2010 is characterized by short-distance linkage disequilibrium expected to be maintained across breeds. Therefore, combining reference populations can be envisioned. A population of 1,869 influential ancestors from 3 dairy breeds (Holstein, Montbéliarde, and Normande) was genotyped with the HD chip. Using this sample, 50K genotypes were imputed within breed to high-density genotypes, leading to a large HD reference population. This population was used to develop a multi-breed genomic evaluation. The goal of this paper was to investigate the gain of multi-breed genomic evaluation for a small breed. The advantage of using a large breed (Normande in the present study) to mimic a small breed is the large potential validation population to compare alternative genomic selection approaches more reliably. In the Normande breed, 3 training sets were defined with 1,597, 404, and 198 bulls, and a unique validation set included the 394 youngest bulls. For each training set, estimated breeding values (EBV) were computed using pedigree-based BLUP, single-breed BayesC, or multi-breed BayesC for which the reference population was formed by any of the Normande training data sets and 4,989 Holstein and 1,788 Montbéliarde bulls. Phenotypes were standardized by within-breed genetic standard deviation, the proportion of polygenic variance was set to 30%, and the estimated number of SNP with a nonzero effect was about 7,000. The 2 genomic selection (GS) approaches were performed using either the 50K or HD genotypes. 
The correlations between EBV and observed daughter yield deviations (DYD) were computed for 6 traits and using the different prediction approaches. Compared with pedigree-based BLUP, the average gain in accuracy with GS in small populations was 0.057 for the single-breed and 0.086 for multi-breed approach. This gain was up to 0.193 and 0.209, respectively, with the large reference population. Improvement of EBV prediction due to the multi-breed evaluation was higher for animals not closely related to the reference population. In the case of a breed with a small reference population size, the increase in correlation due to multi-breed GS was 0.141 for bulls without their sire in reference population compared with 0.016 for bulls with their sire in reference population. These results demonstrate that multi-breed GS can contribute to increase genomic evaluation accuracy in small breeds. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
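
    The accuracies quoted above are correlations between EBV and observed DYD. As a generic sketch of that validation metric (not the authors' evaluation pipeline; the toy numbers are invented), the Pearson correlation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists,
    e.g. predicted breeding values vs observed daughter yield deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A gain in accuracy of, say, 0.086 in the abstract means this correlation computed for multi-breed predictions exceeds the pedigree-BLUP correlation by that amount on the same validation bulls.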

  4. Comparative evaluation of support vector machine classification for computer aided detection of breast masses in mammography

    NASA Astrophysics Data System (ADS)

    Lesniak, J. M.; Hupse, R.; Blanc, R.; Karssemeijer, N.; Székely, G.

    2012-08-01

    False positive (FP) marks represent an obstacle for effective use of computer-aided detection (CADe) of breast masses in mammography. Typically, the problem can be approached either by developing more discriminative features or by employing different classifier designs. In this paper, the use of support vector machine (SVM) classification for FP reduction in CADe is investigated, presenting a systematic quantitative evaluation against neural networks, k-nearest neighbor classification, linear discriminant analysis and random forests. A large database of 2516 film mammography examinations and 73 input features was used to train the classifiers and evaluate their performance on correctly diagnosed exams as well as false negatives. Further, classifier robustness was investigated using varying training data and feature sets as input. The evaluation was based on the mean exam sensitivity at 0.05-1 FPs per normal exam on the free-response receiver operating characteristic (FROC) curve, incorporated into a tenfold cross-validation framework. It was found that SVM classification using a Gaussian kernel offered significantly increased detection performance (P = 0.0002) compared to the reference methods. When varying training data and input features, SVMs showed improved exploitation of large feature sets. It is concluded that with SVM-based CADe a significant reduction of FPs is possible, outperforming other state-of-the-art approaches for breast mass CADe.
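
    The Gaussian (RBF) kernel at the heart of the best-performing SVM configuration is easy to state. A generic sketch (sigma is a free hyperparameter here, not a value from the paper):

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    Values near 1 mean the two feature vectors are close; values
    near 0 mean they are far apart on the kernel's length scale."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def gram_matrix(xs, sigma=1.0):
    """Kernel (Gram) matrix over a list of feature vectors, the
    object an SVM solver actually trains on."""
    return [[rbf_kernel(a, b, sigma) for b in xs] for a in xs]

K = gram_matrix([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
```

Because the kernel depends only on distances, the SVM can carve nonlinear decision boundaries in the 73-dimensional feature space without explicit feature expansion.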

  5. The Influence of Art Expertise and Training on Emotion and Preference Ratings for Representational and Abstract Artworks

    PubMed Central

    van Paasschen, Jorien; Bacci, Francesca; Melcher, David P.

    2015-01-01

    Across cultures and throughout recorded history, humans have produced visual art. This raises the question of why people report such an emotional response to artworks and find some works more beautiful or compelling than others. In the current study we investigated the interplay between art expertise, and emotional and preference judgments. Sixty participants (40 novices, 20 art experts) rated a set of 150 abstract artworks and portraits during two occasions: in a laboratory setting and in a museum. Before commencing their second session, half of the art novices received a brief training on stylistic and art historical aspects of abstract art and portraiture. Results showed that art experts rated the artworks higher than novices on aesthetic facets (beauty and wanting), but no group differences were observed on affective evaluations (valence and arousal). The training session had a small effect on preference ratings compared with the non-trained group of novices. Overall, these findings are consistent with the idea that affective components of art appreciation are less driven by expertise and largely consistent across observers, while more cognitive aspects of aesthetic viewing depend on viewer characteristics such as art expertise. PMID:26244368

  6. Parsimonious kernel extreme learning machine in primal via Cholesky factorization.

    PubMed

    Zhao, Yong-Ping

    2016-08-01

    Recently, extreme learning machine (ELM) has become a popular topic in the machine learning community. By replacing the so-called ELM feature mappings with the nonlinear mappings induced by kernel functions, two kernel ELMs, i.e., P-KELM and D-KELM, are obtained from the primal and dual perspectives, respectively. Unfortunately, both P-KELM and D-KELM yield dense solutions whose size grows in direct proportion to the number of training data. To address this, a constructive algorithm for P-KELM (CCP-KELM) is first proposed by virtue of Cholesky factorization, in which the training data incurring the largest reductions on the objective function are recruited as significant vectors. To reduce its training cost further, PCCP-KELM is then obtained by applying a probabilistic speedup scheme to CCP-KELM. Corresponding to CCP-KELM, a destructive P-KELM (CDP-KELM) is presented using a partial Cholesky factorization strategy, where the training data incurring the smallest reductions on the objective function after their removal are pruned from the current set of significant vectors. Finally, to verify the efficacy and feasibility of the proposed algorithms, experiments are carried out on both small and large benchmark data sets. Copyright © 2016 Elsevier Ltd. All rights reserved.
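
    The Cholesky factorization that both the constructive and destructive algorithms build on decomposes a symmetric positive-definite matrix A into L·Lᵀ with L lower-triangular, which is what makes incremental recruitment and pruning of significant vectors cheap. A minimal pure-Python sketch of the factorization itself (not the KELM update rules):

```python
import math

def cholesky(a):
    """Return the lower-triangular L with L @ L.T == a, for a
    symmetric positive-definite matrix a given as a list of lists."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                # diagonal entry: sqrt of the remaining pivot
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                # off-diagonal entry: forward substitution
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
```

Growing or shrinking the factor one row at a time, rather than refactorizing from scratch, is what the "partial Cholesky" strategy in the abstract exploits.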

  7. Analysis of the effectiveness of a training program for parents of children with ADHD in a hospital environment.

    PubMed

    Garreta, Esther; Jimeno, Teresa; Servera, Mateu

    2018-01-01

    Regarding Attention Deficit Hyperactivity Disorder (ADHD), combined treatments involving pharmacological, psychoeducational, and parent training interventions are recommended. Parenting programs have proven efficacy in the experimental arena, but there are few data about their effectiveness and feasibility in professional practice. The objective of this study is to analyze the effectiveness of a parenting program implemented in a hospital setting to improve internalized and externalized behaviors as well as parenting styles in a sample of children with ADHD. A behavior management training program was applied to the parents of 21 children with ADHD in a quasi-experimental pretest-posttest design, using measures from the Child Behavior Checklist (CBCL) and the Parenting Scale. Post-treatment data showed significant improvements, especially on emotional, anxiety, and oppositional defiant disorder measures. A significant but moderate improvement was found on ADHD measures, and a non-significant one on the conduct problem measure. Additionally, there were moderate but significant improvements in parenting styles. The data support the effectiveness and feasibility of parent training programs for children with ADHD applied in hospital settings, as they improve a large part of associated symptoms and parenting styles.

  8. The Influence of Art Expertise and Training on Emotion and Preference Ratings for Representational and Abstract Artworks.

    PubMed

    van Paasschen, Jorien; Bacci, Francesca; Melcher, David P

    2015-01-01

    Across cultures and throughout recorded history, humans have produced visual art. This raises the question of why people report such an emotional response to artworks and find some works more beautiful or compelling than others. In the current study we investigated the interplay between art expertise, and emotional and preference judgments. Sixty participants (40 novices, 20 art experts) rated a set of 150 abstract artworks and portraits during two occasions: in a laboratory setting and in a museum. Before commencing their second session, half of the art novices received a brief training on stylistic and art historical aspects of abstract art and portraiture. Results showed that art experts rated the artworks higher than novices on aesthetic facets (beauty and wanting), but no group differences were observed on affective evaluations (valence and arousal). The training session had a small effect on preference ratings compared with the non-trained group of novices. Overall, these findings are consistent with the idea that affective components of art appreciation are less driven by expertise and largely consistent across observers, while more cognitive aspects of aesthetic viewing depend on viewer characteristics such as art expertise.

  9. Using beta binomials to estimate classification uncertainty for ensemble models.

    PubMed

    Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin

    2014-01-01

    Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprising logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool was not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. 
Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
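
    The beta binomial used above to model vote tallies has a closed-form pmf: a binomial whose success probability is itself Beta-distributed. A sketch using log-gamma for numerical stability (the shape parameters below are illustrative, not fitted values from the paper):

```python
import math

def beta_binomial_pmf(k, n, alpha, beta):
    """P(K = k) for a beta binomial with n trials:
        P(K = k) = C(n, k) * B(k + alpha, n - k + beta) / B(alpha, beta)
    where B is the beta function, computed via log-gamma."""
    def log_beta(a, b):
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp(
        math.log(math.comb(n, k))
        + log_beta(k + alpha, n - k + beta)
        - log_beta(alpha, beta)
    )
```

Fitting alpha and beta to the observed tallies of an ensemble gives the over-dispersed vote distribution a plain binomial cannot capture; with alpha = beta = 1 the distribution collapses to uniform over tallies.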

  10. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-02-21

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  11. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to a support vector machine (SVM) to classify FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86% whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was 0.923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was 0.684. Conclusions Based on the overall performance of the system it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
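
    The feature step described above reduces each decomposed recording to the standard deviations of its EMD components. A sketch of that reduction (the two components below are synthetic stand-ins, since a full EMD sifting routine is beyond a short example):

```python
import math

def std_features(components):
    """Standard deviation of each decomposed component; the resulting
    vector is what gets fed to the classifier."""
    feats = []
    for comp in components:
        m = sum(comp) / len(comp)
        feats.append(math.sqrt(sum((x - m) ** 2 for x in comp) / len(comp)))
    return feats

# synthetic stand-ins for two EMD components of one FHR record:
# a high-frequency oscillation and a low-amplitude slow trend
fast = [math.sin(0.9 * t) for t in range(200)]
slow = [0.2 * math.sin(0.05 * t) for t in range(200)]
features = std_features([fast, slow])
```

Each record thus becomes a short fixed-length vector regardless of trace length, which is what makes SVM classification straightforward.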

  12. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-20

    We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. This raw data comes from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  13. A method for feature selection of APT samples based on entropy

    NASA Astrophysics Data System (ADS)

    Du, Zhenyu; Li, Yihong; Hu, Jinsong

    2018-05-01

    By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs, whose logical relationships are fixed, whose number of logical items is unchanged, which are large in scale, and which cannot be generated from a sample. At the same time, it reduces the time spent processing redundant and useless APT samples, improves the sharing rate of analysis information, and supports an active response to a complex and volatile APT attack situation. The samples were divided into an experimental set and a training set, and the algorithm was then used to generate the logical expressions of the training set with the IOC_Aware plug-in; the generated expressions were compared against the detection results. The experimental results show that the algorithm is effective and can improve the detection effect.

  14. Challenges to replicating evidence-based research in real-world settings: training African-American peers as patient navigators for colon cancer screening.

    PubMed

    Sly, Jamilia R; Jandorf, Lina; Dhulkifl, Rayhana; Hall, Diana; Edwards, Tiffany; Goodman, Adam J; Maysonet, Elithea; Azeez, Sulaiman

    2012-12-01

    Many cancer-prevention interventions have demonstrated effectiveness in diverse populations, but these evidence-based findings slowly disseminate into practice. The current study describes the process of disseminating and replicating research (i.e., peer patient navigation for colonoscopy screening) in real-world settings. Two large metropolitan hospitals collaborated to replicate a peer patient navigation model within their existing navigation systems. Six African-American peer volunteers were recruited and trained to navigate patients through colonoscopy scheduling and completion. Major challenges included: (1) operating within multiple institutional settings; (2) operating within nonacademic/research infrastructures; (3) integrating into an established navigation system; (4) obtaining support of hospital staff without overburdening them; and (5) competing priorities and time commitments. Bridging the gap between evidence-based research and practice is critical to eliminating many cancer health disparities; therefore, it is crucial that researchers and practitioners continue to work to achieve both diffusion and fusion of evidence-based findings. Recommendations for addressing these challenges are discussed.

  15. Portable automatic text classification for adverse drug reaction detection via multi-corpus training.

    PubMed

    Sarker, Abeed; Gonzalez, Graciela

    2015-02-01

    Automatic detection of adverse drug reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media-where enormous amounts of user posted data is available, which have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing (NLP) approaches for generating useful features from text, and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user posted internet data; and (iii) to investigate if combining training data from distinct corpora can improve automatic classification accuracies. One of our three data sets contains annotated sentences from clinical reports, and the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies. Our feature-rich classification approach performs significantly better than previously published approaches with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units) respectively. Our research results indicate that using advanced NLP techniques for generating information rich features from text can significantly improve classification accuracies over existing benchmarks. 
Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
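The multi-corpus idea above, pooling labelled text from distinct corpora (clinical reports plus social media) into one training set, can be sketched minimally. The tiny example corpora and the TF-IDF + logistic-regression pipeline are illustrative assumptions, not the authors' system, which uses a much richer feature set (sentiment, polarity, topics) and optimized learners.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Two hypothetical corpora with compatible labels (1 = contains an ADR mention).
clinical = [("patient developed severe rash after the drug", 1),
            ("dose administered with no complications", 0)]
social   = [("this med gave me a terrible headache", 1),
            ("started the new pills today", 0),
            ("the tablets made me so dizzy", 1),
            ("picked up my prescription", 0)]

# Multi-corpus training: simply concatenate the annotated data sets.
texts, labels = zip(*(clinical + social))
vec = TfidfVectorizer().fit(texts)
clf = LogisticRegression().fit(vec.transform(texts), labels)

train_acc = clf.score(vec.transform(texts), labels)
pred = int(clf.predict(vec.transform(["the drug gave me a rash"]))[0])
```

The point of the sketch is only the concatenation step; with real, heavily imbalanced corpora, the paper reports that this pooling improves ADR F-scores by several points.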

  16. Portable Automatic Text Classification for Adverse Drug Reaction Detection via Multi-corpus Training

    PubMed Central

    Gonzalez, Graciela

    2014-01-01

    Objective Automatic detection of Adverse Drug Reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media — where enormous amounts of user posted data is available, which have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing approaches for generating useful features from text, and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user posted internet data; and (iii) to investigate if combining training data from distinct corpora can improve automatic classification accuracies. Methods One of our three data sets contains annotated sentences from clinical reports, and the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies. Results Our feature-rich classification approach performs significantly better than previously published approaches with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units) respectively. 
Conclusions Our research results indicate that using advanced NLP techniques for generating information rich features from text can significantly improve classification accuracies over existing benchmarks. Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. PMID:25451103

  17. An artificial neural network prediction model of congenital heart disease based on risk factors: A hospital-based case-control study.

    PubMed

    Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun

    2017-02-01

    An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. 
The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. This model should be further improved by large-sample-size research.
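A feed-forward back-propagation network with the 15-12-1 topology described in this record can be sketched with scikit-learn's MLPClassifier, an assumed substitute for the authors' Matlab BPNN; the 15 predictors and the risk rule here are synthetic stand-ins, with only the 358-subject size and the 85:15 split taken from the record.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 358                                        # cases + controls in the study
X = rng.random((n, 15))                        # 15 significant predictors (synthetic)
y = (X[:, :3].sum(axis=1) > 1.5).astype(int)   # toy CHD-risk rule for illustration

split = int(n * 0.85)                          # 85:15 train/test split as in the study
net = MLPClassifier(hidden_layer_sizes=(12,),  # 15-12-1 topology: one hidden layer of 12
                    max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])

train_acc = net.score(X[:split], y[:split])
test_acc = net.score(X[split:], y[split:])
```

A single output neuron with a sigmoid/softmax readout is what MLPClassifier fits for a binary target, matching the 1-neuron output layer the authors selected.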

  18. A Data Augmentation Approach to Short Text Classification

    ERIC Educational Resources Information Center

    Rosario, Ryan Robert

    2017-01-01

    Text classification typically performs best with large training sets, but short texts are very common on the World Wide Web. Can we use resampling and data augmentation to construct larger texts using similar terms? Several current methods exist for working with short text that rely on using external data and contexts, or workarounds. Our focus is…

  19. Are We "Experienced Listeners"? A Review of the Musical Capacities that Do Not Depend on Formal Musical Training

    ERIC Educational Resources Information Center

    Bigand, E.; Poulin-Charronnat, B.

    2006-01-01

    The present paper reviews a set of studies designed to investigate different aspects of the capacity for processing Western music. This includes perceiving the relationships between a theme and its variations, perceiving musical tensions and relaxations, generating musical expectancies, integrating local structures in large-scale structures,…

  20. An Empirical Study of Decisions Involving Post-Secondary Vocational School Training. Volume II--Technical Report. Final Report.

    ERIC Educational Resources Information Center

    Olson, Lawrence S.

    A study examined decisions involving private, postsecondary vocational schooling using three large, national sets of data (from the National Longitudinal Surveys) on males. Particular attention is paid to three target groups: inner-city and rural individuals and dropouts. Various equations were estimated using a life-cycle model of time allocation…

  1. An Empirical Study of Decisions Involving Post-Secondary Vocational School Training. Volume I--Executive Summary. Final Report.

    ERIC Educational Resources Information Center

    Olson, Lawrence S.

    A study examined decisions involving private, postsecondary vocational schooling using three large, national sets of data (from the National Longitudinal Surveys) on males. Particular attention is paid to three target groups--inner-city and rural individuals and dropouts. Various equations were estimated using a life-cycle model of time allocation…

  2. Virtual Civilian Aeromedical Evacuation Sustainment Training Project (V-CAEST)

    DTIC Science & Technology

    2015-08-01

    evacuation liaison team (AELT), and the mobile aeromedical staging facility (MASF). The content covered in the V-CAEST environment therefore covered the...environment was set-up in a large gymnasium building including a mock military plane and Mobile Aeromedical Staging Facility (MASF) located just...

  3. Are You Gainfully Employed? Setting Standards for For-Profit Degrees. Education Sector Reports

    ERIC Educational Resources Information Center

    Miller, Ben

    2010-01-01

    For-profit higher education institutions have grown by leaps and bounds in recent years, largely free of federal regulation. That freedom would be significantly curtailed if the gainful employment standard takes effect. Vocational training programs would be judged by the ratio of the debt that graduates assume relative to their current earnings…

  4. Improving Ambulatory Training in Internal Medicine: X + Y (or Why Not?).

    PubMed

    Ray, Alaka; Jones, Danielle; Palamara, Kerri; Overland, Maryann; Steinberg, Kenneth P

    2016-12-01

    The Accreditation Council for Graduate Medical Education (ACGME) requirement that internal medicine residents spend one-third of their training in an ambulatory setting has resulted in programmatic innovation across the country. The traditional weekly half-day clinic model has lost ground to the block or "X + Y" clinic model, which has gained in popularity for many reasons. Several disadvantages of the block model have been reported, however, and residency programs are caught between the threat of old and new challenges. We offer the perspectives of three large residency programs (University of Washington, Emory University, and Massachusetts General Hospital) that have successfully navigated scheduling challenges in our individual settings without implementing the block model. By sharing our innovative non-block models, we hope to demonstrate that programs can and should create the solution that fits their individual needs.

  5. Reduction in training time of a deep learning model in detection of lesions in CT

    NASA Astrophysics Data System (ADS)

    Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji

    2018-02-01

    Deep learning (DL) emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of images for training, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study is aimed at developing a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model for the detection of polyps in CT colonography (CTC), without compromising the classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for the training of a massive-training artificial neural network based DL model, to be used for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15, while maintaining classification performance equivalent to that of the model trained using the full training set. We compare the performance using the area under the receiver-operating-characteristic curve (AUC).
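The representative sample selection step can be sketched as: cluster the training pool with K-means and keep the sample nearest each centroid. This is a plausible reading of the K-means-based method, not the authors' code; the feature pool is synthetic and scikit-learn's KMeans is an assumed tool.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(X, k):
    # Cluster the pool into k groups and keep, for each centroid,
    # the index of the real sample closest to it.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    idx = {int(np.argmin(np.linalg.norm(X - c, axis=1)))
           for c in km.cluster_centers_}
    return sorted(idx)

rng = np.random.default_rng(0)
pool = rng.standard_normal((1500, 8))   # large pool of candidate training samples
subset = select_representatives(pool, k=100)  # "small" representative subset
```

Training the DL model only on `pool[subset]` is what yields the reported 15x reduction in training time; the set comprehension deduplicates in the rare case two centroids share a nearest sample.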

  6. Predicting Positive and Negative Relationships in Large Social Networks.

    PubMed

    Wang, Guan-Nan; Gao, Hui; Chen, Lian; Mensah, Dennis N A; Fu, Yan

    2015-01-01

    In a social network, users hold and express positive and negative attitudes (e.g. support/opposition) towards other users. Those attitudes exhibit some kind of binary relationships among the users, which play an important role in social network analysis. However, some of those binary relationships are likely to be latent as the scale of the social network increases. Predicting latent binary relationships has recently begun to draw researchers' attention. In this paper, we propose a machine learning algorithm for predicting positive and negative relationships in social networks inspired by structural balance theory and social status theory. More specifically, we show that when two users in the network have fewer common neighbors, the prediction accuracy of the relationship between them deteriorates. Accordingly, in the training phase, we propose a segment-based training framework to divide the training data into two subsets according to the number of common neighbors between users, and build a prediction model for each subset based on support vector machines (SVMs). Moreover, to deal with large-scale social network data, we employ a sampling strategy that selects a small amount of training data while maintaining high prediction accuracy. We compare our algorithm with traditional algorithms and their adaptive boosting variants. Experimental results on typical data sets show that our algorithm can deal with large social networks and consistently outperforms other methods.
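The segment-based framework, splitting edge training data by common-neighbor count and fitting one SVM per subset, can be sketched as follows; the edge features, sign labels, and split threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Toy training data: one feature vector per user pair, plus the pair's
# common-neighbor count and the sign of their relationship (1 = positive).
X = rng.random((400, 4))
cn = rng.integers(0, 10, 400)            # common-neighbor counts (hypothetical)
y = (X[:, 0] > 0.5).astype(int)          # toy sign labels

threshold = 3                            # hypothetical split: sparse vs dense overlap
models = {}
for name, mask in [("few_cn", cn < threshold), ("many_cn", cn >= threshold)]:
    models[name] = SVC(kernel="rbf").fit(X[mask], y[mask])

def predict_sign(x, c):
    # Route each pair to the SVM trained on its common-neighbor segment.
    m = models["few_cn"] if c < threshold else models["many_cn"]
    return int(m.predict(x.reshape(1, -1))[0])
```

The routing step mirrors the paper's observation that pairs with few common neighbors are harder to predict, so they get a model trained on similarly sparse evidence.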

  7. Transfer and Use of Training Technology: A Model for Matching Training Approaches with Training Settings. Technical Report No. 74-24.

    ERIC Educational Resources Information Center

    Haverland, Edgar M.

    The report describes a project designed to facilitate the transfer and utilization of training technology by developing a model for evaluating training approaches or innovations in relation to the requirements, resources, and constraints of specific training settings. The model consists of two parallel sets of open-ended questions--one set…

  8. Structural plasticity of the social brain: Differential change after socio-affective and cognitive mental training.

    PubMed

    Valk, Sofie L; Bernhardt, Boris C; Trautwein, Fynn-Mathis; Böckler, Anne; Kanske, Philipp; Guizard, Nicolas; Collins, D Louis; Singer, Tania

    2017-10-01

    Although neuroscientific research has revealed experience-dependent brain changes across the life span in sensory, motor, and cognitive domains, plasticity relating to social capacities remains largely unknown. To investigate whether the targeted mental training of different cognitive and social skills can induce specific changes in brain morphology, we collected longitudinal magnetic resonance imaging (MRI) data throughout a 9-month mental training intervention from a large sample of adults between 20 and 55 years of age. By means of various daily mental exercises and weekly instructed group sessions, training protocols specifically addressed three functional domains: (i) mindfulness-based attention and interoception, (ii) socio-affective skills (compassion, dealing with difficult emotions, and prosocial motivation), and (iii) socio-cognitive skills (cognitive perspective-taking on self and others and metacognition). MRI-based cortical thickness analyses, contrasting the different training modules against each other, indicated spatially diverging changes in cortical morphology. Training of present-moment focused attention mostly led to increases in cortical thickness in prefrontal regions, socio-affective training induced plasticity in frontoinsular regions, and socio-cognitive training induced change in inferior frontal and lateral temporal cortices. Module-specific structural brain changes correlated with training-induced behavioral improvements in the same individuals in domain-specific measures of attention, compassion, and cognitive perspective-taking, respectively, and overlapped with task-relevant functional networks. Our longitudinal findings indicate structural plasticity in well-known socio-affective and socio-cognitive brain networks in healthy adults based on targeted short daily mental practices. 
These findings could promote the development of evidence-based mental training interventions in clinical, educational, and corporate settings aimed at cultivating social intelligence, prosocial motivation, and cooperation.

  9. Impact of operator experience and training strategy on procedural outcomes with leadless pacing: Insights from the Micra Transcatheter Pacing Study.

    PubMed

    El-Chami, Mikhael; Kowal, Robert C; Soejima, Kyoko; Ritter, Philippe; Duray, Gabor Z; Neuzil, Petr; Mont, Lluis; Kypta, Alexander; Sagi, Venkata; Hudnall, John Harrison; Stromberg, Kurt; Reynolds, Dwight

    2017-07-01

    Leadless pacemaker systems have been designed to avoid the need for a pocket and transvenous lead. However, delivery of this therapy requires a new catheter-based procedure. This study evaluates the role of operator experience and different training strategies on procedural outcomes. A total of 726 patients underwent implant attempt with the Micra transcatheter pacing system (TPS; Medtronic, Minneapolis, MN, USA) by 94 operators trained in a teaching laboratory using a simulator, cadaver, and large animal models (lab training) or locally at the hospital with simulator/demo model and proctorship (hospital training). Procedure success, procedure duration, fluoroscopy time, and safety outcomes were compared between training methods and experience (implant case number). The Micra TPS procedure was successful in 99.2% of attempts and did not differ between the 55 operators trained in the lab setting and the 39 operators trained locally at the hospital (P = 0.189). Implant case number was also not a determinant of procedural success (P = 0.456). Each operator performed between one and 55 procedures. Procedure time and fluoroscopy duration decreased by 2.0% (P = 0.002) and 3.2% (P < 0.001) compared to the previous case. Major complication rate and pericardial effusion rate were not associated with case number (P = 0.755 and P = 0.620, respectively). There were no differences in the safety outcomes by training method. Among a large group of operators, implantation success was high regardless of experience. While procedure duration and fluoroscopy times decreased with implant number, complications were low and not associated with case number. Procedure and safety outcomes were similar between distinct training methodologies. © 2017 Wiley Periodicals, Inc.

  10. Structural plasticity of the social brain: Differential change after socio-affective and cognitive mental training

    PubMed Central

    Valk, Sofie L.; Bernhardt, Boris C.; Trautwein, Fynn-Mathis; Böckler, Anne; Kanske, Philipp; Guizard, Nicolas; Collins, D. Louis; Singer, Tania

    2017-01-01

    Although neuroscientific research has revealed experience-dependent brain changes across the life span in sensory, motor, and cognitive domains, plasticity relating to social capacities remains largely unknown. To investigate whether the targeted mental training of different cognitive and social skills can induce specific changes in brain morphology, we collected longitudinal magnetic resonance imaging (MRI) data throughout a 9-month mental training intervention from a large sample of adults between 20 and 55 years of age. By means of various daily mental exercises and weekly instructed group sessions, training protocols specifically addressed three functional domains: (i) mindfulness-based attention and interoception, (ii) socio-affective skills (compassion, dealing with difficult emotions, and prosocial motivation), and (iii) socio-cognitive skills (cognitive perspective-taking on self and others and metacognition). MRI-based cortical thickness analyses, contrasting the different training modules against each other, indicated spatially diverging changes in cortical morphology. Training of present-moment focused attention mostly led to increases in cortical thickness in prefrontal regions, socio-affective training induced plasticity in frontoinsular regions, and socio-cognitive training induced change in inferior frontal and lateral temporal cortices. Module-specific structural brain changes correlated with training-induced behavioral improvements in the same individuals in domain-specific measures of attention, compassion, and cognitive perspective-taking, respectively, and overlapped with task-relevant functional networks. Our longitudinal findings indicate structural plasticity in well-known socio-affective and socio-cognitive brain networks in healthy adults based on targeted short daily mental practices. 
These findings could promote the development of evidence-based mental training interventions in clinical, educational, and corporate settings aimed at cultivating social intelligence, prosocial motivation, and cooperation. PMID:28983507

  11. Material discovery by combining stochastic surface walking global optimization with a neural network.

    PubMed

    Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan

    2017-09-01

    While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have similar thermodynamic stability to the common TiO2 rutile phase, and the kinetic stability for one of them is further proved from SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.
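The core loop of such methods, generate structures with a global-optimization sampler, fit an NN to their energies, then explore cheaply on the fitted NN surface, can be caricatured in one dimension. Everything here is a toy assumption: a double-well stand-in for a material PES, uniform sampling in place of SSW, and scikit-learn's MLPRegressor in place of an atom-centered NN with energy/force/stress training.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def toy_pes(x):
    # Double-well stand-in for a material PES: two "phases" with
    # different stability (global minimum near x ~ -1).
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, 400).reshape(-1, 1)   # sampled "structures"
e_train = toy_pes(x_train).ravel()                 # their computed energies

# Fit an NN surrogate to the sampled PES data.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                  max_iter=5000, random_state=0).fit(x_train, e_train)

# Cheap exploration on the analytical NN PES: scan a dense grid
# instead of calling the expensive reference method.
grid = np.linspace(-2, 2, 2001).reshape(-1, 1)
x_min = float(grid[np.argmin(nn.predict(grid))])   # most stable "phase"
```

The real method replaces the grid scan with SSW exploration of high-dimensional structure space, but the division of labor is the same: expensive evaluations only for the training set, cheap NN evaluations everywhere else.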

  12. Large-scale urban point cloud labeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, in which rectified linear units (ReLu) are taken as the activation function instead of the traditional sigmoid in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
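The forward pass of such a ReLU network with dropout can be sketched in plain NumPy. The layer sizes, the single hidden layer, and the inverted-dropout scaling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # ReLU activation: chosen over sigmoid to speed up convergence.
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2, drop_p=0.5, train=True):
    h = relu(x @ W1 + b1)
    if train:
        # Dropout: randomly zero hidden units (and rescale the rest)
        # to curb over-fitting on sparse point-cloud features.
        mask = (rng.random(h.shape) >= drop_p) / (1.0 - drop_p)
        h = h * mask
    return h @ W2 + b2                   # per-class scores for each 3D point

d_in, d_h, n_classes = 32, 64, 6         # e.g. 32-dim self-taught descriptors
W1 = rng.standard_normal((d_in, d_h)) * 0.1
b1 = np.zeros(d_h)
W2 = rng.standard_normal((d_h, n_classes)) * 0.1
b2 = np.zeros(n_classes)

scores = forward(rng.standard_normal((10, d_in)), W1, b1, W2, b2)
```

At inference time one would call `forward(..., train=False)` so that dropout is disabled and the full hidden layer contributes to the class scores.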

  13. The effect of open kinetic chain knee extensor resistance training at different training loads on anterior knee laxity in the uninjured.

    PubMed

    Barcellona, Massimo G; Morrissey, Matthew C

    2016-04-01

    The commonly used open kinetic chain knee extensor (OKCKE) exercise loads the sagittal restraints to knee anterior tibial translation. To investigate the effect of different loads of OKCKE resistance training on anterior knee laxity (AKL) in the uninjured knee. Non-clinical trial. Randomization into one of three supervised training groups occurred with training 3 times per week for 12 weeks. Subjects in the LOW and HIGH groups performed OKCKE resistance training at loads of 2 sets of 20 repetition maximum (RM) and 20 sets of 2RM, respectively. Subjects in the isokinetic training group (ISOK) performed isokinetic OKCKE resistance training using 2 sets of 20 maximal efforts. AKL was measured using the KT2000 arthrometer with concurrent measurement of lateral hamstrings muscle activity at baseline, 6 weeks and 12 weeks. Twenty-six subjects participated (LOW n = 9, HIGH n = 10, ISOK n = 7). The main finding from this study is that a 12-week OKCKE resistance training programme at loads of 20 sets of 2RM leads to an increase in manual maximal AKL. OKCKE resistance training at high loads (20 sets of 2RM) increases AKL while low-load OKCKE resistance training (2 sets of 20RM) and isokinetic OKCKE resistance training at 2 sets of 20 maximal efforts does not. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Low Cost Simulator for Heart Surgery Training

    PubMed Central

    Silva, Roberto Rocha e; Lourenção, Artur; Goncharov, Maxim; Jatene, Fabio B.

    2016-01-01

    Objective To introduce a low-cost and easy-to-purchase simulator without biological material, so that any institution may promote extensive cardiovascular surgery training both in a hospital setting and at home without a large budget. Methods A transparent plastic box is placed in a wooden frame, which is held by the edges using elastic bands, with the bottom turned upwards, where an oval opening is made, "simulating" a thoracotomy. For basic exercises on the aorta, we used the model presented by our service at the 2015 Brazilian Congress of Cardiovascular Surgery: a silicone ice tray on which one can train aortic purse-string sutures, aortotomy, aortorrhaphy, and proximal and distal anastomoses. Simulators for the training of valve replacement and valvoplasty, atrial septal defect repair and aortic diseases were added. These simulators are based on sewage pipes obtained from construction material stores, while the silicone trays and ethylene vinyl acetate fabric were obtained from utility stores, all at very low cost. Results The models were manufactured using inert materials easily found in regular stores and do not present a contamination risk. They may be used in any environment and may be stored without any difficulty. This training enabled young surgeons to familiarize themselves with and train different surgical techniques, including procedures for aortic diseases. In a subjective assessment, these surgeons reported that the training period improved their technique in the surgical field. Conclusion The model described in this protocol is effective and low-cost compared to existing simulators, enabling a large array of cardiovascular surgery training. PMID:28076623

  15. Short-term adaptations following Complex Training in team-sports: A meta-analysis

    PubMed Central

    Martinez-Rodriguez, Alejandro; Calleja-González, Julio; Alcaraz, Pedro E.

    2017-01-01

    Objective The purpose of this meta-analysis was to study the short-term adaptations in sprint and vertical jump (VJ) performance following Complex Training (CT) in team-sports. CT is a resistance training method aimed at developing both strength and power, which have a direct effect on sprint and VJ. It consists of alternating heavy resistance training exercises with plyometric/power exercises, set for set, in the same workout. Methods A search of electronic databases up to July 2016 (PubMed-MEDLINE, SPORTDiscus, Web of Knowledge) was conducted. Inclusion criteria: 1) at least one CT intervention group; 2) training protocols ≥4-wks; 3) sample of team-sport players; 4) sprint or VJ as an outcome variable. Effect sizes (ES) of each intervention were calculated and subgroup analyses were performed. Results A total of 9 studies (13 CT groups) met the inclusion criteria. Medium effect sizes (ES = 0.73) were obtained for pre-post improvements in sprint, and small (ES = 0.41) in VJ, following CT. Experimental groups presented better post-intervention sprint (ES = 1.01) and VJ (ES = 0.63) performance than control groups. For sprint, large ESs were exhibited in younger athletes (<20 years old; ES = 1.13), longer CT interventions (≥6 weeks; ES = 0.95), conditioning activities with intensities ≤85% 1RM (ES = 0.96) and protocols with frequencies of <3 sessions/week (ES = 0.84); medium ESs were obtained in Division I players (ES = 0.76) and in training programs with >12 total sessions (ES = 0.74). For VJ, large ESs were found in programs with >12 total sessions (ES = 0.81); medium ESs were obtained for under-Division I individuals (ES = 0.56), protocols with intracomplex rest intervals ≥2 min (ES = 0.55), conditioning activities with intensities ≤85% 1RM (ES = 0.64) and basketball/volleyball players (ES = 0.55); small ESs were found for younger athletes (ES = 0.42) and for interventions ≥6 weeks (ES = 0.45). 
Conclusions CT interventions have positive medium effects on sprint performance and small effects on VJ in team-sport athletes. This training method is a suitable option to include in the season planning. PMID:28662108

  16. Managing military training-related environmental disturbance.

    PubMed

    Zentelis, Rick; Banks, Sam; Roberts, J Dale; Dovers, Stephen; Lindenmayer, David

    2017-12-15

    Military Training Areas (MTAs) cover at least 2 percent of the Earth's terrestrial surface and occur in all major biomes. These areas are potentially important for biodiversity conservation. The greatest challenge in managing MTAs is balancing the disturbance associated with military training against environmental values. These challenges are unique, as no other land use is managed for these types of anthropogenic disturbance in a natural setting. We investigated how military training-related disturbance is best managed on MTAs. Specifically, we explored management options to maximise the amount of military training that can be undertaken on an MTA while minimising the amount of environmental disturbance. MTAs comprise a number of ranges designed to facilitate different types of military training. We simulated military training-related environmental disturbance at different range usage rates under a typical range rotation use strategy, and compared the results to estimated ecosystem recovery rates from training activities. We found that even at relatively low simulated usage rates, random allocation and random spatial use of training ranges within an MTA resulted in environmental degradation under realistic ecological recovery rates. To avoid large-scale environmental degradation, we developed a decision-making tool that details the best method for managing training-related disturbance by determining how training activities can be allocated to training ranges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.

  18. Distribution-Preserving Stratified Sampling for Learning Problems.

    PubMed

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
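
    The recursive binary partition idea can be sketched as follows: split the input space at the median of its widest dimension until cells are small, then allocate the sample to cells in proportion to their mass. This is an illustrative re-implementation of the general technique, not the authors' exact algorithm:

```python
import numpy as np

def stratified_sample(X, n, rng, leaf_size=64):
    # Recursively bisect the input space and sample proportionally from
    # each cell, so the sample tracks the original data distribution
    def leaves(idx):
        if len(idx) <= leaf_size:
            return [idx]
        d = np.argmax(X[idx].max(0) - X[idx].min(0))  # widest dimension
        order = idx[np.argsort(X[idx, d])]            # split at the median
        half = len(order) // 2
        return leaves(order[:half]) + leaves(order[half:])

    cells = leaves(np.arange(len(X)))
    picks = []
    for cell in cells:
        k = max(1, round(n * len(cell) / len(X)))     # proportional allocation
        picks.append(rng.choice(cell, size=min(k, len(cell)), replace=False))
    return np.concatenate(picks)

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
sample = stratified_sample(X, 200, rng)
print(len(sample))
```

    Because each leaf cell contributes points in proportion to its share of the data, dense regions are neither over- nor under-represented in the extracted sample.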

  19. The effect of inter-set rest intervals on resistance exercise-induced muscle hypertrophy.

    PubMed

    Henselmans, Menno; Schoenfeld, Brad J

    2014-12-01

    Due to a scarcity of longitudinal trials directly measuring changes in muscle girth, previous recommendations for inter-set rest intervals in resistance training programs designed to stimulate muscular hypertrophy were primarily based on the post-exercise endocrinological response and other mechanisms theoretically related to muscle growth. New research regarding the effects of inter-set rest interval manipulation on resistance training-induced muscular hypertrophy is reviewed here to evaluate current practices and provide directions for future research. Of the studies measuring long-term muscle hypertrophy in groups employing different rest intervals, none have found superior muscle growth in the shorter compared with the longer rest interval group and one study has found the opposite. Rest intervals less than 1 minute can result in acute increases in serum growth hormone levels and these rest intervals also decrease the serum testosterone to cortisol ratio. Long-term adaptations may abate the post-exercise endocrinological response and the relationship between the transient change in hormonal production and chronic muscular hypertrophy is highly contentious and appears to be weak. The relationship between the rest interval-mediated effect on immune system response, muscle damage, metabolic stress, or energy production capacity and muscle hypertrophy is still ambiguous and largely theoretical. In conclusion, the literature does not support the hypothesis that training for muscle hypertrophy requires shorter rest intervals than training for strength development or that predetermined rest intervals are preferable to auto-regulated rest periods in this regard.

  20. Local or global? How to choose the training set for principal component compression of hyperspectral satellite measurements: a hybrid approach

    NASA Astrophysics Data System (ADS)

    Hultberg, Tim; August, Thomas; Lenti, Flavia

    2017-09-01

    Principal Component (PC) compression is the method of choice for achieving bandwidth reduction in the dissemination of hyperspectral (HS) satellite measurements, and it will become increasingly important with the advent of future HS missions (such as IASI-NG and MTG-IRS) with ever higher data rates. It is a linear transformation defined by a truncated set of the leading eigenvectors of the covariance of the measurements, together with the mean of the measurements. We discuss the strategy for generating the eigenvectors, based on the operational experience gained with IASI. To compute the covariance and mean, a so-called training set of measurements is needed, which ideally should include all relevant spectral features. For the dissemination of IASI PC scores, a global static training set consisting of a large sample of measured spectra covering all seasons and all regions is used. This training set was updated once after the start of the dissemination of IASI PC scores in April 2010, by adding spectra from the 2010 Russian wildfires, in which spectral features not captured by the previous training set were identified. An alternative approach, which has sometimes been proposed, is to compute the eigenvectors on the fly from a local training set, for example consisting of all measurements in the current processing granule. It might naively be thought that this local approach would improve the compression rate by reducing the number of PC scores needed to represent the measurements within each granule. This false belief is apparently confirmed if the reconstruction score (root mean square of the reconstruction residuals) is used as the sole criterion for choosing the number of PC scores to retain, which overlooks the fact that the decrease in reconstruction score (for the same number of PCs) is achieved only through the retention of an increased amount of random noise. 
We demonstrate that the local eigenvectors retain a higher amount of noise and a lower amount of atmospheric signal than global eigenvectors. Local eigenvectors do not increase the compression rate, but increase the amount of atmospheric loss, and should be avoided. Only extremely rare situations, resulting in spectra with features that have not been observed previously, can lead to problems for the global approach. To cope with such situations, we investigate a hybrid approach, which first applies the global eigenvectors and then applies local compression to the residuals, in order to additionally identify and disseminate any directions in the local signal that are orthogonal to the subspace spanned by the global eigenvectors.
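
    The global training-set mechanism can be illustrated with a toy PC compression pipeline: the leading eigenvectors of the training covariance are truncated, measurements are projected to PC scores, and reconstruction residuals stay small for spectra resembling the training set. The synthetic spectra and the number of retained components are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "training set" of spectra: a few smooth modes plus weak noise
# (illustrative stand-in for a global sample of measured spectra)
n_train, n_chan = 500, 120
t = np.linspace(0.0, 1.0, n_chan)
modes = np.stack([np.sin(2 * np.pi * k * t) for k in (1, 2, 3)])
train = rng.normal(size=(n_train, 3)) @ modes \
    + 0.01 * rng.normal(size=(n_train, n_chan))

# Leading eigenvectors of the training covariance define the compression
mean = train.mean(axis=0)
cov = np.cov(train - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
E = eigvec[:, ::-1][:, :3]             # truncated set of leading eigenvectors

def compress(spectra):
    return (spectra - mean) @ E        # PC scores: 3 numbers per spectrum

def reconstruct(scores):
    return scores @ E.T + mean

# A new spectrum resembling the training set reconstructs almost exactly
x = rng.normal(size=(1, 3)) @ modes
resid = x - reconstruct(compress(x))
print(np.sqrt(np.mean(resid ** 2)))    # small reconstruction RMS
```

    A spectrum with genuinely new features (outside the span of E) would leave a large residual, which is exactly the case the hybrid approach addresses by compressing the residuals locally.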

  1. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUP models, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. 
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
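
    One simple way to draw a training set that covers a structured genetic space uniformly is greedy maximin ("farthest-point") selection in a low-dimensional marker space. This is a hedged illustration of the uniform-coverage idea using a swapped-in technique, not the authors' sampling algorithm:

```python
import numpy as np

def farthest_point_sample(X, n, start=0):
    # Greedy maximin ("farthest-point") selection: each new genotype is
    # the one farthest from everything already chosen, spreading the
    # training set uniformly over the space spanned by X
    chosen = [start]
    dist = np.linalg.norm(X - X[start], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(dist))
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

rng = np.random.default_rng(3)
# Structured "calibration set" in a 2-D marker-PC space:
# two subpopulations, one nine times larger than the other
X = np.vstack([rng.normal(0.0, 1.0, (900, 2)),
               rng.normal(6.0, 1.0, (100, 2))])
train = farthest_point_sample(X, 50)
# The small subpopulation is still well covered, unlike under random
# sampling, which would mirror the 9:1 imbalance
print(int((train >= 900).sum()))
```

    Because selection depends on geometric coverage rather than density, the minority subpopulation is not under-represented in the training set, which is the failure mode of random sampling under population structure.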

  2. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    PubMed

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

    In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. A given parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly to enhance their discrimination power. The inter-level relationship constraint, e.g., that a plant image must first be assigned to a parent node (high-level non-leaf node) correctly before it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm in training more discriminative tree classifiers for large-scale plant species identification.

  3. Quantitative analysis of single- vs. multiple-set programs in resistance training.

    PubMed

    Wolfe, Brian L; LeMura, Linda M; Cole, Phillip J

    2004-02-01

    The purpose of this study was to examine the existing research on single-set vs. multiple-set resistance training programs. Using the meta-analytic approach, we included studies that met the following criteria in our analysis: (a) at least 6 subjects per group; (b) subject groups consisting of single-set vs. multiple-set resistance training programs; (c) pretest and posttest strength measures; (d) training programs of 6 weeks or more; (e) apparently "healthy" individuals free from orthopedic limitations; and (f) published studies in English-language journals only. Sixteen studies generated 103 effect sizes (ESs) based on a total of 621 subjects, ranging in age from 15 to 71 years. Across all designs, intervention strategies, and categories, the mean pretest-to-posttest ES in muscular strength was 1.4 +/- 1.4 (95% confidence interval, 0.41-3.8; p < 0.001). The results of a 2 x 2 analysis of variance revealed simple main effects for age, training status (trained vs. untrained), and research design (p < 0.001). No significant main effects were found for sex, program duration, and set end point. Significant interactions were found for training status and program duration (6-16 weeks vs. 17-40 weeks) and number of sets performed (single vs. multiple). The data indicated that trained individuals performing multiple sets generated significantly greater increases in strength (p < 0.001). For programs with an extended duration, multiple sets were superior to single sets (p < 0.05). This quantitative review indicates that single-set programs for an initial short training period in untrained individuals result in strength gains similar to those of multiple-set programs. However, as progression occurs and higher gains are desired, multiple-set programs are more effective.
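
    The effect sizes pooled in such meta-analyses are standardized mean differences; a minimal sketch of one common pre-to-post convention (the review's exact formula is not restated here, so treat this as an assumption):

```python
def pre_post_effect_size(pre_mean, post_mean, pre_sd):
    # Standardized pre-to-post effect size: strength gain divided by
    # the pretest standard deviation (one common convention; not
    # necessarily the exact formula used in the study above)
    return (post_mean - pre_mean) / pre_sd

# Hypothetical strength scores for a training group
es = pre_post_effect_size(pre_mean=100.0, post_mean=114.0, pre_sd=10.0)
print(round(es, 2))  # 1.4
```

    Dividing by the pretest SD expresses every study's gain on a common scale, which is what allows ESs from different tests and populations to be averaged.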

  4. Improving machine learning reproducibility in genetic association studies with proportional instance cross validation (PICV).

    PubMed

    Piette, Elizabeth R; Moore, Jason H

    2018-01-01

    Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously reported interaction, which fails to replicate significantly; PICV, however, improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. 
This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
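
    The core idea can be sketched as a fold assignment that preserves genotype proportions in every partition; this is an illustration in the spirit of PICV, and the paper's exact procedure may differ:

```python
import numpy as np

def proportional_folds(genotypes, k, rng):
    # Assign samples to k cross-validation folds so that each fold
    # preserves the overall genotype proportions (a sketch in the
    # spirit of PICV; the paper's exact procedure may differ)
    folds = np.empty(len(genotypes), dtype=int)
    for g in np.unique(genotypes):
        idx = rng.permutation(np.flatnonzero(genotypes == g))
        folds[idx] = np.arange(len(idx)) % k   # deal round-robin per genotype
    return folds

rng = np.random.default_rng(4)
# Imbalanced genotype counts, e.g. a rare interaction genotype
geno = np.array([0] * 940 + [1] * 50 + [2] * 10)
folds = proportional_folds(geno, 5, rng)
for f in range(5):
    print(f, np.bincount(geno[folds == f], minlength=3))
```

    Under naive random splitting, the rare genotype can easily vanish from a fold entirely; dealing each genotype round-robin guarantees every fold sees its proportional share.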

  5. Quantitative Analysis in the General Chemistry Laboratory: Training Students to Analyze Individual Results in the Context of Collective Data

    ERIC Educational Resources Information Center

    Ling, Chris D.; Bridgeman, Adam J.

    2011-01-01

    Titration experiments are ideal for generating large data sets for use in quantitative-analysis activities that are meaningful and transparent to general chemistry students. We report the successful implementation of a sophisticated quantitative exercise in which the students identify a series of unknown acids by determining their molar masses…

  6. Should Millions of Students Take a Gap Year? Large Numbers of Students Start the School Year above Grade Level

    ERIC Educational Resources Information Center

    Peters, Scott J.; Rambo-Hernandez, Karen; Makel, Matthew C.; Matthews, Michael S.; Plucker, Jonathan A.

    2017-01-01

    Few topics have garnered more attention in preservice teacher training and educational reform than student diversity and its influence on learning. However, the actual degree of cognitive diversity has yet to be considered regarding instructional implications for advanced learners. We used four data sets (three state-level and one national) from…

  7. Contextual Factors that Foster or Inhibit Para-Teacher Professional Development: The Case of an Indian, Non-Governmental Organization

    ERIC Educational Resources Information Center

    Raval, Harini; McKenney, Susan; Pieters, Jules

    2012-01-01

    The appointment of para-professionals to overcome skill shortages and/or make efficient use of expensive resources is well established in both developing and developed countries. The present research concerns para-teachers in India. The literature on para-teachers is dominated by training for special needs settings, largely in developed societies.…

  8. Long Ago and Far Away: Preservice Teachers' (Mis)conceptions Surrounding Racism

    ERIC Educational Resources Information Center

    Wilson, Melissa Beth; Kumar, Tracey

    2017-01-01

    This study examines a large data set of preservice teachers' definitions of racism at the beginning and at the end of a teacher training program in the Southeastern United States. Using the methodology of Critical Content Analysis that is grounded in Critical Race Theory, the authors found that the majority of the definitions illustrate a removed,…

  9. Observations of double layer-like and soliton-like structures in the ionosphere

    NASA Technical Reports Server (NTRS)

    Boehm, M. H.; Carlson, C. W.; Mcfadden, J.; Mozer, F. S.

    1984-01-01

    Two types of large electric field signatures, individual pulses and pulse trains, were observed on a sounding rocket launched into the afternoon auroral zone on January 21, 1982. The typical electric fields in the individual pulses were 50 mV/m or larger, aligned mostly parallel to B, and the corresponding potentials were at least 100 mV (kT approximately 0.3 eV). A lower limit of 15 km/sec can be set on the velocity of these structures, indicating that they were not ion acoustic double layers. The pulse trains, each consisting of on the order of 100 pulses, were observed in close association with intense plasma frequency waves. This correlation is consistent with the interpretation of these trains as Langmuir solitons. The pulse trains correlate better with the intensity of the field-aligned currents than with the energetic electron flux.

  10. Inservice Training of Primary Teachers Through Interactive Video Technology: An Indian Experience

    NASA Astrophysics Data System (ADS)

    Maheshwari, A. N.; Raina, V. K.

    1998-01-01

    India has yet to achieve elementary education for all children. Among the centrally sponsored initiatives to improve education are Operation Blackboard, to provide sufficient teachers and buildings, Minimum Levels of Learning, which set achievement targets, and the Special Orientation Programme for Primary School Teachers (SOPT). This article focuses on the last of these and describes the new technology used to train teachers so that the losses in transmission inherent in the cascade model are avoided. Interactive Video Technology involving the Indira Gandhi Open University and the Indian Space Research Organisation was used experimentally in seven-day training courses for primary school teachers in 20 centres in Karnataka State, providing one-way video transmissions and telephone feedback to experts from the centres. The responses from teachers and their trainers indicate considerable potential for the exploitation of new technology where large numbers of teachers require training.

  11. Evaluation of a parallel implementation of the learning portion of the backward error propagation neural network: experiments in artifact identification.

    PubMed Central

    Sittig, D. F.; Orr, J. A.

    1991-01-01

    Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification, including expert systems, statistical signal processing techniques, and artificial neural networks (ANNs). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators must train their networks on huge training data sets, which demands enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm. PMID:1807607
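
    The master-worker decomposition can be sketched as follows; Python threads and a simple least-squares gradient stand in for the paper's C-Linda worker processes and backpropagation, so this is an illustration of the pattern rather than the authors' implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_grad(w, Xc, yc):
    # One worker's contribution: gradient of the squared error on its chunk
    return 2.0 * Xc.T @ (Xc @ w - yc)

def master_worker_grad(w, X, y, n_workers=4):
    # Master splits the training set into chunks, workers compute
    # partial gradients in parallel, and the master sums the results
    # (threads stand in for the paper's C-Linda worker processes)
    chunks = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda c: partial_grad(w, *c), chunks))
    return sum(parts)

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 8))
y = X @ rng.normal(size=8)
w = np.zeros(8)
g_parallel = master_worker_grad(w, X, y)
g_serial = partial_grad(w, X, y)
print(np.allclose(g_parallel, g_serial))  # True: same gradient either way
```

    Because the loss is a sum over training examples, its gradient decomposes into per-chunk terms, which is what makes the master-worker split embarrassingly parallel for one training pass.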

  12. Clinical Psychology Training: Accreditation and Beyond.

    PubMed

    Levenson, Robert W

    2017-05-08

    Beginning with efforts in the late 1940s to ensure that clinical psychologists were adequately trained to meet the mental health needs of the veterans of World War II, the accreditation of clinical psychologists has largely been the province of the Commission on Accreditation of the American Psychological Association. However, in 2008 the Psychological Clinical Science Accreditation System began accrediting doctoral programs that adhere to the clinical science training model. This review discusses the goals of accreditation and the history of the accreditation of graduate programs in clinical psychology, and provides an overview of the evaluation procedures used by these two systems. Accreditation is viewed against the backdrop of the slow rate of progress in reducing the burden of mental illness and the changes in clinical psychology training that might help improve this situation. The review concludes with a set of five recommendations for improving accreditation.

  13. Assembly and evaluation of a training module and dataset with feedback for improved interpretation of digital breast tomosynthesis examinations

    NASA Astrophysics Data System (ADS)

    Gur, David; Zuley, Margarita L.; Sumkin, Jules H.; Hakim, Christiane M.; Chough, Denise M.; Lovy, Linda; Sobran, Cynthia; Logue, Durwin; Zheng, Bin; Klym, Amy H.

    2012-02-01

    The FDA recently approved Digital Breast Tomosynthesis (DBT) for use in screening for the early detection of breast cancer. However, MQSA qualification for interpreting DBT through training was noted as important. Performance issues related to training are largely unknown. Therefore, we assembled a unique computerized training module to assess radiologists' performances before and after using the training module. Seventy-one actual baseline mammograms (no priors) with FFDM and DBT images were assembled to be read before and after training with the developed module. Fifty examinations of FFDM and DBT images enriched with positive findings were assembled for the training module. Depicted findings were carefully reviewed, summarized, and entered into a specially designed training database where findings were identified by case number and synchronized to the display of the related FFDM plus DBT examinations on a clinical workstation. Readers reported any findings using screening BIRADS (0, 1, or 2) followed by instantaneous feedback of the verified truth. Six radiologists participated in the study and reader average sensitivity and specificity were compared before and after training. Average sensitivity improved and specificity remained relatively the same after training. Performance changes may be affected by disease prevalence in the training set.

  14. Failure of Working Memory Training to Enhance Cognition or Intelligence

    PubMed Central

    Thompson, Todd W.; Waskom, Michael L.; Garel, Keri-Lee A.; Cardenas-Iniguez, Carlos; Reynolds, Gretchen O.; Winter, Rebecca; Chang, Patricia; Pollard, Kiersten; Lala, Nupur; Alvarez, George A.; Gabrieli, John D. E.

    2013-01-01

    Fluid intelligence is important for successful functioning in the modern world, but much evidence suggests that fluid intelligence is largely immutable after childhood. Recently, however, researchers have reported gains in fluid intelligence after multiple sessions of adaptive working memory training in adults. The current study attempted to replicate and expand those results by administering a broad assessment of cognitive abilities and personality traits to young adults who underwent 20 sessions of an adaptive dual n-back working memory training program and comparing their post-training performance on those tests to a matched set of young adults who underwent 20 sessions of an adaptive attentional tracking program. Pre- and post-training measurements of fluid intelligence, standardized intelligence tests, speed of processing, reading skills, and other tests of working memory were assessed. Both training groups exhibited substantial and specific improvements on the trained tasks that persisted for at least 6 months post-training, but no transfer of improvement was observed to any of the non-trained measurements when compared to a third untrained group serving as a passive control. These findings fail to support the idea that adaptive working memory training in healthy young adults enhances working memory capacity in non-trained tasks, fluid intelligence, or other measures of cognitive abilities. PMID:23717453

  15. A comparative study of two hazard handling training methods for novice drivers.

    PubMed

    Wang, Y B; Zhang, W; Salvendy, G

    2010-10-01

    The effectiveness of two hazard perception training methods, simulation-based error training (SET) and video-based guided error training (VGET), for novice drivers' hazard handling performance was tested, compared, and analyzed. Thirty-two novice drivers participated in the hazard perception training. Half of the participants were trained using SET by making errors and/or experiencing accidents while driving with a desktop simulator. The other half were trained using VGET by watching prerecorded video clips of errors and accidents made by other people. The two groups were exposed to equal numbers of errors for each training scenario. All participants were tested and evaluated for hazard handling on a full-cockpit driving simulator one week after training. Hazard handling performance and hazard response were measured in this transfer test. Both hazard handling performance scores and hazard response distances were significantly better for the SET group than for the VGET group. Furthermore, the SET group showed more metacognitive activity and intrinsic motivation. SET also seemed more effective in changing participants' confidence, but the result did not reach significance. SET exhibited higher training effectiveness for hazard response and handling than VGET in the simulated transfer test. The superiority of SET may stem from the higher levels of metacognition and intrinsic motivation observed during training. Future research should assess whether the advantages of error training hold under real road conditions.

  16. Set Shifting Training with Categorization Tasks

    PubMed Central

    Soveri, Anna; Waris, Otto; Laine, Matti

    2013-01-01

    The very few cognitive training studies targeting an important executive function, set shifting, have reported performance improvements that also generalized to untrained tasks. The present randomized controlled trial extends set shifting training research by comparing previously used cued training with uncued training. A computerized adaptation of the Wisconsin Card Sorting Test was utilized as the training task in a pretest-posttest experimental design involving three groups of university students. One group received uncued training (n = 14), another received cued training (n = 14) and the control group (n = 14) only participated in pre- and posttests. The uncued training group showed posttraining performance increases on their training task, but neither training group showed statistically significant transfer effects. Nevertheless, comparison of effect sizes for transfer effects indicated that our results did not differ significantly from those of previous studies. Our results suggest that the cognitive effects of computerized set shifting training are mostly task-specific, precluding robust generalization effects from this type of training. PMID:24324717

  17. A FASTQ compressor based on integer-mapped k-mer indexing for biologists.

    PubMed

    Zhang, Yeting; Patel, Khyati; Endrawis, Tony; Bowers, Autumn; Sun, Yazhou

    2016-03-15

    Next generation sequencing (NGS) technologies have gained considerable popularity among biologists. For example, RNA-seq, which provides both genomic and functional information, has been widely used in recent functional and evolutionary studies, especially in non-model organisms. However, storing and transmitting these large data sets (primarily in FASTQ format) have become genuine challenges, especially for biologists with little informatics experience. Data compression is thus a necessity. KIC, a FASTQ compressor based on a new integer-mapped k-mer indexing method, was developed (available at http://www.ysunlab.org/kic.jsp). It offers a high compression ratio on sequence data, outstanding user-friendliness with graphic user interfaces, and proven reliability. Evaluated on multiple large RNA-seq data sets from both human and plants, KIC's compression ratio exceeded that of all major generic compressors and was comparable to those of the latest dedicated compressors. KIC enables researchers with minimal informatics training to take advantage of the latest sequence compression technologies, easily manage large FASTQ data sets, and reduce storage and transmission costs. Copyright © 2015 Elsevier B.V. All rights reserved.
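    The core idea behind integer-mapped k-mer indexing is that a DNA k-mer can be encoded as a unique integer using 2 bits per base; a minimal sketch of such an encoding (KIC's actual scheme may differ):

```python
# 2-bit-per-base mapping: A=00, C=01, G=10, T=11.
BASE2BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def kmer_to_int(kmer):
    """Pack a k-mer into a single integer, 2 bits per base."""
    code = 0
    for base in kmer:
        code = (code << 2) | BASE2BITS[base]
    return code

def int_to_kmer(code, k):
    """Invert the packing, given the k-mer length."""
    bases = "ACGT"
    return "".join(bases[(code >> (2 * (k - i - 1))) & 3] for i in range(k))
```

    Integer codes can then serve as compact index keys for k-mer lookup tables instead of raw strings.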

  18. Quality of clinical brain tumor MR spectra judged by humans and machine learning tools.

    PubMed

    Kyathanahally, Sreenath P; Mocioiu, Victor; Pedrosa de Barros, Nuno; Slotboom, Johannes; Wright, Alan J; Julià-Sapé, Margarida; Arús, Carles; Kreis, Roland

    2018-05-01

    To investigate and compare human judgment and machine learning tools for quality assessment of clinical MR spectra of brain tumors. A very large set of 2574 single-voxel spectra with short and long echo times from the eTUMOUR and INTERPRET databases was used for this analysis. Original human quality ratings from these studies, as well as new human guidelines, were used to train different machine learning algorithms for automatic quality control (AQC) based on various feature extraction methods and classification tools. The performance was compared with the variance in human judgment. AQC built using the RUSBoost classifier, which combats imbalanced training data, performed best. When furnished with a large range of spectral and derived features, with the most crucial ones selected by the TreeBagger algorithm, it showed better specificity (98%) in judging spectra from an independent test set than previously published methods. Optimal performance was reached with a virtual three-class ranking system. Our results suggest that the feature space should be relatively large for MR tumor spectra and that three-class labels may be beneficial for AQC. The best AQC algorithm showed a performance in rejecting spectra comparable to that of a panel of human expert spectroscopists. Magn Reson Med 79:2500-2510, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
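    RUSBoost pairs boosting with random undersampling of the majority class to combat imbalanced training data; the undersampling step alone can be sketched as follows (the "good"/"bad" spectrum labels are hypothetical, not the study's data):

```python
import random

def random_undersample(X, y, majority_label, seed=0):
    """Drop majority-class examples at random until classes are balanced."""
    rng = random.Random(seed)
    majority = [i for i, lbl in enumerate(y) if lbl == majority_label]
    minority = [i for i, lbl in enumerate(y) if lbl != majority_label]
    keep = rng.sample(majority, len(minority)) + minority
    return [X[i] for i in keep], [y[i] for i in keep]

# 90 acceptable spectra vs 10 rejected ones -> balanced 10 vs 10.
X = list(range(100))
y = ["good"] * 90 + ["bad"] * 10
Xb, yb = random_undersample(X, y, majority_label="good")
```

    In RUSBoost, this resampling happens inside every boosting round, so each weak learner sees a balanced sample while the ensemble still exploits all the data.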

  19. Assessing the impact of short-term surgical education on practice: a retrospective study of the introduction of mesh for inguinal hernia repair in sub-Saharan Africa.

    PubMed

    Wang, Y T; Meheš, M M; Naseem, H-R; Ibrahim, M; Butt, M A; Ahmed, N; Wahab Bin Adam, M A; Issah, A-W; Mohammed, I; Goldstein, S D; Cartwright, K; Abdullah, F

    2014-08-01

    Inguinal hernia repair is the most common general surgery operation performed globally. However, the adoption of tension-free hernia repair with mesh has been limited in low-income settings, largely due to a lack of technical training and resources. The present study evaluates the impact of a 2-day training course instructing use of polypropylene mesh for inguinal hernia repair on the practice patterns of sub-Saharan African physicians. A surgical training course on tension-free mesh repair of hernias was provided to 16 physicians working in rural Ghanaian and Liberian hospitals. Three physicians were requested to prospectively record all their inguinal hernia surgeries, performed with or without mesh, during the 14-month period following the training. Demographic variables, diagnoses, and complications were collected by an independent data collector for mesh and non-mesh procedures. Surgery with mesh increased significantly following intervention, from near negligible levels prior to the training to 8.1 % of all inguinal hernia repairs afterwards. Mesh repair accounted for 90.8 % of recurrent hernia repairs and 2.9 % of primary hernia repairs after training. Overall complication rates between mesh and non-mesh procedures were not significantly different (p = 0.20). Three physicians who participated in an intensive education course were routinely using mesh for inguinal hernia repair 14 months after the training. This represents a significant change in practice pattern. Complication rates between patients who underwent inguinal hernia repairs with and without mesh were comparable. The present study provides evidence that short-term surgical training initiatives can have a substantial impact on local healthcare practice in resource-limited settings.

  20. Leadership skills teaching in Yorkshire & the Humber - a survey: uncovering, sharing, developing, embedding.

    PubMed

    Fowler, Iolanthe; Gill, Andy

    2015-09-01

    Medical leadership is a hot topic, but it is not yet known how best to teach it. A working party of educators in Yorkshire and the Humber (Y&H) studied the leadership domains set out in the Medical Leadership Competency Framework and from this distilled a set of 'trainable' leadership skills felt to be important to teach during general practitioner (GP) training. A questionnaire was sent to a large GP educational community (educators and trainees) within Y&H to establish the following: (i) whether the distilled skills were thought to have face validity when applied to the concept of leadership, (ii) the relative importance of these skills in relation to each other and (iii) the degree to which these skills were already being taught in practice placements and at General Practice Specialty Training Programme (GPSTP) teaching sessions. Educators reported more teaching and training occurring than trainees reported receiving, and educators and trainees ranked the relative importance of the skill sets differently. Leadership skills are evidently already being taught, but labelling such training explicitly as 'leadership', and raising the profile of leadership skills in general practice, may address some of these imbalances. Educators requested guidance on how to teach these skills effectively and commented that many existing opportunities for leadership teaching and training are not well recognised or used. Routinely and regularly offering trainees at all levels exposure to leadership skills through role modelling, use of everyday opportunities in practice, and involvement in projects that allow them to practise new skills can facilitate the acquisition, and celebration of mastery, of generic leadership skills.

  1. Prediction of near-surface soil moisture at large scale by digital terrain modeling and neural networks.

    PubMed

    Lavado Contador, J F; Maneta, M; Schnabel, S

    2006-10-01

    The capability of artificial neural network models to forecast near-surface soil moisture at fine spatial resolution was tested for a 99.5 ha watershed in SW Spain, using several easily obtained digital models of topographic and land-cover variables as inputs and a series of soil moisture measurements as the training data set. The study was designed to determine the potential of the neural network model as a tool for gaining insight into the factors governing soil moisture distribution, and to optimize the sampling scheme by finding the optimum size of the training data set. The results demonstrate the effectiveness of the method for forecasting soil moisture, for assessing the optimum number of field samples, and for identifying the variables most important in explaining the final map.
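    The notion of an optimum training-set size can be illustrated with a simple learning curve, here using ordinary least squares on synthetic data rather than the paper's neural networks: test error is tracked as the training sample grows, and the point where the curve flattens suggests an adequate number of field samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = np.arange(1.0, d + 1.0)
y = X @ w_true + 0.1 * rng.normal(size=n)       # noisy linear target

X_test, y_test = X[400:], y[400:]               # held-out test block
errors = []
for m in (10, 50, 200, 400):                    # growing training-set sizes
    w, *_ = np.linalg.lstsq(X[:m], y[:m], rcond=None)
    errors.append(float(np.mean((X_test @ w - y_test) ** 2)))
```

    Plotting `errors` against `m` gives the learning curve; extra samples beyond the flattening point buy little additional accuracy.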

  2. Passport Officers’ Errors in Face Matching

    PubMed Central

    White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682

  4. Implementing digital skills training in care homes: a literature review.

    PubMed

    Wild, Deidre; Kydd, Angela; Szczepura, Ala

    2016-05-01

    This article is the first of a two-part series that informs and describes digital skills training using a dedicated console computer provided for staff and residents in a care home setting. This was part of a programme of culture change in a large care home with nursing in Glasgow, Scotland. The literature review shows that over the past decade there has been a gradual increase in the use of digital technology by staff and older people in community settings including care homes. Policy from the European Commission presents a persuasive argument for the advancement of technology-enabled care to counter the future impact of an increased number of people of advanced age on finite health and social care resources. The psychosocial and environmental issues that inhibit or enhance the acquisition of digital skills in care homes are considered and include the identification of exemplar schemes and the support involved.

  5. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  6. Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.

    PubMed

    Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong

    2016-06-01

    Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be applied directly to very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of an FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters of the hidden nodes are generated randomly, independent of the training set. Moreover, learning the parameters of the output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. Like most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: its maximal training time is linear in the size of the training set, and its maximal space consumption is independent of training-set size. Experiments on regression tasks confirm these conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
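    A minimal sketch of the core idea of randomly generated hidden-node parameters with a learned linear output layer, using closed-form ridge regression as a simple stand-in for the paper's CCMEB-based solver (the data and network sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_hidden_features(X, n_hidden=100):
    """Hidden-node weights drawn randomly, independent of the training data."""
    W = rng.normal(scale=4.0, size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return np.tanh(X @ W + b)

# Only the output layer is learned (closed-form ridge regression here).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
H = random_hidden_features(X)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)
mse = float(np.mean((H @ beta - y) ** 2))
```

    Because only the linear output weights are fitted, the training cost is dominated by a single linear solve rather than iterative backpropagation through the whole network.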

  7. VO2 responses to intermittent swimming sets at velocity associated with VO2max.

    PubMed

    Libicz, Sebastien; Roels, Belle; Millet, Gregoire P

    2005-10-01

    While the physiological adaptations following endurance training are relatively well understood, in swimming there is a dearth of knowledge regarding the metabolic responses to interval training (IT). The hypothesis tested predicted that two different endurance swimming IT sets would induce differences in the total time the subjects swam at a high percentage of maximal oxygen consumption (VO(2)max). Ten trained triathletes underwent an incremental test to exhaustion in swimming so that the swimming velocity associated with VO(2)max (vVO(2)max) could be determined. This was followed by a maximal 400-m test and two intermittent sets at vVO(2)max: (a) 16 x 50 m with 15-s rest (IT(50)); (b) 8 x 100 m with 30-s rest (IT(100)). The times sustained above 95% VO(2)max (68.50 +/- 62.69 vs. 145.01 +/- 165.91 sec) and above 95% HRmax (146.67 +/- 131.99 vs. 169.78 +/- 203.45 sec, p = 0.54) did not differ between IT(50) and IT(100) (values are mean +/- SD). In conclusion, swimming IT sets of equal time duration at vVO(2)max but of differing work-interval durations led to slightly different VO(2) and HR responses. The time spent above 95% of VO(2)max was twice as long in IT(100) as in IT(50), and a large variability between mean VO(2) and HR values was also observed.

  8. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification, which suggests the utility of embedded feature selection for this problem, given its good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
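    Feature-set stability across resampled training sets can be quantified with pairwise Jaccard overlap of the selected sets; a toy sketch using split-half resampling and a simple correlation filter (synthetic data, not the paper's selection methods):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 200, 30, 5
X = rng.normal(size=(n, d))
y = X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n)   # only features 0, 1 informative

def select_top_k(X, y, k):
    """Correlation filter: keep the k features most correlated with the target."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])

# Split-half resampling: select features on random halves, compare overlap.
sets = []
for _ in range(10):
    idx = rng.permutation(n)[: n // 2]
    sets.append(select_top_k(X[idx], y[idx], k))

jaccards = [len(a & b) / len(a | b)
            for i, a in enumerate(sets) for b in sets[i + 1:]]
stability = float(np.mean(jaccards))
```

    Truly informative features are selected on every resample, so a stability near 1 indicates a reproducible feature set, while values near 0 indicate that the selection is dominated by sampling noise.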

  9. Neural net applied to anthropological material: a methodical study on the human nasal skeleton.

    PubMed

    Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich

    2005-07-01

    A new information processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15*15 nodes was trained by 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were applied to the same data set. These parallel applications allowed comparison of the new approach to the more traditional ones. In general the classification by the neural net is in correspondence with cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected, but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The gained classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.
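    A 15 × 15-node net that projects cases onto a two-dimensional map is characteristic of a Kohonen self-organizing map; a minimal training loop of that kind (a sketch with synthetic stand-in data and parameters, not the study's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(15, 15), epochs=20, lr0=0.5, sigma0=3.0):
    """Kohonen SOM: pull the winning node and its map neighbours toward each sample."""
    gy, gx = grid
    weights = rng.normal(size=(gy, gx, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(gy), np.arange(gx), indexing="ij"))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # shrinking neighbourhood
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)
            winner = np.unravel_index(np.argmin(dist), dist.shape)
            g = np.exp(-np.sum((coords - winner) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * g[:, :, None] * (x - weights)
    return weights

# 184 "skulls" x 17 anthropological parameters (synthetic stand-in data).
data = rng.normal(size=(184, 17))
som = train_som(data)
```

    After training, each case is assigned to its best-matching node, so similar cases cluster in neighbouring map positions while unoccupied nodes remain available for intermediate forms.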

  10. Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR

    NASA Astrophysics Data System (ADS)

    Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; Leeuw, Frank-Erik d.; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram

    2018-03-01

    Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Therefore, accurate measurement of ventricle volume is vital for longitudinal studies on these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform the classical models in many imaging domains. However, the success of deep networks is dependent on manually labeled data sets, which are expensive to acquire, especially for higher dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much cheaper-to-acquire pseudo-labels (e.g., generated by other automated, less accurate methods) and still produce more accurate segmentations compared to the quality of the labels. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. Then on a large manually annotated test set, we show that the network significantly outperforms the conventional region growing algorithm which was used to produce the training labels for the network. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).

  11. Training in clinical ethics consultation: the Washington Hospital Center course.

    PubMed

    Spike, Jeffrey P

    2012-01-01

    How can one be trained to enter the evolving field of clinical ethics consultation? The classroom is not the proper place to teach clinical ethics consultation; it is best done in a clinical setting. The author maps the elements that might be included in an apprenticeship, and sets out propositions for debate regarding the training needed for clinical ethics consultants and directors of clinical ethics consultation services. I was invited to be an observer of the first Intensive Course in Clinical Ethics at the Washington Hospital Center (WHC). I had no input into the planning. Having been present at a meeting of the Clinical Ethics Consultation Affinity Group of the American Society of Bioethics and Humanities (ASBH) when the issue of a lack of training programs was discussed, I was acutely aware of the need. Knowing how popular the various four-day intensive courses in bioethics have been, held at Georgetown University first, and then in Seattle and locations in the Midwest, it seemed time to have a four-day intensive course that was devoted to clinical ethics. The difference between bioethics and clinical ethics is substantial and largely unappreciated by those in bioethics. So when the WHC team agreed to take on the task of offering an intensive in clinical ethics, it was an important step for the field.

  12. Spike-train communities: finding groups of similar spike trains.

    PubMed

    Humphries, Mark D

    2011-02-09

    Identifying similar spike-train patterns is a key element in understanding neural coding and computation. For single neurons, similar spike patterns evoked by stimuli are evidence of common coding. Across multiple neurons, similar spike trains indicate potential cell assemblies. As recording technology advances, so does the urgent need for grouping methods to make sense of large-scale datasets of spike trains. Existing methods require specifying the number of groups in advance, limiting their use in exploratory analyses. I derive a new method from network theory that solves this key difficulty: it self-determines the maximum number of groups in any set of spike trains, and groups them to maximize intragroup similarity. This method brings us revealing new insights into the encoding of aversive stimuli by dopaminergic neurons, and the organization of spontaneous neural activity in cortex. I show that the characteristic pause response of a rat's dopaminergic neuron depends on the state of the superior colliculus: when it is inactive, aversive stimuli invoke a single pattern of dopaminergic neuron spiking; when active, multiple patterns occur, yet the spike timing in each is reliable. In spontaneous multineuron activity from the cortex of anesthetized cat, I show the existence of neural ensembles that evolve in membership and characteristic timescale of organization during global slow oscillations. I validate these findings by showing that the method both is remarkably reliable at detecting known groups and can detect large-scale organization of dynamics in a model of the striatum.
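    The method's key property is that the number of groups is determined from the data rather than specified in advance. A toy illustration of that property, using binned spike counts, cosine similarity, and connected components of a thresholded similarity graph (a much simpler stand-in for Humphries' modularity-based method; the spike times are invented):

```python
def bin_counts(spikes, t_max, width):
    """Histogram spike times into fixed-width bins."""
    nbins = int(t_max / width)
    counts = [0] * nbins
    for t in spikes:
        counts[min(int(t / width), nbins - 1)] += 1
    return counts

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def group_trains(trains, t_max=10.0, width=1.0, threshold=0.8):
    """Connected components of the thresholded similarity graph;
    the number of groups falls out of the data."""
    vecs = [bin_counts(s, t_max, width) for s in trains]
    n = len(vecs)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(vecs[i], vecs[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two firing patterns: early-burst trains vs late-burst trains.
early = [[0.1, 0.2, 0.3, 1.1], [0.15, 0.25, 0.35, 1.2]]
late = [[8.1, 8.9, 9.2], [8.2, 8.8, 9.3]]
groups = group_trains(early + late)
```

    Nothing in `group_trains` is told how many groups to expect; the two-group answer emerges from the similarity structure, which is the property the network-theoretic method provides in a principled, modularity-maximizing way.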

  13. Grid-enabled mammographic auditing and training system

    NASA Astrophysics Data System (ADS)

    Yap, M. H.; Gale, A. G.

    2008-03-01

    Effective use of new technologies to support healthcare initiatives is important and current research is moving towards implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI: Generic Infrastructures for Medical Informatics), which is concerned with the development of a secure IT infrastructure to support very widespread medical research across the country, is underway. In the UK, there are some 109 breast screening centers and a growing number of individuals (circa 650) nationally performing approximately 1.5 million screening examinations per year. At the same time, there is a serious, and ongoing, national workforce issue in screening which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. Thus there is a need to offer effective and efficient mammographic training so as to maintain high levels of screening skills. Consequently, a grid based system has been proposed which has the benefit of offering very large volumes of training cases that the mammographers can access anytime and anywhere. A database, spread geographically across three university systems, of screening cases is used as a test set of known cases. The GIMI mammography training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the cases are utilized for training in a grid-based system which has been developed. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of such a grid based approach.

  14. A Semisupervised Support Vector Machines Algorithm for BCI Systems

    PubMed Central

    Qin, Jianzhao; Li, Yuanqing; Sun, Wei

    2007-01-01

    As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
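    The semisupervised idea of bootstrapping a classifier from a small labeled set plus a large unlabeled set can be sketched as a batch-mode self-training loop, with a nearest-centroid classifier standing in for the paper's SVM (the two-Gaussian "brain-state" data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian "brain-state" classes; only 5 labeled examples each.
X0 = rng.normal(loc=-2.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, size=(100, 2))
X_lab = np.vstack([X0[:5], X1[:5]])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([X0[5:], X1[5:]])

def centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Batch-mode self-training: label the most confident unlabeled
# points, add them to the training set, and refit.
for _ in range(5):
    c = centroids(X_lab, y_lab)
    d = np.linalg.norm(X_unlab[:, None, :] - c[None], axis=2)
    pred = d.argmin(axis=1)
    conf = np.abs(d[:, 0] - d[:, 1])             # distance margin as confidence
    batch = np.argsort(conf)[-30:]               # most confident batch
    X_lab = np.vstack([X_lab, X_unlab[batch]])
    y_lab = np.concatenate([y_lab, pred[batch]])
    X_unlab = np.delete(X_unlab, batch, axis=0)

c = centroids(X_lab, y_lab)
```

    Processing the unlabeled pool in confident batches rather than all at once is the same motivation as the paper's batch-mode incremental learning: it limits refitting cost and reduces the risk of propagating early labeling mistakes.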

  15. Air pollution source identification

    NASA Technical Reports Server (NTRS)

    Fordyce, J. S.

    1975-01-01

    The techniques available for source identification are reviewed: remote sensing, injected tracers, and pollutants themselves as tracers. The use of the large number of trace elements in the ambient airborne particulate matter as a practical means of identifying sources is discussed. Trace constituents are determined by sensitive, inexpensive, nondestructive, multielement analytical methods such as instrumental neutron activation and charged particle X-ray fluorescence. The application to a large data set of pairwise correlation, the more advanced pattern recognition-cluster analysis approach with and without training sets, enrichment factors, and pollutant concentration rose displays for each element is described. It is shown that elemental constituents are related to specific source types: earth crustal, automotive, metallurgical, and more specific industries. A field-ready source identification system based on time and wind direction resolved sampling is described.
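    Enrichment factors compare an element's concentration ratio to a crustal reference element (commonly Al) in the sample against the same ratio in average crust; a minimal sketch with hypothetical concentrations:

```python
def enrichment_factor(sample, crust, element, ref="Al"):
    """EF = (X/ref)_sample / (X/ref)_crust; EF >> 1 suggests a non-crustal source."""
    return (sample[element] / sample[ref]) / (crust[element] / crust[ref])

# Hypothetical concentrations (arbitrary units), not measured values.
sample = {"Al": 50.0, "Pb": 5.0, "Fe": 40.0}
crust = {"Al": 80000.0, "Pb": 13.0, "Fe": 50000.0}

ef_pb = enrichment_factor(sample, crust, "Pb")   # strongly enriched -> pollution source
ef_fe = enrichment_factor(sample, crust, "Fe")   # near 1 -> crustal origin
```

    Elements with EF near 1 are attributed to crustal dust, while strongly enriched elements point to anthropogenic sources such as automotive or metallurgical emissions.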

  16. Young workers in the construction industry and initial OSH-training when entering work life.

    PubMed

    Holte, Kari Anne; Kjestveit, Kari

    2012-01-01

    Studies have found that young workers are at risk for injuries. The risk for accidents is high within construction, indicating that young workers may be especially vulnerable in this industry. In Norway, it is possible to enter the construction industry as a full time worker at the age of 18. The aim of this paper was to explore how young construction workers are received at their workplace with regard to OHS-training. The study was designed as a qualitative case study. Each case consisted of a young worker or apprentice (< 25 years), a colleague, the immediate superior, the OHS manager, and a safety representative in the company. The interviews were recorded and analyzed through content analysis. The results showed that there were differences between large and small companies, where large companies had more formalized routines and systems for receiving and training young workers. These routines, however, depended more on requirements set by legislators and contractors than on company size, since the legislation imposes different requirements affecting OHS.

  17. Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs

    NASA Astrophysics Data System (ADS)

    Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won

    2014-09-01

The development of synoptic sky surveys has led to a massive amount of data for which the resources needed for analysis are beyond human capabilities. To process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes in the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs.
Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.
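The voting-based outlier test described above can be sketched in a few lines. This is not the authors' pipeline: the random forest and the Bayesian network are replaced here by hand-written vote vectors and a simple entropy cutoff, which captures the same idea that an unknown object spreads its votes unusually evenly among the known classes.

```python
import numpy as np

# Per-object class-vote fractions a trained random forest might
# produce (illustrative values; rows sum to 1). Known-class objects
# get concentrated votes; an unknown object splits its votes.
train_votes = np.array([
    [0.90, 0.05, 0.03, 0.02],
    [0.07, 0.85, 0.05, 0.03],
    [0.04, 0.06, 0.86, 0.04],
    [0.10, 0.05, 0.05, 0.80],
    [0.70, 0.20, 0.05, 0.05],
    [0.05, 0.75, 0.15, 0.05],
])

# The paper fits a Bayesian network to the joint vote distribution;
# as a simplification we score each object by the entropy of its
# votes and flag objects well above the training range.
def vote_entropy(v):
    v = np.clip(v, 1e-12, 1.0)
    return float(-(v * np.log(v)).sum())

cut = max(vote_entropy(v) for v in train_votes)  # max entropy seen in training

def is_outlier(votes, margin=0.1):
    return vote_entropy(np.asarray(votes)) > cut + margin

# Uniform voting (maximally ambiguous) is flagged as anomalous.
uniform_is_odd = is_outlier([0.25, 0.25, 0.25, 0.25])
```

The entropy cutoff is only a stand-in; the Bayesian-network density gives a proper joint probability rather than a per-object heuristic.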

  19. Challenges and opportunities in building a sustainable rural primary care workforce in alignment with the Affordable Care Act: the WWAMI program as a case study.

    PubMed

    Allen, Suzanne M; Ballweg, Ruth A; Cosgrove, Ellen M; Engle, Kellie A; Robinson, Lawrence R; Rosenblatt, Roger A; Skillman, Susan M; Wenrich, Marjorie D

    2013-12-01

The authors examine the potential impact of the Patient Protection and Affordable Care Act (ACA) on a large medical education program in the Northwest United States that builds the primary care workforce for its largely rural region. The 42-year-old Washington, Wyoming, Alaska, Montana, and Idaho (WWAMI) program, hosted by the University of Washington School of Medicine, is one of the nation's most successful models for rural health training. The program has expanded training and retention of primary care health professionals for the region through medical school education, graduate medical education, a physician assistant training program, and support for practicing health professionals. The ACA and resulting accountable care organizations (ACOs) present potential challenges for rural settings and health training programs like WWAMI that focus on building the health workforce for rural and underserved populations. As more Americans acquire health coverage, more health professionals will be needed, especially in primary care. Rural locations may face increased competition for these professionals. Medical schools are expanding their positions to meet the need, but limits on graduate medical education expansion may result in a bottleneck, with insufficient residency positions for graduating students. The development of ACOs may further challenge building a rural workforce by limiting training opportunities for health professionals because of competing demands and concerns about cost, efficiency, and safety associated with training. Medical education programs like WWAMI will need to increase efforts to train primary care physicians and increase their advocacy for student programs and additional graduate medical education for rural constituents.

  20. Brain structural changes following adaptive cognitive training assessed by Tensor-Based Morphometry (TBM)

    PubMed Central

    Colom, Roberto; Hua, Xue; Martínez, Kenia; Burgaleta, Miguel; Román, Francisco J.; Gunter, Jeffrey L.; Carmona, Susanna; Jaeggi, Susanne M.; Thompson, Paul M.

    2016-01-01

Tensor-Based Morphometry (TBM) allows the automatic mapping of brain changes across time by building 3D deformation maps. This technique has been applied for tracking brain degeneration in Alzheimer's and other neurodegenerative diseases with high sensitivity and reliability. Here we applied TBM to quantify changes in brain structure after completion of a challenging adaptive cognitive training program based on the n-back task. Twenty-six young women completed twenty-four training sessions across twelve weeks and showed, on average, large cognitive improvements. High-resolution MRI scans were obtained before and after training. The computed longitudinal deformation maps were analyzed to answer three questions: (a) Are there differential brain structural changes in the training group as compared with a matched control group? (b) Are these changes related to performance differences in the training program? (c) Are standardized changes in a set of psychological factors (fluid and crystallized intelligence, working memory, and attention control), measured before and after training, related to structural changes in the brain? Results showed (a) greater structural changes for the training group in the temporal lobe, (b) a negative correlation between these changes and performance across training sessions (the greater the structural change, the lower the cognitive performance improvements), and (c) negligible effects regarding the psychological factors measured before and after training. PMID:27477628

  1. Single- vs. Multiple-Set Strength Training in Women.

    ERIC Educational Resources Information Center

    Schlumberger, Andreas; Stec, Justyna; Schmidtbleicher, Dietmar

    2001-01-01

    Compared the effects of single- and multiple-set strength training in women with basic experience in resistance training. Both training groups had significant strength improvements in leg extension. In the seated bench press, only the three-set group showed a significant increase in maximal strength. There were higher strength gains overall in the…

  2. Handwritten word preprocessing for database adaptation

    NASA Astrophysics Data System (ADS)

    Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic

    2013-01-01

Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level, ...). Since this is often not the case in real-world scenarios, classification performance can suffer when novel data are presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization, are considered. The advantage of such an approach is that the existing recognition system trained on controlled data can be re-used. We conduct several experiments with the Rimes 2011 word database and with a real-world database, adapting either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Accuracy of data set adaptation is increased by 2% to 3% in absolute value over no adaptation.
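Of the two preprocessing steps named above, pixel intensity normalization lends itself to a short sketch. The linear min-max stretch below is one common choice; the exact normalization used by the authors is not specified here.

```python
import numpy as np

# Pixel intensity normalization (one plausible form): linearly rescale
# a grayscale word image so its intensities span the full [0, 255]
# range, making faded real-world scans resemble controlled-database
# images in contrast.
def normalize_intensity(img):
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).round().astype(np.uint8)

faded = np.array([[100, 120], [140, 160]])   # low-contrast scan (toy 2x2 image)
out = normalize_intensity(faded)             # now spans 0..255
```

Applying the same transform to either the training or the test set mirrors the paper's idea of adapting one set toward the other.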

  3. A deep learning and novelty detection framework for rapid phenotyping in high-content screening

    PubMed Central

    Sommer, Christoph; Hoefler, Rudolf; Samwer, Matthias; Gerlich, Daniel W.

    2017-01-01

    Supervised machine learning is a powerful and widely used method for analyzing high-content screening data. Despite its accuracy, efficiency, and versatility, supervised machine learning has drawbacks, most notably its dependence on a priori knowledge of expected phenotypes and time-consuming classifier training. We provide a solution to these limitations with CellCognition Explorer, a generic novelty detection and deep learning framework. Application to several large-scale screening data sets on nuclear and mitotic cell morphologies demonstrates that CellCognition Explorer enables discovery of rare phenotypes without user training, which has broad implications for improved assay development in high-content screening. PMID:28954863

  4. Collaboration in a competitive healthcare system: negotiation 101 for clinicians.

    PubMed

    Clay-Williams, Robyn; Johnson, Andrew; Lane, Paul; Li, Zhicheng; Camilleri, Lauren; Winata, Teresa; Klug, Michael

    2018-04-09

    Purpose The purpose of this paper is to evaluate the effectiveness of negotiation training delivered to senior clinicians, managers and executives, by exploring whether staff members implemented negotiation skills in their workplace following the training, and if so, how and when. Design/methodology/approach This is a qualitative study involving face-to-face interviews with 18 senior clinicians, managers and executives who completed a two-day intensive negotiation skills training course. Interviews were transcribed verbatim, and inductive interpretive analysis techniques were used to identify common themes. Research setting was a large tertiary care hospital and health service in regional Australia. Findings Participants generally reported positive affective and utility reactions to the training, and attempted to implement at least some of the skills in the workplace. The main enabler was provision of a Negotiation Toolkit to assist in preparing and conducting negotiations. The main barrier was lack of time to reflect on the principles and prepare for upcoming negotiations. Participants reported that ongoing skill development and retention were not adequately addressed; suggestions for improving sustainability included provision of refresher training and mentoring. Research limitations/implications Limitations include self-reported data, and interview questions positively elicited examples of training translation. Practical implications The training was well matched to participant needs, with negotiation a common and daily activity for most healthcare professionals. Implementation of the skills showed potential for improving collaboration and problem solving in the workplace. Practical examples of how the skills were used in the workplace are provided. 
Originality/value To the authors' knowledge, this is the first international study aimed at evaluating the effectiveness of an integrative bargaining negotiation training program targeting executives, senior clinicians and management staff in a large healthcare organization.

  5. Thinking Outside of Outpatient: Underutilized Settings for Psychotherapy Education.

    PubMed

    Blumenshine, Philip; Lenet, Alison E; Havel, Lauren K; Arbuckle, Melissa R; Cabaniss, Deborah L

    2017-02-01

Although psychiatry residents are expected to achieve competency in conducting psychotherapy during their training, it is unclear how psychotherapy teaching is integrated across diverse clinical settings. Between January and March 2015, 177 psychiatry residency training directors were sent a survey asking about psychotherapy training practices in their programs, as well as perceived barriers to psychotherapy teaching. Eighty-two training directors (44%) completed the survey. While 95% indicated that psychotherapy was a formal learning objective for outpatient clinic rotations, 50% or fewer noted that psychotherapy was a learning objective in other settings. Most program directors would like to see psychotherapy training included (particularly supportive psychotherapy and cognitive behavioral therapy) in inpatient (82%) and consultation-liaison (57%) settings. The most common barriers identified to teaching psychotherapy in these settings were time and perceived inadequate staff training and interest. Non-outpatient rotations appear to be an underutilized setting for psychotherapy teaching.

  6. Systematic Review of Voluntary Participation in Simulation-Based Laparoscopic Skills Training: Motivators and Barriers for Surgical Trainee Attendance.

    PubMed

    Gostlow, Hannah; Marlow, Nicholas; Babidge, Wendy; Maddern, Guy

To examine and report on evidence relating to surgical trainees' voluntary participation in simulation-based laparoscopic skills training; specifically, the underlying motivators, enablers, and barriers faced by surgical trainees with regard to attending training sessions on a regular basis. A systematic search of the literature (PubMed; CINAHL; EMBASE; Cochrane Collaboration) was conducted between May and July 2015. Studies were included if they reported on surgical trainee attendance at voluntary, simulation-based laparoscopic skills training sessions, in addition to qualitative data regarding participants' perceived barriers and motivators influencing their decision to attend such training. Factors affecting a trainee's motivation were categorized as either intrinsic (internal) or extrinsic (external). Two randomised controlled trials and seven case series met our inclusion criteria. Included studies were small and generally of poor quality. Overall, voluntary simulation-based laparoscopic skills training was not well attended. Intrinsic motivators included clearly defined personal performance goals and relevance to clinical practice. Extrinsic motivators included clinical responsibilities and available free time, simulator location close to clinical training, and setting obligatory assessments or mandated training sessions. The effect of each of these factors was variable, and largely dependent on the individual trainee. The greatest reported barrier to attending voluntary training was the lack of available free time. Although data quality is limited, it can be seen that providing unrestricted access to simulator equipment is not effective in motivating surgical trainees to voluntarily participate in simulation-based laparoscopic skills training. To successfully encourage participation, consideration needs to be given to the factors influencing motivation to attend training. Further research, including better-designed randomised controlled trials and large-scale surveys, is required to provide more definitive answers as to the degree to which various incentives influence trainees' motivations and actual attendance rates. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  7. Seismic waveform inversion using neural networks

    NASA Astrophysics Data System (ADS)

    De Wit, R. W.; Trampert, J.

    2012-12-01

    Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. 
We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.
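Once trained, an MDN's outputs define the posterior pdf directly: the network emits mixture weights, means, and widths, and the conditional pdf is their weighted Gaussian sum. The evaluation step can be sketched as follows; the mixture parameters are illustrative stand-ins for real network outputs, and network training itself is omitted.

```python
import numpy as np

# Evaluate an MDN posterior: p(t | x) = sum_i alpha_i * N(t; mu_i, sigma_i^2),
# where alpha, mu, sigma are the network's outputs for one input seismogram.
def mdn_pdf(t, alpha, mu, sigma):
    alpha, mu, sigma = map(np.asarray, (alpha, mu, sigma))
    norm = 1.0 / (np.sqrt(2 * np.pi) * sigma)          # Gaussian normalizers
    kernels = norm * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return float((alpha * kernels).sum())

# Hypothetical network outputs for one seismogram (two-kernel mixture):
alpha, mu, sigma = [0.7, 0.3], [4.5, 6.0], [0.2, 0.5]
p = mdn_pdf(4.5, alpha, mu, sigma)   # posterior density at a candidate value
```

Scanning `mdn_pdf` over a grid of parameter values recovers the full marginal posterior for that Earth-structure parameter.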

  8. Large Scale System Defense

    DTIC Science & Technology

    2008-10-01

Topics include anomaly detectors (ADs); Aeolos, a distributed intrusion detection and event correlation infrastructure; and STAND, a training-set sanitization technique applicable to ADs. Summary of findings: (a) automatic patch generation, (b) better patch management, (c) artificial diversity, and (d) distributed anomaly detection.

  9. Establishing Fire Safety Skills Using Behavioral Skills Training

    ERIC Educational Resources Information Center

    Houvouras, Andrew J., IV; Harvey, Mark T.

    2014-01-01

    The use of behavioral skills training (BST) to educate 3 adolescent boys on the risks of lighters and fire setting was evaluated using in situ assessment in a school setting. Two participants had a history of fire setting. After training, all participants adhered to established rules: (a) avoid a deactivated lighter, (b) leave the training area,…

  10. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China

    PubMed Central

    Li, Xiaomeng; Yang, Zhuo

    2017-01-01

As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of the train-sets, this paper proposes a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach, comprising segment generation and segment combination, was designed to solve this model. To verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a train diagram of 174 trips. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (Two-Stage), and one train-set can be saved while fulfilling the same transportation tasks. The approach proposed in this study is faster and more stable than traditional ones; with it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933
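The circulation problem itself is easy to illustrate with a toy baseline: chain trips greedily so that each train-set's next departure follows its previous arrival at the same station after a minimum turnaround, and count how many chains (train-sets) are needed. The paper's two-stage segment method is more elaborate; the trip data and turnaround time below are invented.

```python
# Toy train-set circulation baseline (not the paper's method).
# Times are minutes after midnight; stations are 'B' and 'T'.
TURNAROUND = 20  # assumed minimum minutes between arrival and next departure

trips = [  # (dep_station, dep_time, arr_station, arr_time) - invented diagram
    ("B", 480, "T", 515), ("T", 545, "B", 580),
    ("B", 600, "T", 635), ("T", 660, "B", 695),
]

def chain_trips(trips):
    trips = sorted(trips, key=lambda t: t[1])  # by departure time
    chains = []                                # each chain = one train-set's work
    for trip in trips:
        for chain in chains:
            last = chain[-1]
            # feasible connection: same station, enough turnaround time
            if last[2] == trip[0] and trip[1] >= last[3] + TURNAROUND:
                chain.append(trip)
                break
        else:
            chains.append([trip])              # no feasible set: add a new one
    return chains

chains = chain_trips(trips)
n_trainsets = len(chains)   # all four toy trips fit one train-set
```

Minimizing total connection time over all feasible chainings, rather than chaining greedily, is what the paper's optimization model and two-stage solver address.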

  11. Comprehensive simulation-enhanced training curriculum for an advanced minimally invasive procedure: a randomized controlled trial.

    PubMed

    Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P

    2017-05-01

There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. The aims were: (1) to develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to a live porcine model; and (3) to compare training outcomes of the SET curriculum group and a chief resident group. University. This prospective, single-blinded, randomized, controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skills in a live anesthetized porcine model in the OR were the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of the SET curriculum group were also compared to a group of 12 chief surgery residents. The SET curriculum group demonstrated superior psychomotor skills in a live porcine model (56 [47-62] versus 44 [38-53], P<.05) and superior nontechnical skills (41 [38-45] versus 31 [24-40], P<.01) compared with the conventional training group. The SET curriculum group and conventional training group demonstrated equivalent knowledge (14 [12-15] versus 13 [11-15], P = .47). The SET curriculum group demonstrated equivalent psychomotor skills in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). The SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P<.05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P<.01) compared with the chief resident group. Completion of the SET curriculum resulted in superior training outcomes compared with conventional surgery training. Implementation of the SET curriculum can standardize training for an advanced minimally invasive procedure and can ensure that comprehensive proficiency milestones are met before exposure to patient care. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  12. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

The excerpt compares three evaluation procedures for relational classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). NCV eliminates overlap between test sets altogether: it samples k disjoint test sets for evaluation, and in each fold draws a training set of (propLabeled * S) nodes from the training pool, takes the inference set to be the network minus the training set, and adds the triple (trainSet, testSet, inferenceSet) to the output F. NCV thereby addresses the evaluation bias caused by overlapping test sets.

  13. Computer-enhanced laparoscopic training system (CELTS): bridging the gap.

    PubMed

    Stylopoulos, N; Cotin, S; Maithel, S K; Ottensmeye, M; Jackson, P G; Bardsley, R S; Neumann, P F; Rattner, D W; Dawson, S L

    2004-05-01

    There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS). CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed. The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance. CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. 
This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a standardized scoring system can be designed.
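The standardized z-score scoring described in this record can be sketched as follows. The metric names, expert reference statistics, and sign convention are illustrative assumptions, not CELTS's actual configuration; the idea is only that each kinematic metric is standardized against expert data and the z-scores are combined into one task-independent score.

```python
# CELTS-style standardized scoring (a sketch with invented numbers).
EXPERT_STATS = {            # metric -> (expert mean, expert std), illustrative
    "path_length_cm": (120.0, 15.0),
    "smoothness":     (0.80, 0.05),
    "depth_error_mm": (2.0, 0.5),
}
HIGHER_IS_BETTER = {"path_length_cm": False, "smoothness": True,
                    "depth_error_mm": False}

def global_score(measured):
    """Average of per-metric z-scores, oriented so higher is better."""
    zs = []
    for metric, value in measured.items():
        mean, std = EXPERT_STATS[metric]
        z = (value - mean) / std
        zs.append(z if HIGHER_IS_BETTER[metric] else -z)
    return sum(zs) / len(zs)

trainee = {"path_length_cm": 165.0, "smoothness": 0.60, "depth_error_mm": 4.0}
expert = {"path_length_cm": 118.0, "smoothness": 0.82, "depth_error_mm": 1.9}
```

A fixed cutoff on this score (near zero here, by construction) would play the role of the threshold that separated experts from trainees in the validation exercises.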

  14. Reading the lesson: eliciting requirements for a mammography training application

    NASA Astrophysics Data System (ADS)

    Hartswood, M.; Blot, L.; Taylor, P.; Anderson, S.; Procter, R.; Wilkinson, L.; Smart, L.

    2009-02-01

    Demonstrations of a prototype training tool were used to elicit requirements for an intelligent training system for screening mammography. The prototype allowed senior radiologists (mentors) to select cases from a distributed database of images to meet the specific training requirements of junior colleagues (trainees) and then provided automated feedback in response to trainees' attempts at interpretation. The tool was demonstrated to radiologists and radiographers working in the breast screening service at four evaluation sessions. Participants highlighted ease of selecting cases that can deliver specific learning objectives as important for delivering effective training. To usefully structure a large data set of training images we undertook a classification exercise of mentor authored free text 'learning points' attached to training case obtained from two screening centres (n=333, n=129 respectively). We were able to adduce a hierarchy of abstract categories representing classes of lesson that groups of cases were intended to convey (e.g. Temporal change, Misleading juxtapositions, Position of lesion, Typical/Atypical presentation, and so on). In this paper we present the method used to devise this classification, the classification scheme itself, initial user-feedback, and our plans to incorporated it into a software tool to aid case selection.

  15. Layperson training for cardiopulmonary resuscitation: when less is better.

    PubMed

    Roppolo, Lynn P; Saunders, Timothy; Pepe, Paul E; Idris, Ahamed H

    2007-06-01

    Basic cardiopulmonary resuscitation, including use of automated external defibrillators, unequivocally saves lives. However, even when motivated, those wishing to acquire training traditionally have faced a myriad of barriers including the typical time commitment (3-4 h) and the number of certified instructors and equipment caches required. The recent introduction of innovative video-based self-instruction, utilizing individualized inflatable manikins, provides an important breakthrough in cardiopulmonary-resuscitation training. Definitive studies now show that many dozens of persons can be trained simultaneously to perform basic cardiopulmonary resuscitation, including appropriate use of an automated external defibrillator, in less than 30 min. Such training not only requires much less labor intensity and avoids the need for multiple certified instructors, but also, because it is largely focused on longer and more repetitious performance of skills, these life-saving lessons can be retained for long periods of time. Simpler to set-up and implement, the half-hour video-based self-instruction makes it easier for employers, churches, civic groups, school systems and at-risk persons at home to implement such training and it will likely facilitate more frequent re-training. It is now hoped that the ultimate benefit will be more lives saved in communities worldwide.

  16. 3D multimodal MRI brain glioma tumor and edema segmentation: a graph cut distribution matching approach.

    PubMed

    Njeh, Ines; Sallemi, Lamia; Ayed, Ismail Ben; Chtourou, Khalil; Lehericy, Stephane; Galanaud, Damien; Hamida, Ahmed Ben

    2015-03-01

    This study investigates a fast distribution-matching, data-driven algorithm for 3D multimodal MRI brain glioma tumor and edema segmentation in different modalities. We learn non-parametric model distributions which characterize the normal regions in the current data. Then, we state our segmentation problems as the optimization of several cost functions of the same form, each containing two terms: (i) a distribution-matching prior, which evaluates a global similarity between distributions, and (ii) a smoothness prior to avoid the occurrence of small, isolated regions in the solution. Obtained following recent bound-relaxation results, the optima of the cost functions yield the complement of the tumor region or edema region in nearly real time. Based on global rather than pixel-wise information, the proposed algorithm does not require external learning from a large, manually segmented training set, as existing methods do. Therefore, the ensuing results are independent of the choice of a training set. Quantitative evaluations over the publicly available training and testing data set from the MICCAI multimodal brain tumor segmentation challenge (BraTS 2012) demonstrated that our algorithm yields a highly competitive performance for complete edema and tumor segmentation among nine existing competing methods, with a short execution time (less than 0.5 s per image). Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. The 2-degree Field Lensing Survey: photometric redshifts from a large new training sample to r < 19.5

    NASA Astrophysics Data System (ADS)

    Wolf, C.; Johnson, A. S.; Bilicki, M.; Blake, C.; Amon, A.; Erben, T.; Glazebrook, K.; Heymans, C.; Hildebrandt, H.; Joudaki, S.; Klaes, D.; Kuijken, K.; Lidman, C.; Marin, F.; Parkinson, D.; Poole, G.

    2017-04-01

    We present a new training set for estimating empirical photometric redshifts of galaxies, which was created as part of the 2-degree Field Lensing Survey project. This training set is located in a ~700 deg² area of the Kilo-Degree Survey South field and is randomly selected and nearly complete at r < 19.5. We investigate the photometric redshift performance obtained with ugriz photometry from VST-ATLAS and W1/W2 from WISE, based on several empirical and template methods. The best redshift errors are obtained with kernel-density estimation (KDE), as are the lowest biases, which are consistent with zero within statistical noise. The 68th percentiles of the redshift scatter for magnitude-limited samples at r < (15.5, 17.5, 19.5) are (0.014, 0.017, 0.028). In this magnitude range, there are no known ambiguities in the colour-redshift map, consistent with a small rate of redshift outliers. In the fainter regime, the KDE method produces p(z) estimates per galaxy that represent unbiased and accurate redshift frequency expectations. The p(z) sum over any subsample is consistent with the true redshift frequency plus Poisson noise. Further improvements in redshift precision at r < 20 would mostly be expected from filter sets with narrower passbands to increase the sensitivity of colours to small changes in redshift.

  18. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685

  19. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
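    The kernel-density/KNN family of estimators that the paper's generative model generalizes can be illustrated with a minimal one-dimensional sketch. This is hypothetical illustrative code, not the authors' implementation: the density at a query point is approximated as k divided by the volume spanned by the k nearest samples.

```python
def knn_density(query, data, k=3):
    """k-nearest-neighbour density estimate in 1-D (illustrative sketch).

    The density at `query` is approximated as k / (n * volume), where the
    'volume' is the width of the smallest interval centred on `query`
    that contains its k nearest samples.
    """
    n = len(data)
    dists = sorted(abs(x - query) for x in data)
    radius = dists[k - 1]      # distance to the k-th nearest neighbour
    volume = 2 * radius        # interval length in 1-D
    return k / (n * volume) if volume > 0 else float("inf")

# Usage: the estimate is higher where samples cluster.
samples = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
dense = knn_density(0.1, samples, k=2)   # inside a cluster
sparse = knn_density(0.5, samples, k=2)  # in the gap between clusters
```

    Unlike a fixed-bandwidth kernel estimator, the neighbourhood radius adapts to the local sample density, which is what makes this style of estimator usable for on-the-fly queries without a training phase.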

  20. Smartphone-Based System for Learning and Inferring Hearing Aid Settings.

    PubMed

    Aldaz, Gabriel; Puria, Sunil; Leifer, Larry J

    2016-10-01

    Previous research has shown that hearing aid wearers can successfully self-train their instruments' gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the "untrained system," that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The "trained system" first learned each individual's preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). The aims were to develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings and to determine whether user study participants showed a preference for trained over untrained system settings. An experimental within-participants study was conducted. Participants used a prototype hearing system (comprising two hearing aids, an Android smartphone, and a body-worn gateway device) for ~6 weeks. Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) participated (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. Participants were fitted and instructed to perform daily comparisons of settings ("listening evaluations") through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC).
In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone-including environmental sound classification, sound level, and location-to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system ("trained settings") to those suggested by the hearing aids' untrained system ("untrained settings"). We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone. American Academy of Audiology

  1. How well does multiple OCR error correction generalize?

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the archive patron's need for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: (1) demonstrating the applicability of novel methods for correcting optical character recognition (OCR) errors on disparate data sets, including a new synthetic training set; (2) enhancing the correction algorithm with novel features; and (3) assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.

  2. Women in medicine: a four-nation comparison.

    PubMed

    McMurray, Julia E; Cohen, May; Angus, Graham; Harding, John; Gavel, Paul; Horvath, John; Paice, Elisabeth; Schmittdiel, Julie; Grumbach, Kevin

    2002-01-01

    To determine the impact of increasing numbers of women in medicine on the physician work force in Australia, Canada, England, and the United States, we collected data on physician work force issues from professional organizations and government agencies in each of the 4 nations. Women now make up nearly half of all medical students in all 4 countries and 20% to 30% of all practicing physicians. Most are concentrated in primary care specialties and obstetrics/gynecology and are underrepresented in surgical training programs. Women physicians practice largely in urban settings and work 7 to 11 fewer hours per week than men do, for lower pay. Twenty percent to 50% of women primary care physicians are in part-time practice. Work force planners should anticipate larger decreases in physician full-time equivalencies than previously expected because of the increased number of women in practice and their tendency to work fewer hours and to be in part-time practice, especially in primary care. Responses to these changes vary among the 4 countries. Canada has developed a detailed database of work/family issues; England has pioneered flexible training schemes and reentry training programs; and Australia has joined consumers, physicians, and educators in improving training opportunities and the work climate for women. Improved access to surgical and subspecialty fields, training and practice settings that provide balance for work/family issues, and improved recruitment and retention of women physicians in rural areas will increase the contributions of women physicians.

  3. Training forward surgical teams for deployment: the US Army Trauma Training Center.

    PubMed

    Valdiri, Linda A; Andrews-Arce, Virginia E; Seery, Jason M

    2015-04-01

    Since the late 1980s, the US Army has been deploying forward surgical teams to the most intense areas of conflict to care for personnel injured in combat. The forward surgical team is a 20-person medical team that is highly mobile, extremely agile, and has relatively little need of outside support to perform its surgical mission. In order to perform this mission, however, team training and trauma training are required. The large majority of these teams do not routinely train together to provide patient care, and that training currently takes place at the US Army Trauma Training Center (ATTC). The training staff of the ATTC is a specially selected 10-person team made up of active duty personnel from the Army Medical Department assigned to the University of Miami/Jackson Memorial Hospital Ryder Trauma Center in Miami, Florida. The ATTC team of instructors trains as many as 11 forward surgical teams in 2-week rotations per year so that the teams are ready to perform their mission in a deployed setting. Since the first forward surgical team was trained at the ATTC in January 2002, more than 112 forward surgical teams and other similar-sized Department of Defense forward resuscitative and surgical units have rotated through trauma training at the Ryder Trauma Center in preparation for deployment overseas. ©2015 American Association of Critical-Care Nurses.

  4. Dissociable effects of game elements on motivation and cognition in a task-switching training in middle childhood

    PubMed Central

    Dörrenbächer, Sandra; Müller, Philipp M.; Tröger, Johannes; Kray, Jutta

    2014-01-01

    Although motivational reinforcers are often used to enhance the attractiveness of trainings of cognitive control in children, little is known about how such motivational manipulations of the setting contribute to separate gains in motivation and cognitive-control performance. Here we provide a framework for systematically investigating the impact of a motivational video-game setting on the training motivation, the task performance, and the transfer success in a task-switching training in children in middle childhood (8–11 years of age). We manipulated both the type of training (low-demanding/single-task training vs. high-demanding/task-switching training) and the motivational setting (low-motivational/without video-game elements vs. high-motivational/with video-game elements) independently of one another. The results indicated that the addition of game elements to a training setting enhanced the intrinsic interest in task practice, independently of the cognitive demands placed by the training type. In the task-switching group, the high-motivational training setting led to an additional enhancement of task and switching performance during the training phase right from the outset. These motivation-induced benefits projected onto the switching performance in a switching situation different from the trained one (near-transfer measurement). However, in structurally dissimilar cognitive tasks (far-transfer measurement), the motivational gains only transferred to the response dynamics (speed of processing). Hence, the motivational setting clearly had a positive impact on the training motivation and on the paradigm-specific task-switching abilities; it did not, however, consistently generalize to broad cognitive processes. These findings shed new light on the interplay of motivation and cognition in childhood and may help to refine guidelines for designing adequate training interventions. PMID:25431564

  5. Identification of consensus biomarkers for predicting non-genotoxic hepatocarcinogens

    PubMed Central

    Huang, Shan-Han; Tung, Chun-Wei

    2017-01-01

    The assessment of non-genotoxic hepatocarcinogens (NGHCs) currently relies on two-year rodent bioassays. Toxicogenomics biomarkers provide a potential alternative method for the prioritization of NGHCs that could be useful for risk assessment. However, previous studies using inconsistently classified chemicals as the training set and a single microarray dataset found no consensus biomarkers. In this study, 4 consensus biomarkers of A2m, Ca3, Cxcl1, and Cyp8b1 were identified from four large-scale microarray datasets of the one-day single maximum tolerated dose and a large set of chemicals without inconsistent classifications. Machine learning techniques were subsequently applied to develop prediction models for NGHCs. The final bagging decision tree models were constructed with an average AUC performance of 0.803 for an independent test. A set of 16 chemicals with controversial classifications were reclassified according to the consensus biomarkers. The developed prediction models and identified consensus biomarkers are expected to be potential alternative methods for prioritization of NGHCs for further experimental validation. PMID:28117354

  6. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    PubMed

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292,050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order-of-magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
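    The screening-protocol idea above, a cheap classifier ranks the library and only a small fraction is passed to expensive simulation, can be sketched generically. All names below are hypothetical stand-ins; `classifier_score` represents whatever fast QSPR prediction is available:

```python
def flag_for_screening(candidates, classifier_score, budget_fraction=0.10):
    """Rank candidates by a cheap classifier score and flag only the top
    fraction for compute-intensive screening (generic sketch of the
    high-throughput protocol; not the authors' code)."""
    ranked = sorted(candidates, key=classifier_score, reverse=True)
    cutoff = max(1, int(len(ranked) * budget_fraction))
    return ranked[:cutoff]

# Toy scores standing in for QSPR-predicted CO2 uptake.
scores = {"MOF-A": 0.9, "MOF-B": 0.2, "MOF-C": 0.7, "MOF-D": 0.1}
flagged = flag_for_screening(list(scores), scores.get, budget_fraction=0.5)
```

    With a 10% budget on a 292,050-structure library, only ~29,000 candidates would reach the expensive stage, which is where the order-of-magnitude saving in compute time comes from.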

  7. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    PubMed

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. These proteins share less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods that allocate putative members to this family on the basis of sequence similarity are also far from adequate, because of the low sequence similarity among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction, using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, the prediction performance can be improved by balancing the training set, as imbalanced data sets tend to produce a prediction bias towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability, without any classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns, and then created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters.
Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K-nearest-neighbour algorithm (which produced greater specificity) to achieve better predictive performance than that of the individual base classifiers. The performance of the learned models trained on the K-means-preprocessed training set is far better than that of models trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set, and a sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results establish that diversifying the training set improves the performance of predictive models through superior generalization ability, and that balancing the training set improves prediction accuracy. For smaller data sets, unsupervised K-means-based sampling can be a more effective technique for increasing generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
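    The cluster-then-sample idea, cluster the input patterns with K-means, then draw an equal number of instances from each cluster, can be sketched in one dimension. This is an illustrative sketch under simplifying assumptions (1-D data, deterministic evenly spaced initialization), not the authors' implementation:

```python
import random

def kmeans_1d(points, k, iters=20):
    """Plain 1-D k-means with deterministic initialization (sketch)."""
    pts = sorted(points)
    # init: k evenly spaced samples across the sorted data
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def balanced_training_set(points, k, per_cluster, seed=0):
    """Draw an equal number of instances from each cluster so the
    training set is both diversified (covers all regions of the input
    space) and balanced (no cluster dominates)."""
    rng = random.Random(seed)
    sample = []
    for cluster in kmeans_1d(points, k):
        sample.extend(rng.sample(cluster, min(per_cluster, len(cluster))))
    return sample

# Three well-separated regions; two instances drawn from each.
data = [0.1, 0.2, 0.15, 5.0, 5.1, 5.2, 9.8, 9.9, 10.0]
train = balanced_training_set(data, k=3, per_cluster=2)
```

    A purely random split of `data` could easily over-sample one region; sampling per cluster guarantees every region of the input space is represented, which is the generalization benefit the abstract describes.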

  8. The UXO Classification Demonstration at San Luis Obispo, CA

    DTIC Science & Technology

    2010-09-01

    SIG also used active learning [12]. Active learning, an alternative approach for constructing a training set, is used in conjunction with either supervised or semi-supervised learning; the optimized algorithm was applied to only the unlabeled data in the test set.

  9. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
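    The Kennard-Stone algorithm named above is simple to state: seed the training set with the two most distant samples, then repeatedly add the candidate whose minimum distance to the already-selected set is largest, so the selection spans the descriptor space evenly. A minimal 1-D sketch (illustrative, with a brute-force distance search; not the study's code):

```python
def kennard_stone(points, n_train):
    """Kennard-Stone selection on 1-D data (illustrative sketch)."""
    dist = lambda a, b: abs(points[a] - points[b])
    remaining = set(range(len(points)))
    # seed with the pair of samples furthest apart
    i, j = max(((a, b) for a in remaining for b in remaining if a < b),
               key=lambda ab: dist(*ab))
    selected = [i, j]
    remaining -= {i, j}
    while len(selected) < n_train:
        # add the candidate farthest from everything already selected
        nxt = max(remaining,
                  key=lambda c: min(dist(c, s) for s in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected  # training-set indices; the rest form the test set

values = [0.0, 0.1, 0.5, 0.9, 1.0, 2.0]
train_idx = kennard_stone(values, n_train=4)
```

    Because each new point maximizes its distance to the current selection, the resulting training set covers the extremes and interior of the data, the "intelligent fashion" contrasted with random division in the abstract.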

  10. Teaching Health Center Graduate Medical Education Locations Predominantly Located in Federally Designated Underserved Areas.

    PubMed

    Barclift, Songhai C; Brown, Elizabeth J; Finnegan, Sean C; Cohen, Elena R; Klink, Kathleen

    2016-05-01

    Background The Teaching Health Center Graduate Medical Education (THCGME) program is an Affordable Care Act funding initiative designed to expand primary care residency training in community-based ambulatory settings. Statute suggests, but does not require, training in underserved settings. Residents who train in underserved settings are more likely to go on to practice in similar settings, and graduates more often than not practice near where they have trained. Objective The objective of this study was to describe and quantify federally designated clinical continuity training sites of the THCGME program. Methods Geographic locations of the training sites were collected and characterized as Health Professional Shortage Areas, Medically Underserved Areas/Populations, or rural areas, and were compared with the distribution of Centers for Medicare and Medicaid Services (CMS)-funded training positions. Results More than half of the teaching health centers (57%) are located in states that are in the 4 quintiles with the lowest CMS-funded resident-to-population ratio. Of the 109 training sites identified, more than 70% are located in federally designated high-need areas. Conclusions The THCGME program is a model that funds residency training in community-based ambulatory settings. Statute suggests, but does not explicitly require, that training take place in underserved settings. Because the majority of the 109 clinical training sites of the 60 funded programs in 2014-2015 are located in federally designated underserved locations, the THCGME program deserves further study as a model to improve primary care distribution into high-need communities.

  11. Practice, training, and research in neuropsychology in mainland China: challenges and opportunities.

    PubMed

    Chan, Raymond C K; Wang, Ya; Wang, Yi; Cheung, Eric F C

    2016-11-01

    This is an invited paper for a special issue. The objective was to review the history, educational and training pathways, licensure and board certification, practice and compensation, and unique aspects of, or challenges faced by, neuropsychology in mainland China. Historical, scientific, and clinical literatures were reviewed and integrated. The history of neuropsychology in mainland China is traced back to the late 1930s. Educational pathways have not yet been fully formalized. Clinical practice generally occurs within rehabilitation settings, and a medical license is required. The main challenge lies in the establishment of training guidelines and the expansion of neuropsychology to meet the tremendous needs of a large nation. Although the development and status of psychology have gradually gained momentum in mainland China, the development of neuropsychology has not shown significant advancement since the late 1930s.

  12. The Training and Field Work Experiences of Community Health Workers conducting non-invasive, population-based screening for Cardiovascular Disease in Four Communities in Low and Middle-Income Settings

    PubMed Central

    Denman, Catalina A.; Montano, Carlos Mendoza; Gaziano, Thomas A.; Levitt, Naomi; Rivera-Andrade, Alvaro; Carrasco, Diana Munguía; Zulu, Jabu; Khanam, Masuma Akter; Puoane, Thandi

    2015-01-01

    Background Cardiovascular disease (CVD) is on the rise in low- and middle-income countries (LMIC) and is proving difficult to combat due to the emphasis on improving outcomes in maternal and child health and infectious diseases, against a backdrop of severe human resource and infrastructure constraints. Effective task-sharing from physicians or nurses to community health workers (CHWs) to conduct population-based screening for persons at risk has the potential to mitigate the impact of CVD on vulnerable populations. CHWs in Bangladesh, Guatemala, Mexico, and South Africa were trained to conduct non-invasive population-based screening for persons at high risk for CVD. Objective(s) The objectives of this study were to quantitatively assess the performance of CHWs during training and to qualitatively capture their training and fieldwork experiences while conducting non-invasive screening for cardiovascular disease (CVD) risk in their communities. Methods Written tests were used to assess CHWs’ acquisition of content knowledge during training, and focus group discussions were conducted to capture their training and fieldwork experiences. Results Training was effective at increasing the CHWs’ content knowledge of cardiovascular disease (CVD), and this knowledge was largely retained up to six months after the completion of field work. Common themes which need to be addressed when designing task-sharing with CHWs in chronic diseases are identified, including language, respect, and compensation. The importance of having intimate knowledge of the community receiving services, from design to implementation, is underscored. Conclusions Effective training for screening for CVD in community settings should have a strong didactic core that is supplemented with culture-specific adaptations in the delivery of instruction. The incorporation of expert and intimate knowledge of the communities themselves is critical, from the design to implementation phases of training. 
Challenges such as role definition, defining career paths, and providing adequate remuneration, must be addressed. PMID:25754566

  13. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

    We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from the theoretical to the practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared with support vector machines. For the quantitative analysis, we focus on comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.

  14. Pneumothorax detection in chest radiographs using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Aviel; Konen, Eli; Greenspan, Hayit

    2018-02-01

    This study presents a computer-assisted diagnosis system for the detection of pneumothorax (PTX) in chest radiographs based on a convolutional neural network (CNN) for pixel classification. Using a pixel classification approach allows utilization of the texture information in the local environment of each pixel while training a CNN model on millions of training patches extracted from a relatively small dataset. The proposed system uses a pre-processing step of lung field segmentation to overcome the large variability in the input images coming from a variety of imaging sources and protocols. Using the CNN classification, suspected pixel candidates are extracted within each lung segment. A post-processing step follows to remove non-physiological suspected regions and noisy connected components. The overall percentage of suspected PTX area was used as a robust global decision for the presence of PTX in each lung. The system was trained on a set of 117 chest x-ray images with ground truth segmentations of the PTX regions. The system was tested on a set of 86 images and reached a diagnostic accuracy of AUC = 0.95. Overall, preliminary results are promising and indicate the growing ability of CAD-based systems to detect findings in medical imaging at clinical-level accuracy.
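    The key trick above, turning a small annotated dataset into millions of training examples by classifying pixels via patches, can be illustrated with a minimal patch-extraction sketch. This is hypothetical illustrative code (images as 2-D lists, a toy label dictionary), not the authors' pipeline:

```python
def extract_patches(image, half, labels):
    """Extract square patches centred on labelled pixels (sketch).

    Each labelled pixel yields one (patch, label) training pair, so even
    a modest number of annotated images produces a very large patch-level
    training set. Pixels too close to the border are skipped.
    """
    h, w = len(image), len(image[0])
    pairs = []
    for (r, c), lab in labels.items():
        if half <= r < h - half and half <= c < w - half:
            patch = [row[c - half:c + half + 1]
                     for row in image[r - half:r + half + 1]]
            pairs.append((patch, lab))
    return pairs

# Toy 6x6 "image" with three labelled pixels; half=1 gives 3x3 patches.
img = [[p + 10 * q for p in range(6)] for q in range(6)]
pairs = extract_patches(img, half=1, labels={(2, 2): 1, (0, 0): 0, (3, 4): 0})
```

    In a real system the patches would be fed to the CNN as training batches; here the point is only that the number of training pairs scales with labelled pixels, not with images.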

  15. Clinicians’ Perspectives on Cognitive Therapy in Community Mental Health Settings: Implications for Training and Implementation

    PubMed Central

    Gutiérrez-Colina, Ana; Toder, Katherine; Esposito, Gregory; Barg, Frances; Castro, Frank; Beck, Aaron T.; Crits-Christoph, Paul

    2012-01-01

    Policymakers are investing significant resources in large-scale training and implementation programs for evidence-based psychological treatments (EBPTs) in public mental health systems. However, relatively little research has been conducted to understand factors that may influence the success of efforts to implement EBPTs for adult consumers of mental health services. In a formative investigation during the development of a program to implement cognitive therapy (CT) in a community mental health system, we surveyed and interviewed clinicians and clinical administrators to identify potential influences on CT implementation within their agencies. Four primary themes were identified. Two related to attitudes towards CT: (1) ability to address client needs and issues that are perceived as most central to their presenting problems, and (2) reluctance to fully implement CT. Two themes were relevant to context: (1) agency-level barriers, specifically workload and productivity concerns and reactions to change, and (2) agency-level facilitators, specifically, treatment planning requirements and openness to training. These findings provide information that can be used to develop strategies to facilitate the implementation of CT interventions for clients being treated in public-sector settings. PMID:22426739

  16. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    NASA Astrophysics Data System (ADS)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks mainly to ESA and USGS, a large volume of free Earth-observation imagery is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging task, since the land cover of a specific class may present large spatial and spectral variability, and objects may appear at different scales and orientations. In this study, we report the results of benchmarking nine machine learning algorithms, tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multilayer perceptron, multilayer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km², obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. Five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validation subsets. The classes used are: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, and (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset with k-fold cross-validation (kfold), and (iii) using all pixels from the control dataset (full). Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over these three sets of data, with ten folds used for cross-validation. Results from validation of predictions over the whole control dataset (full) show the random forests method with the highest values, the kappa index ranging from 0.55 to 0.42 for the largest and smallest numbers of training pixels, respectively. The two neural networks (multilayer perceptron and its ensemble) and the support vector machines, with the default radial basis function kernel, follow closely with comparable performance.
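    The k-fold validation scored with a kappa index, as described above, might look like this in outline; the data are synthetic stand-ins for Sentinel-2 pixels in five land-cover classes:

```python
# Hedged sketch of the kfold validation approach: 10-fold cross-validated
# predictions scored with Cohen's kappa against the reference labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

X, y = make_classification(n_samples=1500, n_features=13, n_informative=8,
                           n_classes=5, random_state=1)
pred = cross_val_predict(RandomForestClassifier(random_state=1), X, y, cv=10)
kappa = cohen_kappa_score(y, pred)
print(f"10-fold kappa: {kappa:.2f}")
```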

  17. Cost analysis of large-scale implementation of the 'Helping Babies Breathe' newborn resuscitation-training program in Tanzania.

    PubMed

    Chaudhury, Sumona; Arlington, Lauren; Brenan, Shelby; Kairuki, Allan Kaijunga; Meda, Amunga Robson; Isangula, Kahabi G; Mponzi, Victor; Bishanga, Dunstan; Thomas, Erica; Msemo, Georgina; Azayo, Mary; Molinier, Alice; Nelson, Brett D

    2016-12-01

    Helping Babies Breathe (HBB) has become the global gold standard for training birth attendants in neonatal resuscitation in low-resource settings, in an effort to reduce early newborn asphyxia and mortality. The purpose of this study was to perform a first-ever activity-based cost analysis of at-scale HBB program implementation and initial follow-up in a large region of Tanzania, and to evaluate the costs of national scale-up, as one component of a multi-method external evaluation of the implementation of HBB at scale in Tanzania. We used activity-based costing to examine budget expense data during the two-month implementation and follow-up of HBB in one of the target regions. Activity-cost centers included administrative, initial training (including resuscitation equipment), and follow-up training expenses. Sensitivity analysis was used to project the costs of countrywide expansion of the program across all mainland regions of Tanzania and to model the costs of program maintenance over one and five years following initiation. Total costs for the Mbeya Region were $202,240, with the highest proportion due to initial training and equipment (45.2%), followed by central program administration (37.2%) and follow-up visits (17.6%). Within Mbeya, 49 training sessions were undertaken, involving the training of 1,341 health providers from 336 health facilities in eight districts. To similarly expand the HBB program across the 25 regions of mainland Tanzania, the total economic cost is projected to be around $4,000,000 (around $600 per facility). Sensitivity analyses place the estimated total for initial rollout across all of Tanzania between $2,934,793 and $4,309,595. To maintain the program nationally under the current model, it is estimated to cost a further $2,019,115 for one year and $5,640,794 for five years of ongoing program support. HBB implementation is a relatively low-cost intervention with potential for high impact on perinatal mortality in resource-poor settings. The results show that nationwide expansion of this program across the range of health provision levels and regions of Tanzania would be feasible. This study provides policymakers and investors with relevant cost estimates for national rollout of this potentially life-saving neonatal intervention.
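    As a rough arithmetic check on the figures above (all input values taken from the abstract; the linear scale-up below is a naive assumption that repeats Mbeya's full cost, including one-time central administration, in every region, which is why it overshoots the reported ~$4,000,000 projection):

```python
# Back-of-envelope reproduction of the per-facility figure and a naive
# national scale-up from the reported Mbeya Region numbers.
region_cost = 202_240          # USD, total for the Mbeya Region
facilities = 336               # health facilities trained in Mbeya
regions_total = 25             # mainland Tanzania regions

per_facility = region_cost / facilities          # ~ $602, matching "~$600"
national = region_cost * regions_total           # naive linear scale-up
print(f"~${per_facility:.0f} per facility, ~${national / 1e6:.1f}M nationally")
```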

  18. Cognitive remediation in large systems of psychiatric care.

    PubMed

    Medalia, Alice; Saperstein, Alice M; Erlich, Matthew D; Sederer, Lloyd I

    2018-05-02

    Introduction: With the increasing enthusiasm to provide cognitive remediation (CR) as an evidence-based practice, questions arise as to what is involved in implementing CR in a large system of care. This article describes the first statewide implementation of CR in the USA, with the goal of documenting the implementation issues that care providers are likely to face when bringing CR services to their patients. In 2014, the New York State Office of Mental Health set up a Cognitive Health Service that could be implemented throughout the state-operated system of care. This service was intended to broadly address cognitive health, to assure that the cognitive deficits commonly associated with psychiatric illnesses are recognized and addressed, and that cognitive health is embedded in the vocabulary of wellness. It involved creating a mechanism to train staff to recognize how cognitive health could be prioritized in treatment planning as well as implementing CR in state-operated adult outpatient psychiatry clinics. By 2017, CR was available at clinics serving people with serious mental illness in 13 of 16 adult Psychiatric Centers, located in rural and urban settings throughout New York state. The embedded quality assurance program evaluation tools indicated that CR was acceptable, sustainable, and effective. Cognitive remediation can be feasibly implemented in large systems of care that provide a multilevel system of supports, a training program that educates broadly about cognitive health and specifically about the delivery of CR, and embedded, ongoing program evaluation that is linked to staff supervision.

  19. Saving lives: A meta-analysis of team training in healthcare.

    PubMed

    Hughes, Ashley M; Gregory, Megan E; Joseph, Dana L; Sonesh, Shirley C; Marlow, Shannon L; Lacerenza, Christina N; Benishek, Lauren E; King, Heidi B; Salas, Eduardo

    2016-09-01

    As the nature of work becomes more complex, teams have become necessary to ensure effective functioning within organizations. The healthcare industry is no exception. As such, the prevalence of training interventions designed to optimize teamwork in this industry has increased substantially over the last 10 years (Weaver, Dy, & Rosen, 2014). Using Kirkpatrick's (1956, 1996) training evaluation framework, we conducted a meta-analytic examination of healthcare team training to quantify its effectiveness and understand the conditions under which it is most successful. Results demonstrate that healthcare team training improves each of Kirkpatrick's criteria (reactions, learning, transfer, results; d = .37 to .89). Second, findings indicate that healthcare team training is largely robust to trainee composition, training strategy, and characteristics of the work environment, with the only exception being the reduced effectiveness of team training programs that involve feedback. As a tertiary goal, we proposed and found empirical support for a sequential model of healthcare team training where team training affects results via learning, which leads to transfer, which increases results. We find support for this sequential model in the healthcare industry (i.e., the current meta-analysis) and in training across all industries (i.e., using meta-analytic estimates from Arthur, Bennett, Edens, & Bell, 2003), suggesting the sequential benefits of training are not unique to medical teams. Ultimately, this meta-analysis supports the expanded use of team training and points toward recommendations for optimizing its effectiveness within healthcare settings. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Intensive Behavioral Treatment of Urinary Incontinence of Children with Autism Spectrum Disorders: An Archival Analysis of Procedures and Outcomes from an Outpatient Clinic

    ERIC Educational Resources Information Center

    Hanney, Nicole M.; Jostad, Candice M.; LeBlanc, Linda A.; Carr, James E.; Castile, Allison J.

    2013-01-01

    LeBlanc, Crossett, Bennett, Detweiler, and Carr (2005) described an outpatient model for conducting intensive toilet training with young children with autism using a modified Azrin and Foxx protocol. In this article, we summarize the use of the protocol in an outpatient setting and the outcomes achieved with a large sample of children with autism…

  1. STACCATO: a novel solution to supernova photometric classification with biased training sets

    NASA Astrophysics Data System (ADS)

    Revsbech, E. A.; Trotta, R.; van Dyk, D. A.

    2018-01-01

    We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone, given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernova (SN) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al., a diffusion map combined with a random forest classifier, to deal specifically with the case of biased training sets. We propose a novel method, called Synthetically Augmented Light Curve Classification (STACCATO), that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained with the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
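    The core augmentation idea, fitting a GP to a sparsely sampled light curve and drawing posterior samples as extra synthetic training curves, can be sketched as follows; the light curve, kernel, and sampling grid below are toy assumptions, not the paper's data or settings:

```python
# Minimal sketch of GP-based light-curve augmentation with hypothetical data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

t = np.array([0., 5., 10., 20., 30., 45., 60.]).reshape(-1, 1)   # days
flux = np.exp(-((t.ravel() - 15.) / 20.) ** 2)                   # toy light curve

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0),
                              alpha=1e-3, normalize_y=True).fit(t, flux)

# Draw posterior samples on a dense grid: each column is one synthetic curve
# that could be added to the training set.
t_grid = np.linspace(0., 60., 50).reshape(-1, 1)
synthetic = gp.sample_y(t_grid, n_samples=5, random_state=0)
print(synthetic.shape)
```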

  2. CMU DeepLens: deep learning for automatic image-based galaxy-galaxy strong lens finding

    NASA Astrophysics Data System (ADS)

    Lanusse, François; Ma, Quanbin; Li, Nan; Collett, Thomas E.; Li, Chun-Liang; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Póczos, Barnabás

    2018-01-01

    Galaxy-scale strong gravitational lensing provides not only a valuable probe of the dark matter distribution of massive galaxies but also valuable cosmological constraints, either by studying the population of strong lenses or by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale strongly lensed systems, fast and reliable automated lens finding methods will be essential in the era of large surveys such as the Large Synoptic Survey Telescope (LSST), Euclid, and the Wide-Field Infrared Survey Telescope. To tackle this challenge, we introduce CMU DeepLens, a new fully automated galaxy-galaxy lens finding method based on deep learning. This supervised machine learning approach requires no tuning after the training step, which itself only requires realistic image simulations of strongly lensed systems. We train and validate our model on a set of 20,000 LSST-like mock observations including a range of lensed systems of various sizes and signal-to-noise ratios (S/N). On our simulated data set, at a rejection rate of 99 per cent for non-lenses, a completeness of 90 per cent can be achieved for lenses with Einstein radii larger than 1.4 arcsec and S/N larger than 20 on individual g-band LSST exposures. Finally, we emphasize the importance of realistically complex simulations for training such machine learning methods by demonstrating that models of significantly different complexities cannot be distinguished on simpler simulations. We make our code publicly available at https://github.com/McWilliamsCenter/CMUDeepLens.
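    The evaluation criterion above, completeness (true positive rate) at a fixed 99 per cent rejection rate for non-lenses, can be read off an ROC curve; the scores below are synthetic stand-ins for classifier outputs, not DeepLens results:

```python
# Hedged sketch: completeness at 1% false-positive rate (= 99% rejection of
# non-lenses), computed from an ROC curve over simulated scores.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(5000), np.ones(500)])          # 0 = non-lens
scores = np.concatenate([rng.normal(0, 1, 5000),            # non-lens scores
                         rng.normal(3, 1, 500)])            # lens scores

fpr, tpr, _ = roc_curve(y, scores)
completeness = tpr[np.searchsorted(fpr, 0.01)]   # TPR at 1% false positives
print(f"completeness at 99% rejection: {completeness:.2f}")
```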

  3. Infection prevention and control training and capacity building during the Ebola epidemic in Guinea

    PubMed Central

    Koivogui, Lamine; de Beer, Lindsey; Johnson, Candice Y.; Diaby, Dianka; Ouedraogo, Abdoulaye; Touré, Fatoumata; Bangoura, Fodé Ousmane; Chang, Michelle A.; Chea, Nora; Dotson, Ellen M.; Finlay, Alyssa; Fitter, David; Hamel, Mary J.; Hazim, Carmen; Larzelere, Maribeth; Park, Benjamin J.; Rowe, Alexander K.; Thompson-Paul, Angela M.; Twyman, Anthony; Barry, Moumié; Ntaw, Godlove; Diallo, Alpha Oumar

    2018-01-01

    Background: During the 2014–2016 Ebola epidemic in West Africa, a key epidemiological feature was disease transmission within healthcare facilities, indicating a need for infection prevention and control (IPC) training and support. Methods: IPC training was provided to frontline healthcare workers (HCW) in healthcare facilities that were not Ebola treatment units, as well as to IPC trainers and IPC supervisors placed in healthcare facilities. Trainings included both didactic and hands-on components, and were assessed using pre-tests, post-tests and practical evaluations. We calculated the median percent increase in knowledge. Results: From October to December 2014, 20 IPC courses trained 1,625 Guineans: 1,521 HCW, 55 IPC trainers, and 49 IPC supervisors. Median test scores increased 40% (interquartile range [IQR]: 19–86%) among HCW, 15% (IQR: 8–33%) among IPC trainers, and 21% (IQR: 15–30%) among IPC supervisors (all P<0.0001) to post-test scores of 83%, 93%, and 93%, respectively. Conclusions: IPC training resulted in clear improvements in knowledge and was feasible in a public health emergency setting. This method of IPC training addressed a high demand among HCW. Valuable lessons were learned to facilitate expansion of IPC training to other prefectures; this model may be considered when responding to other large outbreaks. PMID:29489885
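    The reported statistic, a median percent increase with an interquartile range, can be computed as follows; the pre/post scores below are made up for illustration, not the study's data:

```python
# Toy illustration of "median percent increase (IQR)" from paired
# pre-test/post-test scores.
import numpy as np

pre = np.array([50., 60., 40., 70., 55.])
post = np.array([80., 85., 75., 90., 82.])

pct = 100 * (post - pre) / pre                     # per-trainee % increase
q1, med, q3 = np.percentile(pct, [25, 50, 75])
print(f"median increase {med:.0f}% (IQR {q1:.0f}-{q3:.0f}%)")
```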

  4. Teamwork Training Needs Analysis for Long-Duration Exploration Missions

    NASA Technical Reports Server (NTRS)

    Smith-Jentsch, Kimberly A.; Sierra, Mary Jane

    2016-01-01

    The success of future long-duration exploration missions (LDEMs) will be determined largely by the extent to which mission-critical personnel possess and effectively exercise essential teamwork competencies throughout the entire mission lifecycle (e.g., Galarza & Holland, 1999; Hysong, Galarza, & Holland, 2007; Noe, Dachner, Saxton, & Keeton, 2011). To ensure that such personnel develop and exercise these necessary teamwork competencies prior to and over the full course of future LDEMs, it is essential that a teamwork training curriculum be developed and put into place at NASA that is both 1) comprehensive, in that it targets all teamwork competencies critical for mission success, and 2) structured around empirically-based best practices for enhancing teamwork training effectiveness. In response to this demand, the current teamwork-oriented training needs analysis (TNA) was initiated to 1) identify the teamwork training needs (i.e., essential teamwork-related competencies) of future LDEM crews, 2) identify critical gaps within NASA’s current and future teamwork training curriculum (i.e., gaps in the competencies targeted and in the training practices utilized) that threaten to impact the success of future LDEMs, and 3) identify a broad set of practical nonprescriptive recommendations for enhancing the effectiveness of NASA’s teamwork training curriculum in order to increase the probability of future LDEM success.

  5. Theoretical analysis and simulation study of the deep overcompression mode of velocity bunching for a comblike electron bunch train

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Yan, Lixin; Du, YingChao; Huang, Wenhui; Gai, Wei; Tang, Chuanxiang

    2018-02-01

    Premodulated comblike electron bunch trains are used in a wide range of research fields, such as for wakefield-based particle acceleration and tunable radiation sources. We propose an optimized compression scheme for bunch trains in which a traveling wave accelerator tube and a downstream drift segment are together used as a compressor. When the phase injected into the accelerator tube for the bunch train is set to ≪-100°, velocity bunching occurs in a deep overcompression mode, which reverses the phase space and maintains a velocity difference within the injected beam, thereby giving rise to a compressed comblike electron bunch train after a few-meter-long drift segment; we call this the deep overcompression scheme. The main benefits of this scheme are the relatively large phase acceptance and the uniformity of compression for the bunch train. The comblike bunch train generated via this scheme is widely tunable: For the two-bunch case, the energy and time spacings can be continuously adjusted from +1 to -1 MeV and from 13 to 3 ps, respectively, by varying the injected phase of the bunch train from -220° to -140°. Both theoretical analysis and beam dynamics simulations are presented to study the properties of the deep overcompression scheme.

  6. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    PubMed

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.

  7. Brain structural changes following adaptive cognitive training assessed by Tensor-Based Morphometry (TBM).

    PubMed

    Colom, Roberto; Hua, Xue; Martínez, Kenia; Burgaleta, Miguel; Román, Francisco J; Gunter, Jeffrey L; Carmona, Susanna; Jaeggi, Susanne M; Thompson, Paul M

    2016-10-01

    Tensor-Based Morphometry (TBM) allows the automatic mapping of brain changes across time by building 3D deformation maps. This technique has been applied for tracking brain degeneration in Alzheimer's and other neurodegenerative diseases with high sensitivity and reliability. Here we applied TBM to quantify changes in brain structure after completion of a challenging adaptive cognitive training program based on the n-back task. Twenty-six young women completed twenty-four training sessions across twelve weeks, showing, on average, large cognitive improvements. High-resolution MRI scans were obtained before and after training. The computed longitudinal deformation maps were analyzed to answer three questions: (a) Are there differential brain structural changes in the training group as compared with a matched control group? (b) Are these changes related to performance differences in the training program? (c) Are standardized changes in a set of psychological factors (fluid and crystallized intelligence, working memory, and attention control) measured before and after training related to structural changes in the brain? Results showed (a) greater structural changes for the training group in the temporal lobe, (b) a negative correlation between these changes and performance across training sessions (the greater the structural change, the lower the cognitive performance improvements), and (c) negligible effects regarding the psychological factors measured before and after training. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. 49 CFR 232.213 - Extended haul trains.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF TRANSPORTATION BRAKE SYSTEM SAFETY STANDARDS FOR FREIGHT AND OTHER NON-PASSENGER TRAINS AND... extended haul trains will originate and a description of the trains that will be operated as extended haul.... (5) The train shall have no more than one pick-up and one set-out en route, except for the set-out of...

  9. Improving CNN Performance Accuracies With Min-Max Objective.

    PubMed

    Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning

    2017-06-09

    We propose a novel method for improving the performance accuracy of convolutional neural networks (CNNs) without increasing network complexity. We accomplish this by applying the proposed Min-Max objective to a layer below the output layer of a CNN model during training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with an insignificant increase in computation cost. Moreover, an incremental mini-batch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, covering both image classification and face verification tasks, show that employing the proposed Min-Max objective during training can remarkably improve the performance of a CNN model relative to the same model trained without it.
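    A rough NumPy sketch of the Min-Max idea (not the authors' implementation, which operates on CNN feature maps during backpropagation): penalize large within-class ("within-manifold") feature distances and reward large between-class distances on a mini-batch of feature vectors:

```python
# Hedged illustration of a min-max-style penalty: lower values mean compact
# classes that are well separated from each other.
import numpy as np

def min_max_penalty(feats, labels):
    # Pairwise Euclidean distances between all feature vectors.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)                  # exclude self-pairs
    off_diag = ~np.eye(len(feats), dtype=bool)
    within = d[same].mean()                        # minimize this term
    between = d[~same & off_diag].mean()           # maximize this term
    return within - between

feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
penalty = min_max_penalty(feats, labels)
print(round(penalty, 3))   # negative: tight clusters, far apart
```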

  10. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  11. Automatic measurement of voice onset time using discriminative structured prediction.

    PubMed

    Sonderegger, Morgan; Keshet, Joseph

    2012-12-01

    A discriminative large-margin algorithm for automatic measurement of voice onset time (VOT) is described, considered as a case of predicting structured output from speech. Manually labeled data are used to train a function that takes as input a speech segment of an arbitrary length containing a voiceless stop, and outputs its VOT. The function is explicitly trained to minimize the difference between predicted and manually measured VOT; it operates on a set of acoustic feature functions designed based on spectral and temporal cues used by human VOT annotators. The algorithm is applied to initial voiceless stops from four corpora, representing different types of speech. Using several evaluation methods, the algorithm's performance is near human intertranscriber reliability, and compares favorably with previous work. Furthermore, the algorithm's performance is minimally affected by training and testing on different corpora, and remains essentially constant as the amount of training data is reduced to 50-250 manually labeled examples, demonstrating the method's practical applicability to new datasets.

  12. Training giraffe (Giraffa camelopardalis reticulata) for front foot radiographs and hoof care.

    PubMed

    Dadone, Liza I; Schilz, Amy; Friedman, Susan G; Bredahl, Jason; Foxworth, Steve; Chastain, Bob

    2016-05-01

    For a large herd of reticulated giraffes, a mainly operant-based training program was created for front foot radiographs and hoof trims in an effort to diagnose and better manage lameness. Behaviors were shaped in a restricted contact set-up, using a positive reinforcement procedure to teach a series of mastered cued behaviors. This training was used to obtain lateral and lateral oblique front foot radiographs for the entire herd. Radiographs were diagnostic for multiple possible causes of lameness including fractures and osteitis of the distal phalangeal bone, hoof overgrowth, osteoarthritis of the distal interphalangeal joint, rotation of the distal phalangeal bone, sesamoid bone cysts, and sole foreign bodies. By training giraffe for foot radiographs and hoof trims, potential causes of lameness could be identified and better managed. Long-term, the results may help zoos identify best practices for managing and preventing lameness in giraffe. Zoo Biol. 35:228-236, 2016. © 2016 Wiley Periodicals, Inc.

  13. Building the team for team science

    USGS Publications Warehouse

    Read, Emily K.; O'Rourke, M.; Hong, G. S.; Hanson, P. C.; Winslow, Luke A.; Crowley, S.; Brewer, C. A.; Weathers, K. C.

    2016-01-01

    The ability to effectively exchange information and develop trusting, collaborative relationships across disciplinary boundaries is essential for 21st century scientists charged with solving complex and large-scale societal and environmental challenges, yet these communication skills are rarely taught. Here, we describe an adaptable training program designed to increase the capacity of scientists to engage in information exchange and relationship development in team science settings. A pilot of the program, developed by a leader in ecological network science, the Global Lake Ecological Observatory Network (GLEON), indicates that the training program resulted in improvement in early career scientists’ confidence in team-based network science collaborations within and outside of the program. Fellows in the program navigated human-network challenges, expanded communication skills, and improved their ability to build professional relationships, all in the context of producing collaborative scientific outcomes. Here, we describe the rationale for key communication training elements and provide evidence that such training is effective in building essential team science skills.

  14. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator); Knowlton, D. J.; Dean, M. E.

    1981-01-01

    A set of training statistics for the 30 meter resolution simulated thematic mapper MSS data was generated based on land use/land cover classes. In addition to this supervised data set, a nonsupervised multicluster block of training statistics is being defined in order to compare the classification results and evaluate the effect of the different training selection methods on classification performance. Two test data sets, defined using a stratified sampling procedure incorporating a grid system with dimensions of 50 lines by 50 columns, and another set based on an analyst supervised set of test fields were used to evaluate the classifications of the TMS data. The supervised training data set generated training statistics, and a per point Gaussian maximum likelihood classification of the 1979 TMS data was obtained. The August 1980 MSS data was radiometrically adjusted. The SAR data was redigitized and the SAR imagery was qualitatively analyzed.

  15. Child health in low-resource settings: pathways through UK paediatric training.

    PubMed

    Goenka, Anu; Magnus, Dan; Rehman, Tanya; Williams, Bhanu; Long, Andrew; Allen, Steve J

    2013-11-01

UK doctors training in paediatrics benefit from experience of child health in low-resource settings. Institutions in low-resource settings reciprocally benefit from hosting UK trainees. A wide variety of opportunities exist for trainees working in low-resource settings, including clinical work, research and the development of transferable skills in management, education and training. This article explores a range of pathways for UK trainees to develop experience in low-resource settings. It is important for trainees to start planning early and to develop a robust rationale for global child health activities via established pathways, in the interests of their own professional development as well as UK service provision. In the future, run-through paediatric training may include core elements of global child health, as well as designated 'tracks' for those wishing to develop their career in global child health further. Hands-on experience in low-resource settings is a critical component of these training initiatives.

  16. SkData: data sets and algorithm evaluation protocols in Python

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Pinto, Nicolas; Cox, David D.

    2015-01-01

    Machine learning benchmark data sets come in all shapes and sizes, whereas classification algorithms assume sanitized input, such as (x, y) pairs with vector-valued input x and integer class label y. Researchers and practitioners know all too well how tedious it can be to get from the URL of a new data set to a NumPy ndarray suitable for e.g. pandas or sklearn. The SkData library handles that work for a growing number of benchmark data sets (small and large) so that one-off in-house scripts for downloading and parsing data sets can be replaced with library code that is reliable, community-tested, and documented. The SkData library also introduces an open-ended formalization of training and testing protocols that facilitates direct comparison with published research. This paper describes the usage and architecture of the SkData library.

  17. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
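The look-up-table trick that makes sequence matching an order of magnitude faster can be sketched as follows: prototype-to-prototype distances are computed once, after which comparing labeled prototype sequences is pure table indexing. The toy prototypes below are invented; the real prototypes live in a learned joint shape-motion space:

```python
import math

def build_distance_table(prototypes):
    """Precompute all prototype-to-prototype distances once, as a look-up table."""
    n = len(prototypes)
    return [[math.dist(prototypes[i], prototypes[j]) for j in range(n)] for i in range(n)]

def sequence_distance(seq_a, seq_b, table):
    """Distance between two equal-length prototype-label sequences via table indexing,
    avoiding any frame-to-frame distance computation at match time."""
    return sum(table[a][b] for a, b in zip(seq_a, seq_b))

prototypes = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]  # toy stand-ins for learned prototypes
table = build_distance_table(prototypes)
print(sequence_distance([0, 1, 2], [0, 1, 2], table))  # identical sequences -> 0.0
```

The paper's dynamic prototype sequence matching additionally allows temporal warping, but the per-step cost is still a single table lookup as above.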

  18. Improving tRNAscan-SE Annotation Results via Ensemble Classifiers.

    PubMed

    Zou, Quan; Guo, Jiasheng; Ju, Ying; Wu, Meihong; Zeng, Xiangxiang; Hong, Zhiling

    2015-11-01

tRNAscan-SE is a tRNA detection program that is widely used for tRNA annotation; however, its false positive rate is unacceptable for large sequences. Here, we used a machine learning method to try to improve the tRNAscan-SE results. A new predictor, tRNA-Predict, was designed. We obtained real and pseudo-tRNA sequences as training data sets using tRNAscan-SE and constructed three different tRNA feature sets. We then set up an ensemble classifier, LibMutil, to predict tRNAs from the training data. The positive data set of 623 tRNA sequences was obtained from tRNAdb 2009 and the negative data set was the false positive tRNAs predicted by tRNAscan-SE. Our in silico experiments revealed a prediction accuracy rate of 95.1% for tRNA-Predict using 10-fold cross-validation. tRNA-Predict was developed to distinguish functional tRNAs from pseudo-tRNAs rather than to predict tRNAs from a genome-wide scan. However, tRNA-Predict can work with the output of tRNAscan-SE, which is a genome-wide scanning method, to improve the tRNAscan-SE annotation results. The tRNA-Predict web server is accessible at http://datamining.xmu.edu.cn/∼gjs/tRNA-Predict. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
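The ensemble-classifier idea of combining base predictors by vote can be sketched as follows. This is a generic majority-vote sketch, not the LibMutil implementation; the base classifiers and thresholds are hypothetical:

```python
from collections import Counter

def ensemble_predict(classifiers, x):
    """Majority vote over base classifiers: each votes a label, the commonest wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical base classifiers voting "tRNA" / "pseudo" on a 2-feature input.
clfs = [
    lambda x: "tRNA" if x[0] > 0.5 else "pseudo",
    lambda x: "tRNA" if x[1] > 0.5 else "pseudo",
    lambda x: "tRNA",
]
print(ensemble_predict(clfs, (0.9, 0.2)))  # two of three vote "tRNA" -> "tRNA"
```

In the paper's pipeline, such an ensemble is applied to tRNAscan-SE candidates to filter out false positives rather than to scan the genome itself.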

  19. Fully automated, real-time 3D ultrasound segmentation to estimate first trimester placental volume using deep learning.

    PubMed

    Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L

    2018-06-07

We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time consuming and operator dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.

  20. Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine

    PubMed Central

    Yuan, Hua; Huang, Jianping; Cao, Chenzhong

    2009-01-01

    Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
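The particle-swarm feature selection driving the SVM can be sketched as a binary PSO over 0/1 descriptor masks. This is a greatly simplified, stdlib-only sketch with a toy fitness standing in for cross-validated SVM accuracy; the coefficients and fitness are illustrative assumptions, not the paper's settings:

```python
import math
import random

def binary_pso(fitness, n_features, n_particles=8, iters=30, seed=0):
    """Greatly simplified binary PSO: each particle is a 0/1 feature mask, velocities
    are squashed through a sigmoid to give bit-flip probabilities."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [fitness(p) for p in pos]
    gbest = pbest[max(range(n_particles), key=lambda i: pfit[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pfit[i]:
                pfit[i], pbest[i] = f, pos[i][:]
                if f > fitness(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy fitness: reward selecting features 0 and 2, penalize mask size
# (a stand-in for "SVM accuracy minus a parsimony penalty").
target = {0, 2}
fit = lambda mask: sum(1 for d, b in enumerate(mask) if b and d in target) - 0.1 * sum(mask)
best = binary_pso(fit, n_features=6)
print(best)
```

In the study, the fitness of each mask would be the classification performance of an SVM trained on the Dragon descriptors that the mask selects.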

  1. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  2. Toxicity challenges in environmental chemicals: Prediction of ...

    EPA Pesticide Factsheets

    Physiologically based pharmacokinetic (PBPK) models bridge the gap between in vitro assays and in vivo effects by accounting for the absorption, distribution, metabolism, and excretion of xenobiotics, which is especially useful in the assessment of human toxicity. Quantitative structure-activity relationships (QSAR) serve as a vital tool for the high-throughput prediction of chemical-specific PBPK parameters, such as the fraction of a chemical unbound by plasma protein (Fub). The presented work explores the merit of utilizing experimental pharmaceutical Fub data for the construction of a universal QSAR model, in order to compensate for the limited range of high-quality experimental Fub data for environmentally relevant chemicals, such as pollutants, pesticides, and consumer products. Independent QSAR models were constructed with three machine-learning algorithms, k nearest neighbors (kNN), random forest (RF), and support vector machine (SVM) regression, from a large pharmaceutical training set (~1000) and assessed with independent test sets of pharmaceuticals (~200) and environmentally relevant chemicals in the ToxCast program (~400). Small descriptor sets yielded the optimal balance of model complexity and performance, providing insight into the biochemical factors of plasma protein binding, while preventing overfitting to the training set. Overlaps in chemical space between pharmaceutical and environmental compounds were considered through applicability domain analysis.
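Of the three algorithms, kNN regression is the simplest to sketch: a query compound's Fub is predicted as the mean Fub of its k nearest training compounds in descriptor space. The two-descriptor compounds and Fub values below are invented for illustration:

```python
import math

def knn_regress(train_X, train_y, query, k=3):
    """Predict a continuous value (e.g. fraction unbound, Fub) as the mean of the
    k nearest training compounds in descriptor space (Euclidean distance)."""
    order = sorted(range(len(train_X)), key=lambda i: math.dist(train_X[i], query))
    nearest = order[:k]
    return sum(train_y[i] for i in nearest) / k

# Hypothetical 2-descriptor training compounds with known Fub values.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0.95, 0.90, 0.10, 0.15]
print(knn_regress(X, y, (0.15, 0.15), k=2))  # mean of the two nearest -> 0.925
```

The applicability-domain question then amounts to asking whether a query's nearest training neighbours are close enough for the local average to be trusted.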

  3. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  4. Machine Tool Technology. Automatic Screw Machine Troubleshooting & Set-Up Training Outlines [and] Basic Operator's Skills Set List.

    ERIC Educational Resources Information Center

    Anoka-Hennepin Technical Coll., Minneapolis, MN.

    This set of two training outlines and one basic skills set list is designed for a machine tool technology program developed during a project to retrain defense industry workers at risk of job loss or dislocation because of conversion of the defense industry. The first troubleshooting training outline lists the categories of problems that develop…

  5. Smartphone-Based System for Learning and Inferring Hearing Aid Settings

    PubMed Central

    Aldaz, Gabriel; Puria, Sunil; Leifer, Larry J.

    2017-01-01

    Background Previous research has shown that hearing aid wearers can successfully self-train their instruments’ gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the “untrained system,” that is, the manufacturer’s algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The “trained system” first learned each individual’s preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). Purpose To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings. Determine whether user study participants showed a preference for trained over untrained system settings. Research Design An experimental within-participants study. Participants used a prototype hearing system—comprising two hearing aids, Android smartphone, and body-worn gateway device—for ~6 weeks. Study Sample Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. 
Intervention Participants were fitted and instructed to perform daily comparisons of settings (“listening evaluations”) through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone—including environmental sound classification, sound level, and location—to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system (“trained settings”) to those suggested by the hearing aids’ untrained system (“untrained settings”). Data Collection and Analysis We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Results Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. Conclusions The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone. PMID:27718350

  6. The relationships between internal and external training load models during basketball training.

    PubMed

    Scanlan, Aaron T; Wen, Neal; Tucker, Patrick S; Dalbo, Vincent J

    2014-09-01

    The present investigation described and compared the internal and external training loads during basketball training. Eight semiprofessional male basketball players (mean ± SD, age: 26.3 ± 6.7 years; stature: 188.1 ± 6.2 cm; body mass: 92.0 ± 13.8 kg) were monitored across a 7-week period during the preparatory phase of the annual training plan. A total of 44 sessions were monitored. Player session ratings of perceived exertion (sRPE), heart rate, and accelerometer data were collected across each training session. Internal training load was determined using the sRPE, training impulse (TRIMP), and summated-heart-rate-zones (SHRZ) training load models. External training load was calculated using an established accelerometer algorithm. Pearson product-moment correlations with 95% confidence intervals (CIs) were used to determine the relationships between internal and external training load models. Significant moderate relationships were observed between external training load and the sRPE (r42 = 0.49, 95% CI = 0.23-0.69, p < 0.001) and TRIMP models (r42 = 0.38, 95% CI = 0.09-0.61, p = 0.011). A significant large correlation was evident between external training load and the SHRZ model (r42 = 0.61, 95% CI = 0.38-0.77, p < 0.001). Although significant relationships were found between internal and external training load models, the magnitude of the correlations and low commonality suggest that internal training load models measure different constructs of the training process than the accelerometer training load model in basketball settings. Basketball coaching and conditioning professionals should not assume a linear dose-response between accelerometer and internal training load models during training and are recommended to combine internal and external approaches when monitoring training load in players.
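The Pearson product-moment correlation reported between the load models is straightforward to compute from per-session values. The sketch below uses invented per-session sRPE and accelerometer loads, purely to show the calculation:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-session values: internal (sRPE) load vs. external (accelerometer) load.
srpe = [300, 420, 510, 390, 610]
accel = [210, 300, 380, 260, 450]
print(pearson_r(srpe, accel))
```

Squaring r gives the commonality (shared variance) that the abstract notes is low between internal and external models.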

  7. Dynamic security contingency screening and ranking using neural networks.

    PubMed

    Mansour, Y; Vaahedi, E; El-Sharkawi, M A

    1997-01-01

    This paper summarizes BC Hydro's experience in applying neural networks to dynamic security contingency screening and ranking. The idea is to use the information on the prevailing operating condition and directly provide contingency screening and ranking using a trained neural network. To train the two neural networks for the large-scale systems of BC Hydro and Hydro Quebec, in total 1691 detailed transient stability simulations were conducted, 1158 for the BC Hydro system and 533 for the Hydro Quebec system. The simulation program was equipped with the energy margin calculation module (second kick) to measure the energy margin in each run. The first set of results showed poor performance for the neural networks in assessing the dynamic security. However, a number of corrective measures improved the results significantly. These corrective measures included: 1) the effectiveness of output; 2) the number of outputs; 3) the type of features (static versus dynamic); 4) the number of features; 5) system partitioning; and 6) the ratio of training samples to features. The final results obtained using the large-scale systems of BC Hydro and Hydro Quebec demonstrate good potential for neural networks in dynamic security assessment contingency screening and ranking.

  8. Training of Tonal Similarity Ratings in Non-Musicians: A “Rapid Learning” Approach

    PubMed Central

    Oechslin, Mathias S.; Läge, Damian; Vitouch, Oliver

    2012-01-01

    Although cognitive music psychology has a long tradition of expert–novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based “rapid learning” paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the “giftedness” debate. PMID:22629252

  9. Issues in the Development and Evaluation of Cross-Cultural Training in a Business Setting.

    ERIC Educational Resources Information Center

    Broadbooks, Wendy J.

    Issues in the development and evaluation of cross-cultural training in a business setting were investigated. Cross-cultural training and cross-cultural evaluation were defined as training and evaluation of training that involve the interaction of participants from two or more different countries. Two evaluations of a management development-type…

  10. Plastic Surgery Response in Natural Disasters.

    PubMed

    Chung, Susan; Zimmerman, Amanda; Gaviria, Andres; Dayicioglu, Deniz

    2015-06-01

    Disasters cause untold damage and are often unpredictable; however, with proper preparation, these events can be better managed. The initial response has the greatest impact on the overall success of the relief effort. A well-trained multidisciplinary network of providers is necessary to ensure coordinated care for the victims of these mass casualty disasters. As members of this network of providers, plastic surgeons have the ability to efficiently address injuries sustained in mass casualty disasters and are a valuable member of the relief effort. The skill set of plastic surgeons includes techniques that can address injuries sustained in large-scale emergencies, such as the management of soft-tissue injury, tissue viability, facial fractures, and extremity salvage. An approach to disaster relief, the types of disasters encountered, the management of injuries related to mass casualty disasters, the role of plastic surgeons in the relief effort, and resource management are discussed. In order to improve preparedness in future mass casualty disasters, plastic surgeons should receive training during residency regarding the utilization of plastic surgery knowledge in the disaster setting.

  11. Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks

    PubMed Central

    2017-01-01

    In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery. PMID:29392184

  12. Metabolomics for organic food authentication: Results from a long-term field study in carrots.

    PubMed

    Cubero-Leon, Elena; De Rudder, Olivier; Maquet, Alain

    2018-01-15

    Increasing demand for organic products and their premium prices make them an attractive target for fraudulent malpractices. In this study, a large-scale comparative metabolomics approach was applied to investigate the effect of the agronomic production system on the metabolite composition of carrots and to build statistical models for prediction purposes. Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) was applied successfully to predict the origin of the agricultural system of the harvested carrots on the basis of features determined by liquid chromatography-mass spectrometry. When the training set used to build the OPLS-DA models contained samples representative of each harvest year, the models were able to classify unknown samples correctly (100% correct classification). If a harvest year was left out of the training sets and used for predictions, the correct classification rates achieved ranged from 76% to 100%. The results therefore highlight the potential of metabolomic fingerprinting for organic food authentication purposes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. geneCommittee: a web-based tool for extensively testing the discriminatory power of biologically relevant gene sets in microarray data classification.

    PubMed

    Reboiro-Jato, Miguel; Arrais, Joel P; Oliveira, José Luis; Fdez-Riverola, Florentino

    2014-01-30

    The diagnosis and prognosis of several diseases can be shortened through the use of different large-scale genome experiments. In this context, microarrays can generate expression data for a huge set of genes. However, to obtain solid statistical evidence from the resulting data, it is necessary to train and to validate many classification techniques in order to find the best discriminative method. This is a time-consuming process that normally depends on intricate statistical tools. geneCommittee is a web-based interactive tool for routinely evaluating the discriminative classification power of custom hypotheses in the form of biologically relevant gene sets. While the user can work with different gene set collections and several microarray data files to configure specific classification experiments, the tool is able to run several tests in parallel. Provided with a straightforward and intuitive interface, geneCommittee is able to render valuable information for diagnostic analyses and clinical management decisions based on systematically evaluating custom hypotheses over different data sets using complementary classifiers, a key aspect in clinical research. geneCommittee allows the enrichment of microarrays raw data with gene functional annotations, producing integrated datasets that simplify the construction of better discriminative hypotheses, and allows the creation of a set of complementary classifiers. The trained committees can then be used for clinical research and diagnosis. Full documentation including common use cases and guided analysis workflows is freely available at http://sing.ei.uvigo.es/GC/.
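The "train and validate many classification techniques" step boils down to running the same k-fold cross-validation loop over each candidate gene-set/classifier pair. A minimal stdlib sketch, with a trivial majority-class predictor standing in for a real committee member:

```python
import random
from statistics import mean

def k_fold_accuracy(fit_predict, X, y, k=5, seed=1):
    """Mean held-out accuracy over k folds; run once per candidate classifier/gene set."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for f in folds:
        train = [i for i in idx if i not in f]
        preds = fit_predict([X[i] for i in train], [y[i] for i in train],
                            [X[i] for i in f])
        accs.append(mean(1.0 if p == y[i] else 0.0 for p, i in zip(preds, f)))
    return mean(accs)

# Trivial majority-class "classifier" as a stand-in for a committee member.
def majority(train_X, train_y, test_X):
    label = max(set(train_y), key=train_y.count)
    return [label] * len(test_X)

X = [[v] for v in range(20)]
y = [0] * 12 + [1] * 8
print(k_fold_accuracy(majority, X, y, k=5))
```

Repeating this loop for every classifier on every gene-set-restricted view of the expression matrix is exactly the embarrassingly parallel workload the tool distributes.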

  14. Vehicle classification in WAMI imagery using deep network

    NASA Astrophysics Data System (ADS)

    Yi, Meng; Yang, Fan; Blasch, Erik; Sheaff, Carolyn; Liu, Kui; Chen, Genshe; Ling, Haibin

    2016-05-01

    Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a great source for many applications, including security, urban planning and route planning. These applications require fast and accurate image understanding which is time consuming for humans, due to the large data volume and city-scale area coverage. Therefore, automatic processing and understanding of WAMI imagery has been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI imagery analysis, namely vehicle classification. That is, deciding whether a certain image patch contains a vehicle or not. We collect a set of positive and negative sample image patches, for training and testing the detector. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative images. The first set is generated from positive images with some location shift. The second set of negative patches is generated from randomly sampled patches. We also discard those patches if a vehicle accidentally locates at the center. Both positive and negative samples are randomly divided into 9000 training images and 3000 testing images. We propose to train a deep convolution network for classifying these patches. The classifier is based on a pre-trained AlexNet Model in the Caffe library, with an adapted loss function for vehicle classification. The performance of our classifier is compared to several traditional image classifier methods using Support Vector Machine (SVM) and Histogram of Oriented Gradient (HOG) features. 
While the SVM+HOG method achieves an accuracy of 91.2%, the accuracy of our deep network-based classifier reaches 97.9%.
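The first negative-sampling strategy (shifting a positive patch's centre, then discarding candidates that land on another vehicle) can be sketched in coordinates alone. The shift ranges and centre coordinates below are illustrative assumptions, not the paper's values:

```python
import random

def shifted_negatives(vehicle_centers, patch=64, min_shift=40, max_shift=80, seed=0):
    """Make one candidate negative patch centre per annotated vehicle by shifting the
    positive centre by more than half a patch, then discard any candidate whose
    patch would accidentally contain another vehicle's centre."""
    rng = random.Random(seed)
    negatives = []
    for (x, y) in vehicle_centers:
        dx = rng.choice([-1, 1]) * rng.randint(min_shift, max_shift)
        dy = rng.choice([-1, 1]) * rng.randint(min_shift, max_shift)
        cand = (x + dx, y + dy)
        # Reject if any annotated vehicle sits inside the candidate 64x64 crop.
        if all(abs(cand[0] - vx) > patch // 2 or abs(cand[1] - vy) > patch // 2
               for (vx, vy) in vehicle_centers):
            negatives.append(cand)
    return negatives

centers = [(100, 100), (400, 250)]  # hypothetical annotated vehicle centres
print(shifted_negatives(centers))
```

The second negative set (patches at uniformly random image locations) uses the same rejection test, just with randomly drawn candidate centres instead of shifted ones.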

  15. Understanding and motivating health care employees: integrating Maslow's hierarchy of needs, training and technology.

    PubMed

    Benson, Suzanne G; Dundis, Stephen P

    2003-09-01

    This paper applies Maslow's Hierarchy of Needs Model to the challenges of understanding and motivating employees in a rapidly changing health care industry. The perspective that Maslow's Model brings is an essential element that should be considered as the health care arena is faced with reorganization, re-engineering, mergers, acquisitions, increases in learning demands, and the escalating role of technology in training. This paper offers a new perspective related to how Maslow's Model, as used in business/organizational settings, can be directly related to current workforce concerns: the need for security and freedom from stress, social belongingness, self-esteem, self-actualization, altered work/social environments, and new opportunities for learning and self-definition. Changes in health care will continue at an accelerated pace and with these changes will come the need for more and more training. The use of technology in training has heightened access, faster distribution, innovation and increased collaboration. However, with this technology come attendant challenges including keeping up with the technology, the increased pace of training, depersonalization, and fear of the unknown. The Maslow model provides a means for understanding these challenges in terms of universal individual needs. How does one motivate employees in the face of increased demands, particularly when they are being asked to meet these demands with fewer resources? The answer is, in large part, to make the employee feel secure, needed, and appreciated. This is not at all easy, but if leaders take into consideration the needs of the individual and the new technology that provides challenges and opportunities for meeting those needs, and provide the training to meet both sets of needs, enhanced employee motivation and commitment is possible.

  16. Effect of Gravity on Robot-Assisted Motor Training After Chronic Stroke: A Randomized Trial

    PubMed Central

    Conroy, Susan S.; Whitall, Jill; Dipietro, Laura; Jones-Lush, Lauren M.; Zhan, Min; Finley, Margaret A.; Wittenberg, George F.; Krebs, Hermano I.; Bever, Christopher T.

    2015-01-01

    Objectives To determine the efficacy of 2 distinct 6-week robot-assisted reaching programs compared with an intensive conventional arm exercise program (ICAE) for chronic, stroke-related upper-extremity (UE) impairment, and to examine whether the addition of robot-assisted training out of the horizontal plane leads to improved outcomes. Design Randomized controlled trial, single-blinded, with 12-week follow-up. Setting Research setting in a large medical center. Participants Adults (N=62) with chronic, stroke-related arm weakness, stratified by impairment severity using baseline UE motor assessments. Interventions Sixty minutes, 3 times a week for 6 weeks, of robot-assisted planar reaching (gravity compensated), combined planar with vertical robot-assisted reaching, or an intensive conventional arm exercise program. Main Outcome Measure UE Fugl-Meyer Assessment (FMA) mean change from baseline to final training. Results All groups showed modest gains in the FMA from baseline to final assessment, with no significant between-group differences. The greatest change occurred in the planar robot group (mean change ± SD, 2.94±0.77; 95% confidence interval [CI], 1.40–4.47). Participants with greater motor impairment (n=41) demonstrated a larger difference in response (mean change ± SD, 2.29±0.72; 95% CI, 0.85–3.72) for planar robot-assisted exercise compared with the intensive conventional arm exercise program (mean change ± SD, 0.43±0.72; 95% CI, −1.00 to 1.86). Conclusions Chronic UE deficits due to stroke are responsive to intensive motor task training. However, training outside the horizontal plane in a gravity-present environment, using a combination of vertical with planar robots, was not superior to training with the planar robot alone. PMID:21849168

  17. Management of Cold Water-induced Hypothermia: A Simulation Scenario for Layperson Training Delivered via a Mobile Tele-simulation Unit

    PubMed Central

    Parsons, Michael

    2017-01-01

    Newfoundland and Labrador (NL) has one of the highest provincial drowning rates in Canada, largely due to the many rural communities located near bodies of water. Given the province’s cold climate (NL’s average freshwater temperature is below 5.4°C) and the prevalence of winter recreational activities among the population, there exists an inherent risk of ice-related injuries and subsequent hypothermia. Oftentimes, these injuries occur in remote/rural settings where immediate support from Emergency Medical Services (EMS) may not be available. During this critical period, it frequently falls on individuals without formal healthcare training to provide lifesaving measures until help arrives. Training individuals in rural communities plays an important role in ensuring public safety. In recent years, simulation-based education has become an essential tool in medical, marine and first aid training. It provides learners with a safe environment to hone their skills and has been shown to be superior to traditional clinical teaching methods. The following case aims to train laypeople from rural settings in the immediate management of an individual who becomes hypothermic following immersion in cold water. However, reaching these individuals to provide training can be a challenge in a province with such a vast geography. To help overcome this, a simulation center that is portable between communities (a Mobile Tele-Simulation Unit) has been developed. By utilizing modern technology, this paper also proposes an innovative method of connecting with learners in more difficult-to-reach regions. PMID:29503784

  18. A survey-based cross-sectional study of doctors’ expectations and experiences of non-technical skills for Out of Hours work

    PubMed Central

    Brown, Michael; Shaw, Dominick; Sharples, Sarah; Jeune, Ivan Le; Blakey, John

    2015-01-01

    Objectives The skill set required for junior doctors to work efficiently and safely Out of Hours (OoH) in hospitals has not been established. This is despite the OoH period representing 75% of the year and it being the time of highest mortality. We set out to explore the expectations of medical students and experiences of junior doctors of the non-technical skills needed to work OoH. Design Survey-based cross-sectional study informed by focus groups. Setting Online survey with participants from five large teaching hospitals across the UK. Participants 300 medical students and doctors. Outcome measure Participants ranked the importance of non-technical skills, as identified by literature review and focus groups, needed for OoH care. Results The focus groups revealed a total of eight non-technical skills deemed to be important. In the survey, ‘Task Prioritisation’ (mean rank 1.617) was consistently identified as the most important non-technical skill. Stage of training affected the ranking of skills, with significant differences for ‘Communication with Senior Doctors’, ‘Dealing with Clinical Isolation’, ‘Task Prioritisation’ and ‘Communication with Patients’. Importantly, there was a significant discrepancy between the expectations of medical students and the experiences of doctors undertaking the work. Conclusions Our findings suggest that medical staff particularly value task prioritisation skills; however, these are not routinely taught in medical schools. The discrepancy between expectations of students and experience of doctors reinforces the idea that there is a gap in training. Doctors of different grades place different importance on specific non-technical skills, with implications for postgraduate training. There is a pressing need for medical schools and deaneries to review non-technical training to include more than communication skills. PMID:25687899

  19. Automated gastric cancer diagnosis on H&E-stained sections; training a classifier on a large scale with multiple instance machine learning

    NASA Astrophysics Data System (ADS)

    Cosatto, Eric; Laquerre, Pierre-Francois; Malon, Christopher; Graf, Hans-Peter; Saito, Akira; Kiyuna, Tomoharu; Marugame, Atsushi; Kamijo, Ken'ichi

    2013-03-01

    We present a system that detects cancer on slides of gastric tissue sections stained with hematoxylin and eosin (H&E). At its heart is a classifier trained using the semi-supervised multi-instance learning framework (MIL), where each tissue is represented by a set of regions-of-interest (ROI) and a single label. Such labels are readily obtained because pathologists diagnose each tissue independently as part of the normal clinical workflow. From a large dataset of over 26K gastric tissue sections from over 12K patients, obtained from a clinical load spanning several months, we train a MIL classifier on a patient-level partition of the dataset (2/3 of the patients) and obtain a very high performance of 96% (AUC), tested on the remaining 1/3 never-before-seen patients (over 8K tissues). We show this level of performance to match the more costly supervised approach where individual ROIs need to be labeled manually. The large amount of data used to train this system gives us confidence in its robustness and that it can be safely used in a clinical setting. We demonstrate how it can improve the clinical workflow when used for pre-screening or quality control. For pre-screening, the system can diagnose 47% of the tissues with a very low likelihood (<1%) of missing cancers, thus halving the clinicians' caseload. For quality control, compared to random rechecking of 33% of the cases, the system achieves a three-fold increase in the likelihood of catching cancers missed by pathologists. The system is currently in regular use at independent pathology labs in Japan where it is used to double-check clinicians' diagnoses. At the end of 2012 it will have analyzed over 80,000 slides of gastric and colorectal samples (200,000 tissues).
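    The multi-instance assumption underlying the record above (a tissue/bag is positive if at least one of its ROIs/instances is) can be illustrated with a deliberately minimal sketch. The ROI scores, bags, and threshold search below are invented for illustration; the paper's actual semi-supervised MIL classifier is far more elaborate.

```python
def bag_predict(bag, t):
    """Standard MIL assumption: a tissue (bag) is positive if at least
    one of its ROIs (instances) scores above the threshold."""
    return 1 if max(bag) > t else 0

def train_threshold(bags, labels):
    """Pick the instance-score threshold that maximizes bag-level
    accuracy -- a bare-bones stand-in for a MIL classifier, trained
    only from bag labels, never from instance labels."""
    candidates = sorted({x for bag in bags for x in bag})
    best_t, best_acc = candidates[0], -1
    for t in candidates:
        acc = sum(bag_predict(b, y_t) == y for b, y, y_t in
                  ((b, y, t) for b, y in zip(bags, labels)))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy ROI scores per tissue; label 1 = cancer present somewhere in the bag.
bags = [[0.1, 0.9, 0.2], [0.2, 0.3], [0.8, 0.1], [0.4, 0.2]]
labels = [1, 0, 1, 0]
t = train_threshold(bags, labels)
```

    Note that only bag-level labels drive the training, which is exactly why such labels can be harvested "for free" from routine diagnoses.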

  20. Optimization of genomic selection training populations with a genetic algorithm

    USDA-ARS?s Scientific Manuscript database

    In this article, we derive a computationally efficient statistic to measure the reliability of estimates of genetic breeding values for a fixed set of genotypes based on a given training set of genotypes and phenotypes. We adopt a genetic algorithm scheme to find a training set of certain size from ...
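    The abstract above is truncated, but its core idea, a genetic algorithm searching for a good training subset of fixed size, can be sketched. The paper's reliability statistic is not reproduced here, so the sketch substitutes a toy diversity criterion (sum of pairwise squared distances between selected genotypes); the GA machinery over candidate subsets (selection, crossover, mutation) is the point.

```python
import random

def diversity(subset, X):
    """Toy surrogate fitness: sum of pairwise squared distances among
    the selected genotypes (the paper optimizes a reliability statistic
    instead; that statistic is not shown in the truncated abstract)."""
    idx = sorted(subset)
    total = 0.0
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            total += sum((xa - xb) ** 2
                         for xa, xb in zip(X[idx[a]], X[idx[b]]))
    return total

def ga_select(X, k, pop_size=30, generations=40, seed=0):
    """Genetic algorithm searching for a size-k training subset."""
    rng = random.Random(seed)
    n = len(X)
    pop = [frozenset(rng.sample(range(n), k)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: diversity(s, X), reverse=True)
        elite = pop[:pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = set(rng.sample(list(a | b), k))   # crossover
            if rng.random() < 0.3:                    # mutation: swap one member
                child.discard(rng.choice(list(child)))
                child.add(rng.choice([i for i in range(n) if i not in child]))
            children.append(frozenset(child))
        pop = elite + children
    return max(pop, key=lambda s: diversity(s, X))

# Toy genotype "marker profiles"; two near-duplicate pairs the GA should avoid.
X = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (-5.0, 5.0), (0.0, -5.0)]
best = ga_select(X, k=4)
```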

  1. Knowledge based word-concept model estimation and refinement for biomedical text mining.

    PubMed

    Jimeno Yepes, Antonio; Berlanga, Rafael

    2015-02-01

    Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus the performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
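    As a toy illustration of word-concept probabilities derived from co-occurrence counts, the sketch below uses invented counts and hypothetical concept identifiers. It is not the paper's model, only a minimal add-alpha estimate of P(concept | word) with a crude context-voting disambiguator.

```python
from collections import defaultdict

# Hypothetical toy counts of word/concept co-occurrence (in the paper
# these come from KB descriptions plus MetaMap-annotated MEDLINE
# co-occurrences; the counts and concept IDs here are made up).
cooc = {
    ("cold", "C-common-cold"): 8,
    ("cold", "C-cold-temperature"): 2,
    ("virus", "C-common-cold"): 5,
    ("weather", "C-cold-temperature"): 6,
}

def word_concept_probs(cooc, alpha=1.0):
    """P(concept | word) with add-alpha smoothing over observed concepts."""
    words = defaultdict(dict)
    for (w, c), n in cooc.items():
        words[w][c] = n
    probs = {}
    for w, cs in words.items():
        total = sum(cs.values()) + alpha * len(cs)
        probs[w] = {c: (n + alpha) / total for c, n in cs.items()}
    return probs

def disambiguate(word, context, probs):
    """Score each candidate concept of `word` by its own probability
    plus the context words' votes -- a crude stand-in for WSD with a
    word-concept model."""
    candidates = probs.get(word, {})
    def score(c):
        return candidates[c] + sum(probs.get(t, {}).get(c, 0.0) for t in context)
    return max(candidates, key=score)

probs = word_concept_probs(cooc)
best = disambiguate("cold", ["virus"], probs)
```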

  2. Clinical Supervision of Mental Health Professionals Serving Youth: Format and Microskills.

    PubMed

    Bailin, Abby; Bearman, Sarah Kate; Sale, Rafaella

    2018-03-21

    Clinical supervision is an element of quality assurance in routine mental health care settings serving children; however, there has been limited scientific evaluation of its components. This study examines the format and microskills of routine supervision. Supervisors (n = 13) and supervisees (n = 20) reported on 100 supervision sessions, and trained coders completed observational coding on a subset of recorded sessions (n = 57). Results indicate that microskills shown to enhance supervisee competency in effectiveness trials and experiments were largely absent from routine supervision, highlighting potential missed opportunities to impart knowledge to therapists. Findings suggest areas for quality improvement within routine care settings.

  3. "East side story": on being an epidemiologist in the former USSR: an interview with Marcus Klingberg.

    PubMed

    Klingberg, Marcus

    2006-01-01

    Marcus Klingberg was born on 7 October 1918, in Warsaw, Poland, into a Hasidic, rabbinical family. After the Nazi invasion of Poland, he escaped to the USSR where he trained and worked as an epidemiologist from 1939 to 1945. For 35 years after the war, he continued his professional work in Israel. The harsh conditions within the Soviet Union during World War II provided a challenging setting for epidemiologic work-a setting that has remained largely hidden from Western view. In this interview, Klingberg describes his work as an epidemiologist in the USSR and his subsequent encounter with Western epidemiology.

  4. Training a whole-book LSTM-based recognizer with an optimal training set

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2018-04-01

    Despite the recent progress in OCR technologies, whole-book recognition is still a challenging task, particularly for old and historical books, where unknown font faces or the low quality of paper and print add to the challenge. Therefore, pre-trained recognizers and generic methods do not usually perform up to required standards, and performance usually degrades for larger-scale recognition tasks, such as that of a whole book. Such reportedly low-error-rate methods turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short-Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform any identical network trained on randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively compared to an identical network trained on a set of 60 randomly selected pages of the book.
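    The cluster-centers-as-training-set idea can be sketched with a plain k-means over sub-word feature vectors. Here toy 2-D points stand in for sub-word images, and the chosen exemplars (the real items nearest each center) would form the minimal training set; the paper's actual features and clustering are not specified here.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(col) / len(pts) for col in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on tuples of floats."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[j].append(p)
        centers = [mean(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def training_exemplars(points, k):
    """Pick, per cluster, the real sub-word item closest to the center;
    those exemplars form the reduced training set."""
    centers = kmeans(points, k)
    return [min(points, key=lambda p: dist2(p, c)) for c in centers]

# Two well-separated "sub-word shape" clusters; one exemplar each.
points = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3),
          (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
exemplars = training_exemplars(points, 2)
```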

  5. Mental health first aid training for nursing students: a protocol for a pragmatic randomised controlled trial in a large university.

    PubMed

    Crawford, Gemma; Burns, Sharyn K; Chih, Hui Jun; Hunt, Kristen; Tilley, P J Matt; Hallett, Jonathan; Coleman, Kim; Smith, Sonya

    2015-02-19

    The impact of mental health problems and disorders in Australia is significant. Mental health problems often start early and disproportionately affect young people. Poor adolescent mental health can predict educational achievement at school and educational and occupational attainment in adulthood. Many young people attend higher education and have been found to experience a range of mental health issues. The university setting therefore presents a unique opportunity to trial interventions to reduce the burden of mental health problems. Mental Health First Aid (MHFA) aims to train participants to recognise symptoms of mental health problems and assist an individual who may be experiencing a mental health crisis. Training nursing students in MHFA may increase mental health literacy and decrease stigma in the student population. This paper presents a protocol for a trial to examine the efficacy of MHFA training for students studying nursing at a large university in Perth, Western Australia. This randomised controlled trial will follow the CONSORT guidelines. Participants will be randomly allocated to the intervention group (receiving a MHFA training course comprising two face-to-face 6.5-hour sessions run over two days during the intervention period) or a waitlisted control group (not receiving MHFA training during the study). The source population will be undergraduate nursing students at a large university located in Perth, Western Australia. Efficacy of the MHFA training will be assessed by following the intention-to-treat principle and repeated measures analysis. Given the known burden of mental health disorders among student populations, it is important that universities consider effective strategies to address mental health issues. Providing MHFA training to students offers the advantage of increasing mental health literacy among the student population.
Further, students trained in MHFA are likely to utilise these skills in the broader community when they graduate to the workforce. It is anticipated that this trial will demonstrate the scalability of MHFA in the university environment for pre-service nurses and that implementation of MHFA courses, with comprehensive evaluation, could yield positive improvements in mental health literacy amongst this target group as well as other tertiary student groups. Australian New Zealand Clinical Trials Registry: ACTRN12614000861651.

  6. A novel method of language modeling for automatic captioning in TC video teleconferencing.

    PubMed

    Zhang, Xiaojia; Zhao, Yunxin; Schopp, Laura

    2007-05-01

    We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Due to insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For class LMs, semantic categories are used for class definition on medical terms, names, and digits. The interpolation weights of a mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, as well as the weight-estimation algorithm of FWA are effective on the TC-VTC task. As compared with using mixtures of word LMs with weights estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.
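    A minimal sketch of mixture-LM interpolation with a greedy weight search, in the spirit of (but much simpler than) the forward weight adjustment described above: component models are combined as a weighted sum of probabilities, and weights are nudged whenever that lowers dev-set perplexity. The two component unigram models and the dev set are invented for illustration.

```python
import math

# Two hypothetical component unigram LMs (the paper interpolates
# in-domain class n-grams with out-of-domain word n-grams; unigrams
# keep the sketch short).
lm_med = {"dose": 0.4, "patient": 0.4, "the": 0.2}
lm_gen = {"the": 0.6, "patient": 0.1, "dose": 0.3}

def mix_prob(word, lms, weights):
    """Linear interpolation: P(w) = sum_i lambda_i * P_i(w)."""
    return sum(w * lm.get(word, 1e-6) for lm, w in zip(lms, weights))

def perplexity(dev, lms, weights):
    logp = sum(math.log(mix_prob(w, lms, weights)) for w in dev)
    return math.exp(-logp / len(dev))

def forward_weight_adjust(dev, lms, steps=50, delta=0.05):
    """Greedy search: repeatedly nudge one weight up (renormalizing)
    whenever that lowers dev-set perplexity. A simplification of the
    paper's FWA algorithm."""
    k = len(lms)
    weights = [1.0 / k] * k
    for _ in range(steps):
        best = perplexity(dev, lms, weights)
        improved = False
        for i in range(k):
            trial = list(weights)
            trial[i] += delta
            total = sum(trial)
            trial = [w / total for w in trial]
            if perplexity(dev, lms, trial) < best:
                weights, best, improved = trial, perplexity(dev, lms, trial), True
        if not improved:
            break
    return weights

dev = ["dose", "patient", "dose", "the"]   # mostly in-domain dev text
w = forward_weight_adjust(dev, [lm_med, lm_gen])
```

    On this dev set the in-domain model ends up with the larger weight, the qualitative behaviour the interpolation is meant to produce.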

  7. Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing.

    PubMed

    Thyreau, Benjamin; Sato, Kazunori; Fukuda, Hiroshi; Taki, Yasuyuki

    2018-01-01

    The hippocampus is a particularly interesting target for neuroscience research studies due to its essential role within the human brain. In large human cohort studies, bilateral hippocampal structures are frequently identified and measured to gain insight into human behaviour or genomic variability in neuropsychiatric disorders of interest. Automatic segmentation is performed using various algorithms, with FreeSurfer being a popular option. In this manuscript, we present a method to segment the bilateral hippocampus using a deep-learned appearance model. Deep convolutional neural networks (ConvNets) have shown great success in recent years, due to their ability to learn meaningful features from a mass of training data. Our method relies on the following key novelties: (i) we use a wide and variable training set coming from multiple cohorts, (ii) our training labels come in part from the output of the FreeSurfer algorithm, and (iii) we include synthetic data and use a powerful data augmentation scheme. Our method proves to be robust, and it has fast inference (<30 s total per subject), with the trained model available online (https://github.com/bthyreau/hippodeep). We depict illustrative results and show extensive qualitative and quantitative cohort-wide comparisons with FreeSurfer. Our work demonstrates that deep neural-network methods can easily encode, and even improve, existing anatomical knowledge, even when this knowledge exists in algorithmic form. Copyright © 2017 Elsevier B.V. All rights reserved.
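    One detail worth making concrete is the consistency requirement in segmentation data augmentation: whatever spatial transform is applied to the image must also be applied to its label map, so the pair stays aligned. The flip/shift transforms below are illustrative assumptions only; the paper's actual augmentation scheme is not specified here.

```python
def hflip(img):
    """Horizontal flip of a 2-D array (list of rows)."""
    return [row[::-1] for row in img]

def shift(img, dx):
    """Shift columns right by dx, zero-padding -- a crude spatial jitter."""
    w = len(img[0])
    return [[row[c - dx] if 0 <= c - dx < w else 0 for c in range(w)]
            for row in img]

def augment(image, label):
    """Apply the SAME transform to image and segmentation label so the
    pair stays consistent -- the key invariant in segmentation augmentation."""
    out = [(image, label)]
    out.append((hflip(image), hflip(label)))
    for dx in (-1, 1):
        out.append((shift(image, dx), shift(label, dx)))
    return out

img = [[1, 2, 3],
       [4, 5, 6]]
lab = [[0, 1, 0],
       [0, 1, 0]]
pairs = augment(img, lab)
```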

  8. Lay Health Influencers: How They Tailor Brief Tobacco Cessation Interventions

    PubMed Central

    Yuan, Nicole P.; Castañeda, Heide; Nichter, Mark; Nichter, Mimi; Wind, Steven; Carruth, Lauren; Muramoto, Myra

    2014-01-01

    Interventions tailored to individual smoker characteristics have increasingly received attention in the tobacco control literature. The majority of tailored interventions are generated by computers and administered with printed materials or Web-based programs. The purpose of this study was to examine the tailoring activities of community lay health influencers who were trained to perform face-to-face brief tobacco cessation interventions. Eighty participants of a large-scale, randomized controlled trial completed a 6-week qualitative follow-up interview. A majority of participants (86%) reported that they made adjustments in their intervention behaviors based on individual smoker characteristics, their relationship with the smoker, and/or setting. Situational contexts (i.e., location and timing) primarily played a role after targeted smokers were selected. The findings suggest that lay health influencers benefit from a training curriculum that emphasizes a motivational, person-centered approach to brief cessation interventions. Recommendations for future tobacco cessation intervention trainings are presented. PMID:21986244

  9. Lay health influencers: how they tailor brief tobacco cessation interventions.

    PubMed

    Yuan, Nicole P; Castañeda, Heide; Nichter, Mark; Nichter, Mimi; Wind, Steven; Carruth, Lauren; Muramoto, Myra

    2012-10-01

    Interventions tailored to individual smoker characteristics have increasingly received attention in the tobacco control literature. The majority of tailored interventions are generated by computers and administered with printed materials or web-based programs. The purpose of this study was to examine the tailoring activities of community lay health influencers who were trained to perform face-to-face brief tobacco cessation interventions. Eighty participants of a large-scale, randomized controlled trial completed a 6-week qualitative follow-up interview. A majority of participants (86%) reported that they made adjustments in their intervention behaviors based on individual smoker characteristics, their relationship with the smoker, and/or setting. Situational contexts (i.e., location and timing) primarily played a role after targeted smokers were selected. The findings suggest that lay health influencers benefit from a training curriculum that emphasizes a motivational, person-centered approach to brief cessation interventions. Recommendations for future tobacco cessation intervention trainings are presented.

  10. DESCRIPTIVE ANALYSIS OF DIVALENT SALTS

    PubMed Central

    YANG, HEIDI HAI-LING; LAWLESS, HARRY T.

    2005-01-01

    Many divalent salts (e.g., calcium, iron, zinc) have important nutritional value and are used to fortify food or as dietary supplements. Sensory characterization of some divalent salts in aqueous solutions by untrained judges has been reported in the psychophysical literature, but formal sensory evaluation by trained panels is lacking. To provide this information, a trained descriptive panel evaluated the sensory characteristics of 10 divalent salts including ferrous sulfate, chloride and gluconate; calcium chloride, lactate and glycerophosphate; zinc sulfate and chloride; and magnesium sulfate and chloride. Among the compounds tested, iron compounds were highest in metallic taste; zinc compounds had higher astringency and a glutamate-like sensation; and bitterness was pronounced for magnesium and calcium salts. Bitterness was affected by the anion in ferrous and calcium salts. Results from the trained panelists were largely consistent with the psychophysical literature using untrained judges, but provided a more comprehensive set of oral sensory attributes. PMID:16614749

  11. Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data.

    PubMed

    Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei

    2017-04-01

    In this study, we developed a graph-based semi-supervised learning (SSL) scheme using a deep convolutional neural network (CNN) for breast cancer diagnosis. A CNN usually needs a large amount of labeled data for training and fine-tuning the parameters, whereas our proposed scheme only requires a small portion of labeled data in the training set. Four modules were included in the diagnosis system: data weighing, feature selection, dividing co-training data labeling, and CNN. 3158 regions of interest (ROIs), each containing a mass, extracted from 1874 pairs of mammogram images were used for this study. Among them, 100 ROIs were treated as labeled data while the rest were treated as unlabeled. The area under the curve (AUC) observed in our study was 0.8818, and the accuracy of the CNN was 0.8243 using the mixed labeled and unlabeled data. Copyright © 2016. Published by Elsevier Ltd.
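    Graph-based semi-supervised learning of the general kind used above can be illustrated by simple label propagation on a toy similarity graph: a few labeled nodes are clamped, and unlabeled nodes repeatedly average their neighbours' scores. The graph, labels, and clamping rule below are illustrative assumptions, not the paper's scheme.

```python
def label_propagation(adj, labels, iters=100):
    """Iteratively average neighbours' label scores; labeled nodes are
    clamped. adj[i] lists the neighbours of node i; labels[i] is +1/-1
    for labeled nodes and None for unlabeled ones."""
    n = len(adj)
    score = [l if l is not None else 0.0 for l in labels]
    for _ in range(iters):
        new = list(score)
        for i in range(n):
            if labels[i] is None and adj[i]:
                new[i] = sum(score[j] for j in adj[i]) / len(adj[i])
        score = new
    return [1 if s > 0 else -1 for s in score]

# Toy graph: nodes 0-2 form one tight group, 3-5 another, with a weak
# bridge 2-3; only node 0 (say, malignant, +1) and node 5 (benign, -1)
# carry labels, mimicking a mostly-unlabeled training set.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
labels = [1, None, None, None, None, -1]
pred = label_propagation(adj, labels)
```

    The two labels spread through their respective groups, which is the intuition behind needing only ~100 labeled ROIs out of thousands.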

  12. Perceptual learning effect on decision and confidence thresholds.

    PubMed

    Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano

    2016-10-01

    Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. The EPOCH Project. I. Periodic variable stars in the EROS-2 LMC database

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Won; Protopapas, Pavlos; Bailer-Jones, Coryn A. L.; Byun, Yong-Ik; Chang, Seo-Won; Marquette, Jean-Baptiste; Shin, Min-Su

    2014-06-01

    The EPOCH (EROS-2 periodic variable star classification using machine learning) project aims to detect periodic variable stars in the EROS-2 light curve database. In this paper, we present the first result of the classification of periodic variable stars in the EROS-2 LMC database. To classify these variables, we first built a training set by compiling known variables in the Large Magellanic Cloud area from the OGLE and MACHO surveys. We crossmatched these variables with the EROS-2 sources and extracted 22 variability features from 28 392 light curves of the corresponding EROS-2 sources. We then used the random forest method to classify the EROS-2 sources in the training set. We designed the model to separate not only δ Scuti stars, RR Lyraes, Cepheids, eclipsing binaries, and long-period variables, the superclasses, but also their subclasses, such as RRab, RRc, RRd, and RRe for RR Lyraes, and similarly for the other variable types. The model trained using only the superclasses shows 99% recall and precision, while the model trained on all subclasses shows 87% recall and precision. We applied the trained model to the entire EROS-2 LMC database, which contains about 29 million sources, and found 117 234 periodic variable candidates. Out of these 117 234 periodic variables, 55 285 have not been discovered by either OGLE or MACHO variability studies. This set comprises 1906 δ Scuti stars, 6607 RR Lyraes, 638 Cepheids, 178 Type II Cepheids, 34 562 eclipsing binaries, and 11 394 long-period variables. A catalog of these EROS-2 LMC periodic variable stars is available at http://stardb.yonsei.ac.kr and at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A43

  14. Recognition of Modified Conditioning Sounds by Competitively Trained Guinea Pigs

    PubMed Central

    Ojima, Hisayuki; Horikawa, Junsei

    2016-01-01

    The guinea pig (GP) is an often-used species in hearing research. However, behavioral studies are rare, especially in the context of sound recognition, because of difficulties in training these animals. We examined sound recognition in a social competitive setting in order to examine whether this setting could be used as an easy model. Two starved GPs were placed in the same training arena and compelled to compete for food after hearing a conditioning sound (CS), which was a repeat of almost identical sound segments. Through a 2-week intensive training, animals were trained to demonstrate a set of distinct behaviors solely to the CS. Then, each of them was subjected to generalization tests for recognition of sounds that had been modified from the CS in spectral, fine temporal and tempo (i.e., intersegment interval, ISI) dimensions. Results showed that they discriminated between the CS and band-rejected test sounds but had no preference for a particular frequency range for the recognition. In contrast, sounds modified in the fine temporal domain were largely perceived to be in the same category as the CS, except for the test sound generated by fully reversing the CS in time. Animals also discriminated sounds played at different tempos. Test sounds with ISIs shorter than that of the multi-segment CS were discriminated from the CS, while test sounds with ISIs longer than that of the CS segments were not. For the shorter ISIs, most animals initiated apparently positive food-access behavior as they did in response to the CS, but discontinued it during the sound-on period probably because of later recognition of tempo. Interestingly, the population range and mean of the delay time before animals initiated the food-access behavior were very similar among different ISI test sounds. 
This study, for the first time, demonstrates a broad range of the sound discrimination abilities of the GP and will provide a way to examine tempo perception mechanisms using this animal species. PMID:26858617

  15. In silico prediction of toxicity of phenols to Tetrahymena pyriformis by using genetic algorithm and decision tree-based modeling approach.

    PubMed

    Abbasitabar, Fatemeh; Zare-Shahabadi, Vahid

    2017-04-01

    Risk assessment of chemicals is an important issue in environmental protection; however, there is a huge lack of experimental data for a large number of end-points. The experimental determination of toxicity of chemicals involves high costs and a time-consuming process. In silico tools such as quantitative structure-toxicity relationship (QSTR) models, which are constructed on the basis of computational molecular descriptors, can predict missing data for toxic end-points for existing or even not yet synthesized chemicals. Phenol derivatives are known to be aquatic pollutants. With this background, we aimed to develop an accurate and reliable QSTR model for the prediction of toxicity of 206 phenols to Tetrahymena pyriformis. A multiple linear regression (MLR)-based QSTR was obtained using a powerful descriptor selection tool named the Memorized_ACO algorithm. Statistical parameters of the model were 0.72 and 0.68 for the training-set and test-set R², respectively. To develop a high-quality QSTR model, classification and regression tree (CART) analysis was employed. Two approaches were considered: (1) phenols were classified into different modes of action using CART, and (2) the phenols in the training set were partitioned into several subsets by a tree in such a manner that in each subset a high-quality MLR could be developed. For the first approach, the statistical parameters of the resultant QSTR model improved to 0.83 and 0.75 for the training-set and test-set R², respectively. A genetic algorithm was employed in the second approach to obtain an optimal tree, and it was shown that the final QSTR model provided excellent prediction accuracy for the training and test sets (R² of 0.91 and 0.93, respectively). The mean absolute error for the test set was computed as 0.1615. Copyright © 2016 Elsevier Ltd. All rights reserved.
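    The second approach above (a tree that partitions the training set so that each subset supports a good linear model) can be sketched with a single CART-style split, chosen to minimize the summed squared error of per-subset least-squares fits. The data and single-descriptor setting are invented for illustration; the paper uses many descriptors and a GA-optimized tree.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def sse(xs, ys):
    a, b = fit_line(xs, ys)
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

def best_split(xs, ys):
    """One CART-style split: choose the descriptor threshold that
    minimizes the summed squared error of separate linear fits in the
    two resulting subsets (each subset kept at >= 2 points)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = (float("inf"), None)
    for cut in range(2, len(xs) - 1):
        left, right = order[:cut], order[cut:]
        err = (sse([xs[i] for i in left], [ys[i] for i in left]) +
               sse([xs[i] for i in right], [ys[i] for i in right]))
        thr = (xs[order[cut - 1]] + xs[order[cut]]) / 2
        if err < best[0]:
            best = (err, thr)
    return best

# Toy toxicity data following two regimes: slope 1 below x=5, slope -2 above.
xs = [0, 1, 2, 3, 4, 6, 7, 8, 9, 10]
ys = [0, 1, 2, 3, 4, -12, -14, -16, -18, -20]
err, thr = best_split(xs, ys)
```

    A single global MLR would fit these two regimes poorly; the split recovers the regime boundary, which is the rationale for the tree-plus-local-MLR design.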

  16. Effects of interset whole-body vibration on bench press resistance training in trained and untrained individuals.

    PubMed

    Timon, Rafael; Collado-Mateo, Daniel; Olcina, Guillermo; Gusi, Narcis

    2016-03-01

    Previous studies have demonstrated positive effects of acute vibration exercise on concentric strength and power, but few have observed the effects of vibration exposure on resistance training. The aim of this study was to verify the effects of whole body vibration, applied to the chest via the hands, on bench press resistance training in trained and untrained individuals. Nineteen participants (10 recreationally trained bodybuilders and 9 untrained students) performed two randomized sessions of resistance training on separate days. Each strength session consisted of 3 bench press sets with a load of 75% 1RM to failure in each set, with 2 minutes' rest between sets. All subjects performed the same strength training, either with vibration exposure (12 Hz, 4 mm) for 30 seconds immediately before each bench press set or without vibration. The total number of repetitions, kinematic parameters, blood lactate and perceived exertion were analyzed. In the untrained group, vibration exposure caused a significant increase in mean velocity (from 0.36±0.02 to 0.39±0.03 m/s) and acceleration (from 0.75±0.10 to 0.86±0.09 m/s2), as well as a decrease in perceived effort (from 8±0.57 to 7.35±0.47) in the first bench press set, but no change was observed in the third bench press set. In the recreationally trained bodybuilders, vibration exposure did not cause any improvement in bench press resistance training performance. These results suggest that vibration exposure applied just before the bench press exercise could be a good practice for untrained individuals to implement in resistance training.

  17. A prediction model of drug-induced ototoxicity developed by an optimal support vector machine (SVM) method.

    PubMed

    Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong

    2014-08-01

    Drug-induced ototoxicity is an important toxic side effect that needs to be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, making them unsuitable for large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. In this investigation, we therefore established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also built on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, offering prediction accuracies of 85.33% and 83.05% on two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
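
    Methods of the GA-CG-SVM family couple a genetic algorithm with cross-validated grid search to tune an SVM's hyperparameters (typically C and gamma). As a hedged illustration, a generic real-valued GA of the kind that could drive such a search is sketched below; the population size, operators, and the toy fitness function are illustrative choices, not the paper's:

```python
import numpy as np

def ga_search(fitness, bounds, pop=20, gens=15, rng=None):
    """Maximise `fitness` over box `bounds` with a tiny elitist GA:
    keep the best half, breed children by averaging random parents,
    then mutate with Gaussian jitter clipped to the bounds."""
    if rng is None:
        rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        elite = P[np.argsort(f)[::-1][:pop // 2]]
        parents = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        children = parents.mean(axis=1)                       # crossover
        children += rng.normal(0, 0.1 * (hi - lo), children.shape)  # mutation
        P = np.clip(np.vstack([elite, children]), lo, hi)
    f = np.array([fitness(ind) for ind in P])
    return P[f.argmax()], float(f.max())
```

In an SVM setting, `fitness` would be the cross-validation accuracy of a model trained with the candidate (C, gamma) pair.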

  18. hERG blocking potential of acids and zwitterions characterized by three thresholds for acidity, size and reactivity.

    PubMed

    Nikolov, Nikolai G; Dybdahl, Marianne; Jónsdóttir, Svava Ó; Wedebye, Eva B

    2014-11-01

    Ionization is a key factor in hERG K(+) channel blocking, and acids and zwitterions are known to be less probable hERG blockers than bases and neutral compounds. However, a considerable number of acidic compounds block hERG, and the physico-chemical attributes which discriminate acidic blockers from acidic non-blockers have not been fully elucidated. We propose a rule for prediction of hERG blocking by acids and zwitterionic ampholytes based on thresholds for only three descriptors related to acidity, size and reactivity. The training set of 153 acids and zwitterionic ampholytes was predicted with a concordance of 91% by a decision tree based on the rule. Two external validations were performed with sets of 35 and 48 observations, respectively, both showing concordances of 91%. In addition, a global QSAR model of hERG blocking was constructed based on a large diverse training set of 1374 chemicals covering all ionization classes, externally validated showing high predictivity and compared to the decision tree. The decision tree was found to be superior for the acids and zwitterionic ampholytes classes. Copyright © 2014 Elsevier Ltd. All rights reserved.
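
    A rule built from thresholds on three descriptors can be stated as a one-line conjunction. The sketch below is purely schematic: the descriptor choices, threshold directions, and cutoff values are hypothetical placeholders, not the values derived in the paper:

```python
def flags_herg_block(pka, mol_weight, reactivity,
                     pka_min=6.0, mw_min=280.0, reactivity_min=0.4):
    """Toy three-threshold rule for acids/zwitterionic ampholytes:
    flag a compound as a potential hERG blocker only if it clears an
    acidity, a size, and a reactivity cutoff. All three threshold values
    and their directions are illustrative assumptions."""
    return (pka >= pka_min
            and mol_weight >= mw_min
            and reactivity >= reactivity_min)
```

The appeal of such a rule over a black-box QSAR is that each failed threshold directly names the property responsible for the prediction.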

  19. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    NASA Astrophysics Data System (ADS)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  20. The effectiveness of three sets of school-based instructional materials and community training on the acquisition and generalization of community laundry skills by students with severe handicaps.

    PubMed

    Morrow, S A; Bates, P E

    1987-01-01

    This study examined the effectiveness of three sets of school-based instructional materials and community training on acquisition and generalization of a community laundry skill by nine students with severe handicaps. School-based instruction involved artificial materials (pictures), simulated materials (cardboard replica of a community washing machine), and natural materials (modified home model washing machine). Generalization assessments were conducted at two different community laundromats, on two machines represented fully by the school-based instructional materials and two machines not represented fully by these materials. After three phases of school-based instruction, the students were provided ten community training trials in one laundromat setting and a final assessment was conducted in both the trained and untrained community settings. A multiple probe design across students was used to evaluate the effectiveness of the three types of school instruction and community training. After systematic training, most of the students increased their laundry performance with all three sets of school-based materials; however, generalization of these acquired skills was limited in the two community settings. Direct training in one of the community settings resulted in more efficient acquisition of the laundry skills and enhanced generalization to the untrained laundromat setting for most of the students. Results of this study are discussed in regard to the issue of school versus community-based instruction and recommendations are made for future research in this area.

  1. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers

    PubMed Central

    2018-01-01

    Hyperspectral image classification with a limited number of training samples and without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods. PMID:29304512
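
    The selection step can be sketched as follows: given per-sample class membership estimates (e.g., from an initial classifier), compute a fuzziness score and keep the most ambiguous samples. This is a minimal sketch of the idea, not the authors' implementation, and the membership matrix here is assumed to come from elsewhere:

```python
import numpy as np

def fuzziness(U, eps=1e-12):
    """Per-sample fuzziness of a membership matrix U (n_samples x n_classes,
    rows summing to 1); maximal when memberships are uniform, minimal when
    one class dominates."""
    U = np.clip(U, eps, 1 - eps)
    return -(U * np.log(U) + (1 - U) * np.log(1 - U)).mean(axis=1)

def select_most_fuzzy(U, k):
    """Indices of the k most ambiguous samples -- the candidates to add
    to the training set in a fuzziness-based active learning loop."""
    return np.argsort(fuzziness(U))[-k:]
```

An active learning loop would retrain the classifier after labelling the selected samples and repeat until the budget is spent.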

  2. Towards harmonized seismic analysis across Europe using supervised machine learning approaches

    NASA Astrophysics Data System (ADS)

    Zaccarelli, Riccardo; Bindi, Dino; Cotton, Fabrice; Strollo, Angelo

    2017-04-01

    In the framework of the Thematic Core Services for Seismology of EPOS-IP (European Plate Observing System-Implementation Phase), a service for disseminating a regionalized logic-tree of ground motion models for Europe is under development. While for the Mediterranean area the wide availability of strong motion data, qualified and disseminated through the Engineering Strong Motion database (ESM-EPOS), supports the development of both selection criteria and ground motion models, for the low-to-moderate seismic regions of continental Europe the development of ad-hoc models using weak motion recordings of moderate earthquakes is unavoidable. The aim of this work is to present a platform for creating application-oriented earthquake databases by retrieving information from EIDA (European Integrated Data Archive) and applying supervised learning models for earthquake record selection and processing suitable for any specific application of interest. Supervised learning models, i.e. models that infer a function from labelled training data, have been extensively used in fields such as spam detection, speech and image recognition, and pattern recognition in general. They are therefore well suited to detecting anomalies and performing semi- to fully-automated filtering of large waveform data sets, easing the effort of (or replacing) human expertise. Because supervised learning algorithms are capable of learning from a relatively small training set to predict and categorize unseen data, their advantage when processing large amounts of data is crucial. Moreover, their intrinsic ability to make data-driven predictions makes them suitable (and preferable) in those cases where explicit detection algorithms would be unfeasible or too heuristic. 
In this study, we consider relatively simple statistical classifiers (e.g., Naive Bayes, Logistic Regression, Random Forest, SVMs) where labels are assigned to waveform data based on the "recognized classes" needed for our use case. These classes might form a simple binary case (e.g., "good for analysis" vs "bad") or a more complex one (e.g., "good for analysis" vs "low SNR", "multi-event", "bad coda envelope"). It is important to stress that our approach can be generalized to any use case by providing, as in any supervised approach, an adequate training set of labelled data, a feature set, a statistical classifier, and finally model validation and evaluation. Examples of use cases considered to develop the system prototype are the characterization of ground motion in low-seismicity areas; harmonized spectral analysis across Europe for source and attenuation studies; magnitude calibration; and coda analysis for attenuation studies.
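
    The record-screening idea can be sketched end to end with hand-crafted features and one of the simple classifiers named above (Naive Bayes). Everything here is a toy assumption, not the EIDA pipeline: the traces are synthetic, the two features (an SNR estimate and a peak amplitude) are placeholders, and the binary classes stand in for "good for analysis" vs "low SNR":

```python
import numpy as np

def snr_db(trace, noise_len=100):
    """Crude SNR: ratio of post-arrival to pre-arrival power, in dB."""
    noise, signal = trace[:noise_len], trace[noise_len:]
    return 10 * np.log10(signal.var() / (noise.var() + 1e-12))

class GaussianNB:
    """Minimal Gaussian naive Bayes: one Gaussian per class per feature."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # per-class Gaussian log-likelihood summed over features
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]
```

A production system would of course validate on held-out events and use a richer feature set (coda envelope, spectral shape, etc.).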

  3. The Effects of Transfer in Teaching Vocabulary to School Children: An Analysis of the Dependencies between Lists of Trained and Non-Trained Words

    ERIC Educational Resources Information Center

    Frost, Jørgen; Ottem, Ernst; Hagtvet, Bente E.; Snow, Catherine E.

    2016-01-01

    In the present study, 81 Norwegian students were taught the meaning of words by the Word Generation (WG) method and 51 Norwegian students were taught by an approach inspired by the Thinking Schools (TS) concept. Two sets of words were used: a set of words to be trained and a set of non-trained control words. The two teaching methods yielded no…

  4. Preventing a Relapse or Setting Goals? Elucidating the Impact of Post-Training Transfer Interventions on Training Transfer Performance

    ERIC Educational Resources Information Center

    Rahyuda, Agoes Ganesha; Soltani, Ebrahim; Syed, Jawad

    2018-01-01

    Based on a review of the literature on post-training transfer interventions, this paper offers a conceptual model that elucidates potential mechanisms through which two types of post-training transfer intervention (relapse prevention and proximal plus distal goal setting) influence the transfer of training. We explain how the application of…

  5. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    PubMed

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical use and in trials for the treatment of leukemia and other cancers. These successes, together with the appearance of drug resistance in some patients, have raised significant interest in and efforts toward discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested on 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%-95.01% of inhibitors and 99.81%-99.90% of non-inhibitors in 5-fold cross validation studies. SVM trained on the 1,703 inhibitors reported before 2011 and the 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem compounds, 1,496 (0.89%) of 168K MDDR compounds, and 719 (7.73%) of the 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false-hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.
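
    The two figures of merit in this abstract, yield and false-hit rate, can be computed from any scored screening output. A minimal helper (the scores and labels below are made up for illustration):

```python
import numpy as np

def screening_stats(scores, is_inhibitor, threshold):
    """Yield = fraction of true inhibitors recovered at or above the
    threshold; false-hit rate = fraction of predicted hits that are
    not inhibitors."""
    hits = scores >= threshold
    yield_frac = hits[is_inhibitor].mean() if is_inhibitor.any() else 0.0
    false_hit_rate = (hits & ~is_inhibitor).sum() / max(hits.sum(), 1)
    return float(yield_frac), float(false_hit_rate)
```

Sweeping `threshold` trades the two quantities off against each other, which is the comparison the abstract draws between SVM and similarity-based screening.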

  6. Comparison of molecular breeding values based on within- and across-breed training in beef cattle.

    PubMed

    Kachman, Stephen D; Spangler, Matthew L; Bennett, Gary L; Hanford, Kathryn J; Kuehn, Larry A; Snelling, Warren M; Thallman, R Mark; Saatchi, Mahdi; Garrick, Dorian J; Schnabel, Robert D; Taylor, Jeremy F; Pollak, E John

    2013-08-16

    Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. 
For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set.

  7. A selection of giant radio sources from NVSS

    DOE PAGES

    Proctor, D. D.

    2016-06-01

    Results of the application of pattern-recognition techniques to the problem of identifying giant radio sources (GRSs) from the data in the NVSS catalog are presented, and issues affecting the process are explored. Decision-tree pattern-recognition software was applied to training-set source pairs developed from known NVSS large-angular-size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRSs for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20' and a minimum component area of 1.87 square arcmin at the 1.4 mJy level. The importance of comparing the resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 ± 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher-ranked sources were visually screened, and a table of over 1600 candidates, including morphological annotation, is presented. These systems include doubles and triples, wide-angle tail and narrow-angle tail, S- or Z-shaped systems, and core-jets and resolved cores. In conclusion, while some resolved-lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.

  8. Effects of draught load exercise and training on calcium homeostasis in horses.

    PubMed

    Vervuert, I; Coenen, M; Zamhöfer, J

    2005-01-01

    This study was conducted to investigate the effects of draught load exercise on calcium (Ca) homeostasis in young horses. Five 2-year-old untrained Standardbred horses were studied in a 4-month training programme. All exercise workouts were performed on a treadmill at a 6% incline and with a constant draught load of 40 kg (0.44 kN). The training programme started with a standardized exercise test (SET 1; six incremental steps of 5 min duration each, first step 1.38 m/s, stepwise increase by 0.56 m/s). A training programme was then initiated which consisted of low-speed exercise sessions (LSE; constant velocity at 1.67 m/s for 60 min, 48 training sessions in total). After the 16th and 48th LSE sessions, SETs (SET 2: middle of training period, SET 3: finishing training period) were performed again under the identical test protocol of SET 1. Blood samples for blood lactate, plasma total Ca, blood ionized calcium (Ca(2+)), blood pH, plasma inorganic phosphorus (P(i)) and plasma intact parathyroid hormone (PTH) were collected before, during and after SETs, and before and after the first, 16th, 32nd and 48th LSE sessions. During SETs there was a decrease in ionized Ca(2+) and a rise in lactate, P(i) and intact PTH. The LSEs resulted in an increase in pH and P(i), whereas lactate, ionized Ca(2+), total Ca and intact PTH were not affected. No changes in Ca metabolism were detected in the course of training. Results of this study suggest that the type of exercise influences Ca homeostasis and intact PTH response, but that these effects are not influenced in the course of the training period.

  9. TuMore: generation of synthetic brain tumor MRI data for deep learning based segmentation approaches

    NASA Astrophysics Data System (ADS)

    Lindner, Lydia; Pfarrkirchner, Birgit; Gsaxner, Christina; Schmalstieg, Dieter; Egger, Jan

    2018-03-01

    Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time consuming and neither accurate nor reliable, there exists a need for objective, robust and fast automated segmentation methods that provide competitive performance. Therefore, deep learning based approaches are gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. For this reason, we propose a method that allows the creation of a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors - glioblastomas more specifically - and the corresponding ground truth, which can subsequently be used to train deep neural networks.
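
    The core of any such generator is that every synthetic image comes paired with an exact ground-truth mask. The toy sketch below paints a homogeneous spherical "lesion" into a volume; a real generator like the one described would use far more realistic shape and texture models, so treat this only as an illustration of the image/ground-truth pairing:

```python
import numpy as np

def add_synthetic_lesion(volume, center, radius, intensity):
    """Insert a homogeneous spherical lesion into a 3D volume and return
    the modified image together with its exact ground-truth mask."""
    zz, yy, xx = np.indices(volume.shape)
    mask = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
            + (xx - center[2]) ** 2) <= radius ** 2
    out = volume.copy()
    out[mask] = intensity
    return out, mask
```

Sampling `center`, `radius`, and `intensity` at random yields an arbitrarily large labelled training set from a handful of healthy scans.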

  10. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    PubMed

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
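
    The dropout mechanism itself is simple to state: during training, each hidden unit is zeroed with some probability and the survivors are rescaled so the expected activation is unchanged ("inverted" dropout); at prediction time the layer is an identity. A minimal sketch of that layer (not the authors' implementation):

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero each unit with probability `rate` during
    training, scaling survivors by 1/(1-rate) so the expected output
    equals the input; an identity at prediction time."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)
```

In a QSAR ANN this would sit between hidden layers; the finding above is that the best `rate` tracks the signal-to-noise ratio of the descriptor set.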

  11. Improving Quantitative Structure-Activity Relationship Models using Artificial Neural Networks Trained with Dropout

    PubMed Central

    Mendenhall, Jeffrey; Meiler, Jens

    2016-01-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery (LB-CADD) pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both Enrichment false positive rate (FPR) and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22–46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods. PMID:26830599

  12. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increased time required for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared this with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  13. Specificity, Privacy, and Degeneracy in the CD4 T Cell Receptor Repertoire Following Immunization

    PubMed Central

    Sun, Yuxin; Best, Katharine; Cinelli, Mattia; Heather, James M.; Reich-Zeliger, Shlomit; Shifrut, Eric; Friedman, Nir; Shawe-Taylor, John; Chain, Benny

    2017-01-01

    T cells recognize antigen using a large and diverse set of antigen-specific receptors created by a complex process of imprecise somatic cell gene rearrangements. In response to antigen/receptor binding, specific T cells then divide to form memory and effector populations. We apply high-throughput sequencing to investigate the global changes in T cell receptor sequences following immunization with ovalbumin (OVA) and adjuvant, to understand how adaptive immunity achieves specificity. Each immunized mouse contained a predominantly private but related set of expanded CDR3β sequences. We used machine learning to identify common patterns which distinguished repertoires from mice immunized with adjuvant with and without OVA. The CDR3β sequences were deconstructed into sets of overlapping contiguous amino acid triplets. The frequencies of these motifs were used to train the linear programming boosting algorithm (LPBoost) to classify between TCR repertoires. LPBoost could distinguish between the two classes of repertoire with accuracies above 80%, using a small subset of triplet sequences present at defined positions along the CDR3. The results suggest a model in which such motifs confer degenerate antigen specificity in the context of a highly diverse and largely private set of T cell receptors. PMID:28450864
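
    The triplet-deconstruction step is straightforward to sketch: slide a window of three residues along each CDR3β sequence and turn motif counts into a frequency vector for the classifier. This simplified version ignores the positional information the paper also uses, and the sequences are illustrative:

```python
from collections import Counter

def triplets(cdr3):
    """Overlapping contiguous amino-acid triplets of a CDR3 sequence."""
    return [cdr3[i:i + 3] for i in range(len(cdr3) - 2)]

def motif_frequencies(repertoire, vocabulary):
    """Frequency of each vocabulary motif across a repertoire -- the kind
    of feature vector fed to a boosting classifier such as LPBoost."""
    counts = Counter(t for seq in repertoire for t in triplets(seq))
    total = sum(counts.values()) or 1
    return [counts[m] / total for m in vocabulary]
```

A classifier then only has to weight a small subset of these motif frequencies, which is what makes the learned patterns interpretable.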

  14. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Mather, Barry A

    A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning studies on a granular scale, such as detailed PV interconnection studies.
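
    Multi-resolution analysis in this sense decomposes a signal into a coarse approximation plus detail (variability) components at successive scales, and the decomposition is exactly invertible. The one-filter (Haar wavelet) numpy sketch below illustrates the principle; the actual wavelet family and depth used for the load data are not specified here:

```python
import numpy as np

def haar_step(x):
    """One level of a Haar transform: pairwise averages (approximation)
    and pairwise differences (detail), scaled to preserve energy."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def haar_mra(x, levels):
    """Decompose x into `levels` detail bands plus a final approximation."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    return a, details
```

Synthesis then works in the other direction: detail bands drawn from the variability-class library are recombined with an approximation via the inverse transform.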

  15. Team Training and Retention of Skills Acquired Above Real Time Training on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Ali, Syed Friasat; Guckenberger, Dutch; Crane, Peter; Rossi, Marcia; Williams, Mayard; Williams, Jason; Archer, Matt

    2000-01-01

    Above Real-Time Training (ARTT) is training acquired on a real-time simulator that has been modified to present events at a faster pace than normal. Experiments on the training of pilots performed by NASA engineers (Kolf in 1973, Hoey in 1976) and others (Guckenberger, Crane and their associates in the nineties) have shown that, in comparison with real-time training (RTT), ARTT provides the following benefits: increased rate of skill acquisition, reduced simulator and aircraft training time, and more effective training for emergency procedures. Two sets of experiments were performed; they are reported in professional conferences and the respective papers are included in this report. The retention of the effects of ARTT was studied in the first set of experiments and the use of ARTT as top-off training was examined in the second set. In ARTT, the pace of events was 1.5 times the pace in RTT. In both sets of experiments, university students were trained to perform an aerial gunnery task. The training unit was equipped with a joystick and a throttle. The student acted as a nose gunner in a hypothetical two-place attack aircraft. The flight simulation software was installed on a Universal Distributed Interactive Simulator platform supplied by ECC International of Orlando, Florida. In the first set of experiments, two training programs, RTT or ARTT, were used. Students were then tested in real time on more demanding scenarios, either immediately after training or two days later. The effects of ARTT did not decrease over a two-day retention interval, and ARTT was more time efficient than real-time training; therefore, equal test performance could be achieved with less clock time spent in the simulator. In the second set of experiments, three training programs, RTT, ARTT, or RARTT, were used. In RTT, students received 36 minutes of real time training. In ARTT, students received 36 minutes of above real time training. 
In RARTT, students received 18 minutes of real-time training followed by 18 minutes of above real-time training as top-off training. Students were then tested in real time on more demanding scenarios. The use of ARTT as top-off training after RTT offered better training than RTT alone or ARTT alone. It is, however, suggested that a similar experiment be conducted on a relatively more complex task with a larger sample of participants. Within the proposed duration of the research effort, the setting up of experiments and trial runs on using ARTT for team training were also scheduled, but they could not be accomplished due to extraordinary challenges faced in developing the required software configuration. Team training is, however, scheduled in a future study sponsored by NASA at Tuskegee University.

  16. Robust point matching via vector field consensus.

    PubMed

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
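The latent-variable MAP-EM scheme described above can be illustrated with a deliberately simplified sketch in which the paper's nonparametric vector field is replaced by a single 2D translation. The function name, the mixing weight `gamma`, and the uniform outlier density are illustrative assumptions, not the authors' implementation:

```python
import math

def em_inlier_consensus(X, Y, iters=50, sigma2=1.0, gamma=0.9, outlier_density=1e-3):
    """Toy EM: estimate a translation t with Y ~ X + t, with latent
    inlier/outlier indicators, echoing the paper's MAP-EM scheme
    (their vector field is nonparametric; here it is one translation)."""
    t = [0.0, 0.0]
    n = len(X)
    p = [0.5] * n
    for _ in range(iters):
        # E-step: posterior probability that each putative match is an inlier
        p = []
        for x, y in zip(X, Y):
            r2 = (y[0] - x[0] - t[0]) ** 2 + (y[1] - x[1] - t[1]) ** 2
            inlier = gamma * math.exp(-r2 / (2 * sigma2)) / (2 * math.pi * sigma2)
            p.append(inlier / (inlier + (1 - gamma) * outlier_density))
        # M-step: weighted re-estimation of translation, variance, mixing weight
        w = sum(p) or 1e-12
        t = [sum(pi * (y[k] - x[k]) for pi, (x, y) in zip(p, zip(X, Y))) / w
             for k in (0, 1)]
        sigma2 = max(1e-6, sum(pi * ((y[0] - x[0] - t[0]) ** 2 +
                                     (y[1] - x[1] - t[1]) ** 2)
                               for pi, (x, y) in zip(p, zip(X, Y))) / (2 * w))
        gamma = min(max(w / n, 1e-3), 1 - 1e-3)
    return t, p
```

Starting `sigma2` at a large value, as the abstract suggests, lets early iterations weight all matches softly before the variance shrinks onto the consensus set.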

  17. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uehara, Takeki, E-mail: takeki.uehara@shionogi.co.jp; Toxicogenomics Informatics Project, National Institute of Biomedical Innovation, 7-6-8 Asagi, Ibaraki, Osaka 567-0085; Minowa, Yohsuke

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of the potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System, developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. A support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained for several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics of the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict the hepatocarcinogenicity of chemicals. The optimized model, consisting of 9 probes, had 99% sensitivity and 97% specificity. This model enables us to detect genotoxic as well as non-genotoxic hepatocarcinogens.
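Wrapper-type feature selection, as used above, wraps a classifier inside the search for informative features. A minimal sketch assuming a greedy forward search scored by leave-one-out accuracy, with a nearest-centroid classifier substituted for the paper's support vector machine; all names are hypothetical:

```python
import statistics

def loo_accuracy(X, y, feats, classify):
    # leave-one-out accuracy using only the selected feature indices
    hits = 0
    for i in range(len(X)):
        train = [(row, lab) for j, (row, lab) in enumerate(zip(X, y)) if j != i]
        hits += classify(train, X[i], feats) == y[i]
    return hits / len(X)

def nearest_centroid(train, x, feats):
    # stand-in classifier; the paper couples the wrapper with an SVM instead
    best, best_d = None, float("inf")
    for lab in set(lab for _, lab in train):
        rows = [row for row, l in train if l == lab]
        cen = [statistics.mean(r[f] for r in rows) for f in feats]
        d = sum((x[f] - c) ** 2 for f, c in zip(feats, cen))
        if d < best_d:
            best, best_d = lab, d
    return best

def wrapper_select(X, y, n_feats, classify=nearest_centroid):
    """Greedy forward wrapper selection: repeatedly add the feature whose
    inclusion most improves the wrapped classifier's LOO accuracy."""
    selected = []
    for _ in range(n_feats):
        scores = [(loo_accuracy(X, y, selected + [f], classify), f)
                  for f in range(len(X[0])) if f not in selected]
        _, f = max(scores)
        selected.append(f)
    return selected
```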

  18. Randomized controlled trial of strength training in post-polio patients.

    PubMed

    Chan, K Ming; Amirjani, Nasim; Sumrain, Mae; Clarke, Anita; Strohschein, Fay J

    2003-03-01

    Many post-polio patients develop new muscle weakness decades after the initial illness. However, its mechanism and treatment are controversial. The purpose of this study was to test the hypotheses that: (1) after strength training, post-polio patients show strength improvement comparable to that seen in the healthy elderly; (2) such training does not have a deleterious effect on motor unit (MU) survival; and (3) part of the strength improvement is due to an increase in voluntary motor drive. After baseline measures including maximum voluntary contraction force, voluntary activation index, motor unit number estimate, and the tetanic tension of the thumb muscles had been determined, 10 post-polio patients with hand involvement were randomized to either the training or control group. The progressive resistance training program consisted of three sets of eight isometric contractions, three times weekly for 12 weeks. Seven healthy elderly were also randomized and trained in a similar manner. Changes in the baseline parameters were monitored once every 4 weeks throughout the training period. The trained post-polio patients showed a significant improvement in their strength (P < 0.05). The magnitude of gain was greater than that seen in the healthy elderly (mean +/- SE, 41 +/- 16% vs. 29 +/- 8%). The training did not adversely affect MU survival and the improvement was largely attributable to an increase in voluntary motor drive. We therefore conclude that moderate intensity strength training is safe and effective in post-polio patients.

  19. Systematic review of skills transfer after surgical simulation-based training.

    PubMed

    Dawe, S R; Pena, G N; Windsor, J A; Broeders, J A J L; Cregan, P C; Hewett, P J; Maddern, G J

    2014-08-01

    Simulation-based training assumes that skills are directly transferable to the patient-based setting, but few studies have correlated simulated performance with surgical performance. A systematic search strategy was undertaken to find studies published since the last systematic review, published in 2007. Inclusion of articles was determined using a predetermined protocol, independent assessment by two reviewers and a final consensus decision. Studies that reported on the use of surgical simulation-based training and assessed the transferability of the acquired skills to a patient-based setting were included. Twenty-seven randomized clinical trials and seven non-randomized comparative studies were included. Fourteen studies investigated laparoscopic procedures, 13 endoscopic procedures and seven other procedures. These studies provided strong evidence that participants who reached proficiency in simulation-based training performed better in the patient-based setting than their counterparts who did not have simulation-based training. Simulation-based training was equally as effective as patient-based training for colonoscopy, laparoscopic camera navigation and endoscopic sinus surgery in the patient-based setting. These studies strengthen the evidence that simulation-based training, as part of a structured programme and incorporating predetermined proficiency levels, results in skills transfer to the operative setting. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.

  20. RON is not a prognostic marker for resectable pancreatic cancer.

    PubMed

    Tactacan, Carole M; Chang, David K; Cowley, Mark J; Humphrey, Emily S; Wu, Jianmin; Gill, Anthony J; Chou, Angela; Nones, Katia; Grimmond, Sean M; Sutherland, Robert L; Biankin, Andrew V; Daly, Roger J

    2012-09-07

    The receptor tyrosine kinase RON exhibits increased expression during pancreatic cancer progression and promotes migration, invasion and gemcitabine resistance of pancreatic cancer cells in experimental models. However, the prognostic significance of RON expression in pancreatic cancer is unknown. RON expression was characterized in several large cohorts, including a prospective study, totaling 492 pancreatic cancer patients and relationships with patient outcome and clinico-pathologic variables were assessed. RON expression was associated with outcome in a training set, but this was not recapitulated in the validation set, nor was there any association with therapeutic responsiveness in the validation set or the prospective study. Although RON is implicated in pancreatic cancer progression in experimental models, and may constitute a therapeutic target, RON expression is not associated with prognosis or therapeutic responsiveness in resected pancreatic cancer.

  1. Pilot Study: The Role of Predeployment Ethics Training, Professional Ethics, and Religious Values on Naval Physicians' Ethical Decision Making.

    PubMed

    Gaidry, Alicia D; Hoehner, Paul J

    2016-08-01

    Military physicians serving overseas in cross-cultural settings face the challenge of meeting patients' needs and adhering to their personal and professional ethics while abiding by military obligations and duties. Predeployment ethics training for Naval physicians continues to be received in many forms, if received at all, and has largely not addressed their specific roles as medical providers in the military. This study explores the perceived effectiveness of predeployment ethics training received by Naval physicians. Additionally, it considers the contribution of different types of ethics training, religious values, and professional ethics to Naval physicians' perceived ability to effectively manage ethically challenging scenarios while on deployment. A total of 49 Naval physicians participated in an online survey; 16.3% reported not receiving any form of ethics training before deployment. Of those who reported receiving ethics training before deployment, 92.7% found the ethics training received was helpful in some way while on deployment. While a medical school course was most contributory overall to their ability to handle ethically difficult situations while on deployment (70.7%), what most Naval physicians felt would help them better handle these types of situations would be a mandatory military training/military course (63.2%) or personal mentorship (57.9%). Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  2. Transfer of Mindfulness Training to the Work Setting: A Qualitative Study in a Health Care System.

    PubMed

    Lyddy, Christopher J; Schachter, Yotam; Reyer, Amy; Julliard, Kell

    2016-01-01

    Mindfulness training is now commonly offered as professional development for health care practitioners. Understanding how health care practitioners adopt mindfulness practices is limited, which poses a hurdle to the development of effective mindfulness training programs. To explore how health professionals use and perceive mindfulness practices at work, we conducted an exploratory qualitative study at a large multicomponent inner-city health system. All participants were self-selected health professionals who attended at least one mindfulness training. Training content was derived from the Tergar Meditation Community's nonsectarian Joy of Living program and focused on calming the mind using a flexible and broadly applicable approach. Transcribed interview data were examined using thematic analysis. Individuals receiving mindfulness training varied substantially in their subsequent adoption and utilization of these practices. Interviewees' experiences overall suggest that the workplace presents a relatively challenging but nonetheless viable environment for being mindful. Health care workers relied on more informal practice models than on formal meditation practice routines while at work. Factors reported by some individuals to inhibit effective mindfulness practice supported mindfulness for others, and overall displayed equivocal effects. Adoption and integration of mindfulness practices within the workplace are feasible yet vary significantly by practice type, situation, and the individual. Greater understanding of how individuals adopt workplace mindfulness training could improve future intervention research while clarifying optimal mindfulness training approaches.

  3. Differences in Physiological Responses to Interval Training in Cyclists With and Without Interval Training Experience

    PubMed Central

    Hebisz, Rafal; Borkowski, Jacek; Zatoń, Marek

    2016-01-01

    The aim of this study was to determine differences in glycolytic metabolite concentrations and work output in response to an all-out interval training session in 23 cyclists with at least 2 years of interval training experience (E) and those inexperienced (IE) in this form of training. The intervention involved subsequent sets of maximal intensity exercise on a cycle ergometer. Each set comprised four 30 s repetitions interspersed with 90 s recovery periods; sets were repeated when blood pH returned to 7.3. Measurements of post-exercise hydrogen (H+) and lactate ion (LA-) concentrations and work output were taken. The experienced cyclists performed significantly more sets of maximal efforts than the inexperienced athletes (5.8 ± 1.2 vs. 4.3 ± 0.9 sets, respectively). Work output decreased in each subsequent set in the IE group and only in the last set in the E group. Distribution of power output changed only in the E group; power decreased in the initial repetitions of each set only to increase in the final repetitions. H+ concentration decreased in the third, penultimate, and last sets in the E group and in each subsequent set in the IE group. LA- decreased in the last set in both groups. In conclusion, the experienced cyclists were able to repeatedly induce elevated levels of lactic acidosis. Power output distribution changed with decreased acid–base imbalance. In this way, this group could compensate for a decreased anaerobic metabolism. The above factors allowed cyclists experienced in interval training to perform more sets of maximal exercise without a decrease in power output compared with inexperienced cyclists. PMID:28149346

  4. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the set of input data a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time/expense of training the system.
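The preprocessing described above is closely related to projecting inputs onto the leading principal directions of the centered input matrix. A pure-Python power-iteration sketch of that idea (not the patent's exact extraction procedure; function names are illustrative):

```python
def top_singular_directions(X, k, iters=100):
    """Power iteration for the leading singular directions of the centered
    input matrix, a stand-in for the patent's orthogonal singular vectors."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]

    def cov_times(v):
        # compute (Xc^T Xc) v without forming the covariance matrix
        s = [sum(xj * vj for xj, vj in zip(x, v)) for x in Xc]
        return [sum(si * x[j] for si, x in zip(s, Xc)) for j in range(d)]

    comps = []
    for _ in range(k):
        v = [1.0 / d ** 0.5] * d
        for _ in range(iters):
            w = cov_times(v)
            for c in comps:  # deflate directions already found
                dot = sum(wi * ci for wi, ci in zip(w, c))
                w = [wi - dot * ci for wi, ci in zip(w, c)]
            norm = sum(wi * wi for wi in w) ** 0.5
            if norm < 1e-12:
                break  # no variance left along any new direction
            v = [wi / norm for wi in w]
        comps.append(v)
    return mean, comps

def project(X, mean, comps):
    # represent each input by its coordinates along the retained directions
    return [[sum((xj - mj) * cj for xj, mj, cj in zip(x, mean, c))
             for c in comps] for x in X]
```

Feeding `project(...)` outputs to the network instead of the raw inputs is the sense in which the representation is "simplified" while keeping the high-variance structure.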

  5. Returners Exhibit Greater Jumping Performance Improvements During a Peaking Phase Compared With New Players on a Volleyball Team.

    PubMed

    Bazyler, Caleb D; Mizuguchi, Satoshi; Kavanaugh, Ashley A; McMahon, John J; Comfort, Paul; Stone, Michael H

    2018-06-21

    To determine if jumping-performance changes during a peaking phase differed among returners and new players on a female collegiate volleyball team and to determine which variables best explained the variation in performance changes. Fourteen volleyball players were divided into 2 groups, returners (n = 7) and new players (n = 7), who completed a 5-wk peaking phase prior to conference championships. Players were tested at baseline before the preseason on measures of the vastus lateralis cross-sectional area using ultrasonography, estimated back-squat 1-repetition maximum, countermovement jump height (JH), and relative peak power on a force platform. Jumping performance, rating of perceived exertion training load, and sets played were recorded weekly during the peaking phase. There were moderate to very large (P < .01, Glass Δ = 1.74) and trivial to very large (P = .07, Δ = 1.09) differences in JH and relative peak power changes in favor of returners over new players, respectively, during the peaking phase. Irrespective of group, 7 of 14 players achieved peak JH 2 wk after the initial overreach. The number of sets played (r = .78, P < .01) and the athlete's preseason relative 1-repetition maximum (r = .54, P = .05) were the strongest correlates of JH changes during the peaking phase. Returners achieved greater improvements in jumping performance during the peaking phase compared with new players, which may be explained by the returners' greater relative maximal strength, time spent competing, and training experience. Thus, volleyball and strength coaches should consider these factors when prescribing training during a peaking phase to ensure their players are prepared for important competitions.

  6. Detection and Evaluation of Spatio-Temporal Spike Patterns in Massively Parallel Spike Train Data with SPADE.

    PubMed

    Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M; Grün, Sonja

    2017-01-01

    Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
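A minimal sketch of the counting step behind such pattern-mining methods: given binned parallel spike trains, count occurrences of a candidate spatio-temporal pattern of (neuron, lag) pairs and compare against per-neuron shuffled surrogates. This is a crude stand-in for SPADE's frequent-sequence mining and significance machinery; names and the surrogate scheme are illustrative assumptions:

```python
import random

def count_pattern(trains, pattern):
    """Count time bins t at which every (neuron, lag) of the candidate
    spatio-temporal pattern has a spike, i.e. trains[n][t + lag] == 1."""
    T = len(trains[0])
    max_lag = max(lag for _, lag in pattern)
    return sum(all(trains[n][t + lag] for n, lag in pattern)
               for t in range(T - max_lag))

def shuffle_surrogate_pvalue(trains, pattern, n_surr=200, seed=0):
    """Crude significance check: compare the observed count with counts in
    surrogates whose spike times are independently shuffled per neuron."""
    rng = random.Random(seed)
    observed = count_pattern(trains, pattern)
    exceed = 0
    for _ in range(n_surr):
        surr = [rng.sample(tr, len(tr)) for tr in trains]
        exceed += count_pattern(surr, pattern) >= observed
    return observed, (exceed + 1) / (n_surr + 1)
```

Real MPST data require far more care (rate profiles, multiple testing across the mined pattern set), which is precisely the gap SPADE addresses.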

  7. Non-parametric transient classification using adaptive wavelets

    NASA Astrophysics Data System (ADS)

    Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.

    2015-11-01

    Classifying transients based on multiband light curves is a challenging but crucial problem in the era of Gaia and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, and a major advantage of the BAGIDIS wavelets is that they are translation invariant; hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, correctly classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
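BAGIDIS builds data-driven hierarchical wavelets, which are not reproduced here; as a minimal illustration of summarizing a (power-of-two length) light curve by hierarchical wavelet coefficients, a fixed Haar decomposition can stand in:

```python
def haar_transform(signal):
    """Full Haar decomposition: coarse average first, then detail
    coefficients from coarse to fine scales. Length must be a power of two.
    Illustrative stand-in; BAGIDIS wavelets are data-driven, not Haar."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diff = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avg + diff
        n = half
    return out
```

A classifier would then operate on these coefficient vectors rather than on the raw, possibly misaligned light curves.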

  8. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    PubMed

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.

  9. Detection and Evaluation of Spatio-Temporal Spike Patterns in Massively Parallel Spike Train Data with SPADE

    PubMed Central

    Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M.; Grün, Sonja

    2017-01-01

    Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis. PMID:28596729

  10. Cross-domain and multi-task transfer learning of deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny

    2018-02-01

    We propose a cross-domain, multi-task transfer learning framework to transfer knowledge learned from non-medical images by a deep convolutional neural network (DCNN) to a medical image recognition task while improving the generalization by multi-task learning of auxiliary tasks. A first stage cross-domain transfer learning was initiated from an ImageNet trained DCNN to a mammography trained DCNN. 19,632 regions-of-interest (ROI) from 2,454 mass lesions were collected from two imaging modalities: digitized-screen film mammography (SFM) and full-field digital mammography (DM), and split into training and test sets. In the multi-task transfer learning, the DCNN learned the mass classification task simultaneously from the training sets of SFM and DM. The best transfer network for mammography was selected from three transfer networks with different numbers of convolutional layers frozen. The performance of single-task and multi-task transfer learning on an independent SFM test set in terms of the area under the receiver operating characteristic curve (AUC) was 0.78+/-0.02 and 0.82+/-0.02, respectively. In the second stage cross-domain transfer learning, a set of 12,680 ROIs from 317 mass lesions on DBT were split into validation and independent test sets. We first studied the data requirements for the first stage mammography trained DCNN by varying the mammography training data from 1% to 100% and evaluated its learning on the DBT validation set in inference mode. We found that the entire available mammography set provided the best generalization. The DBT validation set was then used to train only the last four fully connected layers, resulting in an AUC of 0.90+/-0.04 on the independent DBT test set.
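The AUC values reported above can be computed without explicit ROC integration via the Mann-Whitney statistic: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A short sketch (function name illustrative):

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive case scores higher,
    counting ties as one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```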

  11. Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.

    PubMed

    Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander

    2009-12-01

    Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD(50)) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by the determination coefficient R(2) of linear regression between actual and predicted LD(50) values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to a decrease in chemical space coverage; depending on the applicability domain threshold, R(2) ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD(50) for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with higher coverage as compared to individual constituent models. The validated consensus LD(50) models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
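A sketch of the consensus step: average the LD(50) predictions of the constituent models whose applicability domain accepts a compound, and score agreement by the squared correlation between actual and predicted values. The model and domain callables are illustrative placeholders, not the study's actual QSAR models:

```python
def r_squared(actual, predicted):
    # squared Pearson correlation between actual and predicted values
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    va = sum((a - ma) ** 2 for a in actual)
    vp = sum((p - mp) ** 2 for p in predicted)
    return cov * cov / (va * vp)

def consensus_ld50(models, domains, x):
    """Average the predictions of the individual models whose applicability
    domain accepts x; return None when no model is applicable."""
    preds = [m(x) for m, dom in zip(models, domains) if dom(x)]
    return sum(preds) / len(preds) if preds else None
```

Averaging only over in-domain models is what lets the consensus keep coverage high without sacrificing the accuracy gains of the domain thresholds.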

  12. Application of the Intuitionistic Fuzzy InterCriteria Analysis Method with Triples to a Neural Network Preprocessing Procedure

    PubMed Central

    Atanassova, Vassia; Sotirova, Evdokia; Doukovska, Lyubka; Bureva, Veselina; Mavrov, Deyan; Tomov, Jivko

    2017-01-01

    The approach of InterCriteria Analysis (ICA) was applied with the aim of reducing the set of variables at the input of a neural network, taking into account the fact that a large number of input variables increases the number of neurons in the network, thus making the network unsuitable for hardware implementation. Here, for the first time, with the help of the ICA method, correlations between triples of the input parameters used for training the neural networks were obtained. In this case, we use the approach of ICA for data preprocessing, which may reduce the total time for training the neural networks and, hence, the time for the network's processing of data and images. PMID:28874908

  13. Stress inoculation training supported by physiology-driven adaptive virtual reality stimulation.

    PubMed

    Popović, Sinisa; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Cosić, Kresimir

    2009-01-01

    The significant proportion of psychological problems related to combat stress in recent large peacekeeping operations underscores the importance of effective methods for strengthening the stress resistance of military personnel. Adaptive control of virtual reality (VR) stimulation, based on estimation of the subject's emotional state from physiological signals, may enhance existing stress inoculation training (SIT). Physiology-driven adaptive VR stimulation can tailor the progress of stressful stimuli delivery to the physiological characteristics of each individual, which should improve stress resistance. Therefore, following an overview of SIT and its applications in the military setting, a generic concept of physiology-driven adaptive VR stimulation is presented. Toward the end of the paper, a closed-loop adaptive control strategy applicable to SIT is outlined.

  14. Predicting who will drop out of nursing courses: a machine learning exercise.

    PubMed

    Moseley, Laurence G; Mead, Donna M

    2008-05-01

    The concepts of causation and prediction are different, and have different implications for practice. This distinction is applied here to studies of the problem of student attrition (although it is more widely applicable). Studies of attrition from nursing courses have tended to concentrate on causation, trying, largely unsuccessfully, to elicit what causes drop out. However, the problem may more fruitfully be cast in terms of predicting who is likely to drop out. One powerful method for attempting to make predictions is rule induction. This paper reports the use of the Answer Tree package from SPSS for that purpose. The main data set consisted of 3978 records on 528 nursing students, split into a training set and a test set. The source was standard university student records. The method obtained 84% sensitivity, 70% specificity, and 94% accuracy on previously unseen cases. The method requires large amounts of high quality data. When such data are available, rule induction offers a way to reduce attrition. It would be desirable to compare its results with those of predictions made by tutors using more informal conventional methods.
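    The reported figures combine three standard screening metrics. They follow directly from a confusion matrix; the counts below are illustrative, not the study's:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # drop-outs correctly flagged
    specificity = tn / (tn + fp)            # completers correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

    Note that high accuracy can coexist with modest specificity when the classes are imbalanced, as is typical for attrition data.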

  15. Effects of hydraulic resistance circuit training on physical fitness components of potential relevance to +Gz tolerance.

    PubMed

    Jacobs, I; Bell, D G; Pope, J; Lee, W

    1987-08-01

    Recent studies carried out in the United States and Sweden have demonstrated that strength training can improve +Gz acceleration tolerance. Based on these findings, the Canadian Forces have introduced a training program for aircrew of high performance aircraft. This report describes the changes in physical fitness components considered relevant to +Gz tolerance after 12 weeks of training with this program. Prior to beginning training, 45 military personnel were tested, but only 20 completed a minimum of 24 training sessions. The following variables were measured in these 20 subjects before and after training: maximal strength of several large muscle groups during isokinetic contractions, maximal aerobic power and an endurance fitness index, maximal anaerobic power, anthropometric characteristics, and maximal expiratory pressure generated during exhalation. Training involved hydraulic resistance circuit training 2-4 times/week. The circuit consisted of 3 consecutive sets at each of 8 stations using Hydra-Gym equipment. The exercise:rest ratio was 20:40 s for the initial 4 training weeks and was then changed to 30:50. After training, the changes in anthropometric measurements suggested that lean body mass was increased. Small but significant increases were also measured in muscle strength during bench press, biceps curls, squats, knee extension, and knee flexion. Neither maximal anaerobic power (i.e., muscular endurance) nor maximal expiratory pressure was changed after the training. Indices of endurance fitness were also increased in the present study. The relatively small increases in strength are probably due to the design of the exercise:rest ratio, which resulted in improved strength and aerobic fitness.(ABSTRACT TRUNCATED AT 250 WORDS)

  16. A neural network for noise correlation classification

    NASA Astrophysics Data System (ADS)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time series, we efficiently reduce the dimensionality of noise correlation data while keeping the relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other (>35 per cent) is substantially larger than the classification error of the ANN. This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.
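    The abstract does not specify the feature extraction beyond its exploiting the wavelet-like nature of the time series. As an illustration of the general idea only (not the authors' procedure), a repeated single-level Haar transform compresses a trace while preserving its coarse structure:

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (coarse approximation) and differences (detail coefficients).
    The signal length must be even."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def wavelet_features(signal, levels):
    """Keep only the coarse approximation after `levels` Haar steps,
    reducing the dimensionality by a factor of 2**levels."""
    for _ in range(levels):
        signal, _ = haar_step(signal)
    return signal
```

    Discarding the detail coefficients halves the feature vector per level; which coefficients are worth keeping for classification is exactly the kind of choice the paper's procedure addresses.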

  17. Developing physical activity interventions for adults with spinal cord injury. Part 2: motivational counseling and peer-mediated interventions for people intending to be active.

    PubMed

    Latimer-Cheung, Amy E; Arbour-Nicitopoulos, Kelly P; Brawley, Lawrence R; Gray, Casey; Justine Wilson, A; Prapavessis, Harry; Tomasone, Jennifer R; Wolfe, Dalton L; Martin Ginis, Kathleen A

    2013-08-01

    The majority of people with spinal cord injury (SCI) do not engage in sufficient leisure-time physical activity (LTPA) to attain fitness benefits; however, many have good intentions to be active. This paper describes two pilot interventions targeting people with SCI who are insufficiently active but intend to be active (i.e., "intenders"). Study 1 examined the effects of a single, telephone-based counseling session on self-regulatory efficacy, intentions, and action plans for LTPA among seven men and women with paraplegia or tetraplegia. Study 2 examined the effects of a home-based strength-training session, delivered by a peer and a fitness trainer, on strength-training task self-efficacy, intentions, action plans, and behavior. Participants were 11 men and women with paraplegia. The counseling session (Study 1) yielded medium- to large-sized increases in participants' confidence to set LTPA goals and intentions to be active. The home visit (Study 2) produced medium- to large-sized increases in task self-efficacy, barrier self-efficacy, intentions, action planning, and strength-training behavior from baseline to 4 weeks after the visit. Study 1 findings provide preliminary evidence that a single counseling session can impact key determinants of LTPA among intenders with SCI. Study 2 findings demonstrate the potential utility of a peer-mediated, home-based strength training session for positively influencing social cognitions and strength-training behavior. Together, these studies provide evidence and resources for intervention strategies to promote LTPA among intenders with SCI, a population for whom LTPA interventions and resources are scarcely available.

  18. Prediction of a thermodynamic wave train from the monsoon to the Arctic following extreme rainfall events

    NASA Astrophysics Data System (ADS)

    Krishnamurti, T. N.; Kumar, Vinay

    2017-04-01

    This study addresses the numerical prediction of atmospheric wave trains that provide a monsoonal link to the Arctic ice melt. The monsoonal link is one of several ways that heat is conveyed to the Arctic region. This study follows a detailed observational study on thermodynamic wave trains that are initiated by extreme rain events of the northern-summer South Asian monsoon. These wave trains carry large heat content anomalies, heat transports, and convergences of heat flux. These features seem to be important candidates for the rapid-melt scenario. The present study addresses numerical simulation of the extreme rains over India and Pakistan, the generation of thermodynamic wave trains, and simulations of large heat content anomalies, heat transports along pathways, heat flux convergences, potential vorticity, and the diabatic generation of potential vorticity. We compare model-based simulations of many features, such as precipitation, divergence and the divergent wind, with those evaluated from the reanalysis fields. We have also examined the snow and ice cover data sets during and after these events. This modeling study supports our recent observational findings on the monsoonal link to the rapid Arctic ice melt of the Canadian Arctic. This numerical modeling suggests ways to interpret some recent episodes of rapid ice melt that may require a well-coordinated field experiment among atmosphere, ocean, ice and snow cover scientists. Such a well-coordinated study would sharpen our understanding of this one component of the ice melt, i.e. the monsoonal link, which appears to be fairly robust.

  19. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

    We perform an analysis of photometric redshifts estimated using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete-training-set case; however, the photometric redshifts estimated with the ANNz2 algorithm give mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete-training-set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we analyse the impact on the detection of clusters via the density of galaxies in a field when using the photometric redshifts obtained with a non-representative training set.
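    The cumulative-distribution-function plus Monte Carlo step can be sketched as inverse-transform sampling from an empirical redshift distribution. This is a simplified stand-in for the paper's estimator (which works per galaxy), with illustrative function names:

```python
import random

def make_sampler(redshifts):
    """Build an inverse-CDF sampler over an empirical redshift sample:
    each stored value is drawn with equal probability, so repeated
    draws reproduce the empirical redshift distribution."""
    zs = sorted(redshifts)
    def sample(rng=random):
        u = rng.random()              # uniform draw in [0, 1)
        return zs[int(u * len(zs))]   # invert the empirical (step) CDF
    return sample
```

    Accumulating many such draws yields a Monte Carlo estimate of the redshift distribution dN/dz, which can then be compared with the spectroscopic distribution.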

  20. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-04-01

    We perform an analysis of photometric redshifts estimated using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete-training-set case; however, the photometric redshifts estimated with the ANNz2 algorithm give mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete-training-set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we analyse the impact on the detection of clusters via the density of galaxies in a field when using the photometric redshifts obtained with a non-representative training set.

  1. How to successfully select and implement electronic health records (EHR) in small ambulatory practice settings.

    PubMed

    Lorenzi, Nancy M; Kouroubali, Angelina; Detmer, Don E; Bloomrosen, Meryl

    2009-02-23

    Adoption of EHRs by U.S. ambulatory practices has been slow despite the perceived benefits of their use. Most evaluations of EHR implementations in the literature apply to large practice settings. While there are similarities relating to EHR implementation in large and small practice settings, the authors argue that scale is an important differentiator. Focusing on small ambulatory practices, this paper outlines the benefits and barriers to EHR use in this setting, and provides a "field guide" for these practices to facilitate successful EHR implementation. The benefits of EHRs in ambulatory practices include improved patient care and office efficiency, and potential financial benefits. Barriers to EHRs include costs; lack of standardization of EHR products and the design of vendor systems for large practice environments; resistance to change; initial difficulty of system use leading to productivity reduction; and perceived accrual of benefits to society and payers rather than providers. The authors stress the need for developing a flexible change management strategy when introducing EHRs that is relevant to the small practice environment; the strategy should acknowledge the importance of relationship management and the role of individual staff members in helping the entire staff to manage change. Practice staff must create an actionable vision outlining realistic goals for the implementation, and all staff must buy into the project. The authors detail the process of implementing EHRs through several stages: decision, selection, pre-implementation, implementation, and post-implementation. They stress the importance of identifying a champion to serve as an advocate of the value of EHRs and provide direction and encouragement for the project. Other key activities include assessing and redesigning workflow; understanding financial issues; conducting training that is well-timed and meets the needs of practice staff; and evaluating the implementation process. 
The EHR implementation experience depends on a variety of factors including the technology, training, leadership, the change management process, and the individual character of each ambulatory practice environment. Sound processes must support both technical and personnel-related organizational components. Additional research is needed to further refine recommendations for the small physician practice and the nuances of specific medical specialties.

  2. Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications

    PubMed Central

    Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of Things (IoT) has been a hot issue in recent years. IoT users accumulate large amounts of data, which makes mining useful knowledge from the IoT a great challenge. Classification is an effective strategy that can predict the needs of users in the IoT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classifier, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B. Every instance in the training set is covered by at least one rule in rule set A and by at least one rule in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of its rules. Our experimental results indicate that CDCR-P is not only feasible but can also achieve high accuracy. PMID:24511304

  3. Classification based on pruning and double covered rule sets for the internet of things applications.

    PubMed

    Li, Shasha; Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of Things (IoT) has been a hot issue in recent years. IoT users accumulate large amounts of data, which makes mining useful knowledge from the IoT a great challenge. Classification is an effective strategy that can predict the needs of users in the IoT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classifier, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B. Every instance in the training set is covered by at least one rule in rule set A and by at least one rule in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of its rules. Our experimental results indicate that CDCR-P is not only feasible but can also achieve high accuracy.
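    The double-covering property that CDCR-P enforces can be checked directly. The rule representation below (a rule as a set of attribute-value conditions) is an assumption for illustration, not the paper's exact encoding:

```python
def matches(rule, instance):
    """A rule is a dict of attribute -> required value; it fires when
    every condition holds for the instance."""
    return all(instance.get(attr) == val for attr, val in rule.items())

def double_covered(instances, rule_set_a, rule_set_b):
    """CDCR-P covering property: each instance must be matched by at
    least one rule in set A AND at least one rule in set B."""
    return all(
        any(matches(r, x) for r in rule_set_a) and
        any(matches(r, x) for r in rule_set_b)
        for x in instances
    )
```

    An induction algorithm satisfying this check guarantees that every training instance gets at least two supporting rules, which is the property the traditional single-cover classifiers lack.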

  4. Task Analysis of Tactical Leadership Skills for Bradley Infantry Fighting Vehicle Leaders

    DTIC Science & Technology

    1986-10-01

    The Bradley Leader Trainer is conceptualized as a device or set of devices that can be used to teach Bradley leaders to perform their full set of...experts. The task list was examined to determine critical training requirements, requirements for training device support of this training, and...

  5. A Gold Standards Approach to Training Instructors to Evaluate Crew Performance

    NASA Technical Reports Server (NTRS)

    Baker, David P.; Dismukes, R. Key

    2003-01-01

    The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which the behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for the level of performance skill required. In this paper we provide a method that extends the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.

  6. Principles to Consider in Defining New Directions in Internal Medicine Training and Certification

    PubMed Central

    Turner, Barbara J; Centor, Robert M; Rosenthal, Gary E

    2006-01-01

    SGIM endorses seven principles related to current thinking about internal medicine training: 1) internal medicine requires a full three years of residency training before subspecialization; 2) internal medicine residency programs must dramatically increase support for training in the ambulatory setting and offer equivalent opportunities for training in both inpatient and outpatient medicine; 3) in settings where adequate support and time are devoted to ambulatory training, the third year of residency could offer an opportunity to develop further expertise or mastery in a specific type or setting of care; 4) further certification in specific specialties within internal medicine requires the completion of an approved fellowship program; 5) areas of mastery in internal medicine can be demonstrated through modified board certification and recertification examinations; 6) certification processes throughout internal medicine should focus increasingly on demonstration of clinical competence through adherence to validated standards of care within and across practice settings; and 7) regardless of the setting in which General Internists practice, we should unite to promote the critical role that this specialty serves in patient care. PMID:16637826

  7. Optimization of Training Sets for Neural-Net Processing of Characteristic Patterns from Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2001-01-01

    Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific aerospace application of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.

  8. Multi-discipline resource inventory of soils, vegetation and geology

    NASA Technical Reports Server (NTRS)

    Simonson, G. H. (Principal Investigator); Paine, D. P.; Lawrence, R. D.; Norgren, J. A.; Pyott, W. Y.; Herzog, J. H.; Murray, R. J.; Rogers, R.

    1973-01-01

    The author has identified the following significant results. Computer classification of natural vegetation, in the vicinity of Big Summit Prairie, Crook County, Oregon was carried out using MSS digital data. Impure training sets, representing eleven vegetation types plus water, were selected from within the area to be classified. Close correlations were visually observed between vegetation types mapped from the large scale photographs and the computer classification of the ERTS data (Frame 1021-18151, 13 August 1972).

  9. Semantic Concept Discovery for Large Scale Zero Shot Event Detection

    DTIC Science & Technology

    2015-07-25

    sources and can be shared among many different events, including unseen ones. Based on this idea, events can be detected by inspecting the individual...2013]. Partial success along this vein has also been achieved in the zero-shot setting, e.g. [Habibian et al., 2014; Wu et al., 2014], but the...candle", "birthday cake" and "applauding". Since concepts are shared among many different classes (events) and each concept classifier can be trained

  10. Engineering youth service system infrastructure: Hawaii's continued efforts at large-scale implementation through knowledge management strategies.

    PubMed

    Nakamura, Brad J; Mueller, Charles W; Higa-McMillan, Charmaine; Okamura, Kelsie H; Chang, Jaime P; Slavin, Lesley; Shimabukuro, Scott

    2014-01-01

    Hawaii's Child and Adolescent Mental Health Division provides a unique illustration of a youth public mental health system with a long and successful history of large-scale quality improvement initiatives. Many advances are linked to flexibly organizing and applying knowledge gained from the scientific literature and move beyond installing a limited number of brand-named treatment approaches that might be directly relevant only to a small handful of system youth. This article takes a knowledge-to-action perspective and outlines five knowledge management strategies currently under way in Hawaii. Each strategy represents one component of a larger coordinated effort at engineering a service system focused on delivering both brand-named treatment approaches and complementary strategies informed by the evidence base. The five knowledge management examples are (a) a set of modular-based professional training activities for currently practicing therapists, (b) an outreach initiative for supporting youth evidence-based practices training at Hawaii's mental health-related professional programs, (c) an effort to increase consumer knowledge of and demand for youth evidence-based practices, (d) a practice and progress agency performance feedback system, and (e) a sampling of system-level research studies focused on understanding treatment as usual. We end by outlining a small set of lessons learned and a longer term vision for embedding these efforts into the system's infrastructure.

  11. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques.
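    The core iteration described above (replace the missing entries with a soft-thresholded SVD of the current completion, keeping observed entries fixed) can be sketched in a few lines. This is a minimal NumPy sketch, not the authors' scalable implementation, which exploits low-rank structure to avoid dense SVDs:

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=100):
    """Minimal Soft-Impute iteration: mask is True where an entry of X
    is observed; lam is the nuclear-norm shrinkage level."""
    Z = np.where(mask, X, 0.0)                 # start from a zero-filled matrix
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # soft-threshold the singular values
        Z = np.where(mask, X, (U * s) @ Vt)    # keep observed entries fixed
    return Z
```

    Running this over a decreasing grid of lam values, warm-starting each run from the previous solution, traces out the regularization path mentioned in the abstract.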

  12. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  13. The development of a patient-specific method for physiotherapy goal setting: a user-centered design.

    PubMed

    Stevens, Anita; Köke, Albère; van der Weijden, Trudy; Beurskens, Anna

    2018-08-01

    To deliver client-centered care, physiotherapists need to identify the patients' individual treatment goals. However, practical tools for involving patients in goal setting are lacking. The purpose of this study was to improve the frequently used Patient-Specific Complaints instrument in Dutch physiotherapy, and to develop it into a feasible method to improve physiotherapy goal setting. An iterative user-centered design was conducted in co-creation with physiotherapists and patients, in three phases. Their needs and preferences were identified by means of group meetings and questionnaires. The new method was tested in several field tests in physiotherapy practices. Four main objectives for improvement were formulated: clear instructions for the administration procedure, targeted use across the physiotherapy process, client-activating communication skills, and a client-centered attitude of the physiotherapist. A theoretical goal-setting framework and elements of shared decision making were integrated into the new method, named the Patient-Specific Goal-setting method, together with a practical training course. The user-centered approach resulted in a goal-setting method that is fully integrated in the physiotherapy process. The new goal-setting method contributes to a more structured approach to goal setting and enables patient participation and goal-oriented physiotherapy. Before large-scale implementation, its feasibility in physiotherapy practice needs to be investigated. Implications for rehabilitation Involving patients and physiotherapists in the development and testing of a goal-setting method increases the likelihood of its feasibility in practice. The integration of a goal-setting method into the physiotherapy process offers the opportunity to focus more fully on the patient's goals. Patients should be informed about the aim of every step of the goal-setting process in order to increase their awareness and involvement. 
Training physiotherapists to use a patient-specific method for goal setting is crucial for a correct application.

  14. BUMPER v1.0: a Bayesian user-friendly model for palaeo-environmental reconstruction

    NASA Astrophysics Data System (ADS)

    Holden, Philip B.; Birks, H. John B.; Brooks, Stephen J.; Bush, Mark B.; Hwang, Grace M.; Matthews-Bird, Frazer; Valencia, Bryan G.; van Woesik, Robert

    2017-02-01

    We describe the Bayesian user-friendly model for palaeo-environmental reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring ~2 s to build a 100-taxon model from a 100-site training set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training sets under ideal assumptions. We then use these to demonstrate the sensitivity of reconstructions to the characteristics of the training set, considering assemblage richness, taxon tolerances, and the number of training sites. We find that a useful guideline for the size of a training set is to provide, on average, at least 10 samples of each taxon. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. An identically configured model is used in each application, the only change being the input files that provide the training-set environment and taxon-count data. The performance of BUMPER is shown to be comparable with weighted average partial least squares (WAPLS) in each case. Additional artificial datasets are constructed with similar characteristics to the real data, and these are used to explore the reasons for the differing performances of the different training sets.
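    The training-set size guideline quoted above can be expressed as a simple check. The function and its input format are illustrative, not part of BUMPER:

```python
def meets_sampling_guideline(training_counts, min_avg=10):
    """Check the guideline that a training set should provide, on
    average, at least `min_avg` samples of each taxon.
    `training_counts` maps taxon -> number of training-set occurrences."""
    counts = list(training_counts.values())
    return sum(counts) / len(counts) >= min_avg
```

    A training set failing this check would be expected, per the paper's sensitivity analysis, to produce less reliable reconstructions for the under-sampled taxa.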

  15. Solar powered oxygen systems in remote health centers in Papua New Guinea: a large scale implementation effectiveness trial.

    PubMed

    Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa'avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa

    2017-06-01

    Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG), and hypoxaemia is the major complication causing death in childhood pneumonia. Hypoxaemia is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low- and middle-income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult-to-access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by lack of reliable power, staff training, and other basic services. We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation as the considerations and steps involved have wider implications for health systems in other countries. 
The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post-intervention. Taking a continuous quality improvement approach can be transformational for remote health services.

  16. [Exercise therapy as a therapeutic concept].

    PubMed

    Reer, R; Ziegler, M; Braumann, K-M

    2005-08-01

    Lack of exercise is a primary cause for today's level of morbidity and mortality in the Western world. Thus, exercise as a therapeutic modality has an important role. Beneficial effects of exercise have been extensively documented, specifically in primary and secondary prevention of coronary heart disease (CHD), diabetes mellitus, hypertension, disorders of fat metabolism, heart insufficiency, cancer, etc. A regular (at least 3 x per week) endurance training program of 30-40 min duration at an intensity of 65-70% of VO(2)max involving large muscle groups is recommended. The specific exercise activity can also positively affect individuals with orthopedic disease patterns, i.e., osteoporosis, back pain, postoperative rehabilitation, etc. Endurance strength training in the form of sequential training involving approx. 8-10 different exercises for the most important muscle groups 2 x per week is a suitable exercise therapy. One to three sets with 8-12 repetitions per exercise should be performed until volitional exhaustion of the trained muscle groups among healthy adults and 15-20 repetitions among older and cardiac patients. Apart from a positive effect on the locomotor system, this type of strength training has positive effects on CHD, diabetes mellitus, and cancer.

  17. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    NASA Astrophysics Data System (ADS)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.

  18. Structural similarity based kriging for quantitative structure activity and property relationship modeling.

    PubMed

    Teixeira, Ana L; Falcao, Andre O

    2014-07-28

    Structurally similar molecules tend to have similar properties, i.e. closer molecules in the molecular space are more likely to yield similar property values, while distant molecules are more likely to yield different values. Based on this principle, we propose the use of a new method that takes into account the high dimensionality of the molecular space, predicting chemical, physical, or biological properties based on the most similar compounds with measured properties. This methodology uses ordinary kriging coupled with three different molecular similarity approaches (based on molecular descriptors, fingerprints, and atom matching), creating an interpolation map over the molecular space that is capable of predicting properties/activities for diverse chemical data sets. The proposed method was tested on two data sets of diverse chemical compounds collected from the literature and preprocessed. One of the data sets contained dihydrofolate reductase inhibition activity data, and the second contained molecules for which aqueous solubility was known. The overall predictive results using kriging for both data sets comply with the results obtained in the literature using typical QSPR/QSAR approaches. However, the procedure did not involve any type of descriptor selection or even minimal information about each problem, suggesting that this approach is directly applicable to a large spectrum of problems in QSAR/QSPR. Furthermore, the predictive results improve significantly with the similarity threshold between the training and testing compounds, allowing the definition of a confidence threshold of similarity and an error estimate for each case inferred. The use of kriging for interpolation over the molecular metric space is independent of the training data set size; no reparametrization is necessary when compounds are added to or removed from the set, and increasing the size of the database will consequently improve the quality of the estimations. 
Finally, it is shown that this model can be used for checking the consistency of measured data and for guiding an extension of the training set by determining the regions of the molecular space for which new experimental measurements could be used to maximize the model's predictive performance.
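    The interpolation step described above can be sketched with ordinary kriging over a molecular distance matrix. This is a minimal illustration, not the authors' implementation: the Gaussian covariance model, the length scale, and the toy one-dimensional distance data are all assumptions.

```python
import numpy as np

def ordinary_kriging_predict(D_train, y_train, d_query, length_scale=1.0):
    """Ordinary kriging prediction for one query molecule.

    D_train : (n, n) pairwise distances between training molecules
              (e.g. 1 - Tanimoto similarity of fingerprints; assumed here).
    d_query : (n,) distances from the query to each training molecule.
    """
    # Covariance model: Gaussian (squared-exponential) on the distance (assumed).
    cov = lambda d: np.exp(-(d / length_scale) ** 2)
    n = len(y_train)
    # Ordinary-kriging system: the Lagrange row forces the weights to sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(D_train)
    A[n, n] = 0.0
    b = np.append(cov(d_query), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ y_train)

# Toy check: querying at a training point reproduces its value exactly,
# which is the defining interpolation property of kriging.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.0, 2.5])
D = np.abs(x[:, None] - x[None, :])
pred = ordinary_kriging_predict(D, y, D[:, 1])
```

    Because the kriging weights are recomputed per query from the distance matrix, adding or removing compounds needs no refitting, which matches the reparametrization-free property noted in the abstract.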

  19. Analysis of classifiers performance for classification of potential microcalcification

    NASA Astrophysics Data System (ADS)

    M. N., Arun K.; Sheshadri, H. S.

    2013-07-01

    Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis. Mammography is a screening tool used for early detection of breast cancer. About 10-30% of cases are missed during routine checks, as it is difficult for radiologists to make an accurate analysis of the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer. It has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms, and histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny, vary in size, shape, and distribution, and may be closely connected to surrounding tissues. A major challenge in using traditional classifiers for the classification of individual potential MCs is that processing mammograms at the appropriate stage generates data sets with an unequal amount of information for the two classes (i.e., MC and not-MC). Most existing state-of-the-art classification approaches assume that the underlying training set is evenly distributed; however, they face a severe bias problem when the training set is highly imbalanced in distribution. This paper addresses this issue by using classifiers which handle imbalanced data sets. We also compare the performance of the classifiers used in the classification of potential MCs.

  20. Random Access Memories: A New Paradigm for Target Detection in High Resolution Aerial Remote Sensing Images.

    PubMed

    Zou, Zhengxia; Shi, Zhenwei

    2018-03-01

    We propose a new paradigm for target detection in high resolution aerial remote sensing images under small target priors. Previous remote sensing target detection methods frame detection as learning of a detection model + inference of class labels and bounding-box coordinates. Instead, we formulate it from a Bayesian view: at the inference stage, the detection model is adaptively updated to maximize its posterior, which is determined by both the training and the observation. We call this paradigm "random access memories (RAM)." In this paradigm, "memories" can be interpreted as any model distribution learned from the training data, and "random access" means accessing memories and randomly adjusting the model at the detection phase to obtain better adaptivity to any unseen distribution of test data. By leveraging some of the latest detection techniques, e.g., deep convolutional neural networks and multi-scale anchors, experimental results on a public remote sensing target detection data set show that our method outperforms several other state-of-the-art methods. We also introduce a new data set, "LEarning, VIsion and Remote sensing laboratory (LEVIR)", which is one order of magnitude larger than other data sets in this field. LEVIR consists of a large set of Google Earth images, with over 22 k images and 10 k independently labeled targets. RAM gives a noticeable upgrade in accuracy (a mean average precision improvement of 1% ~ 4%) over our baseline detectors with acceptable computational overhead.

  1. Comparison of molecular breeding values based on within- and across-breed training in beef cattle

    PubMed Central

    2013-01-01

    Background Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Methods Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. Results With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Conclusions Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. 
For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set. PMID:23953034

  2. Effects of a structured 20-session slow-cortical-potential-based neurofeedback program on attentional performance in children and adolescents with attention-deficit hyperactivity disorder: retrospective analysis of an open-label pilot-approach and 6-month follow-up.

    PubMed

    Albrecht, Johanna S; Bubenzer-Busch, Sarah; Gallien, Anne; Knospe, Eva Lotte; Gaber, Tilman J; Zepf, Florian D

    2017-01-01

    The aim of this approach was to conduct a structured electroencephalography-based neurofeedback training program for children and adolescents with attention-deficit hyperactivity disorder (ADHD) using slow cortical potentials with an intensive first (almost daily sessions) and second phase of training (two sessions per week) and to assess aspects of attentional performance. A total of 24 young patients with ADHD participated in the 20-session training program. During phase I of training (2 weeks, 10 sessions), participants were trained on weekdays. During phase II, neurofeedback training occurred twice per week (5 weeks). The patients' inattention problems were measured at three assessment time points before (pre, T0) and after (post, T1) the training and at a 6-month follow-up (T2); the assessments included neuropsychological tests (Alertness and Divided Attention subtests of the Test for Attentional Performance; Sustained Attention Dots and Shifting Attentional Set subtests of the Amsterdam Neuropsychological Test) and questionnaire data (inattention subscales of the so-called Fremdbeurteilungsbogen für Hyperkinetische Störungen and Child Behavior Checklist/4-18 [CBCL/4-18]). All data were analyzed retrospectively. The mean auditive reaction time in a Divided Attention task decreased significantly from T0 to T1 (medium effect), which was persistent over time and also found for a T0-T2 comparison (larger effects). In the Sustained Attention Dots task, the mean reaction time was reduced from T0-T1 and T1-T2 (small effects), whereas in the Shifting Attentional Set task, patients were able to increase the number of trials from T1-T2 and significantly diminished the number of errors (T1-T2 & T0-T2, large effects). First positive but very small effects and preliminary results regarding different parameters of attentional performance were detected in young individuals with ADHD. 
The limitations of the obtained preliminary data are the rather small sample size, the lack of a control group/a placebo condition and the open-label approach because of the clinical setting and retrospective analysis. The value of the current approach lies in providing pilot data for future studies involving larger samples.

  3. Evolutionary and Neural Computing Based Decision Support System for Disease Diagnosis from Clinical Data Sets in Medical Practice.

    PubMed

    Sudha, M

    2017-09-27

    As a recent trend, various computational intelligence and machine learning approaches have been used to mine inferences hidden in large clinical databases to assist the clinician in strategic decision making. In any target data, irrelevant information may be detrimental, causing confusion for the mining algorithm and degrading the prediction outcome. To address this issue, this study attempts to identify an intelligent approach to assist the disease diagnostic procedure using an optimal set of attributes instead of all attributes present in the clinical data set. In the proposed Application Specific Intelligent Computing (ASIC) decision support system, a rough set based genetic algorithm is employed in the pre-processing phase and a back propagation neural network is applied in the training and testing phase. ASIC has two phases: the first phase handles outliers, noisy data, and missing values to obtain qualitative target data, and generates appropriate attribute reduct sets from the input data using a rough computing based genetic algorithm centred on a relative fitness function measure. The succeeding phase involves both training and testing of a back propagation neural network classifier on the selected reducts. The model performance is evaluated against widely adopted existing classifiers. The proposed ASIC system for clinical decision support has been tested with breast cancer, fertility diagnosis, and heart disease data sets from the University of California at Irvine (UCI) machine learning repository. The proposed system outperformed the existing approaches, attaining accuracy rates of 95.33%, 97.61%, and 93.04% for breast cancer, fertility issue, and heart disease diagnosis, respectively.
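    The attribute-subset search described above can be sketched with a tiny genetic algorithm over feature bitmasks. This is an illustrative sketch only: the nearest-centroid training-accuracy score (with a small size penalty) and the synthetic data are assumptions standing in for the paper's rough-set relative fitness measure and the UCI clinical data sets.

```python
import numpy as np

def fitness(mask, X, y):
    """Score a feature subset: nearest-centroid training accuracy minus a
    small penalty per selected feature (stand-in for the paper's measure)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.01 * mask.sum()

def ga_select(X, y, pop=20, gens=15, seed=0):
    """Tiny genetic algorithm over feature bitmasks:
    tournament selection, uniform crossover, bit-flip mutation."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.random((pop, d)) < 0.5          # random initial population
    for _ in range(gens):
        f = np.array([fitness(m, X, y) for m in P])
        nxt = []
        for _ in range(pop):
            i, j = rng.integers(0, pop, 2)
            a = P[i] if f[i] >= f[j] else P[j]           # tournament pick 1
            i, j = rng.integers(0, pop, 2)
            b = P[i] if f[i] >= f[j] else P[j]           # tournament pick 2
            child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(d) < 0.05                # bit-flip mutation
            nxt.append(child)
        P = np.array(nxt)
    f = np.array([fitness(m, X, y) for m in P])
    return P[np.argmax(f)], float(f.max())

# Toy data: only the first two of six attributes are informative.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
X = rng.normal(0.0, 0.5, (40, 6))
X[:, :2] += np.where(y[:, None] == 0, -3.0, 3.0)
best_mask, best_fit = ga_select(X, y)
```

    In the full ASIC pipeline the surviving reduct would then be handed to the back propagation classifier for training and testing.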

  4. Two birds with one stone: experiences of combining clinical and research training in addiction medicine.

    PubMed

    Klimas, J; McNeil, R; Ahamad, K; Mead, A; Rieb, L; Cullen, W; Wood, E; Small, W

    2017-01-23

    Despite a large evidence-base upon which to base clinical practice, most health systems have not combined the training of healthcare providers in addiction medicine and research. As such, addiction care is often lacking, or not based on evidence or best practices. We undertook a qualitative study to assess the experiences of physicians who completed a clinician-scientist training programme in addiction medicine within a hospital setting. We interviewed physicians from the St. Paul's Hospital Goldcorp Addiction Medicine Fellowship and learners from the hospital's academic Addiction Medicine Consult Team in Vancouver, Canada (N = 26). They included psychiatrists, internal medicine and family medicine physicians, faculty, mentors, medical students and residents. All received both addiction medicine and research training. Drawing on Kirkpatrick's model of evaluating training programmes, we analysed the interviews thematically using qualitative data analysis software (Nvivo 10). We identified five themes relating to learning experience that were influential: (i) attitude, (ii) knowledge, (iii) skill, (iv) behaviour and (v) patient outcome. The presence of a supportive learning environment, flexibility in time lines, highly structured rotations, and clear guidance regarding development of research products facilitated clinician-scientist training. Competing priorities, including clinical and family responsibilities, hindered training. Combined training in addiction medicine and research is feasible and acceptable for current doctors and physicians in training. However, there are important barriers to overcome and improved understanding of the experience of addiction physicians in the clinician-scientist track is required to improve curricula and research productivity.

  5. Target discrimination method for SAR images based on semisupervised co-training

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Du, Lan; Dai, Hui

    2018-01-01

    Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need many labeled training samples, whose acquisition is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the two extracted feature sets based on the co-training algorithm. Finally, the trained classifiers are exploited to classify the test data. The experimental results on real SAR image data not only validate the effectiveness of the proposed method compared with traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which uses only one feature set.
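    The co-training loop described above can be sketched as follows. This is a minimal illustration under stated assumptions: nearest-centroid classifiers stand in for the paper's SVMs, and two synthetic Gaussian feature views replace the Lincoln feature sets; each round, each view pseudo-labels its most confident unlabeled samples for the shared labeled pool.

```python
import numpy as np

class CentroidView:
    """Stand-in for a per-view SVM: nearest-centroid classifier
    with a distance-margin confidence score."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def decision(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return d0 - d1                      # > 0 means closer to class 1
    def predict(self, X):
        return (self.decision(X) > 0).astype(int)

def co_train(Xa, Xb, y, labeled, rounds=5, per_round=2):
    """Co-training over two feature views Xa, Xb. `labeled` is a boolean
    mask of initially labeled samples; each round, each view pseudo-labels
    its most confident unlabeled samples into the common labeled pool."""
    labeled, y = labeled.copy(), y.copy()
    for _ in range(rounds):
        ca = CentroidView().fit(Xa[labeled], y[labeled])
        cb = CentroidView().fit(Xb[labeled], y[labeled])
        for clf, X in ((ca, Xa), (cb, Xb)):
            unl = np.flatnonzero(~labeled)
            if unl.size == 0:
                break
            conf = np.abs(clf.decision(X[unl]))
            pick = unl[np.argsort(conf)[-per_round:]]   # most confident
            y[pick] = clf.predict(X[pick])              # pseudo-label
            labeled[pick] = True
    return ca, cb, y, labeled

# Toy run: two separable views, only four samples initially labeled.
rng = np.random.default_rng(0)
n = 40
truth = np.repeat([0, 1], n // 2)
Xa = rng.normal(0.0, 0.3, (n, 2)) + np.where(truth[:, None] == 0, -2.0, 2.0)
Xb = rng.normal(0.0, 0.3, (n, 2)) + np.where(truth[:, None] == 0, -2.0, 2.0)
mask = np.zeros(n, dtype=bool)
mask[[0, 1, n - 2, n - 1]] = True
ca, cb, y_final, _ = co_train(Xa, Xb, truth * mask, mask)
acc = float((ca.predict(Xa) == truth).mean())
```

    Self-training would run the same loop with a single view labeling for itself, which is the baseline the paper reports co-training to beat.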

  6. Global teaching and training initiatives for emerging cohort studies

    PubMed Central

    Paulus, Jessica K.; Santoyo-Vistrain, Rocío; Havelick, David; Cohen, Amy; Kalyesubula, Robert; Ajayi, Ikeoluwapo O.; Mattsson, Jens G.; Adami, Hans-Olov; Dalal, Shona

    2015-01-01

    A striking disparity exists across the globe, with essentially no large-scale longitudinal studies ongoing in regions that will be significantly affected by the oncoming non-communicable disease epidemic. The successful implementation of cohort studies in most low-resource research environments presents unique challenges that may be aided by coordinated training programs. Leaders of emerging cohort studies attending the First World Cohort Integration Workshop were surveyed about training priorities, unmet needs and potential cross-cohort solutions to these barriers through an electronic pre-workshop questionnaire and focus groups. Cohort studies representing India, Mexico, Nigeria, South Africa, Sweden, Tanzania and Uganda described similar training needs, including on-the-job training, data analysis software instruction, and database and bio-bank management. A lack of funding and protected time for training activities were commonly identified constraints. Proposed solutions include a collaborative cross-cohort teaching platform with web-based content and interactive teaching methods for a range of research personnel. An international network for research mentorship and idea exchange, and modifying the graduate thesis structure were also identified as key initiatives. Cross-cohort integrated educational initiatives will efficiently meet shared needs, catalyze the development of emerging cohorts, speed closure of the global disparity in cohort research, and may fortify scientific capacity development in low-resource settings. PMID:23856451

  7. Organizational structure, team process, and future directions of interprofessional health care teams.

    PubMed

    Cole, Kenneth D; Waite, Martha S; Nichols, Linda O

    2003-01-01

    For a nationwide Geriatric Interdisciplinary Team Training (GITT) program evaluation of 8 sites and 26 teams, team evaluators developed a quantitative and qualitative team observation scale (TOS), examining structure, process, and outcome, with specific focus on the training function. Qualitative data provided an important expansion of quantitative data, highlighting positive effects that were not statistically significant, such as role modeling and training occurring within the clinical team. Qualitative data could also identify "too much" of a coded variable, such as time spent in individual team members' assessments and treatment plans. As healthcare organizations have increasing demands for productivity and changing reimbursement, traditional models of teamwork, with large teams and structured meetings, may no longer be as functional as they once were. To meet these constraints and to train students in teamwork, teams of the future will have to make choices, from developing and setting specific models to increasing the use of information technology to create virtual teams. Both quantitative and qualitative data will be needed to evaluate these new types of teams and the important outcomes they produce.

  8. Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation

    NASA Astrophysics Data System (ADS)

    Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.

    2017-12-01

    One type of artificial neural network is backpropagation. This algorithm trains a network architecture so that, as well as providing the correct output for the training data, it can generalize to inputs that are similar to, but not the same as, those used in training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters which influence the training process, as it affects the speed of learning for the network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges over a very long period of time. This study was therefore made to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the value of the learning rate in the LVQ algorithm; we modify this LVQ model so that it can be applied to the backpropagation algorithm. The experimental results show that with the modified LVQ learning-rate model applied to backpropagation, the learning process becomes faster (fewer epochs).
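    The idea of driving backpropagation with a scheduled learning rate can be sketched as below. The paper's exact modified LVQ schedule is not given here; a generic LVQ-style linear decay lr_t = lr0·(1 − t/T) is assumed for illustration, applied to a small one-hidden-layer network.

```python
import numpy as np

def train_backprop(X, y, hidden=4, epochs=300, lr0=0.5):
    """One-hidden-layer backpropagation with a decaying learning rate
    lr_t = lr0 * (1 - t / epochs). The linear decay is an assumed,
    generic LVQ-style schedule, not the paper's exact modification."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)        # decaying learning rate
        h = sig(X @ W1)                      # forward pass
        out = sig(h @ W2)
        err = out - y[:, None]
        losses.append(float((err ** 2).mean()))
        d_out = err * out * (1.0 - out)      # backward pass (MSE gradient)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d_out / len(X)
        W1 -= lr * X.T @ d_h / len(X)
    return W1, W2, losses

# Toy run on XOR to watch the loss trajectory under the decaying rate.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
_, _, losses = train_backprop(X, y)
```

    The epoch count at which the loss flattens is the quantity the paper compares between the fixed and the LVQ-modified learning rates.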

  9. The Impact of Protein Structure and Sequence Similarity on the Accuracy of Machine-Learning Scoring Functions for Binding Affinity Prediction

    PubMed Central

    Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J.

    2018-01-01

    It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future. PMID:29538331

  10. The Impact of Protein Structure and Sequence Similarity on the Accuracy of Machine-Learning Scoring Functions for Binding Affinity Prediction.

    PubMed

    Li, Hongjian; Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J

    2018-03-14

    It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future.

  11. Coping with challenging behaviours of children with autism: effectiveness of brief training workshop for frontline staff in special education settings.

    PubMed

    Ling, C Y M; Mak, W W S

    2012-03-01

    The present study examined the effectiveness of three staff training elements, psychoeducation (PE) on autism, introduction of functional behavioural analysis (FBA), and emotional management (EM), on frontline staff's reactions to challenging behaviours of children with autism in Hong Kong special education settings. A sample of 311 frontline staff in educational settings was recruited to one of three conditions: control, PE-FBA and PE-FBA-EM groups. A total of 175 participants completed all three sets of questionnaires during pre-training, immediate post-training and 1-month follow-up. Findings showed that the one-session staff training workshop increased staff knowledge of autism and perceived efficacy but decreased helping behavioural intention. In spite of the limited effectiveness of a one-session staff training workshop, continued staff training is still necessary for the improvement of service quality. Further exploration of how to change the emotional responses of staff is important. © 2011 The Authors. Journal of Intellectual Disability Research © 2011 Blackwell Publishing Ltd.

  12. Bamboo Classification Using WorldView-2 Imagery of Giant Panda Habitat in a Large Shaded Area in Wolong, Sichuan Province, China.

    PubMed

    Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia

    2016-11-22

    This study explores the ability of WorldView-2 (WV-2) imagery for bamboo mapping in a mountainous region in Sichuan Province, China. A large part of the study area is covered by shadows in the image, and only a few of the derived sample points were usable. In order to identify bamboo from such sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied based on both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% producer's and user's accuracies, respectively, for the bamboo class. Canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and that the resulting bamboo distribution facilitates assessments of the habitats of giant pandas.
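    A plain distance-weighted k-NN vote, the simpler relative of the geostatistically weighted classifier used above, can be sketched as follows. The geostatistical class-correlation weighting itself is not reproduced; the feature vectors and labels here are toy assumptions rather than WV-2 band values.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_test, k=3):
    """Distance-weighted k-NN: each of the k nearest training samples
    votes for its label with weight 1/distance, so closer neighbours
    count more. (The paper's variant additionally weights votes by the
    spatial correlation between classes.)"""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]                  # k nearest neighbours
        w = 1.0 / (d[nn] + 1e-9)                # inverse-distance weights
        votes = {}
        for lbl, wi in zip(y_train[nn], w):
            votes[lbl] = votes.get(lbl, 0.0) + wi
        preds.append(max(votes, key=votes.get))  # highest weighted vote
    return np.array(preds)

# Toy check: two well-separated clusters, one query near each.
X_tr = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
y_tr = np.array([0, 0, 1, 1])
preds = weighted_knn_predict(X_tr, y_tr, np.array([[0., 0.5], [5., 5.5]]))
```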

  13. Learning to rank image tags with limited training examples.

    PubMed

    Songhe Feng; Zheyun Feng; Rong Jin

    2015-04-01

    With an increasing number of images that are available in social media, image annotation has emerged as an important research topic due to its application in image matching and retrieval. Most studies cast image annotation into a multilabel classification problem. The main shortcoming of this approach is that it requires a large number of training images with clean and complete annotations in order to learn a reliable model for tag prediction. We address this limitation by developing a novel approach that combines the strength of tag ranking with the power of matrix recovery. Instead of having to make a binary decision for each tag, our approach ranks tags in the descending order of their relevance to the given image, significantly simplifying the problem. In addition, the proposed method aggregates the prediction models for different tags into a matrix, and casts tag ranking into a matrix recovery problem. It introduces the matrix trace norm to explicitly control the model complexity, so that a reliable prediction model can be learned for tag ranking even when the tag space is large and the number of training images is limited. Experiments on multiple well-known image data sets demonstrate the effectiveness of the proposed framework for tag ranking compared with the state-of-the-art approaches for image annotation and tag ranking.
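    Trace-norm regularization of a model matrix is typically handled via singular value soft-thresholding, the proximal operator of the trace norm. The sketch below is a generic illustration of that operator (not the authors' solver): shrinking the singular values reduces the matrix's effective rank, which is how the trace norm controls model complexity when the tag space is large:

```python
import numpy as np

def svd_soft_threshold(W, tau):
    """Proximal operator of tau * trace norm: shrink singular values toward zero."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)              # soft-threshold each singular value
    return U @ np.diag(s_shrunk) @ Vt

W = np.array([[3.0, 0.0], [0.0, 1.0]])               # singular values 3 and 1
W_low = svd_soft_threshold(W, 1.5)                   # singular values become 1.5 and 0
print(np.linalg.matrix_rank(W_low))                   # rank drops from 2 to 1
```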

  14. AO Distal Radius Fracture Classification: Global Perspective on Observer Agreement.

    PubMed

    Jayakumar, Prakash; Teunis, Teun; Giménez, Beatriz Bravo; Verstreken, Frederik; Di Mascio, Livio; Jupiter, Jesse B

    2017-02-01

    Background  The primary objective of this study was to test interobserver reliability when classifying fractures by consensus by AO types and groups among a large international group of surgeons. Secondarily, we assessed the difference in inter- and intraobserver agreement of the AO classification in relation to geographical location, level of training, and subspecialty. Methods  A randomized set of radiographic and computed tomographic images from a consecutive series of 96 distal radius fractures (DRFs), treated between October 2010 and April 2013, was classified using an electronic web-based portal by an invited group of participants on two occasions. Results  Interobserver reliability was substantial when classifying AO type A fractures but fair and moderate for type B and C fractures, respectively. No difference was observed by location, except for an apparent difference between participants from India and Australia classifying type B fractures. No statistically significant associations were observed comparing interobserver agreement by level of training and no differences were shown comparing subspecialties. Intra-rater reproducibility was "substantial" for fracture types and "fair" for fracture groups with no difference accounting for location, training level, or specialty. Conclusion  Improved definition of reliability and reproducibility of this classification may be achieved using large international groups of raters, empowering decision making on which system to utilize. Level of Evidence  Level III.

  15. AO Distal Radius Fracture Classification: Global Perspective on Observer Agreement

    PubMed Central

    Jayakumar, Prakash; Teunis, Teun; Giménez, Beatriz Bravo; Verstreken, Frederik; Di Mascio, Livio; Jupiter, Jesse B.

    2016-01-01

    Background The primary objective of this study was to test interobserver reliability when classifying fractures by consensus by AO types and groups among a large international group of surgeons. Secondarily, we assessed the difference in inter- and intraobserver agreement of the AO classification in relation to geographical location, level of training, and subspecialty. Methods A randomized set of radiographic and computed tomographic images from a consecutive series of 96 distal radius fractures (DRFs), treated between October 2010 and April 2013, was classified using an electronic web-based portal by an invited group of participants on two occasions. Results Interobserver reliability was substantial when classifying AO type A fractures but fair and moderate for type B and C fractures, respectively. No difference was observed by location, except for an apparent difference between participants from India and Australia classifying type B fractures. No statistically significant associations were observed comparing interobserver agreement by level of training and no differences were shown comparing subspecialties. Intra-rater reproducibility was “substantial” for fracture types and “fair” for fracture groups with no difference accounting for location, training level, or specialty. Conclusion Improved definition of reliability and reproducibility of this classification may be achieved using large international groups of raters, empowering decision making on which system to utilize. Level of Evidence Level III PMID:28119795

  16. Training artificial neural networks directly on the concordance index for censored data using genetic algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias

    2013-06-01

    The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
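    The c-index itself is straightforward to compute. A minimal sketch for censored data (a naive O(n²) pairwise count, not the authors' implementation): a pair (i, j) is comparable when subject i failed before subject j's observed time, and the pair counts as concordant when the model assigned i the higher risk.

```python
import itertools

def concordance_index(times, events, risks):
    """Fraction of comparable pairs ordered correctly by predicted risk.
    A pair (i, j) is comparable when t_i < t_j and subject i had an event."""
    num, den = 0.0, 0
    for i, j in itertools.permutations(range(len(times)), 2):
        if times[i] < times[j] and events[i]:        # i failed first -> comparable pair
            den += 1
            if risks[i] > risks[j]:
                num += 1.0                           # higher risk failed first: concordant
            elif risks[i] == risks[j]:
                num += 0.5                           # tied predictions count as half
    return num / den

# Perfect ranking: shorter survival time <-> higher predicted risk
print(concordance_index([1, 2, 3, 4], [1, 1, 0, 1], [4, 3, 2, 1]))   # -> 1.0
```

    A genetic algorithm can optimize this quantity directly because, unlike gradient descent, it does not require the objective to be differentiable.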

  17. Effects of training set selection on pain recognition via facial expressions

    NASA Astrophysics Data System (ADS)

    Shier, Warren A.; Yanushkevich, Svetlana N.

    2016-07-01

    This paper presents an approach to pain expression classification based on Gabor energy filters with Support Vector Machines (SVMs), followed by an analysis of the effects of training set variations on the system's classification rate. The approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images hand-labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, the subjects' pain intensity levels have been quantized into three disjoint groups: no pain, weak pain and strong pain. The experimental results show that Gabor energy filters with SVMs provide comparable or better results than previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of the effects of intra-class skew, i.e. changing the number of images per subject, shows that both completely removing and over-representing poor-quality subjects in the training set have little effect on the overall accuracy of the system. This result suggests that poor-quality subjects could be removed from the training set to save offline training time, and that the SVM is robust not only to outliers in training data but also to significant amounts of poor-quality data mixed into the training sets.
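    Gabor energy is conventionally computed from a quadrature pair of filters: an even (cosine-phase) and an odd (sine-phase) kernel whose responses are combined into a phase-invariant magnitude. The sketch below uses generic parameters (size, wavelength, sigma are illustrative, not the values tuned in the paper) and evaluates the energy at a single patch rather than convolving a full image:

```python
import numpy as np

def gabor_kernels(size=15, wavelength=5.0, theta=0.0, sigma=3.0):
    """Even (cosine) and odd (sine) Gabor kernels forming a quadrature pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates by theta
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # isotropic Gaussian envelope
    even = gauss * np.cos(2 * np.pi * xr / wavelength)
    odd = gauss * np.sin(2 * np.pi * xr / wavelength)
    return even, odd

def gabor_energy(patch, even, odd):
    """Phase-invariant energy: root sum of squares of the two filter responses."""
    return np.hypot((patch * even).sum(), (patch * odd).sum())

even, odd = gabor_kernels()
patch = even.copy()                                   # a patch matching the filter's phase
print(gabor_energy(patch, even, odd) > 0)             # -> True
```

    In a full pipeline, energies from a bank of such filters at several orientations and scales would form the feature vector fed to the SVM.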

  18. International standards for programmes of training in intensive care medicine in Europe.

    PubMed

    2011-03-01

    To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.

  19. Acute effects of verbal feedback on upper-body performance in elite athletes.

    PubMed

    Argus, Christos K; Gill, Nicholas D; Keogh, Justin Wl; Hopkins, Will G

    2011-12-01

    Argus, CK, Gill, ND, Keogh, JWL, and Hopkins, WG. Acute effects of verbal feedback on upper-body performance in elite athletes. J Strength Cond Res 25(12): 3282-3287, 2011. Improved training quality has the potential to enhance training adaptations. Previous research suggests that receiving feedback improves single-effort maximal strength and power tasks, but whether quality of a training session with repeated efforts can be improved remains unclear. The purpose of this investigation was to determine the effects of verbal feedback on upper-body performance in a resistance training session consisting of multiple sets and repetitions in well-trained athletes. Nine elite rugby union athletes were assessed using the bench throw exercise on 4 separate occasions each separated by 7 days. Each athlete completed 2 sessions consisting of 3 sets of 4 repetitions of the bench throw with feedback provided after each repetition and 2 identical sessions where no feedback was provided after each repetition. When feedback was received, there was a small increase of 1.8% (90% confidence limits, ±2.7%) and 1.3% (±0.7%) in mean peak power and velocity when averaged over the 3 sets. When individual sets were compared, there was a tendency toward the improvements in mean peak power being greater in the second and third sets. These results indicate that providing verbal feedback produced acute improvements in upper-body power output of well-trained athletes. The benefits of feedback may be greatest in the latter sets of training and could improve training quality and result in greater long-term adaptation.

  20. Satisfaction with web-based training in an integrated healthcare delivery network: do age, education, computer skills and attitudes matter?

    PubMed Central

    Atreja, Ashish; Mehta, Neil B; Jain, Anil K; Harris, CM; Ishwaran, Hemant; Avital, Michel; Fishleder, Andrew J

    2008-01-01

    Background Healthcare institutions spend enormous time and effort to train their workforce. Web-based training can potentially streamline this process. However, the deployment of web-based training in a large-scale setting with a diverse healthcare workforce has not been evaluated. The aim of this study was to evaluate the satisfaction of healthcare professionals with web-based training and to determine the predictors of such satisfaction, including age, education status and computer proficiency. Methods Observational, cross-sectional survey of healthcare professionals from six hospital systems in an integrated delivery network. We measured overall satisfaction with web-based training and responses to survey items measuring Website Usability, Course Usefulness, Instructional Design Effectiveness, Computer Proficiency and Self-learning Attitude. Results A total of 17,891 healthcare professionals completed the web-based training on the HIPAA Privacy Rule; of these, 13,537 completed the survey (response rate 75.6%). Overall course satisfaction was good (median, 4; scale, 1 to 5) with more than 75% of the respondents satisfied with the training (rating 4 or 5) and 65% preferring web-based training over traditional instructor-led training (rating 4 or 5). Multivariable ordinal regression revealed 3 key predictors of satisfaction with web-based training: Instructional Design Effectiveness, Website Usability and Course Usefulness. Demographic predictors such as gender, age and education did not have an effect on satisfaction. Conclusion The study shows that web-based training, when tailored to learners' background, is perceived as a satisfactory mode of learning by an interdisciplinary group of healthcare professionals, irrespective of age, education level or prior computer experience. Future studies should aim to measure the long-term outcomes of web-based training. PMID:18922178

  1. Effects of hippocampal lesions on the monkey's ability to learn large sets of object-place associations.

    PubMed

    Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer

    2006-01-01

    Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.

  2. Top 10 Lessons Learned from Electronic Medical Record Implementation in a Large Academic Medical Center.

    PubMed

    Rizer, Milisa K; Kaufman, Beth; Sieck, Cynthia J; Hefner, Jennifer L; McAlearney, Ann Scheck

    2015-01-01

    Electronic medical record (EMR) implementation efforts face many challenges, including individual and organizational barriers and concerns about loss of productivity during the process. These issues may be particularly complex in large and diverse settings with multiple specialties providing inpatient and outpatient care. This case report provides an example of a successful EMR implementation that emphasizes the importance of flexibility and adaptability on the part of the implementation team. It also presents the top 10 lessons learned from this EMR implementation in a large midwestern academic medical center. Included are five overarching lessons related to leadership, initial approach, training, support, and optimization as well as five lessons related to the EMR system itself that are particularly important elements of a successful implementation.

  3. Top 10 Lessons Learned from Electronic Medical Record Implementation in a Large Academic Medical Center

    PubMed Central

    Rizer, Milisa K.; Kaufman, Beth; Sieck, Cynthia J.; Hefner, Jennifer L.; McAlearney, Ann Scheck

    2015-01-01

    Electronic medical record (EMR) implementation efforts face many challenges, including individual and organizational barriers and concerns about loss of productivity during the process. These issues may be particularly complex in large and diverse settings with multiple specialties providing inpatient and outpatient care. This case report provides an example of a successful EMR implementation that emphasizes the importance of flexibility and adaptability on the part of the implementation team. It also presents the top 10 lessons learned from this EMR implementation in a large midwestern academic medical center. Included are five overarching lessons related to leadership, initial approach, training, support, and optimization as well as five lessons related to the EMR system itself that are particularly important elements of a successful implementation. PMID:26396558

  4. Effect of creatine supplementation and drop-set resistance training in untrained aging adults.

    PubMed

    Johannsmeyer, Sarah; Candow, Darren G; Brahms, C Markus; Michel, Deborah; Zello, Gordon A

    2016-10-01

    To investigate the effects of creatine supplementation and drop-set resistance training in untrained aging adults. Participants were randomized to one of two groups: Creatine (CR: n=14, 7 females, 7 males; 58.0±3.0 yrs, 0.1 g/kg/day of creatine + 0.1 g/kg/day of maltodextrin) or Placebo (PLA: n=17, 7 females, 10 males; age: 57.6±5.0 yrs, 0.2 g/kg/day of maltodextrin) during 12 weeks of drop-set resistance training (3 days/week; 2 sets of leg press, chest press, hack squat and lat pull-down exercises performed to muscle fatigue at 80% baseline 1-repetition maximum [1-RM] immediately followed by repetitions to muscle fatigue at 30% baseline 1-RM). Prior to and following training and supplementation, assessments were made for body composition, muscle strength, muscle endurance, tasks of functionality, muscle protein catabolism and diet. Drop-set resistance training improved muscle mass, muscle strength, muscle endurance and tasks of functionality (p<0.05). The addition of creatine to drop-set resistance training significantly increased body mass (p=0.002) and muscle mass (p=0.007) compared to placebo. Males on creatine increased muscle strength (lat pull-down only) to a greater extent than females on creatine (p=0.005). Creatine enabled males to resistance train at a greater capacity over time compared to males on placebo (p=0.049) and females on creatine (p=0.012). Males on creatine (p=0.019) and females on placebo (p=0.014) decreased 3-MH compared to females on creatine. The addition of creatine to drop-set resistance training augments the gains in muscle mass from resistance training alone. Creatine is more effective in untrained aging males compared to untrained aging females. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Training labels for hippocampal segmentation based on the EADC-ADNI harmonized hippocampal protocol.

    PubMed

    Boccardi, Marina; Bocchetta, Martina; Morency, Félix C; Collins, D Louis; Nishikawa, Masami; Ganzola, Rossana; Grothe, Michel J; Wolf, Dominik; Redolfi, Alberto; Pievani, Michela; Antelmi, Luigi; Fellgiebel, Andreas; Matsuda, Hiroshi; Teipel, Stefan; Duchesne, Simon; Jack, Clifford R; Frisoni, Giovanni B

    2015-02-01

    The European Alzheimer's Disease Consortium and Alzheimer's Disease Neuroimaging Initiative (ADNI) Harmonized Protocol (HarP) is a Delphi definition of manual hippocampal segmentation from magnetic resonance imaging (MRI) that can be used as the standard of truth to train new tracers, and to validate automated segmentation algorithms. Training requires large and representative data sets of segmented hippocampi. This work aims to produce a set of HarP labels for the proper training and certification of tracers and algorithms. Sixty-eight 1.5 T and 67 3 T volumetric structural ADNI scans from different subjects, balanced by age, medial temporal atrophy, and scanner manufacturer, were segmented by five qualified HarP tracers whose absolute interrater intraclass correlation coefficients were 0.953 and 0.975 (left and right). Labels were validated as HarP compliant through centralized quality check and correction. Hippocampal volumes (mm³) were as follows: controls: left = 3060 (standard deviation [SD], 502), right = 3120 (SD, 897); mild cognitive impairment (MCI): left = 2596 (SD, 447), right = 2686 (SD, 473); and Alzheimer's disease (AD): left = 2301 (SD, 492), right = 2445 (SD, 525). Volumes significantly correlated with atrophy severity at Scheltens' scale (Spearman's ρ ≤ -0.468, P < .0005). Cerebrospinal fluid spaces (mm³) were as follows: controls: left = 23 (32), right = 25 (25); MCI: left = 15 (13), right = 22 (16); and AD: left = 11 (13), right = 20 (25). Five subjects (3.7%) presented with unusual anatomy. This work provides reference hippocampal labels for the training and certification of automated segmentation algorithms. The publicly released labels will allow the widespread implementation of the standard segmentation protocol. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  6. The Barriers and Facilitators to Transfer of Ultrasound-Guided Central Venous Line Skills From Simulation to Practice: Exploring Perceptions of Learners and Supervisors.

    PubMed

    Mema, Briseida; Harris, Ilene

    2016-01-01

    PHENOMENON: Ultrasound-guided central venous line insertion is currently the standard of care. Randomized controlled trials and systematic reviews show that simulation is superior to apprenticeship training. The purpose of this study is to explore, from the perspectives of participants in a simulation-training program, the factors that help or hinder the transfer of skills from simulation to practice. Purposeful sampling was used to select and study the experience and perspective of novice fellows after they had completed simulation training and then performed ultrasound-guided central venous line insertion in practice. Seven novice pediatric intensive care unit fellows and six supervising faculty in a university-affiliated academic center in a large urban city were recruited between September 2012 and January 2013. We conducted a qualitative study using semistructured interviews as our data source, employing a constructivist, grounded theory methodology. Both curricular and real-life factors influence the transfer of skills from simulation to practice and the overall performance of trainees. Clear instructions, the opportunity to practice to mastery, one-on-one observation with feedback, supervision, and further real-life experiences were perceived as factors that facilitated the transfer of skills. Concern for patient welfare, live troubleshooting, complexity of the intensive care unit environment, and the procedure itself were perceived as real-life factors that hindered the transfer of skills. Insights: As more studies confirm the superiority of simulation training versus apprenticeship training for initial student learning, the faculty should gain insight into factors that facilitate and hinder the transfer of skills from simulation to bedside settings and impact learners' performances. As simulation further augments clinical learning, efforts should be made to modify the curricular and bedside factors that facilitate transfer of skills from simulation to practice settings.

  7. Domain Adaptation for Alzheimer’s Disease Diagnostics

    PubMed Central

    Wachinger, Christian; Reuter, Martin

    2016-01-01

    With the increasing prevalence of Alzheimer’s disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer’s disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy. PMID:27262241
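    Instance weighting can be realized by scaling each sample's contribution to the loss. The sketch below is a generic illustration (a plain gradient-descent logistic regression on synthetic data, not the authors' multinomial classifier or the ADNI/CADDementia data): a hypothetical "source" half of the data is down-weighted relative to the "target" half, mimicking domain adaptation by re-weighting:

```python
import numpy as np

def weighted_logreg(X, y, w, lr=0.5, steps=500):
    """Logistic regression whose per-sample log-loss is scaled by instance weights w."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted probabilities
        grad = X.T @ (w * (p - y)) / w.sum()         # weighted gradient of the log-loss
        beta -= lr * grad
    return beta

rng = np.random.default_rng(0)
# Two-feature toy problem: labels determined by the boundary x0 + x1 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.where(np.arange(200) < 100, 0.2, 1.0)         # down-weight the "source" half
beta = weighted_logreg(X, y, w)
acc = ((1.0 / (1.0 + np.exp(-X @ beta)) > 0.5) == y).mean()
print(acc > 0.9)                                     # separable toy data -> high accuracy
```

    Setting all weights equal recovers the naive union of source and target sets, which the study found inferior to weighting.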

  8. Breaking bad news in clinical setting - health professionals' experience and perceived competence in Southwestern Nigeria: a cross sectional study.

    PubMed

    Adebayo, Philip Babatunde; Abayomi, Olukayode; Johnson, Peter O; Oloyede, Taofeeq; Oyelekan, Abimbola A A

    2013-01-01

    Communication skills are vital in clinical settings because the manner in which bad news is delivered can be a major determinant of responses to such news, as well as of compliance with beneficial treatment options. Information on training, institutional guidelines and protocols for breaking bad news (BBN) is scarce in Nigeria. We assessed the training, experience and perceived competence in BBN among medical personnel in southwestern Nigeria. The study was a cross-sectional descriptive study conducted among doctors and nurses in two healthcare institutions in southwestern Nigeria using an anonymous questionnaire (adapted from the survey by Horwitz et al.), which focused on the respondents' training, awareness of protocols for BBN, and perceived competence (on a five-point Likert scale) in five clinical scenarios. We also asked the respondents about an instance of BBN they had recently witnessed. A total of 113 of 130 selected respondents (response rate 86.9%) were studied. Eight (7.1%) of the respondents knew of the guidelines on BBN in the hospital in which they work. Twenty-three (20.3%) respondents claimed knowledge of a protocol. The median perceived competence rating was 4 out of 5 in all the clinical scenarios. Twenty-five (22.1%) respondents had had formal training in BBN and generally had significantly higher perceived competence ratings (P = 0.003-0.021). There is poor support from fellow workers during instances of BBN. It appears that a large proportion of the respondents in this study were unconsciously incompetent in BBN, in view of the low level of training and little or no knowledge of well-known protocols for BBN, even though self-rated competence was high. Continuous medical education in communication skills among health personnel in Nigeria is advocated.

  9. The 2014 Academic College of Emergency Experts in India's Education Development Committee (EDC) White Paper on establishing an academic department of Emergency Medicine in India – Guidelines for Staffing, Infrastructure, Resources, Curriculum and Training

    PubMed Central

    Aggarwal, Praveen; Galwankar, Sagar; Kalra, Om Prakash; Bhalla, Ashish; Bhoi, Sanjeev; Sundarakumar, Sundarajan

    2014-01-01

    Emergency medicine services and training in Emergency Medicine (EM) has developed to a large extent in developed countries but its establishment is far from optimal in developing countries. In India, Medical Council of India (MCI) has taken great steps by notifying EM as a separate specialty and so far 20 medical colleges have already initiated 3-year training program in EM. However, there has been shortage of trained faculty, and ambiguity regarding curriculum, rotation policy, infrastructure, teachers’ eligibility qualifications and scheme of examination. Academic College of Emergency Experts in India (ACEE-India) has been a powerful advocate for developing Academic EM in India. The ACEE's Education Development Committee (EDC) was created to chalk out guidelines for staffing, infrastructure, resources, curriculum, and training which may be of help to the MCI and the National Board of Examinations (NBE) to set standards for starting 3-year training program in EM and develop the departments of EM as centers of quality education, research, and treatment across India. This paper has made an attempt to give recommendations so as to provide a uniform framework to the institutions, thus guiding them towards establishing an academic Department of EM for starting the 3-year training program in the specialty of EM. PMID:25114431

  10. The 2014 Academic College of Emergency Experts in India's Education Development Committee (EDC) White Paper on establishing an academic department of Emergency Medicine in India - Guidelines for Staffing, Infrastructure, Resources, Curriculum and Training.

    PubMed

    Aggarwal, Praveen; Galwankar, Sagar; Kalra, Om Prakash; Bhalla, Ashish; Bhoi, Sanjeev; Sundarakumar, Sundarajan

    2014-07-01

    Emergency medicine services and training in Emergency Medicine (EM) has developed to a large extent in developed countries but its establishment is far from optimal in developing countries. In India, Medical Council of India (MCI) has taken great steps by notifying EM as a separate specialty and so far 20 medical colleges have already initiated 3-year training program in EM. However, there has been shortage of trained faculty, and ambiguity regarding curriculum, rotation policy, infrastructure, teachers' eligibility qualifications and scheme of examination. Academic College of Emergency Experts in India (ACEE-India) has been a powerful advocate for developing Academic EM in India. The ACEE's Education Development Committee (EDC) was created to chalk out guidelines for staffing, infrastructure, resources, curriculum, and training which may be of help to the MCI and the National Board of Examinations (NBE) to set standards for starting 3-year training program in EM and develop the departments of EM as centers of quality education, research, and treatment across India. This paper has made an attempt to give recommendations so as to provide a uniform framework to the institutions, thus guiding them towards establishing an academic Department of EM for starting the 3-year training program in the specialty of EM.

  11. Rethinking the role of fat oxidation: substrate utilisation during high-intensity interval training in well-trained and recreationally trained runners

    PubMed Central

    Hetlelid, Ken J; Plews, Daniel J; Herold, Eva; Laursen, Paul B; Seiler, Stephen

    2015-01-01

    Background Although carbohydrate is the predominant fuel source supporting high-intensity exercise workloads, the role of fat oxidation, and the degree to which it may be altered by training status, is less certain. Methods We compared substrate oxidation rates, using indirect calorimetry, during a high-intensity interval training (HIT) session in well-trained (WT) and recreationally trained (RT) runners. Following preliminary testing, 9 WT (VO2max 71±5 mL/min/kg) and 9 RT (VO2max 55±5 mL/min/kg) male runners performed a self-paced HIT sequence consisting of six, 4 min work bouts separated by 2 min recovery periods on a motorised treadmill set at a 5% gradient. Results WT and RT runners performed the HIT session with the same perceived effort (rating of perceived exertion (RPE) = 18.3±0.7 vs 18.2±1.1, respectively), blood lactate (6.4±2.1 vs 6.2±2.5 mmol/L) and estimated carbohydrate oxidation rates (4.2±0.29 vs 4.4±0.45 g/min; effect size (ES) 90% confidence limits (CL) = −0.19±0.85). Fat oxidation (0.64±0.13 vs 0.22±0.16 g/min for WT and RT, respectively) accounted for 33±6% of the total energy expenditure in WT vs 16±6% in RT runners, a most likely very large difference in fat oxidation (ES 90% CL = 1.74±0.83). Higher rates of fat oxidation had a very large correlation with VO2max (r=0.86; 90% CI 0.70 to 0.94). Conclusions Despite similar RPE, blood lactate and carbohydrate oxidation rates, the better performance by the WT group was explained by their nearly threefold higher rates of fat oxidation at high intensity. PMID:27900134
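
The fat and carbohydrate oxidation rates above are conventionally estimated from measured VO2 and VCO2 via stoichiometric equations such as Frayn's (1983), with protein oxidation neglected. A minimal sketch of this standard method (not necessarily the authors' exact calculation):

```python
def substrate_oxidation(vo2_l_min, vco2_l_min):
    """Estimate whole-body fat and carbohydrate oxidation (g/min) from
    indirect calorimetry, using Frayn's (1983) stoichiometric equations
    and neglecting protein oxidation. Gas volumes are in L/min."""
    fat = 1.67 * vo2_l_min - 1.67 * vco2_l_min
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min
    return fat, cho

# Illustrative values: a runner at VO2 = 4.0 L/min with VCO2 = 3.6 L/min (RER = 0.90)
fat, cho = substrate_oxidation(4.0, 3.6)
```

Note that as RER (VCO2/VO2) approaches 1.0 the fat term goes to zero, consistent with carbohydrate dominating at the highest intensities.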

  12. A water-based training program that includes perturbation exercises to improve stepping responses in older adults: study protocol for a randomized controlled cross-over trial

    PubMed Central

    Melzer, Itshak; Elbar, Ori; Tsedek, Irit; Oddsson, Lars IE

    2008-01-01

    Background Gait and balance impairments may increase the risk of falls, the leading cause of accidental death in the elderly population. Fall-related injuries constitute a serious public health problem associated with high costs for society as well as human suffering. A rapid step is the most important protective postural strategy, acting to recover equilibrium and prevent a fall from initiating. It can arise from large perturbations, but also frequently as a consequence of volitional movements. We propose to use a novel water-based training program which includes specific perturbation exercises that target stepping responses and could potentially have a profound effect in reducing the risk of falling. We describe the water-based balance training program and a study protocol to evaluate its efficacy (Trial registration number #NCT00708136). Methods/Design The proposed water-based training program involves use of unpredictable, multi-directional perturbations in a group setting to evoke compensatory and volitional stepping responses. Perturbations are made by slightly pushing the subjects and by water turbulence, in 24 training sessions conducted over 12 weeks. Concurrent cognitive tasks during movement tasks are included. Principles of physical training and exercise including awareness, continuity, motivation, overload, periodicity, progression and specificity were used in the development of this novel program. Specific goals are to increase the speed of stepping responses and improve the postural control mechanism and physical functioning. A prospective, randomized, cross-over trial with concealed allocation, assessor blinding and intention-to-treat analysis will be performed to evaluate the efficacy of the water-based training program. A total of 36 community-dwelling adults (age 65–88) with no recent history of instability or falling will be assigned to either the perturbation-based training or a control group (no training). 
Voluntary step reaction times and postural stability using stabilogram diffusion analysis will be tested before and after the 12 weeks of training. Discussion This study will determine whether a water-based balance training program that includes perturbation exercises, in a group setting, can improve speed of voluntary stepping responses and improve balance control. Results will help guide the development of more cost-effective interventions that can prevent the occurrence of falls in the elderly. PMID:18706103

  13. Joint genomic evaluation of French dairy cattle breeds using multiple-trait models.

    PubMed

    Karoui, Sofiene; Carabaño, María Jesús; Díaz, Clara; Legarra, Andrés

    2012-12-07

    Using a multi-breed reference population might be a way of increasing the accuracy of genomic breeding values in small breeds. Models involving mixed-breed data do not take into account the fact that marker effects may differ among breeds. This study was aimed at investigating the impact on accuracy of increasing the number of genotyped candidates in the training set by using a multi-breed reference population, in contrast to single-breed genomic evaluations. Three traits (milk production, fat content and female fertility) were analyzed by genomic mixed linear models and Bayesian methodology. Three breeds of French dairy cattle were used: Holstein, Montbéliarde and Normande with 2976, 950 and 970 bulls in the training population, respectively and 964, 222 and 248 bulls in the validation population, respectively. All animals were genotyped with the Illumina Bovine SNP50 array. Accuracy of genomic breeding values was evaluated under three scenarios for the correlation of genomic breeding values between breeds (rg): (1) uncorrelated, rg = 0; (2) estimated rg; (3) high, rg = 0.95. Accuracy and bias of predictions obtained in the validation population with the multi-breed training set were assessed by the coefficient of determination (R2) and by the regression coefficient of daughter yield deviations of validation bulls on their predicted genomic breeding values, respectively. The genetic variation captured by the markers for each trait was similar to that estimated for routine pedigree-based genetic evaluation. Posterior means for rg ranged from -0.01 for fertility between Montbéliarde and Normande to 0.79 for milk yield between Montbéliarde and Holstein. Differences in R2 between the three scenarios were notable only for fat content in the Montbéliarde breed: from 0.27 in scenario (1) to 0.33 in scenarios (2) and (3). Accuracies for fertility were lower than for other traits. 
Using a multi-breed reference population resulted in small or no increases in accuracy. Only the breed with a small data set and large genetic correlation with the breed with a large data set showed increased accuracy for the traits with moderate (milk) to high (fat content) heritability. No benefit was observed for fertility, a lowly heritable trait.
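
The two validation statistics used in this record can be computed directly: the R2 of the regression of daughter yield deviations (DYD) on predicted genomic breeding values (GEBV) measures accuracy, and the regression slope measures bias (a slope below 1 indicates inflated predictions). A minimal sketch on synthetic numbers, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
gebv = rng.normal(size=200)                          # predicted genomic breeding values
dyd = 0.8 * gebv + rng.normal(scale=0.6, size=200)   # synthetic daughter yield deviations

# Regression of DYD on GEBV: slope near 1 suggests unbiased predictions;
# R2 is the squared correlation between the two.
slope, intercept = np.polyfit(gebv, dyd, 1)
r2 = np.corrcoef(gebv, dyd)[0, 1] ** 2
```

With the synthetic slope of 0.8 built in above, the fitted slope lands below 1, i.e. the sketch reproduces the "inflated predictions" signature the regression test is designed to detect.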

  14. Joint genomic evaluation of French dairy cattle breeds using multiple-trait models

    PubMed Central

    2012-01-01

    Background Using a multi-breed reference population might be a way of increasing the accuracy of genomic breeding values in small breeds. Models involving mixed-breed data do not take into account the fact that marker effects may differ among breeds. This study was aimed at investigating the impact on accuracy of increasing the number of genotyped candidates in the training set by using a multi-breed reference population, in contrast to single-breed genomic evaluations. Methods Three traits (milk production, fat content and female fertility) were analyzed by genomic mixed linear models and Bayesian methodology. Three breeds of French dairy cattle were used: Holstein, Montbéliarde and Normande with 2976, 950 and 970 bulls in the training population, respectively and 964, 222 and 248 bulls in the validation population, respectively. All animals were genotyped with the Illumina Bovine SNP50 array. Accuracy of genomic breeding values was evaluated under three scenarios for the correlation of genomic breeding values between breeds (rg): (1) uncorrelated, rg = 0; (2) estimated rg; (3) high, rg = 0.95. Accuracy and bias of predictions obtained in the validation population with the multi-breed training set were assessed by the coefficient of determination (R2) and by the regression coefficient of daughter yield deviations of validation bulls on their predicted genomic breeding values, respectively. Results The genetic variation captured by the markers for each trait was similar to that estimated for routine pedigree-based genetic evaluation. Posterior means for rg ranged from −0.01 for fertility between Montbéliarde and Normande to 0.79 for milk yield between Montbéliarde and Holstein. Differences in R2 between the three scenarios were notable only for fat content in the Montbéliarde breed: from 0.27 in scenario (1) to 0.33 in scenarios (2) and (3). Accuracies for fertility were lower than for other traits. 
Conclusions Using a multi-breed reference population resulted in small or no increases in accuracy. Only the breed with a small data set and large genetic correlation with the breed with a large data set showed increased accuracy for the traits with moderate (milk) to high (fat content) heritability. No benefit was observed for fertility, a lowly heritable trait. PMID:23216664

  15. Genome Properties and Prospects of Genomic Prediction of Hybrid Performance in a Breeding Program of Maize

    PubMed Central

    Technow, Frank; Schrag, Tobias A.; Schipprack, Wolfgang; Bauer, Eva; Simianer, Henner; Melchinger, Albrecht E.

    2014-01-01

    Maize (Zea mays L.) serves as model plant for heterosis research and is the crop where hybrid breeding was pioneered. We analyzed genomic and phenotypic data of 1254 hybrids of a typical maize hybrid breeding program based on the important Dent × Flint heterotic pattern. Our main objectives were to investigate genome properties of the parental lines (e.g., allele frequencies, linkage disequilibrium, and phases) and examine the prospects of genomic prediction of hybrid performance. We found high consistency of linkage phases and large differences in allele frequencies between the Dent and Flint heterotic groups in pericentromeric regions. These results can be explained by the Hill–Robertson effect and support the hypothesis of differential fixation of alleles due to pseudo-overdominance in these regions. In pericentromeric regions we also found indications for consistent marker–QTL linkage between heterotic groups. With prediction methods GBLUP and BayesB, the cross-validation prediction accuracy ranged from 0.75 to 0.92 for grain yield and from 0.59 to 0.95 for grain moisture. The prediction accuracy of untested hybrids was highest, if both parents were parents of other hybrids in the training set, and lowest, if none of them were involved in any training set hybrid. Optimizing the composition of the training set in terms of number of lines and hybrids per line could further increase prediction accuracy. We conclude that genomic prediction facilitates a paradigm shift in hybrid breeding by focusing on the performance of experimental hybrids rather than the performance of parental lines in testcrosses. PMID:24850820

  16. Distribution and cluster analysis of predicted intrinsically disordered protein Pfam domains

    PubMed Central

    Williams, Robert W; Xue, Bin; Uversky, Vladimir N; Dunker, A Keith

    2013-01-01

    The Pfam database groups regions of proteins by how well hidden Markov models (HMMs) can be trained to recognize similarities among them. Conservation pressure is probably in play here. The Pfam seed training set includes sequence and structure information, being drawn largely from the PDB. A long-standing hypothesis among intrinsically disordered protein (IDP) investigators has held that conservation pressures are also at play in the evolution of different kinds of intrinsic disorder, but we find that predicted intrinsic disorder (PID) is not always conserved across Pfam domains. Here we analyze distributions and clusters of PID regions in 193024 members of the version 23.0 Pfam seed database. To include the maximum information available for proteins that remain unfolded in solution, we employ the 10 linearly independent Kidera factors for the amino acids, combined with PONDR predictions of disorder tendency, to transform the sequences of these Pfam members into an 11-column matrix where the number of rows is the length of each Pfam region. Cluster analyses of the set of all regions, including those that are folded, show 6 groupings of domains. Cluster analyses of domains with mean VSL2b scores greater than 0.5 (half predicted disorder or more) show at least 3 separated groups. It is hypothesized that grouping sets into shorter sequences with more uniform length will reveal more information about intrinsic disorder and lead to more finely structured and perhaps more accurate predictions. HMMs could be trained to include this information. PMID:28516017
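
The per-region L x 11 encoding described above (10 Kidera factors plus one disorder score per residue) can be sketched as follows. The factor values below are random placeholders standing in for the published Kidera table, and the disorder scores are stand-ins for PONDR predictions:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(2)
# Placeholder 10-dimensional "Kidera factors" per residue: illustrative random
# values, NOT the published Kidera table.
kidera = {aa: rng.normal(size=10) for aa in AMINO_ACIDS}

def encode(seq, disorder_scores):
    """Return an L x 11 matrix: 10 Kidera-style factors plus one
    per-residue disorder score (PONDR stand-in) per row."""
    rows = [np.append(kidera[aa], d) for aa, d in zip(seq, disorder_scores)]
    return np.vstack(rows)

seq = "MKVLAT"                                   # toy 6-residue region
m = encode(seq, [0.9, 0.8, 0.7, 0.4, 0.3, 0.2])  # rows = region length, 11 columns
```

Each Pfam region then becomes one such matrix, which is what the cluster analyses operate on.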

  17. Long-Term Abstract Learning of Attentional Set

    ERIC Educational Resources Information Center

    Leber, Andrew B.; Kawahara, Jun-Ichiro; Gabari, Yuji

    2009-01-01

    How does past experience influence visual search strategy (i.e., attentional set)? Recent reports have shown that, when given the option to use 1 of 2 attentional sets, observers persist with the set previously required in a training phase. Here, 2 related questions are addressed. First, does the training effect result only from perseveration with…

  18. Intelligent System Development Using a Rough Sets Methodology

    NASA Technical Reports Server (NTRS)

    Anderson, Gray T.; Shelton, Robert O.

    1997-01-01

    The purpose of this research was to examine the potential of the rough sets technique for developing intelligent models of complex systems from limited information. Rough sets are a simple but promising technique for extracting easily understood rules from data. The rough set methodology has been shown to perform well when used with a large set of exemplars, but its performance with sparse data sets is less certain. The difficulty is that rules will be developed based on just a few examples, each of which might have a large amount of noise associated with it. The question then becomes, what is the probability of a useful rule being developed from such limited information? One nice feature of rough sets is that in unusual situations, the technique can give an answer of 'I don't know'. That is, if a case arises that is different from the cases the rough set rules were developed on, the methodology can recognize this and alert human operators of it. It can also be trained to do this when the desired action is unknown because conflicting examples apply to the same set of inputs. This summer's project was to look at combining rough set theory with statistical theory to develop confidence limits in rules developed by rough sets. Often it is important not to make a certain type of mistake (e.g., false positives or false negatives), so the rules must be biased toward preventing a catastrophic error, rather than giving the most likely course of action. A method to determine the best course of action in the light of such constraints was examined. The resulting technique was tested with files containing electrical power line 'signatures' from the space shuttle and with decompression sickness data.
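
The "I don't know" behaviour described above corresponds to the rough-set boundary region: condition patterns that were never seen, or that conflicting training examples map to different decisions, are left undecided. A minimal sketch with hypothetical power-signature attributes (not the project's actual data):

```python
from collections import defaultdict

def rough_classifier(table, condition_keys, decision_key):
    """Build a classifier that answers a decision for condition patterns whose
    training examples all agree (the lower approximation), and "I don't know"
    for unseen or conflicting patterns (the boundary region)."""
    buckets = defaultdict(set)
    for row in table:
        cond = tuple(row[k] for k in condition_keys)
        buckets[cond].add(row[decision_key])

    def classify(row):
        decisions = buckets.get(tuple(row[k] for k in condition_keys))
        if decisions is None or len(decisions) > 1:
            return "I don't know"        # unseen case, or conflicting examples
        return next(iter(decisions))
    return classify

examples = [
    {"voltage": "high", "ripple": "low",  "fault": "no"},
    {"voltage": "high", "ripple": "high", "fault": "yes"},
    {"voltage": "low",  "ripple": "high", "fault": "yes"},
    {"voltage": "low",  "ripple": "high", "fault": "no"},   # conflicts with the row above
]
classify = rough_classifier(examples, ["voltage", "ripple"], "fault")
```

Here `classify` commits only where the sparse examples are unanimous, which is exactly the cautious behaviour the abstract motivates for avoiding catastrophic errors.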

  19. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  20. Geropsychology Training in a VA Nursing Home Setting

    ERIC Educational Resources Information Center

    Karel, Michele J.; Moye, Jennifer

    2005-01-01

    There is a growing need for professional psychology training in nursing home settings, and nursing homes provide a rich environment for teaching geropsychology competencies. We describe the nursing home training component of our Department of Veterans Affairs (VA) Predoctoral Internship and Geropsychology Postdoctoral Fellowship programs. Our…

  1. Development and Evaluation of a Train-the-Trainer Workshop for Hong Kong Community Social Service Agency Staff.

    PubMed

    Zhou, Qianling; Stewart, Sunita M; Wan, Alice; Leung, Charles Sai-Cheong; Lai, Agnes Y; Lam, Tai Hing; Chan, Sophia Siu-Chee

    2017-01-01

    Capacity building approaches are useful in large-scale community-based health promotion interventions. However, models to guide and evaluate capacity building among social service agency staff in community settings are rare in the literature. This paper describes the development and evaluation of a 1-day (7 h) train-the-trainer (TTT) workshop for the "Enhancing Family Well-Being Project". The workshop aimed at equipping staff from different community agencies with the knowledge and skills to design, implement, and evaluate positive psychology-based interventions for their clients in Sham Shui Po, an over-crowded and low-income district in Hong Kong. The current TTT extended and improved on our previous successful model by adding research and evaluation methods (including the Logic Model, process evaluation, and randomized controlled trial), which are important to plan and evaluate the community interventions. Evaluation of the TTT was guided by the Integrated Model of Training Evaluation and Effectiveness (IMTEE), with quantitative and qualitative methods. Quantitative data were collected from pretraining (T1), post-training (T2), and 6-month (T3) and 12-month (T4) follow-up surveys. Qualitative data were collected from four focus groups of agency staff after the intervention. Ninety-three staff from 30 community agencies attended the training, and 90 completed the baseline survey. Eighty-eight, 63, and 57 staff performed the evaluations at T2, T3, and T4, respectively. Agency staff were satisfied with the TTT. Immediate enhancement of knowledge, self-efficacy, and positive attitudes toward the training content was found at T2 (Cohen's d ranged from 0.24 to 1.22, all p  < 0.05). Enhancement of knowledge of all training contents persisted at T3 and T4 (Cohen's d ranged from 0.34 to 0.63, all p  < 0.05). Enhancement of self-efficacy in the use of positive psychology in intervention design persisted at T3 (Cohen's d  = 0.22, p  = 0.04). 
The skills learned were utilized to plan and develop subsequent interventions. Twenty-nine interventions were successfully designed and implemented by the agency staff, and delivered to 1,586 participants. The agency staff indicated their intention to utilize the skills they had learned for other interventions (score ≥4 out of 6) and to share these skills with their colleagues. Qualitative feedback from 23 agency staff supported the quantitative results. Our brief TTT was effectively delivered to a large number of agency staff and showed effects that persisted up to 12 months. Our training and evaluation models may offer a template for capacity building among social service agency staff for brief, universal family health promotion interventions in diverse community settings.

  2. Exercise-training intervention studies in competitive swimming.

    PubMed

    Aspenes, Stian Thoresen; Karlsen, Trine

    2012-06-01

    Competitive swimming has a long history and is currently one of the largest Olympic sports, with 16 pool events. Several aspects separate swimming from most other sports such as (i) the prone position; (ii) simultaneous use of arms and legs for propulsion; (iii) water immersion (i.e. hydrostatic pressure on thorax and controlled respiration); (iv) propulsive forces that are applied against a fluctuant element; and (v) minimal influence of equipment on performance. Competitive swimmers are suggested to have specific anthropometrical features compared with other athletes, but are nevertheless dependent on physiological adaptations to enhance their performance. Swimmers thus engage in large volumes of training in the pool and on dry land. Strength training of various forms is widely used, and the energetic systems are addressed by aerobic and anaerobic swimming training. The aim of the current review was to report results from controlled exercise training trials within competitive swimming. From a structured literature search we found 17 controlled intervention studies that covered strength or resistance training, assisted sprint swimming, arms-only training, leg-kick training, respiratory muscle training, training the energy delivery systems and combined interventions across the aforementioned categories. Nine of the included studies were randomized controlled trials. Among the included studies we found indications that heavy strength training on dry land (one to five repetitions maximum with pull-downs for three sets with maximal effort in the concentric phase) or sprint swimming with resistance towards propulsion (maximal pushing with the arms against fixed points or pulling a perforated bowl) may be efficient for enhanced performance, and may also possibly have positive effects on stroke mechanics. 
The largest effect size (ES) on swimming performance was found in 50 m freestyle after a dry-land strength training regimen of maximum six repetitions across three sets in relevant muscle-groups (ES 1.05), and after a regimen of resisted- and assisted-sprint training with elastic surgical tubes (ES 1.21). Secondly, several studies suggest that high training volumes do not pose any immediate advantage over lower volumes (with higher intensity) for swim performance. Overall, very few studies were eligible for the current review although the search strategy was broad and fairly liberal. The included studies predominantly involved freestyle swimming and, overall, there seems to be more questions than answers within intervention-based competitive swimming research. We believe that this review may encourage other researchers to pursue the interesting topics within the physiology of competitive swimming.

  3. Exercise order affects the total training volume and the ratings of perceived exertion in response to a super-set resistance training session

    PubMed Central

    Balsamo, Sandor; Tibana, Ramires Alsamir; Nascimento, Dahan da Cunha; de Farias, Gleyverton Landim; Petruccelli, Zeno; de Santana, Frederico dos Santos; Martins, Otávio Vanni; de Aguiar, Fernando; Pereira, Guilherme Borges; de Souza, Jéssica Cardoso; Prestes, Jonato

    2012-01-01

    The super-set is a widely used resistance training method consisting of exercises for agonist and antagonist muscles with limited or no rest interval between them – for example, bench press followed by bent-over rows. In this sense, the aim of the present study was to compare the effects of different super-set exercise sequences on the total training volume. A secondary aim was to evaluate the ratings of perceived exertion and fatigue index in response to different exercise order. On separate testing days, twelve resistance-trained men, aged 23.0 ± 4.3 years, height 174.8 ± 6.75 cm, body mass 77.8 ± 13.27 kg, body fat 12.0% ± 4.7%, were submitted to a super-set method by using two different exercise orders: quadriceps (leg extension) + hamstrings (leg curl) (QH) or hamstrings (leg curl) + quadriceps (leg extension) (HQ). Sessions consisted of three sets with a ten-repetition maximum load with 90 seconds rest between sets. Results revealed that the total training volume was higher for the HQ exercise order (P = 0.02) with lower perceived exertion than the inverse order (P = 0.04). These results suggest that HQ exercise order involving lower limbs may benefit practitioners interested in reaching a higher total training volume with lower ratings of perceived exertion compared with the leg extension plus leg curl order. PMID:22371654

  4. Advances in addressing technical challenges of point-of-care diagnostics in resource-limited settings

    PubMed Central

    Wang, ShuQi; Lifson, Mark A.; Inci, Fatih; Liang, Li-Guo; Sheng, Ye-Feng; Demirci, Utkan

    2016-01-01

    The striking prevalence of HIV, TB and malaria, as well as outbreaks of emerging infectious diseases, such as influenza A (H7N9), Ebola and MERS, poses great challenges for patient care in resource-limited settings (RLS). However, advanced diagnostic technologies cannot be implemented in RLS largely due to economic constraints. Simple and inexpensive point-of-care (POC) diagnostics, which rely less on environmental context and operator training, have thus been extensively studied to achieve early diagnosis and treatment monitoring in non-laboratory settings. Despite great input from material science, biomedical engineering and nanotechnology for developing POC diagnostics, significant technical challenges are yet to be overcome. Summarized here are the technical challenges associated with POC diagnostics from a RLS perspective and the latest advances in addressing these challenges are reviewed. PMID:26777725

  5. Performance Evaluation of State of the Art Systems for Physical Activity Classification of Older Subjects Using Inertial Sensors in a Real Life Scenario: A Benchmark Study

    PubMed Central

    Awais, Muhammad; Palmerini, Luca; Bourke, Alan K.; Ihlen, Espen A. F.; Helbostad, Jorunn L.; Chiari, Lorenzo

    2016-01-01

    The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, feature sets, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with data in the laboratory setting highly deteriorates when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of chosen systems to window size (from 1 s to 10 s) suggesting that overall accuracy decreases with increasing window size. 
Finally, to evaluate the impact of the number of sensors on the performance, chosen systems are modified considering only the sensing unit worn at the lower back. The results, similarly to the multi-sensor setup, indicate substantial degradation of the performance when laboratory-trained systems are tested in the real-life setting. This degradation is higher than in the multi-sensor setup. Still, the performance provided by the single-sensor approach, when trained and tested with real data, can be acceptable (with an accuracy above 80%). PMID:27973434
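
The window-size sensitivity analysis above presupposes segmenting the inertial signal into fixed-length windows and extracting per-window features before classification. A minimal sketch of that segmentation step (the 50 Hz sampling rate and two-feature set are illustrative, not the benchmarked systems' pipelines):

```python
import numpy as np

def windowed_features(signal, fs, win_s):
    """Split a 1-D acceleration signal into non-overlapping windows of
    win_s seconds and compute simple per-window features (mean, std)."""
    n = int(fs * win_s)                       # samples per window
    n_win = len(signal) // n                  # discard the incomplete tail
    wins = signal[: n_win * n].reshape(n_win, n)
    return np.column_stack([wins.mean(axis=1), wins.std(axis=1)])

fs = 50                                       # Hz (illustrative)
sig = np.sin(np.linspace(0, 20, fs * 10))     # 10 s of synthetic acceleration
feats_1s = windowed_features(sig, fs, 1)      # 1 s windows -> 10 feature rows
feats_5s = windowed_features(sig, fs, 5)      # 5 s windows -> 2 feature rows
```

Longer windows yield fewer, coarser training examples per recording, one plausible reason accuracy can drop as window size grows.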

  6. An Impaled Potential Unexploded Device in the Civilian Training Trauma Setting: A Case Report and Review of the Literature

    DTIC Science & Technology

    2017-12-11

    responsibility. Additionally, emergency physicians need to know how to manage a patient with an impaled unexploded device. Improper...his leg during the explosion. He was evaluated by EMS in the field where his limb was noted to be grossly unstable with a large anterior soft...including roadside explosives, explosive formed projectile devices and suicide bombs. In the United States military medical literature

  7. Fusion of footsteps and face biometrics on an unsupervised and uncontrolled environment

    NASA Astrophysics Data System (ADS)

    Vera-Rodriguez, Ruben; Tome, Pedro; Fierrez, Julian; Ortega-Garcia, Javier

    2012-06-01

    This paper reports for the first time experiments on the fusion of footsteps and face in an unsupervised and uncontrolled environment for person authentication. Footstep recognition is a relatively new biometric based on signals extracted from people walking over floor sensors. The idea of the fusion between footsteps and face starts from the premise that in an area where footstep sensors are installed it is very simple to also place a camera to capture the face of the person who walks over the sensors. This setup may find application in scenarios like ambient assisted living, smart homes, eldercare, or security access. The paper reports a comparative assessment of both biometrics using the same database and experimental protocols. In the experimental work we consider two different applications: smart homes (small group of users with a large set of training data) and security access (larger group of users with a small set of training data), obtaining results of 0.9% and 5.8% EER respectively for the fusion of both modalities. This is a significant performance improvement compared with the results obtained by the individual systems.
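
Score-level fusion of the two modalities, and the equal error rate (EER) used to report performance, can be sketched as below. Weighted-sum fusion after min-max normalisation is one common choice; the paper's actual fusion rule may differ:

```python
import numpy as np

def fuse(face_scores, step_scores, w=0.5):
    """Weighted-sum score-level fusion after min-max normalisation."""
    def norm(s):
        s = np.asarray(s, float)
        return (s - s.min()) / (s.max() - s.min())
    return w * norm(face_scores) + (1 - w) * norm(step_scores)

def eer(genuine, impostor):
    """Equal error rate: operating point where the false accept rate
    (impostors accepted) meets the false reject rate (genuine rejected)."""
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)
        frr = np.mean(genuine < t)
        best = min(best, max(far, frr))
    return best
```

With perfectly separated score distributions `eer` returns 0; overlapping genuine/impostor scores push it toward the 0.9% and 5.8% figures the record reports.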

  8. DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.

    PubMed

    Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou

    2016-07-07

    In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.

  9. Automatic classification of radiological reports for clinical care.

    PubMed

    Gerevini, Alfonso Emilio; Lavelli, Alberto; Maffi, Alessandro; Maroldi, Roberto; Minard, Anne-Lyse; Serina, Ivan; Squassina, Guido

    2018-06-07

    Radiological reporting generates a large amount of free-text clinical narratives, a potentially valuable source of information for improving clinical care and supporting research. Automatic techniques for analyzing such reports are necessary to make their content effectively available to radiologists in an aggregated form. In this paper we focus on the classification of chest computed tomography reports according to a classification schema proposed for this task by radiologists of the Italian hospital ASST Spedali Civili di Brescia. The proposed system is built by exploiting a training data set containing reports annotated by radiologists. Each report is classified according to the schema developed by the radiologists, and textual evidence is marked in the report. The annotations are then used to train different machine-learning-based classifiers. We present a method based on a cascade of classifiers that makes use of a set of syntactic and semantic features. The result is a novel hierarchical classification system for the given task, which we have experimentally evaluated. Copyright © 2018 Elsevier B.V. All rights reserved.
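    A cascade of classifiers routes each document through a sequence of decisions, with later stages applied only to the subset passed on by earlier ones. The sketch below is a hypothetical two-stage cascade on invented toy reports (stage 1: normal vs. abnormal; stage 2: subtype of the abnormal cases); the labels, features, and models are stand-ins, not the paper's schema or feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: four miniature "reports" with two-level labels.
reports = [
    "no focal lesions, lungs clear",
    "lungs clear, no abnormality detected",
    "nodule in right upper lobe, follow-up advised",
    "mass in left lower lobe, biopsy recommended",
]
stage1_labels = ["normal", "normal", "abnormal", "abnormal"]
stage2_labels = ["nodule", "mass"]  # labels for the abnormal subset only

# Stage 1: coarse classifier over all reports.
stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage1.fit(reports, stage1_labels)

# Stage 2: fine-grained classifier trained only on abnormal reports.
abnormal = [r for r, y in zip(reports, stage1_labels) if y == "abnormal"]
stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage2.fit(abnormal, stage2_labels)

def classify(report):
    """Cascade: stage 2 runs only if stage 1 says 'abnormal'."""
    if stage1.predict([report])[0] == "normal":
        return "normal"
    return stage2.predict([report])[0]
```

    The actual system uses richer syntactic and semantic features and more stages, but the control flow is the same: each classifier narrows the decision for the next.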

  10. Civil Legal Services and Medical-Legal Partnerships Needed by the Homeless Population: A National Survey.

    PubMed

    Tsai, Jack; Jenkins, Darlene; Lawton, Ellen

    2017-03-01

    To examine civil legal needs among people experiencing homelessness and the extent to which medical-legal partnerships, which promote the integration of civil legal aid professionals into health care settings, exist in homeless service sites. We surveyed a national sample of 48 homeless service sites across 26 states in November 2015. The survey asked about needs, attitudes, and practices related to civil legal issues, including medical-legal partnerships. More than 90% of the homeless service sites reported that their patients experienced at least 1 civil legal issue, particularly around housing, employment, health insurance, and disability benefits. However, only half of all sites reported screening patients for civil legal issues, and only 10% had a medical-legal partnership. The large majority of sites reported interest in receiving training on screening for civil legal issues and developing medical-legal partnerships. There is great need and potential to deploy civil legal services in health settings to serve unstably housed populations. Training homeless service providers to screen for civil legal issues and to develop medical-legal partnerships would better equip them to provide comprehensive care.

  11. Track monitoring from the dynamic response of a passing train: A sparse approach

    NASA Astrophysics Data System (ADS)

    Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo

    2017-06-01

    Collecting vibration data from revenue service trains could be a low-cost way to monitor railroad tracks more frequently, yet operational variability makes robust analysis a challenge. We propose a novel analysis technique for track monitoring that exploits the sparsity inherent in train-vibration data. This sparsity is based on the observation that large vertical train vibrations typically involve the excitation of the train's fundamental mode due to track joints, switchgear, or other discrete hardware. Rather than try to model the entire rail profile, in this study we examine a sparse approach to solving an inverse problem where (1) the roughness is constrained to a discrete and limited set of "bumps"; and (2) the train system is idealized as a simple damped oscillator that models the train's vibration in the fundamental mode. We use an expectation maximization (EM) approach to iteratively solve for the track profile and the train system properties, using orthogonal matching pursuit (OMP) to find the sparse approximation within each step. By enforcing sparsity, the inverse problem is well posed and the train's position can be found relative to the sparse bumps, thus reducing the uncertainty in the GPS data. We validate the sparse approach on two sections of track monitored from an operational train over a 16-month period, one where track changes did not occur during this period and another where changes did occur. We show that this approach can not only detect when track changes occur but also offer insight into the type of such changes.
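    OMP, the sparse solver used inside each EM step, greedily selects the few dictionary columns (here, candidate "bump" positions) that best explain the measurements. The sketch below is a generic stand-in, not the paper's train-response model: the measurement matrix is random Gaussian and the bump locations and amplitudes are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Recover a profile that is sparse in "bump" space from noisy
# linear measurements (a stand-in for the damped-oscillator
# train-response model in the paper).
rng = np.random.default_rng(0)
n_samples, n_positions, n_bumps = 80, 40, 3

A = rng.standard_normal((n_samples, n_positions))   # measurement model
true_profile = np.zeros(n_positions)
true_profile[[5, 17, 30]] = [1.0, -0.7, 0.5]        # three discrete bumps
y = A @ true_profile + 0.01 * rng.standard_normal(n_samples)

# OMP with the sparsity level fixed to the number of bumps.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_bumps)
omp.fit(A, y)
recovered = omp.coef_
support = np.flatnonzero(recovered)   # indices of the recovered bumps
```

    Constraining the solution to a few nonzero coefficients is what makes the inverse problem well posed, as the abstract notes; in the full method the dictionary would encode the oscillator response rather than random columns.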

  12. Broadband seismic effects from train vibrations

    NASA Astrophysics Data System (ADS)

    Fuchs, Florian; Bokelmann, Götz

    2017-04-01

    Seismologists rarely study train-induced vibrations, which are mainly regarded as an unwanted source of noise for classical seismological applications such as earthquake monitoring. A few seismological studies do try to utilize train vibrations as active sources, e.g. for subsurface imaging, but they do not focus on the characteristics of the train signal itself. Most available studies of train-induced vibrations take an engineering approach and aim at better understanding the generation and short-distance propagation of the vibrations, mainly for mitigation and construction purposes. They mostly rely on numerical simulations and/or short-period or accelerometer recordings obtained directly on the train track or up to a few hundred meters away; almost no studies exist with seismic recordings farther from the track. Some of these previous studies show sharp, equidistant peaks in the vibration spectrum of heavy freight trains but do not attempt to explain them. Here we show and analyze various train vibration signals obtained from a set of seismic broadband stations installed in the context of the temporary, large-scale regional seismic network AlpArray. The geometrical restrictions of this seismic network, combined with budget and safety considerations, resulted in a number of broadband instruments deployed in the vicinity of busy railway lines. On these stations we observe very characteristic seismic signals associated with different types of trains, typically showing pronounced equidistant spectral lines over a wide frequency range. In this study we analyze the nature of such signals and discuss whether they are generated by a source effect or by wave propagation effects in near-surface soil layers.
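    One simple source-effect explanation for equidistant spectral lines is that regularly spaced axles passing a fixed point excite the ground as a periodic impulse train, whose spectrum has lines at integer multiples of speed divided by axle spacing. The sketch below demonstrates that mechanism on synthetic data; the speed, spacing, and sampling values are illustrative assumptions, not measurements from the study, which deliberately leaves the source-vs-propagation question open.

```python
import numpy as np

# Synthetic demonstration: a periodic impulse train (one impulse per
# passing axle group) has spectral lines spaced by f0 = v / d.
fs = 200.0          # sampling rate, Hz (illustrative)
duration = 60.0     # seconds
v, d = 25.0, 15.0   # assumed train speed (m/s) and axle spacing (m)
f0 = v / d          # expected line spacing, Hz

t = np.arange(0, duration, 1 / fs)
sig = np.zeros_like(t)
impact_times = np.arange(0, duration, 1 / f0)
sig[np.round(impact_times * fs).astype(int)] = 1.0  # unit impulses

spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

# The strongest peak above DC and below the second harmonic sits at ~f0.
band = (freqs > 0.5) & (freqs < 2 * f0 - 0.5)
peak_freq = freqs[band][np.argmax(spectrum[band])]
```

    Real records are far messier (varying speed Doppler-shifts the lines, and soil layering can imprint its own resonances), which is precisely the ambiguity the study investigates.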

  13. In Silico Mining for Antimalarial Structure-Activity Knowledge and Discovery of Novel Antimalarial Curcuminoids.

    PubMed

    Viira, Birgit; Gendron, Thibault; Lanfranchi, Don Antoine; Cojean, Sandrine; Horvath, Dragos; Marcou, Gilles; Varnek, Alexandre; Maes, Louis; Maran, Uko; Loiseau, Philippe M; Davioud-Charvet, Elisabeth

    2016-06-29

    Malaria is a parasitic tropical disease that kills around 600,000 patients every year. The emergence of Plasmodium falciparum parasites resistant to artemisinin-based combination therapies (ACTs) represents a significant public health threat, indicating the urgent need for new effective compounds to reverse ACT resistance and cure the disease. To this end, extensive curation and homogenization of experimental anti-Plasmodium screening data from both in-house and ChEMBL sources were conducted. As a result, a strategy was established for compiling coherent training sets that associate compound structures with the respective antimalarial activity measurements. Seventeen of these training sets led to the successful generation of classification models that discriminate whether a compound has a significant probability of being active under the specific conditions of the antimalarial test associated with each set. These models were used in consensus prediction of the most likely actives from a series of curcuminoids available in-house. The positive predictions, together with a few compounds predicted as inactive, were then submitted to experimental in vitro antimalarial testing. A large majority of the compounds predicted as active showed antimalarial activity, whereas those predicted as inactive did not, experimentally validating the in silico screening approach. The proposed consensus machine learning approach shows its potential to reduce the cost and duration of antimalarial drug discovery.
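    Consensus prediction of the kind described combines the votes of several independently trained classifiers and flags a compound only when enough of them agree. The sketch below illustrates the voting mechanics on synthetic data; the three model types, the vote threshold, and the generated features are assumptions standing in for the paper's seventeen assay-specific models and descriptors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for structure-activity data: 1 = "active".
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Several independently trained models, each entitled to one vote.
models = [
    LogisticRegression(max_iter=1000).fit(X, y),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
    GaussianNB().fit(X, y),
]

def consensus_predict(X_new, threshold=2):
    """Flag a compound active (1) iff >= `threshold` models vote active."""
    votes = np.sum([m.predict(X_new) for m in models], axis=0)
    return (votes >= threshold).astype(int)

preds = consensus_predict(X[:5])
```

    Requiring agreement across models trained on different assays trades some sensitivity for precision, which suits a screening setting where only top-ranked compounds proceed to in vitro testing.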

  14. Performance Measures for Adaptive Decisioning Systems

    DTIC Science & Technology

    1991-09-11

    set to hypothesis space mapping best approximates the known map. Two assumptions, a sufficiently representative training set and the ability of the...successful prediction of LINEXT performance. The LINEXT algorithm above performs the decision space mapping on the training-set ele- ments exactly. For a

  15. Coordinating a national rangeland monitoring training program: Success and lessons learned

    USDA-ARS?s Scientific Manuscript database

    One of the best ways to ensure quality of information gathered in a rangeland monitoring program is through a strong and uniform set of trainings. Curriculum development and delivery of monitoring trainings poses unique challenges that are not seen in academic settings. Participants come from a rang...

  16. Manual cleaning of hospital mattresses: an observational study comparing high- and low-resource settings.

    PubMed

    Hopman, J; Hakizimana, B; Meintjes, W A J; Nillessen, M; de Both, E; Voss, A; Mehtar, S

    2016-01-01

    Hospital-associated infections (HAIs) are more frequently encountered in low- than in high-resource settings. There is a need to identify and implement feasible and sustainable approaches to strengthen HAI prevention in low-resource settings. To evaluate the biological contamination of routinely cleaned mattresses in both high- and low-resource settings. In this two-stage observational study, routine manual bed cleaning was evaluated at two university hospitals using adenosine triphosphate (ATP). Standardized training of cleaning personnel was achieved in both high- and low-resource settings. Qualitative analysis of the cleaning process was performed to identify predictors of cleaning outcome in low-resource settings. Mattresses in low-resource settings were highly contaminated prior to cleaning. Cleaning significantly reduced biological contamination of mattresses in low-resource settings (P < 0.0001). After training, the contamination observed after cleaning seemed comparable in the high- and low-resource settings. Cleaning with the appropriate type of cleaning materials adequately reduced mattress contamination. Predictors for mattresses that remained contaminated in a low-resource setting included: type of product used, type of ward, training, and the level of contamination prior to cleaning. In low-resource settings, mattresses were highly contaminated, as indicated by ATP levels. Routine manual cleaning by trained staff can be as effective in a low-resource setting as in a high-resource setting. We recommend a multi-modal cleaning strategy that consists of training of domestic services staff, availability of adequate time to clean beds between patients, and application of the correct type of cleaning products. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  17. Teaching between-class generalization of toy play behavior to handicapped children.

    PubMed Central

    Haring, T G

    1985-01-01

    In this study, young children with severe and moderate handicaps were taught to generalize play responses. A multiple baseline across responses design, replicated with four children, was used to assess the effects of generalization training within four sets of toys on generalization to untrained toys from four other sets. The responses taught were unique for each set of toys. Across the four participants, training to generalize within toy sets resulted in complete between-class generalization in 11 sets, partial generalization in 3 sets, and no generalization in 2 sets. No generalization occurred to another class of toys that differed from the previous sets in that they produced a reaction to the play movement (e.g., pianos). Implications for conducting research using strategies based on class interrelationships in training contexts are discussed. PMID:4019349

  18. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems

    PubMed Central

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have brought changes to modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, numerous types of training simulation systems are used in military settings. In addition, differences in system set-up time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address these problems, this study integrated the analytic hierarchy process (AHP), soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance–performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate the application of military training simulation systems, avoid wasted resources (e.g., low utility and idle time), and provide data for subsequent applications and analysis. To verify the proposed method, numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system. PMID:27598390

  19. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    PubMed

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have brought changes to modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, numerous types of training simulation systems are used in military settings. In addition, differences in system set-up time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address these problems, this study integrated the analytic hierarchy process (AHP), soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate the application of military training simulation systems, avoid wasted resources (e.g., low utility and idle time), and provide data for subsequent applications and analysis. To verify the proposed method, numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system.
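    Both records above build on the analytic hierarchy process (AHP) as the baseline ranking method. As a minimal illustration of AHP alone (not the papers' full soft-set/fuzzy-linguistic pipeline), the sketch below derives criterion weights from a hypothetical reciprocal pairwise-comparison matrix via its principal eigenvector and checks the consistency ratio; the matrix entries are invented for the example.

```python
import numpy as np

# Hypothetical pairwise comparisons of 3 evaluation criteria on
# Saaty's 1-9 scale: A[i, j] = how much more important i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1); the random
# index RI for n = 3 is 0.58. CR < 0.1 means acceptably consistent.
n = A.shape[0]
lambda_max = eigvals[k].real
CR = ((lambda_max - n) / (n - 1)) / 0.58
```

    The soft-set and fuzzy-linguistic layers in the papers extend this core by handling incomplete questionnaire information and linguistic (rather than crisp) judgments.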

  20. Baryonic effects in cosmic shear tomography: PCA parametrization and importance of extreme baryonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Irshad; Gnedin, Nickolay Y.

    Baryonic effects are among the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future cosmological surveys such as LSST, Euclid, etc. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we take the investigation further and address two critical aspects. First, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly, with an RMS $\sim 0.0011$. Second, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct application of this method to the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can be parameterized as the coefficients of these principal components and should be marginalized over the cosmological parameter space.
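    The train/test PCA procedure described above can be sketched directly: fit a PCA on a training set of model vectors, keep the four largest eigenmodes, and measure the RMS residual when held-out test vectors are reconstructed from those four coefficients alone. The data below are synthetic stand-ins (random vectors drawn from a four-mode subspace), not the simulation spectra used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_train, n_test, n_bins = 30, 10, 50

# Synthetic stand-in: all "model spectra" live in a 4-mode subspace.
basis = rng.standard_normal((4, n_bins))
train = rng.standard_normal((n_train, 4)) @ basis
test = rng.standard_normal((n_test, 4)) @ basis

# Fit PCA on the training set, keeping the four largest eigenmodes.
pca = PCA(n_components=4).fit(train)

# Each test model is summarized by 4 coefficients; reconstruct and
# measure the RMS residual of the fit.
coeffs = pca.transform(test)
recon = pca.inverse_transform(coeffs)
rms = np.sqrt(np.mean((test - recon) ** 2))
```

    In this idealized setup the residual is essentially zero because the test vectors truly lie in the training subspace; the paper's point is that real baryonic scenarios only behave this way if the training set is diverse enough (outliers included) to bracket them.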
