Fourier spatial frequency analysis for image classification: training the training set
NASA Astrophysics Data System (ADS)
Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart
2016-04-01
The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the DFSF in the training set is bounded. However, if variation in the DFSF grows with training set size, accuracy may decrease as more training images are added. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method, we extract the DFSF spectrum from radiographs of osteoporotic bone and use it as a matched filter set to eliminate noise and image-specific frequencies, and we demonstrate that selecting a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space becomes computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space and identify an optimal subset of basis training images.
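A minimal sketch of the kind of subset search this record points to, assuming precomputed feature vectors `X` (e.g., DFSF-derived) and class labels `y`. The greedy strategy and the linear SVM are illustrative stand-ins, not the authors' procedure; the loop also makes the combinatorial cost visible, since each added image requires refitting against every remaining candidate.

```python
# Greedy forward selection of a training subset that maximizes SVM accuracy
# on the images left out of the subset (illustrative stand-in only).
import numpy as np
from sklearn.svm import SVC

def greedy_subset(X, y, subset_size, seed=0):
    rng = np.random.default_rng(seed)
    # start with one example per class so the SVM can be fitted
    selected = [int(rng.choice(np.where(y == c)[0])) for c in np.unique(y)]
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < subset_size and remaining:
        best_i, best_acc = remaining[0], -1.0
        for i in remaining:
            train = selected + [i]
            test = [j for j in range(len(X)) if j not in train]
            clf = SVC(kernel="linear").fit(X[train], y[train])
            acc = clf.score(X[test], y[test])       # accuracy on held-out images
            if acc > best_acc:
                best_i, best_acc = i, acc
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```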
How large a training set is needed to develop a classifier for microarray data?
Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M
2008-01-01
A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
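The paper's closed-form calculation (from standardized fold change, class prevalence, and number of features) is not reproduced here; as a generic empirical complement, a sketch of estimating a learning curve by subsampling, assuming a feature matrix `X` (samples × genes) and binary labels `y`:

```python
# Empirical learning curve as a rough guide to required training-set size.
import numpy as np
from sklearn.model_selection import learning_curve, StratifiedKFold
from sklearn.linear_model import LogisticRegression

def accuracy_vs_training_size(X, y, fractions=(0.2, 0.4, 0.6, 0.8, 1.0)):
    sizes, _, test_scores = learning_curve(
        LogisticRegression(max_iter=5000), X, y,
        train_sizes=list(fractions),
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        scoring="accuracy")
    # mean cross-validated accuracy at each absolute training-set size
    return dict(zip(sizes, test_scores.mean(axis=1)))
```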
NASA Astrophysics Data System (ADS)
Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.
2017-05-01
Artificial neural network (ANN)-based models are an efficient way to perform source localisation. However, very large training sets are needed to precisely estimate two-dimensional Direction of Arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Learning Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.
A simple method to derive bounds on the size and to train multilayer neural networks
NASA Technical Reports Server (NTRS)
Sartori, Michael A.; Antsaklis, Panos J.
1991-01-01
A new derivation is presented for the bounds on the size of a multilayer neural network to exactly implement an arbitrary training set; namely, the training set can be implemented with zero error with two layers and with the number of hidden-layer neurons n₁ ≥ p - 1. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivations. The weights for the hidden layer can be chosen almost arbitrarily, and the weights for the output layer can be found by solving n₁ + 1 linear equations. The method presented exactly solves (M), the multilayer neural network training problem, for any arbitrary training set.
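A numerical sketch of this construction, assuming tanh hidden units (the activation choice is an assumption): hidden weights are drawn at random and the output weights are obtained from a single linear solve, after which the p training pairs are fitted to numerical precision.

```python
# Random hidden layer + linear solve for the output layer: exact fit of p pairs.
import numpy as np

rng = np.random.default_rng(0)
p, d = 20, 3                               # p training pairs, d inputs
X = rng.normal(size=(p, d))
y = rng.normal(size=p)                     # arbitrary targets

n_hidden = p                               # at least p - 1 hidden neurons
W1 = rng.normal(size=(d, n_hidden))        # hidden weights chosen almost arbitrarily
b1 = rng.normal(size=n_hidden)
H = np.tanh(X @ W1 + b1)                   # hidden activations, shape (p, n_hidden)
H1 = np.hstack([H, np.ones((p, 1))])       # bias column -> n_hidden + 1 unknowns
w2, *_ = np.linalg.lstsq(H1, y, rcond=None)  # the linear-equation solve

print(np.max(np.abs(H1 @ w2 - y)))         # ~1e-12: training set implemented exactly
```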
Optimization of genomic selection training populations with a genetic algorithm
USDA-ARS's Scientific Manuscript database
In this article, we derive a computationally efficient statistic to measure the reliability of estimates of genetic breeding values for a fixed set of genotypes based on a given training set of genotypes and phenotypes. We adopt a genetic algorithm scheme to find a training set of certain size from ...
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Gros, Eric
2018-03-01
Deep Learning (DL) has been successfully applied in numerous fields fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers with ReLU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of inclusion of network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5, 10, 20, 40, 50, and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study depending on the training size and network layout used. Dropout layers improved the model accuracy for all training sizes.
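A PyTorch sketch of the described layout: 6 convolutional layers with ReLU, max-pooling after every second convolution, 2 fully connected layers, and a softmax over 6 anatomy classes. Channel widths, the 256×256 single-channel input size, and the dropout placement are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class AnatomyNet(nn.Module):
    def __init__(self, n_classes=6, use_dropout=True, use_batchnorm=False):
        super().__init__()
        chans = [1, 16, 16, 32, 32, 64, 64]            # assumed channel widths
        layers = []
        for i in range(6):
            layers.append(nn.Conv2d(chans[i], chans[i + 1], 3, padding=1))
            if use_batchnorm:
                layers.append(nn.BatchNorm2d(chans[i + 1]))
            layers.append(nn.ReLU(inplace=True))
            if i % 2 == 1:                             # pool after every 2nd conv
                layers.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5) if use_dropout else nn.Identity(),
            nn.Linear(64 * 32 * 32, 256),              # 256x256 input pooled 3x -> 32x32
            nn.ReLU(inplace=True),
            nn.Linear(256, n_classes))                 # softmax applied by CrossEntropyLoss

    def forward(self, x):
        return self.classifier(self.features(x))

model = AnatomyNet()
logits = model(torch.randn(2, 1, 256, 256))            # -> shape (2, 6)
```

The `use_dropout` and `use_batchnorm` flags mirror the architecture variations the study compares.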
Buchler, Norbou G; Hoyer, William J; Cerella, John
2008-06-01
Task-switching performance was assessed in young and older adults as a function of the number of task sets to be actively maintained in memory (varied from 1 to 4) over the course of extended training (5 days). Each of the four tasks required the execution of a simple computational algorithm, which was instantaneously cued by the color of the two-digit stimulus. Tasks were presented in pure (task set size 1) and mixed blocks (task set sizes 2, 3, 4), and the task sequence was unpredictable. By considering task switching beyond two tasks, we found evidence for a cognitive control system that is not overwhelmed by task set size load manipulations. Extended training eliminated age effects in task-switching performance, even when the participants had to manage the execution of up to four tasks. The results are discussed in terms of current theories of cognitive control, including task set inertia and production system postulates.
The effect of sample size and disease prevalence on supervised machine learning of narrative data.
McKnight, Lawrence K.; Wilcox, Adam; Hripcsak, George
2002-01-01
This paper examines the independent effects of outcome prevalence and training sample sizes on inductive learning performance. We trained 3 inductive learning algorithms (MC4, IB, and Naïve-Bayes) on 60 simulated datasets of parsed radiology text reports labeled with 6 disease states. Datasets were constructed to define positive outcome states at 5 prevalence rates (1, 5, 10, 25, and 50%) in training set sizes of 200 and 2,000 cases. We found that the effect of outcome prevalence is significant when outcome classes drop below 10% of cases. The effect appeared independent of sample size, induction algorithm used, or class label. Work is needed to identify methods of improving classifier performance when output classes are rare. PMID:12463878
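A sketch of this experimental grid with simulated data and stand-in learners (decision tree for MC4, k-NN for IB, Gaussian naive Bayes); the feature dimensionality and the F1 metric are assumptions made only to keep the example self-contained.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

learners = {"tree": DecisionTreeClassifier(random_state=0),
            "knn": KNeighborsClassifier(),
            "nb": GaussianNB()}

for prevalence in (0.01, 0.05, 0.10, 0.25, 0.50):
    for n_train in (200, 2000):
        X, y = make_classification(n_samples=n_train + 2000, n_features=30,
                                   weights=[1 - prevalence], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=n_train, stratify=y, random_state=0)
        for name, clf in learners.items():
            f1 = f1_score(y_te, clf.fit(X_tr, y_tr).predict(X_te), zero_division=0)
            print(f"prev={prevalence:.2f} n={n_train} {name}: F1={f1:.2f}")
```

At the rarest prevalences the F1 scores collapse regardless of learner, which is the qualitative effect the study reports.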
NASA Astrophysics Data System (ADS)
Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue
2015-04-01
Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar pattern was observed for pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
Training set selection for the prediction of essential genes.
Cheng, Jian; Xu, Zhao; Wu, Wenwu; Zhao, Li; Li, Xiangchen; Liu, Yanlin; Tao, Shiheng
2014-01-01
Various computational models have been developed to transfer annotations of gene essentiality between organisms. However, despite the increasing number of microorganisms with well-characterized sets of essential genes, selection of appropriate training sets for predicting the essential genes of poorly-studied or newly sequenced organisms remains challenging. In this study, a machine learning approach was applied reciprocally to predict the essential genes in 21 microorganisms. Results showed that training set selection greatly influenced predictive accuracy. We determined four criteria for training set selection: (1) essential genes in the selected training set should be reliable; (2) the growth conditions in which essential genes are defined should be consistent in training and prediction sets; (3) species used as training set should be closely related to the target organism; and (4) organisms used as training and prediction sets should exhibit similar phenotypes or lifestyles. We then analyzed the performance of an incomplete training set and an integrated training set with multiple organisms. We found that the size of the training set should be at least 10% of the total genes to yield accurate predictions. Additionally, the integrated training sets exhibited remarkable increase in stability and accuracy compared with single sets. Finally, we compared the performance of the integrated training sets with the four criteria and with random selection. The results revealed that a rational selection of training sets based on our criteria yields better performance than random selection. Thus, our results provide empirical guidance on training set selection for the identification of essential genes on a genome-wide scale.
Set Shifting Training with Categorization Tasks
Soveri, Anna; Waris, Otto; Laine, Matti
2013-01-01
The very few cognitive training studies targeting an important executive function, set shifting, have reported performance improvements that also generalized to untrained tasks. The present randomized controlled trial extends set shifting training research by comparing previously used cued training with uncued training. A computerized adaptation of the Wisconsin Card Sorting Test was utilized as the training task in a pretest-posttest experimental design involving three groups of university students. One group received uncued training (n = 14), another received cued training (n = 14) and the control group (n = 14) only participated in pre- and posttests. The uncued training group showed posttraining performance increases on their training task, but neither training group showed statistically significant transfer effects. Nevertheless, comparison of effect sizes for transfer effects indicated that our results did not differ significantly from those of previous studies. Our results suggest that the cognitive effects of computerized set shifting training are mostly task-specific, which would preclude robust generalization effects from this type of training. PMID:24324717
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies
2010-01-01
Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general. PMID:20144194
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
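A minimal sketch of leave-one-out evaluation of a naive Bayes classifier on sequences; representing each sequence by its amino acid composition is an assumption made only for illustration, and the toy sequences below are placeholders, not data from the study.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    seq = seq.upper()
    return np.array([seq.count(a) / max(len(seq), 1) for a in AA])

def loo_accuracy(sequences, labels):
    X = np.array([composition(s) for s in sequences])
    y = np.array(labels)             # 1 = amyloidogenic, 0 = non-amyloidogenic
    return cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()

print(loo_accuracy(["VQLVQSGAEV", "QSVLTQPPSV", "EVQLVESGGG", "DIQMTQSPSS"],
                   [1, 0, 1, 0]))
```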
NASA Astrophysics Data System (ADS)
Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell
2017-03-01
Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
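A sketch of the second approach described above: an out-of-domain pre-trained network used as a fixed feature extractor with a shallow classifier on top. ResNet-18 (torchvision ≥ 0.13) and logistic regression are stand-ins chosen for illustration; the paper's actual networks are not specified here.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def extract_features(images):            # images: (N, 3, 224, 224) float tensor
    return backbone(images).numpy()      # (N, 512) feature vectors

# Hypothetical usage with multi-channel lesion images resized to 224x224:
# feats_train = extract_features(train_images)
# clf = LogisticRegression(max_iter=1000).fit(feats_train, train_labels)
# accuracy = clf.score(extract_features(test_images), test_labels)
```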
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
Mozetič, Igor; Grčar, Miha; Smailović, Jasmina
2016-01-01
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered. PMID:27149621
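A small sketch of the comparison the study emphasizes: model performance measured against inter-annotator agreement. Cohen's kappa is used here as a simple stand-in for the agreement measures discussed above, and `labels_a`, `labels_b`, and `model_predictions` are assumed to be aligned label sequences.

```python
from sklearn.metrics import cohen_kappa_score

def agreement_report(labels_a, labels_b, model_predictions):
    # agreement between two human annotators vs. agreement of the model with one of them
    return {"inter_annotator_kappa": cohen_kappa_score(labels_a, labels_b),
            "model_vs_annotator_kappa": cohen_kappa_score(labels_a, model_predictions)}

print(agreement_report(["pos", "neg", "neu", "pos"],
                       ["pos", "neg", "pos", "pos"],
                       ["pos", "neu", "neu", "pos"]))
```

When the model-versus-annotator kappa approaches the inter-annotator kappa, additional training data buys little, which is the ceiling effect the paper describes.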
Training and Assessment of Hysteroscopic Skills: A Systematic Review.
Savran, Mona Meral; Sørensen, Stine Maya Dreier; Konge, Lars; Tolsgaard, Martin G; Bjerrum, Flemming
2016-01-01
The aim of this systematic review was to identify studies on hysteroscopic training and assessment. PubMed, Excerpta Medica, the Cochrane Library, and Web of Science were searched in January 2015. Manual screening of references and citation tracking were also performed. Studies on hysteroscopic educational interventions were selected without restrictions on study design, populations, language, or publication year. A qualitative data synthesis including the setting, study participants, training model, training characteristics, hysteroscopic skills, assessment parameters, and study outcomes was performed by 2 authors working independently. Effect sizes were calculated when possible. Overall, 2 raters independently evaluated sources of validity evidence supporting the outcomes of the hysteroscopy assessment tools. A total of 25 studies on hysteroscopy training were identified, of which 23 were performed in simulated settings. Overall, 10 studies used virtual-reality simulators and reported effect sizes for technical skills ranging from 0.31 to 2.65; 12 used inanimate models and reported effect sizes for technical skills ranging from 0.35 to 3.19. One study involved live animal models; 2 studies were performed in clinical settings. The validity evidence supporting the assessment tools used was low. Consensus between the 2 raters on the reported validity evidence was high (94%). This systematic review demonstrated large variations in the effect of different tools for hysteroscopy training. The validity evidence supporting the assessment of hysteroscopic skills was limited.
Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L
2015-01-01
Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP problem.
NASA Astrophysics Data System (ADS)
Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.
2013-03-01
Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and kNN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
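A sketch of this evaluation protocol under stated assumptions: the DR cases are kept fixed, the number of normal controls is varied, each composition is redrawn 10 times, and mean test AUC is reported. A k-NN classifier on precomputed feature vectors (`X`, with `y = 1` for DR) is assumed for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def auc_vs_controls(X_tr, y_tr, X_te, y_te, control_counts, repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    dr_idx = np.where(y_tr == 1)[0]            # fixed DR cases
    ctrl_idx = np.where(y_tr == 0)[0]          # pool of normal controls
    results = {}
    for n_ctrl in control_counts:
        aucs = []
        for _ in range(repeats):
            idx = np.concatenate([dr_idx,
                                  rng.choice(ctrl_idx, n_ctrl, replace=False)])
            clf = KNeighborsClassifier().fit(X_tr[idx], y_tr[idx])
            aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
        results[n_ctrl] = float(np.mean(aucs))
    return results
```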
Maintenance of Velocity and Power With Cluster Sets During High-Volume Back Squats.
Tufano, James J; Conlon, Jenny A; Nimphius, Sophia; Brown, Lee E; Seitz, Laurent B; Williamson, Bryce D; Haff, G Gregory
2016-10-01
To compare the effects of a traditional set structure and 2 cluster set structures on force, velocity, and power during back squats in strength-trained men. Twelve men (25.8 ± 5.1 y, 1.74 ± 0.07 m, 79.3 ± 8.2 kg) performed 3 sets of 12 repetitions at 60% of 1-repetition maximum using 3 different set structures: traditional sets (TS), cluster sets of 4 (CS4), and cluster sets of 2 (CS2). When averaged across all repetitions, peak velocity (PV), mean velocity (MV), peak power (PP), and mean power (MP) were greater in CS2 and CS4 than in TS (P < .01), with CS2 also resulting in greater values than CS4 (P < .02). When examining individual sets within each set structure, PV, MV, PP, and MP decreased during the course of TS (effect sizes 0.28-0.99), whereas no decreases were noted during CS2 (effect sizes 0.00-0.13) or CS4 (effect sizes 0.00-0.29). These results demonstrate that CS structures maintain velocity and power, whereas TS structures do not. Furthermore, increasing the frequency of intraset rest intervals in CS structures maximizes this effect and should be used if maximal velocity is to be maintained during training.
T-wave end detection using neural networks and Support Vector Machines.
Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román
2018-05-01
In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both, Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes.
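A sketch of one of the training-set selection strategies listed above (k-means-based selection): pick the samples closest to cluster centres as a small training set for a kernel regressor. SVR is a rough stand-in for FS-LSSVM, and the ECG-derived features and T-wave end targets are assumed inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.metrics.pairwise import euclidean_distances

def kmeans_training_subset(X, n_train, seed=0):
    km = KMeans(n_clusters=n_train, n_init=10, random_state=seed).fit(X)
    # index of the sample nearest to each cluster centre
    return euclidean_distances(km.cluster_centers_, X).argmin(axis=1)

# Hypothetical usage on ECG-derived features X and T-wave end offsets y:
# idx = kmeans_training_subset(X, n_train=100)
# model = SVR(kernel="rbf").fit(X[idx], y[idx])
```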
Hudson, Geoffrey M; Green, J Matt; Bishop, Phillip A; Richardson, Mark T
2008-11-01
This study compared independent effects of caffeine and aspirin on muscular endurance (repetitions), heart rate (HR), perceived exertion (RPE), and perceived pain index (PPI) during light resistance training bouts performed to volitional failure. It was hypothesized that the hypoalgesic properties of these ergogenic aids would decrease pain perception and potentially result in enhanced performance. College-aged men (n = 15) participated in a within-subjects, double-blind study with three independent, counterbalanced sessions wherein aspirin (10 mg·kg⁻¹), caffeine (6 mg·kg⁻¹), or matched placebo were ingested 1 hour before exercise, and RPE, HR, PPI, and repetitions (per set and total per exercise) were recorded at 100% of individual, predetermined, 12-repetition maximum for leg extensions (LE) and seated arm curls (AC). Repeated-measures analyses of variance were used for between-trial comparisons. Caffeine resulted in significantly greater (p < 0.05) HR (LE and AC), total repetitions (LE), and repetitions in set 1 (LE and AC) compared with aspirin and placebo. Aspirin resulted in significantly higher PPI in set 1 (LE). In LE, 47% of participants' performance exceeded the predetermined effect size (≥5 repetitions) for total repetitions, with 53% exceeding the effect size (≥2 repetitions) for repetitions in set 1 with caffeine (vs. placebo). In AC, 53% (total repetitions) and 47% (set 1 repetitions) of participants exceeded effect sizes with caffeine (vs. placebo), with only 13% experiencing decrements in performance (total repetitions). Aspirin also produced a higher PPI and RPE overall and in set 1 (vs. placebo). This study demonstrates that caffeine significantly enhanced resistance training performance in LE and AC, whereas aspirin did not. Athletes may improve their resistance training performance by acute ingestion of caffeine. As with most ergogenic aids, our analyses indicate that individual responses vary greatly.
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem
Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683
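The Dice coefficients quoted above measure spatial overlap between the corrected and manual label maps; a minimal sketch of the computation for a given structure label (the toy arrays are placeholders, not study data):

```python
import numpy as np

def dice(seg_a, seg_b, label):
    a = (np.asarray(seg_a) == label)
    b = (np.asarray(seg_b) == label)
    denom = a.sum() + b.sum()
    # 2 * |A ∩ B| / (|A| + |B|); define overlap as 1.0 when both masks are empty
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.array([[0, 1, 1], [0, 1, 2], [0, 2, 2]])
manual = np.array([[0, 1, 1], [0, 1, 1], [0, 2, 2]])
print(dice(auto, manual, label=1), dice(auto, manual, label=2))
```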
Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias
2013-06-01
The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior.
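A sketch of the concordance index for right-censored data, the quantity these models are optimized and scored on: among usable pairs (where the earlier time is an observed event), count how often the model assigns higher risk to the earlier failure, with ties scoring 0.5. This is a plain quadratic-time implementation for illustration, not the authors' code.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    times, events, risk = map(np.asarray, (times, events, risk_scores))
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:   # i failed before j: usable pair
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

print(concordance_index(times=[2, 5, 7, 9], events=[1, 1, 0, 1],
                        risk_scores=[0.9, 0.6, 0.4, 0.2]))   # -> 1.0 (perfect ranking)
```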
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
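A sketch of the compression-before-training idea: quantize the joint input/target vectors into a small codebook and train the network on the codebook instead of the full data. k-means is used here as a simple stand-in for the LVQ-based reduction, and the regressor settings are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def compress_and_train(X, y, compression_ratio=0.1, seed=0):
    n_proto = max(2, int(len(X) * compression_ratio))
    joint = np.column_stack([X, y])                    # quantize inputs and targets jointly
    centers = KMeans(n_clusters=n_proto, n_init=10,
                     random_state=seed).fit(joint).cluster_centers_
    X_c, y_c = centers[:, :-1], centers[:, -1]         # split codebook back into X / y
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                         random_state=seed).fit(X_c, y_c)
    return model

# Hypothetical usage on sensor signal features: model = compress_and_train(X, y, 0.05)
```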
Comparing Pattern Recognition Feature Sets for Sorting Triples in the FIRST Database
NASA Astrophysics Data System (ADS)
Proctor, D. D.
2006-07-01
Pattern recognition techniques have been used with increasing success for coping with the tremendous amounts of data being generated by automated surveys. Usually this process involves construction of training sets, the typical examples of data with known classifications. Given a feature set, along with the training set, statistical methods can be employed to generate a classifier. The classifier is then applied to process the remaining data. Feature set selection, however, is still an issue. This paper presents techniques developed for accommodating data for which a substantive portion of the training set cannot be classified unambiguously, a typical case for low-resolution data. Significance tests on the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees are introduced as a method of evaluating the relative quality of feature sets. The technique is applied to comparing feature sets for sorting a particular radio galaxy morphology, bent-doubles, from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) database. Also examined are alternative functional forms for feature sets. Associated standard deviations provide the means to evaluate the effect of the number of folds, the number of classifiers per fold, and the sample size on the resulting classifications. The technique may also be applied to situations in which accurate classifications are available but the feature set is clearly inadequate, and it is nonetheless desirable to make the best of the available information.
The influence of negative training set size on machine learning-based virtual screening.
Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J
2014-01-01
The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
The influence of negative training set size on machine learning-based virtual screening
2014-01-01
Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867
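A sketch of the core experiment under stated assumptions: fix the positives, vary the number of randomly drawn negatives, and track recall, precision, and MCC of a stand-in classifier (random forest on precomputed fingerprints) against a fixed test set. The function and argument names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef, precision_score, recall_score

def negative_ratio_scan(X_pos, X_neg_pool, X_test, y_test, ratios, seed=0):
    rng = np.random.default_rng(seed)
    rows = []
    for r in ratios:                                  # negatives per positive
        n_neg = int(r * len(X_pos))
        neg = X_neg_pool[rng.choice(len(X_neg_pool), n_neg, replace=False)]
        X = np.vstack([X_pos, neg])
        y = np.array([1] * len(X_pos) + [0] * n_neg)
        pred = RandomForestClassifier(random_state=seed).fit(X, y).predict(X_test)
        rows.append((r, recall_score(y_test, pred),
                     precision_score(y_test, pred, zero_division=0),
                     matthews_corrcoef(y_test, pred)))
    return rows                                       # (ratio, recall, precision, MCC)
```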
Quantitative analysis of single- vs. multiple-set programs in resistance training.
Wolfe, Brian L; LeMura, Linda M; Cole, Phillip J
2004-02-01
The purpose of this study was to examine the existing research on single-set vs. multiple-set resistance training programs. Using the meta-analytic approach, we included studies that met the following criteria in our analysis: (a) at least 6 subjects per group; (b) subject groups consisting of single-set vs. multiple-set resistance training programs; (c) pretest and posttest strength measures; (d) training programs of 6 weeks or more; (e) apparently "healthy" individuals free from orthopedic limitations; and (f) published studies in English-language journals only. Sixteen studies generated 103 effect sizes (ESs) based on a total of 621 subjects, ranging in age from 15-71 years. Across all designs, intervention strategies, and categories, the pretest to posttest ES in muscular strength was (x̄ = 1.4 ± 1.4; 95% confidence interval, 0.41-3.8; p < 0.001). The results of 2 × 2 analysis of variance revealed simple main effects for age, training status (trained vs. untrained), and research design (p < 0.001). No significant main effects were found for sex, program duration, and set end point. Significant interactions were found for training status and program duration (6-16 weeks vs. 17-40 weeks) and number of sets performed (single vs. multiple). The data indicated that trained individuals performing multiple sets generated significantly greater increases in strength (p < 0.001). For programs with an extended duration, multiple sets were superior to single sets (p < 0.05). This quantitative review indicates that single-set programs for an initial short training period in untrained individuals result in similar strength gains as multiple-set programs. However, as progression occurs and higher gains are desired, multiple-set programs are more effective.
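A worked sketch of the pretest-to-posttest effect size this kind of meta-analysis aggregates: mean change divided by a standard deviation. Using the pretest SD as the denominator is an assumption made here for illustration; meta-analyses differ on this choice, and the example values are invented.

```python
import numpy as np

def pre_post_effect_size(pre_scores, post_scores):
    pre, post = np.asarray(pre_scores, float), np.asarray(post_scores, float)
    return (post.mean() - pre.mean()) / pre.std(ddof=1)   # change in pretest-SD units

# Example: 1RM strength (kg) before and after a training program
print(pre_post_effect_size([100, 95, 110, 105], [115, 108, 126, 118]))   # ~2.2
```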
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
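A sketch of the interpolation step with SciPy's radial basis function interpolator using a thin-plate-spline kernel: fit an interpolant from image-derived feature vectors to illuminant chromaticity, then query it for a new image. The 8-dimensional thumbnail features and random values below are placeholders for illustration only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator   # SciPy >= 1.7

rng = np.random.default_rng(0)
train_features = rng.random((50, 8))            # e.g. thumbnail-derived vectors
train_chroma = rng.random((50, 2))              # (r, g) illuminant chromaticities

tps = RBFInterpolator(train_features, train_chroma, kernel="thin_plate_spline")
print(tps(rng.random((1, 8))))                  # estimated chromaticity for a new image
```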
Extracting Dynamic Evidence Networks
2004-12-01
…on the performance of the three models as a function of training set size, and on experiments showing the viability of using active learning techniques… potential relation instances, which include 28K actual relations. We also ran a set of experiments designed to explore the viability of using active learning techniques to maximize the usefulness of the training data annotated for use by the system. The idea is to…
Object Classification With Joint Projection and Low-Rank Dictionary Learning.
Foroughi, Homa; Ray, Nilanjan; Hong Zhang
2018-02-01
For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature would fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Besides, most existing methods, and especially deep learning-based methods, need large training sets to achieve a satisfactory recognition performance. Although using a pre-trained network on a generic large-scale data set and fine-tuning it to the small-sized target data set is a widely used technique, this would not help when the content of base and target data sets are very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients, that maximizes the intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them, and also make them robust to variations and outliers. To preserve the intrinsic structure of data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification of small-sized data sets, which include a considerable amount of different kinds of variation, and may have high-dimensional feature vectors.
Children's emotion understanding: A meta-analysis of training studies.
Sprung, Manuel; Münch, Hannah M; Harris, Paul L; Ebesutani, Chad; Hofmann, Stefan G
2015-09-01
In the course of development, children show increased insight and understanding of emotions-both of their own emotions and those of others. However, little is known about the efficacy of training programs aimed at improving children's understanding of emotion. To conduct an effect size analysis of trainings aimed at three aspects of emotion understanding: external aspects (i.e., the recognition of emotional expressions, understanding external causes of emotion, understanding the influence of reminders on present emotions); mental aspects (i.e., understanding desire-based emotions, understanding belief-based emotions, understanding hidden emotions); and reflective aspects (i.e., understanding the regulation of an emotion, understanding mixed emotions, understanding moral emotions). A literature search was conducted using PubMed, PsycInfo, the Cochrane Library, and manual searches. The search identified 19 studies or experiments including a total of 749 children with an average age of 86 months (S.D. = 30.71) from seven different countries. Emotion understanding training procedures are effective for improving external (Hedges' g = 0.62), mental (Hedges' g = 0.31), and reflective (Hedges' g = 0.64) aspects of emotion understanding. These effect sizes were robust and generally unrelated to the number and lengths of training sessions, length of the training period, year of publication, and sample type. However, training setting and social setting moderated the effect of emotion understanding training on the understanding of external aspects of emotion. For the length of training session and social setting, we observed significant moderator effects of training on reflective aspects of emotion. Emotion understanding training may be a promising tool for both preventive intervention and the psychotherapeutic process. However, more well-controlled studies are needed.
Children’s emotion understanding: A meta-analysis of training studies
Sprung, Manuel; Münch, Hannah M.; Harris, Paul L.; Ebesutani, Chad; Hofmann, Stefan G.
2015-01-01
BACKGROUND In the course of development, children show increased insight and understanding of emotions—both of their own emotions and those of others. However, little is known about the efficacy of training programs aimed at improving children's understanding of emotion. OBJECTIVES To conduct an effect size analysis of trainings aimed at three aspects of emotion understanding: external aspects (i.e., the recognition of emotional expressions, understanding external causes of emotion, understanding the influence of reminders on present emotions); mental aspects (i.e., understanding desire-based emotions, understanding belief-based emotions, understanding hidden emotions); and reflective aspects (i.e., understanding the regulation of an emotion, understanding mixed emotions, understanding moral emotions). DATA SOURCES A literature search was conducted using PubMed, PsycInfo, the Cochrane Library, and manual searches. REVIEW METHODS The search identified 19 studies or experiments including a total of 749 children with an average age of 86 months (S.D. = 30.71) from seven different countries. RESULTS Emotion understanding training procedures are effective for improving external (Hedges' g = 0.62), mental (Hedges' g = 0.31), and reflective (Hedges' g = 0.64) aspects of emotion understanding. These effect sizes were robust and generally unrelated to the number and lengths of training sessions, length of the training period, year of publication, and sample type. However, training setting and social setting moderated the effect of emotion understanding training on the understanding of external aspects of emotion. For the length of training session and social setting, we observed significant moderator effects of training on reflective aspects of emotion. CONCLUSION Emotion understanding training may be a promising tool for both preventive intervention and the psychotherapeutic process. However, more well-controlled studies are needed. PMID:26405369
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside with the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
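A sketch of a uniform-coverage heuristic in this spirit: cluster the calibration set in a reduced marker space and take the genotype closest to each cluster centre as a training-set member. This illustrates spreading the training set across the genetic space; it is not the authors' exact sampling scheme or the CD-based method they compare against, and the PCA dimensionality is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import euclidean_distances

def uniform_coverage_training_set(markers, n_train, n_components=10, seed=0):
    # markers: genotypes x SNP markers (assumed to have >= n_components columns)
    scores = PCA(n_components=n_components, random_state=seed).fit_transform(markers)
    km = KMeans(n_clusters=n_train, n_init=10, random_state=seed).fit(scores)
    # one representative genotype per region of the genetic space
    return euclidean_distances(km.cluster_centers_, scores).argmin(axis=1)

# Hypothetical usage: train_idx = uniform_coverage_training_set(geno, n_train=200)
```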
Assessing Predictive Properties of Genome-Wide Selection in Soybeans
Xavier, Alencar; Muir, William M.; Rainey, Katy Martin
2016-01-01
Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max (L.) Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786
Food hygiene training in small to medium-sized care settings.
Seaman, Phillip; Eves, Anita
2008-10-01
Adoption of safe food handling practices is essential to effectively manage food safety. This study explores the impact of basic or foundation level food hygiene training on the attitudes and intentions of food handlers in care settings, using questionnaires based on the Theory of Planned Behaviour. Interviews were also conducted with food handlers and their managers to ascertain beliefs about the efficacy of, perceived barriers to, and relevance of food hygiene training. Most food handlers had undertaken formal food hygiene training; however, many who had not yet received training were preparing food, including high risk foods. Appropriate pre-training support and on-going supervision appeared to be lacking, thus limiting the effectiveness of training. Findings showed Subjective Norm to be the most significant influence on food handlers' intention to perform safe food handling practices, irrespective of training status, emphasising the role of important others in determining desirable behaviours.
Samson, Shazwani; Basri, Mahiran; Fard Masoumi, Hamid Reza; Abdul Malek, Emilia; Abedi Karjiban, Roghayeh
2016-01-01
A predictive model of a virgin coconut oil (VCO) nanoemulsion system for the topical delivery of copper peptide (an anti-aging compound) was developed using an artificial neural network (ANN) to investigate the factors that influence particle size. Four independent variables, the amounts of VCO, Tween 80:Pluronic F68 (T80:PF68), xanthan gum and water, were the inputs, whereas particle size was taken as the response for the trained network. Genetic algorithms (GA) were used to model the data, which were divided into training sets, testing sets and validation sets. The model obtained indicated the high-quality performance of the neural network and its capability to identify the critical composition factors for the VCO nanoemulsion. The main factor controlling the particle size was found to be xanthan gum (28.56%), followed by T80:PF68 (26.9%), VCO (22.8%) and water (21.74%). The formulation containing copper peptide was then successfully prepared using optimum conditions, and a particle size of 120.7 nm was obtained. The final formulation exhibited a zeta potential lower than -25 mV and showed good physical stability in centrifugation and freeze-thaw cycle tests and during storage at 25°C and 45°C. PMID:27383135
Creating Diverse Ensemble Classifiers to Reduce Supervision
2005-12-01
artificial examples. Quite often training with noise improves network generalization (Bishop, 1995; Raviv & Intrator, 1996). Adding noise to training...full training set, as seen by comparing to the total dataset sizes. Hence, improving on the data utilization of DECORATE is a fairly difficult task...prohibitively expensive, except (perhaps) with an incremental learner such as Naive Bayes. Our AFA framework is significantly more efficient because
BUMPER v1.0: a Bayesian user-friendly model for palaeo-environmental reconstruction
NASA Astrophysics Data System (ADS)
Holden, Philip B.; Birks, H. John B.; Brooks, Stephen J.; Bush, Mark B.; Hwang, Grace M.; Matthews-Bird, Frazer; Valencia, Bryan G.; van Woesik, Robert
2017-02-01
We describe the Bayesian user-friendly model for palaeo-environmental reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring ˜ 2 s to build a 100-taxon model from a 100-site training set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training sets under ideal assumptions. We then use these to demonstrate the sensitivity of reconstructions to the characteristics of the training set, considering assemblage richness, taxon tolerances, and the number of training sites. We find that a useful guideline for the size of a training set is to provide, on average, at least 10 samples of each taxon. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. An identically configured model is used in each application, the only change being the input files that provide the training-set environment and taxon-count data. The performance of BUMPER is shown to be comparable with weighted average partial least squares (WAPLS) in each case. Additional artificial datasets are constructed with similar characteristics to the real data, and these are used to explore the reasons for the differing performances of the different training sets.
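The "at least 10 samples of each taxon on average" guideline translates into a one-line check on a site-by-taxon count table. A minimal sketch, assuming a simple presence-count definition of "samples of a taxon":

```python
# Sketch: check the ">= 10 occurrences per taxon on average" guideline
# for a site-by-taxon count table (rows = training sites, columns = taxa).
import numpy as np

def mean_samples_per_taxon(counts):
    """Average number of training sites in which each taxon occurs."""
    present = (np.asarray(counts) > 0).sum(axis=0)   # sites per taxon
    return present.mean()

rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=(100, 100))           # 100 sites x 100 taxa
avg = mean_samples_per_taxon(counts)
print(f"average occurrences per taxon: {avg:.1f}",
      "(guideline met)" if avg >= 10 else "(more training sites advisable)")
```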
Agile convolutional neural network for pulmonary nodule classification using CT images.
Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei
2018-04-01
To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
Li, Hongjian; Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J
2018-03-14
It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future.
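The experimental design, training a machine-learning scoring function after removing complexes whose proteins are too similar to the test set, can be sketched as follows. The random forest stands in for RF-Score-v3; the features and the protein-similarity matrix are random placeholders, so the printed correlations are not meaningful.

```python
# Sketch: train an RF-Score-like regressor after removing training complexes
# whose protein is too similar to any test-set protein. The features and the
# precomputed similarity matrix are placeholders, not the published descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_train, n_test, n_feat = 1105, 195, 36
X_train = rng.random((n_train, n_feat)); y_train = rng.normal(6, 2, n_train)
X_test = rng.random((n_test, n_feat));   y_test = rng.normal(6, 2, n_test)
sim = rng.random((n_train, n_test))       # protein sequence similarity in [0, 1]

for cutoff in (1.00, 0.90, 0.70, 0.50):
    keep = sim.max(axis=1) <= cutoff      # drop complexes too similar to test proteins
    rf = RandomForestRegressor(n_estimators=500, random_state=0, n_jobs=-1)
    rf.fit(X_train[keep], y_train[keep])
    r = np.corrcoef(rf.predict(X_test), y_test)[0, 1]
    print(f"similarity cutoff {cutoff:.2f}: {keep.sum():4d} complexes, Rp = {r:.2f}")
```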
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper removes the constraints of conventional convolutional neural networks, which require large numbers of training samples of the same size. The input images are shifted and cropped to generate sub-images of a common size, and dropout is then applied to these sub-images to increase sample diversity and prevent overfitting. Proper subsets of equal size are randomly selected from the sub-image set, with no two subsets identical, and are used as inputs to the convolutional neural network. Through the convolution, pooling, fully connected and output layers, the classification loss on the test set and training set is obtained. In a classification experiment on red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.
Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James
2018-06-01
Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
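A proportional-odds (ordinal logistic) model of this kind can be fitted with statsmodels' OrderedModel and scored by "within one size" accuracy. The sketch below uses simulated demographics and an assumed 8-level size scale, not the study's data or coefficients, and assumes statsmodels ≥ 0.12.

```python
# Sketch: proportional-odds (ordinal logistic) model predicting tibia component
# size from gender, age, weight and height, scored as "within one size" accuracy.
# Data, size scale and coefficients are simulated; this is not the study's model.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 1992
df = pd.DataFrame({
    "male":   rng.integers(0, 2, n),
    "age":    rng.normal(68, 9, n),
    "weight": rng.normal(85, 18, n),
    "height": rng.normal(168, 10, n),
})
latent = 0.9 * df["male"] + 0.05 * df["height"] + 0.02 * df["weight"] + rng.normal(0, 1, n)
df["size"] = pd.cut(latent, bins=8, labels=False)        # 8 ordinal size levels

feats = ["male", "age", "weight", "height"]
train = df.sample(frac=0.8, random_state=1)
test = df.drop(train.index)

res = OrderedModel(pd.Categorical(train["size"], ordered=True),
                   train[feats], distr="logit").fit(method="bfgs", disp=False)
probs = np.asarray(res.predict(test[feats]))             # class probabilities
pred = probs.argmax(axis=1)                              # most likely size code
within_one = np.mean(np.abs(pred - test["size"].to_numpy()) <= 1)
print(f"predicted within one size: {within_one:.0%}")
```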
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1993-01-01
A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
ERIC Educational Resources Information Center
Devins, David; Johnson, Steve; Sutherland, John
2004-01-01
This paper examines a data set that has its origins in European Social Fund Objective 4 financed training programmes in small- to medium-sized enterprises (SMEs) in Britain to examine the extent to which three different personal development outcomes are attributable to different types of skills acquired during the training process. The three…
Machine learning enhanced optical distance sensor
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, N. A.
2018-01-01
Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot with varying spot sizes is viewed by an off-axis camera and the spot size data is processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels that are the actual target distance values to train a machine learning model. The optimized training model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces a training set and testing set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 improvement over our prior sensor demonstration without the use of machine learning. Applications for the proposed sensor include industrial scenario distance sensing where target material specific training models can be generated to realize low <1% measurement error distance measurements.
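Regularized polynomial regression of distance on spot-size features is straightforward with a PolynomialFeatures-plus-Ridge pipeline. A hedged sketch on simulated spot-size data (the response curve and noise level are assumptions):

```python
# Sketch: regularized polynomial regression mapping measured spot-size features
# to target distance, in the spirit of the paper. Data are simulated placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
distance = rng.uniform(0, 1000, 800)                    # target distance in mm
spot = 2.0 + 0.004 * distance + 1e-5 * distance**2      # idealized spot-size response
X = np.column_stack([spot + rng.normal(0, 0.02, 800)])  # noisy measured feature

X_tr, X_te, y_tr, y_te = train_test_split(X, distance, test_size=0.25, random_state=1)
model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-3))
model.fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print(f"max test error: {err.max():.2f} mm, mean: {err.mean():.2f} mm")
```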
Lamont, Scott; Brunero, Scott
2018-05-19
Workplace violence prevalence has attracted significant attention within the international nursing literature. Little attention to non-mental health settings and a lack of evaluation rigor have been identified within review literature. To examine the effects of a workplace violence training program in relation to risk assessment and management practices, de-escalation skills, breakaway techniques, and confidence levels, within an acute hospital setting. A quasi-experimental study of nurses using pretest-posttest measurements of educational objectives and confidence levels, with two-week follow-up. A 440-bed metropolitan tertiary referral hospital in Sydney, Australia. Nurses working in specialties identified as 'high risk' for violence. A pre-post-test design was used with participants attending a one-day workshop. The workshop evaluation comprised the use of two validated questionnaires: the Continuing Professional Development Reaction questionnaire, and the Confidence in Coping with Patient Aggression Instrument. Descriptive and inferential statistics were calculated. The paired t-test was used to assess the statistical significance of changes in the clinical behaviour intention and confidence scores from pre- to post-intervention. Cohen's d effect sizes were calculated to determine the extent of the significant results. Seventy-eight participants completed both pre- and post-workshop evaluation questionnaires. Statistically significant increases in behaviour intention scores were found in fourteen of the fifteen constructs relating to the three broad workshop objectives, and confidence ratings, with medium to large effect sizes observed in some constructs. A significant increase in overall confidence in coping with patient aggression was also found post-test with large effect size. Positive results were observed from the workplace violence training. Training needs to be complemented by a multi-faceted organisational approach which includes governance, quality and review processes. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Udofia, Nsikak-Abasi; Nlebem, Bernard S.
2013-01-01
This study was to validate training modules that can help provide requisite skills for Senior Secondary school students in plantain flour processing enterprises for self-employment and to enable them pass their examination. The study covered Rivers State. Purposive sampling technique was used to select a sample size of 205. Two sets of structured…
Neural Network Emulation of Reionization Simulations
NASA Astrophysics Data System (ADS)
Schmit, Claude J.; Pritchard, Jonathan R.
2018-05-01
Next generation radio experiments such as LOFAR, HERA and SKA are expected to probe the Epoch of Reionization and claim a first direct detection of the cosmic 21cm signal within the next decade. One of the major challenges for these experiments will be dealing with enormous incoming data volumes. Machine learning is key to increasing our data analysis efficiency. We consider the use of an artificial neural network to emulate 21cmFAST simulations and use it in a Bayesian parameter inference study. We then compare the network predictions to a direct evaluation of the EoR simulations and analyse the dependence of the results on the training set size. We find that the use of a training set of size 100 samples can recover the error contours of a full scale MCMC analysis which evaluates the model at each step.
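The emulation idea, replacing the expensive simulator with a network trained on a modest number of samples and then using the emulator as the forward model in parameter inference, can be sketched with a toy simulator and a grid scan standing in for MCMC; nothing here reproduces 21cmFAST itself.

```python
# Sketch: emulate an expensive simulator with a small neural network and use the
# emulator inside a Bayesian parameter scan. The "simulator" here is a cheap toy
# stand-in for 21cmFAST, and the grid scan stands in for a full MCMC.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulator(theta):                       # toy power-spectrum-like signal
    k = np.linspace(0.1, 1.0, 20)
    return theta[0] * np.exp(-k / theta[1])

rng = np.random.default_rng(0)
theta_train = rng.uniform([1.0, 0.1], [10.0, 1.0], size=(100, 2))   # 100-sample training set
y_train = np.array([simulator(t) for t in theta_train])

emu = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
emu.fit(theta_train, y_train)

truth = np.array([5.0, 0.5])
data = simulator(truth) + rng.normal(0, 0.05, 20)

# grid "posterior" using the emulator as the forward model
a_grid, b_grid = np.meshgrid(np.linspace(1, 10, 60), np.linspace(0.1, 1.0, 60))
grid = np.column_stack([a_grid.ravel(), b_grid.ravel()])
loglike = -0.5 * np.sum((emu.predict(grid) - data) ** 2 / 0.05**2, axis=1)
best = grid[np.argmax(loglike)]
print("true parameters:", truth, "emulator MAP estimate:", best.round(2))
```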
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
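The core of such a simulation study, drawing Gaussian class samples of varying size and comparing hold-out AUC for LDA and an RBF-kernel SVM, can be sketched as follows (equal-covariance case only; feature selection is omitted for brevity):

```python
# Sketch: hold-out AUC of LDA and an RBF-kernel SVM as a function of the number
# of training samples per class, with features drawn from two Gaussian classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 50                                     # feature dimensionality
mu = np.zeros(d); mu2 = np.full(d, 0.3)    # unequal means, equal covariance

def draw(n):
    X = np.vstack([rng.normal(mu, 1, (n, d)), rng.normal(mu2, 1, (n, d))])
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, y

X_test, y_test = draw(1000)                # large hold-out set
for n_per_class in (15, 25, 50, 100):
    X_tr, y_tr = draw(n_per_class)
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM-RBF", SVC(kernel="rbf", probability=True, random_state=0))]:
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
        print(f"n/class = {n_per_class:3d}  {name:7s}  Az = {auc:.3f}")
```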
A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.
Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu
2016-04-19
Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
Single versus multiple sets of resistance exercise: a meta-regression.
Krieger, James W
2009-09-01
There has been considerable debate over the optimal number of sets per exercise to improve musculoskeletal strength during a resistance exercise program. The purpose of this study was to use hierarchical, random-effects meta-regression to compare the effects of single and multiple sets per exercise on dynamic strength. English-language studies comparing single with multiple sets per exercise, while controlling for other variables, were considered eligible for inclusion. The analysis comprised 92 effect sizes (ESs) nested within 30 treatment groups and 14 studies. Multiple sets were associated with a larger ES than a single set (difference = 0.26 +/- 0.05; confidence interval [CI]: 0.15, 0.37; p < 0.0001). In a dose-response model, 2 to 3 sets per exercise were associated with a significantly greater ES than 1 set (difference = 0.25 +/- 0.06; CI: 0.14, 0.37; p = 0.0001). There was no significant difference between 1 set per exercise and 4 to 6 sets per exercise (difference = 0.35 +/- 0.25; CI: -0.05, 0.74; p = 0.17) or between 2 to 3 sets per exercise and 4 to 6 sets per exercise (difference = 0.09 +/- 0.20; CI: -0.31, 0.50; p = 0.64). There were no interactions between set volume and training program duration, subject training status, or whether the upper or lower body was trained. Sensitivity analysis revealed no highly influential studies, and no evidence of publication bias was observed. In conclusion, 2 to 3 sets per exercise are associated with 46% greater strength gains than 1 set, in both trained and untrained subjects.
USDA-ARS?s Scientific Manuscript database
Large sets of genomic data are becoming available for cucumber (Cucumis sativus), yet there is no tool for whole genome genotyping. Creation of saturated genetic maps depends on development of good markers. The present cucumber genetic maps are based on several hundreds of markers. However they are ...
Roach, Kathryn E.
2011-01-01
Background Impaired walking limits function after spinal cord injury (SCI), but training-related improvements are possible even in people with chronic motor incomplete SCI. Objective The objective of this study was to compare changes in walking speed and distance associated with 4 locomotor training approaches. Design This study was a single-blind, randomized clinical trial. Setting This study was conducted in a rehabilitation research laboratory. Participants Participants were people with minimal walking function due to chronic SCI. Intervention Participants (n=74) trained 5 days per week for 12 weeks with the following approaches: treadmill-based training with manual assistance (TM), treadmill-based training with stimulation (TS), overground training with stimulation (OG), and treadmill-based training with robotic assistance (LR). Measurements Overground walking speed and distance were the primary outcome measures. Results In participants who completed the training (n=64), there were overall effects for speed (effect size index [d]=0.33) and distance (d=0.35). For speed, there were no significant between-group differences; however, distance gains were greatest with OG. Effect sizes for speed and distance were largest with OG (d=0.43 and d=0.40, respectively). Effect sizes for speed were the same for TM and TS (d=0.28); there was no effect for LR. The effect size for distance was greater with TS (d=0.16) than with TM or LR, for which there was no effect. Ten participants who improved with training were retested at least 6 months after training; walking speed at this time was slower than that at the conclusion of training but remained faster than before training. Limitations It is unknown whether the training dosage and the emphasis on training speed were optimal. Robotic training that requires active participation would likely yield different results. Conclusions In people with chronic motor incomplete SCI, walking speed improved with both overground training and treadmill-based training; however, walking distance improved to a greater extent with overground training. PMID:21051593
Springgate, Benjamin F; Wennerstrom, Ashley; Meyers, Diana; Allen, Charles E; Vannoy, Steven D; Bentham, Wayne; Wells, Kenneth B
2011-01-01
To describe a disaster recovery model focused on developing mental health services and capacity-building within a disparities-focused, community-academic participatory partnership framework. Community-based participatory, partnered training and services delivery intervention in a post-disaster setting. Post-Katrina Greater New Orleans community. More than 400 community providers from more than 70 health and social services agencies participated in the trainings. Partnered development of a training and services delivery program involving physicians, therapists, community health workers, and other clinical and non-clinical personnel to improve access and quality of care for mental health services in a post-disaster setting. Services delivery (outreach, education, screening, referral, direct treatment); training delivery; satisfaction and feedback related to training; partnered development of training products. Clinical services in the form of outreach, education, screening, referral and treatment were provided in excess of 110,000 service units. More than 400 trainees participated in training, and provided feedback that led to evolution of training curricula and training products, to meet evolving community needs over time. Participant satisfaction with training generally scored very highly. This paper describes a participatory, health-focused model of community recovery that began with addressing emerging, unmet mental health needs using a disparities-conscious partnership framework as one of the principal mechanisms for intervention. Population mental health needs were addressed by investment in infrastructure and services capacity among small and medium-sized non-profit organizations working in disaster-impacted, low-resource settings.
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines tested at two locations, each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families.
Effects of a Modified German Volume Training Program on Muscular Hypertrophy and Strength.
Amirthalingam, Theban; Mavros, Yorgi; Wilson, Guy C; Clarke, Jillian L; Mitchell, Lachlan; Hackett, Daniel A
2017-11-01
Amirthalingam, T, Mavros, Y, Wilson, GC, Clarke, JL, Mitchell, L, and Hackett, DA. Effects of a modified German volume training program on muscular hypertrophy and strength. J Strength Cond Res 31(11): 3109-3119, 2017-German Volume Training (GVT), or the 10 sets method, has been used for decades by weightlifters to increase muscle mass. To date, no study has directly examined the training adaptations after GVT. The purpose of this study was to investigate the effect of a modified GVT intervention on muscular hypertrophy and strength. Nineteen healthy men were randomly assigned to 6 weeks of 10 or 5 sets of 10 repetitions for specific compound resistance exercises included in a split routine performed 3 times per week. Total and regional lean body mass, muscle thickness, and muscle strength were measured before and after the training program. Across groups, there were significant increases in lean body mass measures; however, greater increases in trunk (p = 0.043; effect size [ES] = -0.21) and arm (p = 0.083; ES = -0.25) lean body mass favored the 5-SET group. No significant increases were found for leg lean body mass or measures of muscle thickness across groups. Significant increases were found across groups for muscular strength, with greater increases in the 5-SET group for bench press (p = 0.014; ES = -0.43) and lat pull-down (p = 0.003; ES = -0.54). It seems that the modified GVT program is no more effective than performing 5 sets per exercise for increasing muscle hypertrophy and strength. To maximize hypertrophic training effects, it is recommended that 4-6 sets per exercise be performed, as it seems gains will plateau beyond this set range and may even regress due to overtraining.
Yao, Chen; Zhu, Xiaojin; Weigel, Kent A
2016-11-07
Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model, and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, a SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions that were derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies that are comparable to those obtained with models using larger training sets that include only animals with measured phenotypes. Semi-supervised learning can be helpful for genomic prediction of novel traits, such as RFI, for which the size of reference population is limited, in particular, when the animals to be predicted and the animals in the reference population originate from the same herd-environment.
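A minimal sketch of the self-training wrapper, with an SVM regressor and simulated genotypes standing in for the real data: fit on measured phenotypes, predict pseudo-phenotypes for unmeasured animals, then refit on the union.

```python
# Sketch of the self-training wrapper: fit an SVM on animals with measured
# phenotypes, label the unmeasured animals with its predictions, then refit on
# the combined set. Genotypes and phenotypes are simulated placeholders.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_labeled, n_unlabeled, p = 500, 1000, 1000
G = rng.integers(0, 3, size=(n_labeled + n_unlabeled + 500, p)).astype(float)
beta = rng.normal(0, 0.03, p)
rfi = G @ beta + rng.normal(0, 1, len(G))

X_lab, y_lab = G[:n_labeled], rfi[:n_labeled]
X_unlab = G[n_labeled:n_labeled + n_unlabeled]
X_test, y_test = G[-500:], rfi[-500:]

base = SVR(kernel="linear", C=1.0).fit(X_lab, y_lab)
acc_base = np.corrcoef(base.predict(X_test), y_test)[0, 1]

pseudo = base.predict(X_unlab)                        # self-trained phenotypes
self_trained = SVR(kernel="linear", C=1.0).fit(
    np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo]))
acc_self = np.corrcoef(self_trained.predict(X_test), y_test)[0, 1]

print(f"accuracy, measured only: {acc_base:.2f}; with self-training: {acc_self:.2f}")
```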
Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.
Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong
2016-06-01
Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of an FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters in the hidden nodes are generated randomly, independent of the training sets. Moreover, the learning of parameters in its output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. As with most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time of the HFSR-GCVM training is linear with the size of training datasets and the maximal space consumption is independent of the size of training datasets. The experiments on regression tasks confirm the above conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Porras-Alfaro, Andrea; Liu, Kuan-Liang; Kuske, Cheryl R; Xie, Gary
2014-02-01
We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5' section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 had very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets.
Asadi, Abbas; Ramírez-Campillo, Rodrigo
2016-01-01
The aim of this study was to compare the effects of 6-week cluster versus traditional plyometric training sets on jumping ability, sprint and agility performance. Thirteen college students were assigned to a cluster sets group (N=6) or traditional sets group (N=7). Both training groups completed the same training program. The traditional group completed five sets of 20 repetitions with 2min of rest between sets each session, while the cluster group completed five sets of 20 [2×10] repetitions with 30/90-s rest each session. Subjects were evaluated for countermovement jump (CMJ), standing long jump (SLJ), t test, 20-m and 40-m sprint test performance before and after the intervention. Both groups had similar improvements (P<0.05) in CMJ, SLJ, t test, 20-m, and 40-m sprint. However, the magnitude of improvement in CMJ, SLJ and t test was greater for the cluster group (effect size [ES]=1.24, 0.81 and 1.38, respectively) compared to the traditional group (ES=0.84, 0.60 and 0.55). Conversely, the magnitude of improvement in 20-m and 40-m sprint test was greater for the traditional group (ES=1.59 and 0.96, respectively) compared to the cluster group (ES=0.94 and 0.75, respectively). Although both plyometric training methods improved lower body maximal-intensity exercise performance, the traditional sets methods resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations. Copyright © 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
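The reported effect sizes can be computed as Cohen's d with a pooled standard deviation; the sketch below uses made-up change scores, since the study's raw data are not given here.

```python
# Sketch: Cohen's d for pre-to-post change scores, as used to compare the
# cluster-set and traditional-set groups. Numbers below are made-up examples.
import numpy as np

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

cmj_change_cluster = np.array([4.1, 3.8, 5.0, 4.4, 3.6, 4.9])          # cm, hypothetical
cmj_change_traditional = np.array([2.9, 3.1, 2.4, 3.3, 2.7, 3.0, 2.6])  # cm, hypothetical
print(f"Cohen's d (cluster vs traditional CMJ gain): "
      f"{cohens_d(cmj_change_cluster, cmj_change_traditional):.2f}")
```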
Crop Identification Technology Assessment for Remote Sensing (CITARS)
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Cary, T. K.; Davis, B. J.; Swain, P. H.
1975-01-01
The results of classifications and experiments performed for the Crop Identification Technology Assessment for Remote Sensing (CITARS) project are summarized. Fifteen data sets were classified using two analysis procedures. One procedure used class weights while the other assumed equal probabilities of occurrence for all classes. In addition, 20 data sets were classified using training statistics from another segment or date. The results of both the local and non-local classifications in terms of classification and proportion estimation are presented. Several additional experiments are described which were performed to provide additional understanding of the CITARS results. These experiments investigated alternative analysis procedures, training set selection and size, effects of multitemporal registration, the spectral discriminability of corn, soybeans, and other, and analysis of aircraft multispectral data.
A longitudinal study of skeletal muscle following spinal cord injury and locomotor training.
Liu, M; Bose, P; Walter, G A; Thompson, F J; Vandenborne, K
2008-07-01
Experimental rat model of spinal cord contusion injury (contusion SCI). The objectives of this study were (1) to characterize the longitudinal changes in rat lower hindlimb muscle morphology following contusion SCI by using magnetic resonance imaging and (2) to determine the therapeutic potential of two types of locomotor training, treadmill and cycling. University research setting. After moderate midthoracic contusion SCI, Sprague-Dawley rats were assigned to either treadmill training, cycle training or an untrained group. Lower hindlimb muscle size was examined prior to SCI and at 1-, 2-, 4-, 8-, and 12-week post injury. Following contusion SCI, we observed significant atrophy in all rat hindlimb muscles with the posterior muscles (triceps surae and flexor digitorum) showing greater atrophy than the anterior muscles (tibialis anterior and extensor digitorum). The greatest amount of atrophy was measured at 2-week post injury (range from 11 to 26%), and spontaneous recovery in muscle size was observed by 4 weeks post-SCI. Both cycling and treadmill training halted the atrophic process and accelerated the rate of recovery. The therapeutic influence of both training interventions was observed within 1 week of training and no significant difference was noted between the two interventions, except in the tibialis anterior muscle. Finally, a positive correlation was found between locomotor functional scores and hindlimb muscle size following SCI. Both treadmill and cycle training diminish the extent of atrophy and facilitate muscle plasticity after contusion SCI.
Brockmeyer, Timo; Ingenerf, Katrin; Walther, Stephan; Wild, Beate; Hartmann, Mechthild; Herzog, Wolfgang; Bents, Hinrich; Friederich, Hans-Christoph
2014-01-01
Inefficient cognitive flexibility is considered a neurocognitive trait marker involved in the development and maintenance of anorexia nervosa (AN). Cognitive Remediation Therapy (CRT) is a specific treatment targeting this cognitive style. The aim of this study was to investigate the feasibility and efficacy (by estimating the effect size) of specifically tailored CRT for AN, compared to non-specific cognitive training. A prospective, randomized controlled, superiority pilot trial was conducted. Forty women with AN receiving treatment as usual (TAU) were randomized to receive either CRT or non-specific neurocognitive therapy (NNT) as an add-on. Both conditions comprised 30 sessions of computer-assisted (21 sessions) and face-to-face (9 sessions) training over a 3-week period. CRT focused specifically on cognitive flexibility. NNT was comprised of tasks designed to improve attention and memory. The primary outcome was performance on a neuropsychological post-treatment assessment of cognitive set-shifting. Data available from 25 treatment completers were analyzed. Participants in the CRT condition outperformed participants in the NNT condition in cognitive set-shifting at the end of the treatment (p = 0.027; between-groups effect size d = 0.62). Participants in both conditions showed high treatment acceptance. This study confirms the feasibility of CRT for AN, and provides a first estimate of the effect size that can be achieved using CRT for AN. Furthermore, the present findings corroborate that neurocognitive training for AN should be tailored to the specific cognitive inefficiencies of this patient group. Copyright © 2013 Wiley Periodicals, Inc.
Chang, Ken; Bai, Harrison X; Zhou, Hao; Su, Chang; Bi, Wenya Linda; Agbodza, Ena; Kavouridis, Vasileios K; Senders, Joeky T; Boaro, Alessandro; Beers, Andrew; Zhang, Biqi; Capellini, Alexandra; Liao, Weihua; Shen, Qin; Li, Xuejun; Xiao, Bo; Cryan, Jane; Ramkissoon, Shakti; Ramkissoon, Lori; Ligon, Keith; Wen, Patrick Y; Bindra, Ranjit S; Woo, John; Arnaout, Omar; Gerstner, Elizabeth R; Zhang, Paul J; Rosen, Bruce R; Yang, Li; Huang, Raymond Y; Kalpathy-Cramer, Jayashree
2018-03-01
Purpose: Isocitrate dehydrogenase (IDH) mutations in glioma patients confer longer survival and may guide treatment decision making. We aimed to predict the IDH status of gliomas from MR imaging by applying a residual convolutional neural network to preoperative radiographic data. Experimental Design: Preoperative imaging was acquired for 201 patients from the Hospital of University of Pennsylvania (HUP), 157 patients from Brigham and Women's Hospital (BWH), and 138 patients from The Cancer Imaging Archive (TCIA) and divided into training, validation, and testing sets. We trained a residual convolutional neural network for each MR sequence (FLAIR, T2, T1 precontrast, and T1 postcontrast) and built a predictive model from the outputs. To increase the size of the training set and prevent overfitting, we augmented the training set images by introducing random rotations, translations, flips, shearing, and zooming. Results: With our neural network model, we achieved IDH prediction accuracies of 82.8% (AUC = 0.90), 83.0% (AUC = 0.93), and 85.7% (AUC = 0.94) within training, validation, and testing sets, respectively. When age at diagnosis was incorporated into the model, the training, validation, and testing accuracies increased to 87.3% (AUC = 0.93), 87.6% (AUC = 0.95), and 89.1% (AUC = 0.95), respectively. Conclusions: We developed a deep learning technique to noninvasively predict IDH genotype in grade II-IV glioma from conventional MR imaging using a multi-institutional data set. Clin Cancer Res; 24(5); 1073-81. ©2017 American Association for Cancer Research.
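The augmentation strategy described (random rotations, translations, flips, shearing and zooming) maps directly onto Keras's ImageDataGenerator; the ranges, image size and channel count in this sketch are assumptions rather than the paper's settings.

```python
# Sketch: the augmentation described (random rotations, translations, flips,
# shearing, zooming) expressed with Keras's ImageDataGenerator. The ranges and
# image size are assumptions, not the paper's settings.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,          # random rotations
    width_shift_range=0.1,      # random horizontal translations
    height_shift_range=0.1,     # random vertical translations
    shear_range=0.1,            # random shearing
    zoom_range=0.1,             # random zooming
    horizontal_flip=True,
    vertical_flip=True,
)

# toy stand-in for preprocessed MR slices: 32 images of 128x128 with 1 channel
x = np.random.rand(32, 128, 128, 1).astype("float32")
y = np.random.randint(0, 2, size=32)      # IDH mutant vs wild-type labels

batches = augmenter.flow(x, y, batch_size=8)
xb, yb = next(batches)
print("augmented batch:", xb.shape, yb.shape)
```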
Many Molecular Properties from One Kernel in Chemical Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Corresponding molecular reference properties provided, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112 kilo organic molecules of similar size. Resulting models are discussed as well as the kernels' use for generating and using other property models.
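The single-kernel idea, one kernel matrix over molecular descriptors reused to train models for many properties at once, can be sketched with Laplacian-kernel ridge regression; descriptors and property values below are random placeholders and the hyperparameters are assumptions.

```python
# Sketch of the single-kernel idea: build one (Laplacian) kernel over molecular
# descriptors and reuse it to train ridge models for several properties at once.
# Descriptors and property values are random placeholders.
import numpy as np
from sklearn.metrics.pairwise import laplacian_kernel

rng = np.random.default_rng(0)
n_train, n_test, d = 1000, 200, 300
X_train = rng.random((n_train, d))              # e.g. sorted Coulomb-matrix features
X_test = rng.random((n_test, d))
properties = {name: rng.normal(size=n_train) for name in
              ("U0", "H", "G", "Cv", "alpha", "HOMO-LUMO gap")}

gamma, lam = 1.0 / d, 1e-8
K = laplacian_kernel(X_train, X_train, gamma=gamma)       # one kernel, many properties
K_test = laplacian_kernel(X_test, X_train, gamma=gamma)
coefs = np.linalg.solve(K + lam * np.eye(n_train),
                        np.column_stack(list(properties.values())))
predictions = K_test @ coefs                               # one column per property
print("prediction matrix shape:", predictions.shape)       # (200, 6)
```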
NASA Astrophysics Data System (ADS)
Esteves, Jose Manuel
2014-11-01
Although training is one of the most cited critical success factors in Enterprise Resource Planning (ERP) systems implementations, few empirical studies have attempted to examine the characteristics of management of the training process within ERP implementation projects. Based on the data gathered from a sample of 158 respondents across four stakeholder groups involved in ERP implementation projects, and using a mixed method design, we have assembled a derived set of training best practices. Results suggest that the categorised list of ERP training best practices can be used to better understand training activities in ERP implementation projects. Furthermore, the results reveal that the company size and location have an impact on the relevance of training best practices. This empirical study also highlights the need to investigate the role of informal workplace trainers in ERP training activities.
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Cary, T. K.; Davis, B. J.; Swain, P. H.
1975-01-01
The results of classifications and experiments for the crop identification technology assessment for remote sensing are summarized. Using two analysis procedures, 15 data sets were classified. One procedure used class weights while the other assumed equal probabilities of occurrence for all classes. Additionally, 20 data sets were classified using training statistics from another segment or date. The classification and proportion estimation results of the local and nonlocal classifications are reported. Data also describe several other experiments to provide additional understanding of the results of the crop identification technology assessment for remote sensing. These experiments investigated alternative analysis procedures, training set selection and size, effects of multitemporal registration, spectral discriminability of corn, soybeans, and other, and analyses of aircraft multispectral data.
Self locking drive system for rotating plug of a nuclear reactor
Brubaker, James E.
1979-01-01
This disclosure describes a self locking drive system for rotating the plugs on the head of a nuclear reactor which is able to restrain plug motion if a seismic event should occur during reactor refueling. A servomotor is engaged via a gear train and a bull gear to the plug. Connected to the gear train is a feedback control system which allows the motor to rotate the plug to predetermined locations for refueling of the reactor. The gear train contains a self locking double enveloping worm gear set. The worm gear set is utilized for its self locking nature to prevent unwanted rotation of the plugs as the result of an earthquake. The double enveloping type is used because its unique contour spreads the load across several teeth providing added strength and allowing the use of a conventional size worm.
Ranking procedure for partial discriminant analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, R.J.; Johnson, M.E.
1981-09-01
A rank procedure developed by Broffitt, Randles, and Hogg (1976) is modified to control the conditional probability of misclassification given that classification has been attempted. This modification leads to a useful solution to the two-population partial discriminant analysis problem for even moderately sized training sets.
Malacarne, D; Pesenti, R; Paolucci, M; Parodi, S
1993-01-01
For a database of 826 chemicals tested for carcinogenicity, we fragmented the structural formula of the chemicals into all possible contiguous-atom fragments with size between two and eight (nonhydrogen) atoms. The fragmentation was obtained using a new software program based on graph theory. We used 80% of the chemicals as a training set and 20% as a test set. The two sets were obtained by random sorting. From the training sets, an average (8 computer runs with independently sorted chemicals) of 315 different fragments were significantly (p < 0.125) associated with carcinogenicity or lack thereof. Even using this relatively low level of statistical significance, 23% of the molecules of the test sets lacked significant fragments. For 77% of the molecules of the test sets, we used the presence of significant fragments to predict carcinogenicity. The average level of accuracy of the predictions in the test sets was 67.5%. Chemicals containing only positive fragments were predicted with an accuracy of 78.7%. The level of accuracy was around 60% for chemicals characterized by contradictory fragments or only negative fragments. In a parallel manner, we performed eight paired runs in which carcinogenicity was attributed randomly to the molecules of the training sets. The fragments generated by these pseudo-training sets were devoid of any predictivity in the corresponding test sets. Using an independent software program, we confirmed (for the complex biological endpoint of carcinogenicity) the validity of a structure-activity relationship approach of the type proposed by Klopman and Rosenkranz with their CASE program. PMID:8275991
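The fragment-based prediction scheme described above can be illustrated with a short sketch. The fragment extraction itself is left abstract (each molecule is represented by a set of fragment identifiers), and Fisher's exact test stands in for whatever significance test was used at p < 0.125; function names and the toy data are hypothetical.

```python
from scipy.stats import fisher_exact

def significant_fragments(train_mols, train_labels, p_cut=0.125):
    """train_mols: list of fragment sets (one set of fragment identifiers per molecule).
    train_labels: 1 = carcinogen, 0 = non-carcinogen.
    Returns {fragment: +1/-1} for fragments significantly associated with
    carcinogenicity (+1) or its absence (-1)."""
    pos = [m for m, y in zip(train_mols, train_labels) if y == 1]
    neg = [m for m, y in zip(train_mols, train_labels) if y == 0]
    flagged = {}
    for f in set().union(*train_mols):
        a = sum(f in m for m in pos); b = len(pos) - a   # carcinogens with / without f
        c = sum(f in m for m in neg); d = len(neg) - c   # non-carcinogens with / without f
        _, p = fisher_exact([[a, b], [c, d]])
        if p < p_cut:
            flagged[f] = 1 if a / max(len(pos), 1) > c / max(len(neg), 1) else -1
    return flagged

def predict(mol_frags, flagged):
    """Majority vote of the significant fragments present; None if no fragment hits."""
    votes = [flagged[f] for f in mol_frags if f in flagged]
    if not votes:
        return None                      # analogous to the ~23% unpredicted test molecules
    return 1 if sum(votes) > 0 else 0

# toy usage with made-up fragment identifiers
mols = [{"C=O", "N-N"}, {"C=O"}, {"C-C"}, {"C-C", "O-H"}] * 20
labels = [1, 1, 0, 0] * 20
rules = significant_fragments(mols, labels)
print(rules, predict({"C=O", "O-H"}, rules))
```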
Kemeny, M Elizabeth; Mabry, J Beth
2017-01-01
Well-intentioned policy governing the training of direct care workers (DCWs) who serve older persons, in practice, may become merely a compliance issue for organizations rather than a meaningful way to improve quality of care. This study investigates the relationships between best practices in DCW training and the structure and culture of long term support service (LTSS) organizations. Using a mixed-methods approach to analyzing data from 328 licensed LTSS organizations in Pennsylvania, the findings suggest that public policy should address methods of training, not just content, and consider organizational variations in size, training evaluation practices, DCW integration, and DCW input into care planning. Effective training also incorporates support for organizations and supervisors as key aspects of DCWs' learning and working environment.
Modeling Electronic Quantum Transport with Machine Learning
Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.
2014-06-11
We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system's representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.
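The reported decay of out-of-sample errors with training set size can be reproduced qualitatively with a generic learning-curve sketch; the Euclidean-norm similarity is approximated here by scikit-learn's RBF kernel ridge regressor, and the data are synthetic stand-ins for disordered-system descriptors and transmission coefficients.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 32))                   # stand-in configuration descriptors
y = np.tanh(X[:, :4].sum(axis=1)) + 0.01 * rng.normal(size=2000)   # stand-in coefficients

X_test, y_test = X[1500:], y[1500:]
for n in [100, 200, 400, 800, 1500]:              # growing training sets
    model = KernelRidge(kernel="rbf", gamma=0.05, alpha=1e-3)
    model.fit(X[:n], y[:n])
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"n_train={n:5d}  out-of-sample MAE={mae:.4f}")   # error decays with n
```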
Nikolov, Nikolai G; Dybdahl, Marianne; Jónsdóttir, Svava Ó; Wedebye, Eva B
2014-11-01
Ionization is a key factor in hERG K(+) channel blocking, and acids and zwitterions are known to be less probable hERG blockers than bases and neutral compounds. However, a considerable number of acidic compounds block hERG, and the physico-chemical attributes which discriminate acidic blockers from acidic non-blockers have not been fully elucidated. We propose a rule for prediction of hERG blocking by acids and zwitterionic ampholytes based on thresholds for only three descriptors related to acidity, size and reactivity. The training set of 153 acids and zwitterionic ampholytes was predicted with a concordance of 91% by a decision tree based on the rule. Two external validations were performed with sets of 35 and 48 observations, respectively, both showing concordances of 91%. In addition, a global QSAR model of hERG blocking was constructed based on a large diverse training set of 1374 chemicals covering all ionization classes, externally validated showing high predictivity and compared to the decision tree. The decision tree was found to be superior for the acids and zwitterionic ampholytes classes. Copyright © 2014 Elsevier Ltd. All rights reserved.
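The three-descriptor rule itself is not spelled out in the abstract, so the following sketch is purely hypothetical: the descriptor names, threshold values, and branching order are invented to show the shape of such a decision tree, not the published rule.

```python
def predict_acid_herg_block(pka, mol_weight, reactivity_index,
                            pka_cut=6.0, size_cut=400.0, react_cut=0.5):
    """Hypothetical three-threshold rule for acids/zwitterionic ampholytes, in the
    spirit of the decision tree described above; descriptors and cut-offs are
    placeholders only, not the values reported by the authors."""
    if pka <= pka_cut:                 # strongly acidic -> assumed less likely to block
        return "non-blocker"
    if mol_weight < size_cut:          # small acids assumed unlikely to block
        return "non-blocker"
    return "blocker" if reactivity_index > react_cut else "non-blocker"

print(predict_acid_herg_block(pka=7.4, mol_weight=450.0, reactivity_index=0.8))
```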
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural network based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
A Repeated Power Training Enhances Fatigue Resistance While Reducing Intraset Fluctuations.
Gonzalo-Skok, Oliver; Tous-Fajardo, Julio; Moras, Gerard; Arjol-Serrano, José Luis; Mendez-Villanueva, Alberto
2018-04-04
Oliver, GS, Julio, TF, Moras, G, José Luis, AS, and Alberto, MV. A repeated power training enhances fatigue resistance while reducing intraset fluctuations. J Strength Cond Res XX(X): 000-000, 2018. The present study analyzed the effects of adding an upper-body repeated power ability (RPA) training to habitual strength training sessions. Twenty young elite male basketball players were randomly allocated into a control group (CON, n = 10) or repeated power group (RPG, n = 10) and evaluated by 1 repetition maximum (1RM), incremental load, and RPA tests in the bench press exercise before and after a 7-week period and a 4-week cessation period. The repeated power group performed 1-3 blocks of 5 sets of 5 repetitions using the load that maximized power output, with 30 seconds and 3 minutes of passive recovery between sets and blocks, respectively. Between-group analysis showed substantially greater improvements in RPG compared with CON in: best set (APB), last set (APL), mean power over 5 sets (APM), percentage of decrement, fluctuation decrease during APL and RPA index (APLpost/APBpre) during the RPA test (effect size [ES] = 0.64-1.86), and 1RM (ES = 0.48) and average power at 80% of 1RM (ES = 1.11) in the incremental load test. The improvements of APB and APM were almost perfectly correlated. In conclusion, RPA training represents an effective method to mainly improve fatigue resistance together with the novel finding of a better consistency in performance (measured as reduced intraset power fluctuations) at the end of a dynamic repeated effort.
Schoenfeld, Brad J; Ratamess, Nicholas A; Peterson, Mark D; Contreras, Bret; Sonmez, G T; Alvar, Brent A
2014-10-01
Regimented resistance training has been shown to promote marked increases in skeletal muscle mass. Although muscle hypertrophy can be attained through a wide range of resistance training programs, the principle of specificity, which states that adaptations are specific to the nature of the applied stimulus, dictates that some programs will promote greater hypertrophy than others. Research is lacking, however, as to the best combination of variables required to maximize hypertrophic gains. The purpose of this study was to investigate muscular adaptations to a volume-equated bodybuilding-type training program vs. a powerlifting-type routine in well-trained subjects. Seventeen young men were randomly assigned to either a hypertrophy-type resistance training group that performed 3 sets of 10 repetition maximum (RM) with 90 seconds rest or a strength-type resistance training (ST) group that performed 7 sets of 3RM with a 3-minute rest interval. After 8 weeks, no significant differences were noted in muscle thickness of the biceps brachii. Significant strength differences were found in favor of ST for the 1RM bench press, and a trend was found for greater increases in the 1RM squat. In conclusion, this study showed that both bodybuilding- and powerlifting-type training promote similar increases in muscular size, but powerlifting-type training is superior for enhancing maximal strength.
NASA Astrophysics Data System (ADS)
Crosta, Giovanni Franco; Pan, Yong-Le; Aptowicz, Kevin B.; Casati, Caterina; Pinnick, Ronald G.; Chang, Richard K.; Videen, Gorden W.
2013-12-01
Measurement of two-dimensional angle-resolved optical scattering (TAOS) patterns is an attractive technique for detecting and characterizing micron-sized airborne particles. In general, the interpretation of these patterns and the retrieval of the particle refractive index, shape or size alone, are difficult problems. By reformulating the problem in statistical learning terms, a solution is proposed herewith: rather than identifying airborne particles from their scattering patterns, TAOS patterns themselves are classified through a learning machine, where feature extraction interacts with multivariate statistical analysis. Feature extraction relies on spectrum enhancement, which includes the discrete cosine Fourier transform and non-linear operations. Multivariate statistical analysis includes computation of the principal components and supervised training, based on the maximization of a suitable figure of merit. All algorithms have been combined together to analyze TAOS patterns, organize feature vectors, design classification experiments, carry out supervised training, assign unknown patterns to classes, and fuse information from different training and recognition experiments. The algorithms have been tested on a data set with more than 3000 TAOS patterns. The parameters that control the algorithms at different stages have been allowed to vary within suitable bounds and are optimized to some extent. Classification has been targeted at discriminating aerosolized Bacillus subtilis particles, a simulant of anthrax, from atmospheric aerosol particles and interfering particles, like diesel soot. By assuming that all training and recognition patterns come from the respective reference materials only, the most satisfactory classification result corresponds to 20% false negatives from B. subtilis particles and <11% false positives from all other aerosol particles. The most effective operations have consisted of thresholding TAOS patterns in order to reject defective ones, and forming training sets from three or four pattern classes. The presented automated classification method may be adapted into a real-time operation technique, capable of detecting and characterizing micron-sized airborne particles.
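A compact sketch of the processing chain described above (spectrum enhancement via a 2-D discrete cosine transform plus a non-linear operation, followed by multivariate analysis and supervised classification), assuming synthetic patterns and a generic SVM classifier in place of the authors' figure-of-merit training.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def taos_features(pattern, keep=16):
    """Spectrum-enhancement stand-in: 2-D DCT of an angle-resolved scattering
    pattern, log-compressed (non-linear step), truncated to the low-frequency block."""
    spec = np.abs(dctn(pattern, norm="ortho"))[:keep, :keep]
    return np.log1p(spec).ravel()

# toy data: two classes of 64x64 "patterns" with different angular frequency content
rng = np.random.default_rng(2)
def fake_pattern(freq):
    a = np.linspace(0, 2 * np.pi, 64)
    return np.outer(np.sin(freq * a), np.cos(freq * a)) + 0.3 * rng.normal(size=(64, 64))

X = np.array([taos_features(fake_pattern(f)) for f in ([3] * 200 + [7] * 200)])
y = np.array([0] * 200 + [1] * 200)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X[::2], y[::2])                       # train on half, test on the other half
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```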
Cancer diagnostics using neural network sorting of processed images
NASA Astrophysics Data System (ADS)
Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.
1996-03-01
A combination of image processing with neural network sorting was conducted to demonstrate feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points to bound the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data was taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in all, twenty data points per assessed cell were generated. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
Ganier, Franck; Hoareau, Charlotte; Tisseau, Jacques
2014-01-01
Virtual reality opens new opportunities for operator training in complex tasks. It lowers costs and has fewer constraints than traditional training. The ultimate goal of virtual training is to transfer knowledge gained in a virtual environment to an actual real-world setting. This study tested whether a maintenance procedure could be learnt equally well by virtual-environment and conventional training. Forty-two adults were divided into three equally sized groups: virtual training (GVT® [generic virtual training]), conventional training (using a real tank suspension and preparation station) and control (no training). Participants then performed the procedure individually in the real environment. Both training types (conventional and virtual) produced similar levels of performance when the procedure was carried out in real conditions. Performance level for the two trained groups was better in terms of success and time taken to complete the task, time spent consulting job instructions and number of times the instructor provided guidance.
Effective Bayesian Transfer Learning
2010-03-01
reasonable value of k, defined by the task B training set size. ... [figure: transfer regret, "No Transfer" vs. "With Transfer"] ... rule set given the prior, and developed a staged approximate inference strategy in which data from observed tasks 1 to k are used to infer general rules
van Wyk, Paula M; Weir, Patricia L; Andrews, David M
2015-01-01
A disconnect in manual patient transfer (MPT) training practices for nurses, between what is taught and used in academic and clinical settings, could have implications for injury. This study aimed to determine: 1. what MPTs student and staff nurses use in clinical settings, and 2. if the MPTs used most often were also the ones they perceived that they received training for and had the most confidence performing. Survey responses from student nurses (n=163) (mid-sized university) and staff nurses (n=33) (local hospital) regarding 19 MPTs were analyzed to determine which transfers were perceived to be used most often, and which ones they had received training for and had the greatest confidence performing. The MPTs nurses perceived using most often were the same transfers they had the greatest confidence performing and for which they perceived receiving training. However, these MPTs were not taught at the university at the time of this investigation. Reducing the disconnect between manual patient transfer training obtained in the academic and clinical environments will hopefully reduce the risk of injury for nurses and improve the quality of care for patients.
NASA Astrophysics Data System (ADS)
Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua
2018-06-01
A neural network-based first-principles method for predicting heat of formation (HOF) was previously demonstrated to be able to achieve chemical accuracy in a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with the increase in molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small molecule representatives. A model that can capture the fundamental physics would be able to predict large and complex molecules from information extracted only from a small-molecule database. To this end, a size-independent, multi-step multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. In particular, the calculation errors for larger molecules are drastically reduced to the same magnitudes as those of the smaller molecules. Specifically, the method is based on a 164-molecule database that consists of molecules made of hydrogen and carbon elements. Four molecular descriptors were selected to encode each molecule's characteristics, among which the raw HOF calculated from B3LYP and the molecular size are included. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.
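A toy sketch of the two-step "linear regression then neural network" correction strategy, assuming synthetic descriptors and energies; the point illustrated is training the correction on small molecules only and applying it to larger ones, not the actual descriptors, functional form, or database used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 164
size = rng.integers(2, 20, size=n).astype(float)         # e.g. number of heavy atoms
hof_raw = 5.0 * size + rng.normal(0, 3, n)                # stand-in B3LYP heat of formation
x2, x3 = rng.normal(size=n), rng.normal(size=n)           # two further stand-in descriptors
hof_ref = hof_raw - (0.8 * size + 2.0 * np.tanh(x2) - 1.0 * x3)   # stand-in reference HOF

X = np.column_stack([hof_raw, size, x2, x3])              # 4 descriptors incl. raw HOF, size
train = size <= 10                                        # train on small molecules only

# step 1: multivariable linear regression fitted on the small-molecule subset
lin = LinearRegression().fit(X[train], hof_ref[train])
resid = hof_ref - lin.predict(X)

# step 2: a small neural network learns what the linear step missed
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[train], resid[train])
hof_corrected = lin.predict(X) + net.predict(X)

mad = lambda e: np.mean(np.abs(e))
print("MAD raw   (large molecules):", round(mad((hof_raw - hof_ref)[~train]), 2))
print("MAD corr. (large molecules):", round(mad((hof_corrected - hof_ref)[~train]), 2))
```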
Different Muscle Action Training Protocols on Quadriceps-Hamstrings Neuromuscular Adaptations.
Ruas, Cassio V; Brown, Lee E; Lima, Camila D; Gregory Haff, G; Pinto, Ronei S
2018-05-01
The aim of this study was to compare three specific concentric and eccentric muscle action training protocols on quadriceps-hamstrings neuromuscular adaptations. Forty male volunteers performed 6 weeks of training (two sessions/week) of their dominant and non-dominant legs on an isokinetic dynamometer. They were randomly assigned to one of four groups; concentric quadriceps and concentric hamstrings (CON/CON, n=10), eccentric quadriceps and eccentric hamstrings (ECC/ECC, n=10), concentric quadriceps and eccentric hamstrings (CON/ECC, n=10), or no training (CTRL, n=10). Intensity of training was increased every week by decreasing the angular velocity for concentric and increasing it for eccentric groups in 30°/s increments. Volume of training was increased by adding one set every week. Dominant leg quadriceps and hamstrings muscle thickness (MT), muscle quality, muscle activation, muscle coactivation, and electromechanical delay were tested before and after training. Results revealed that all training groups similarly increased MT of quadriceps and hamstrings compared to control (p<0.05). However, CON/ECC and ECC/ECC training elicited a greater magnitude of change. There were no significant differences between groups for all other neuromuscular variables (p>0.05). These findings suggest that different short-term muscle action isokinetic training protocols elicit similar muscle size increases in hamstrings and quadriceps, but not for other neuromuscular variables. Nevertheless, effect sizes indicate that CON/ECC and ECC/ECC may elicit the greatest magnitude of change in muscle hypertrophy. © Georg Thieme Verlag KG Stuttgart · New York.
Choice, numeracy, and physicians-in-training performance: the case of Medicare Part D.
Hanoch, Yaniv; Miron-Shatz, Talya; Cole, Helen; Himmelstein, Mary; Federman, Alex D
2010-07-01
In this study, we examined the effect of choice-set size and numeracy levels on a physician-in-training's ability to choose appropriate Medicare drug plans. Medical students and internal medicine residents (N = 100) were randomly assigned to 1 of 3 surveys, differing only in the number of plans to be evaluated (3, 10, and 20). After reviewing information about stand-alone Medicare prescription drug plans, participants answered questions about what plan they would advise 2 hypothetical patients to choose on the basis of a brief summary of the relevant concerns of each patient. Participants also completed an 11-item numeracy scale. The main outcome measures were the ability to answer questions about hypothetical Medicare Part D insurance plans correctly and participants' numeracy levels. Consistent with our hypotheses, increases in choice sets correlated significantly with fewer correct answers, and higher numeracy levels were associated with more correct answers. Hence, our data further highlight the role of numeracy in financial- and health-related decision making, and also raise concerns about physicians' ability to help patients choose the optimal Part D plan. Our data indicate that even physicians-in-training perform more poorly when choice size is larger, thus raising concerns about the capacity of physicians-in-training to successfully navigate Medicare Part D and help their patients pick the best drug plan. Our results also illustrate the importance of numeracy in evaluating insurance-related information and the need for enhancing numeracy skills among medical students and physicians. PsycINFO Database Record (c) 2010 APA, all rights reserved
Corvids Outperform Pigeons and Primates in Learning a Basic Concept.
Wright, Anthony A; Magnotti, John F; Katz, Jeffrey S; Leonard, Kevin; Vernouillet, Alizée; Kelly, Debbie M
2017-04-01
Corvids (birds of the family Corvidae) display intelligent behavior previously ascribed only to primates, but such feats are not directly comparable across species. To make direct species comparisons, we used a same/different task in the laboratory to assess abstract-concept learning in black-billed magpies (Pica hudsonia). Concept learning was tested with novel pictures after training. Concept learning improved with training-set size, and test accuracy eventually matched training accuracy (full concept learning) with a 128-picture set; this magpie performance was equivalent to that of Clark's nutcrackers (a species of corvid) and monkeys (rhesus, capuchin) and better than that of pigeons. Even with an initial 8-item picture set, both corvid species showed partial concept learning, outperforming both monkeys and pigeons. Similar corvid performance refutes the hypothesis that nutcrackers' prolific cache-location memory accounts for their superior concept learning, because magpies rely less on caching. That corvids with "primitive" neural architectures evolved to equal primates in full concept learning and even to outperform them on the initial 8-item picture test is a testament to the shared (convergent) survival importance of abstract-concept learning.
Use of a dementia training designed for nurse aides to train other staff.
Irvine, A Blair; Beaty, Jeff A; Seeley, John R; Bourgeois, Michelle
2013-12-01
Problematic resident behaviors may escalate in long-term care facilities (LTCs). If nurse aides (NAs) are not nearby, the nearest staff to intervene may be non-direct care workers (NDCWs), who have little or no dementia training. This pilot research tested an Internet dementia-training program, designed for NAs, on NDCWs in an LTC setting. Sixty-eight NDCWs participated, filling out two baseline surveys at 1-month intervals and a posttest survey after training. The surveys included video-situation testing, items addressing psychosocial constructs associated with behavior change, and measures of training acceptance. Paired t tests showed significant positive effects on measures of knowledge, attitudes, self-efficacy, and behavioral intentions, with small to moderate effect sizes. Nursing staff as well as non-health care workers showed improved scores, and the web-based training program was well received by all participants. These results suggest that Internet training may allow staff development coordinators to conserve limited resources by cross-training of different job categories with the same program.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provides spectral response information that provides detailed chemical, moisture, and other description of constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
Magnus, C R A; Boychuk, K; Kim, S Y; Farthing, J P
2014-06-01
The purpose was to determine if an at-home resistance tubing strength training program on one shoulder (that is commonly used in rehabilitation settings) would produce increases in strength in the trained and untrained shoulders via cross-education. Twenty-three participants were randomized to TRAIN (strength-trained one shoulder; n = 13) or CONTROL (no intervention; n = 10). Strength training was completed at home using resistance tubing and consisted of maximal shoulder external rotation, internal rotation, scaption, retraction, and flexion 3 days/week for 4 weeks. Strength was measured via handheld dynamometry and muscle size measured via ultrasound. For external rotation strength, the trained (10.9 ± 10.9%) and untrained (12.7 ± 9.6%) arm of TRAIN was significantly different than CONTROL (1.6 ± 13.2%; -2.7 ± 12.3%; pooled across arm; P < 0.05). For internal rotation strength, the trained (14.8 ± 11.3%) and untrained (14.6 ± 10.1%) arm of TRAIN was significantly different than CONTROL (6.4 ± 11.2%; 5.1 ± 8.8%; pooled across arm; P < 0.05). There were no significant differences for scaption strength (P = 0.056). TRAIN significantly increased muscle size in the training arm of the supraspinatus (1.90 ± 0.32 to 1.99 ± 0.31 cm), and the anterior deltoid (1.08 ± 0.37 to 1.21 ± 0.39 cm; P < 0.05). This study suggests that an at-home resistance tubing training program on one limb can produce increases in strength in both limbs, and has implications for rehabilitation after unilateral shoulder injuries. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Toward Establishing Relationships between Essential and Higher Order Teaching Skills.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; And Others
Nineteen secondary school teachers in a mid-sized Florida school district participated in a single-group pretest/posttest design to explore the relationship between essential and higher order teaching skills. Correlations between two sets of teacher performance variables were computed before and after training in teaching for higher order thinking…
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method results in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.
Testing of information condensation in a model reverberating spiking neural network.
Vidybida, Alexander
2011-06-01
Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, in particular the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until it becomes periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the network's geometric size.
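Shannon's formula applied to a discretized spike train can be sketched in a few lines; representing a train by binned inter-spike intervals is an assumption made here for illustration, not necessarily the encoding used in the paper.

```python
import math
from collections import Counter

def shannon_information_bits(symbols):
    """Shannon's formula H = -sum p_i * log2(p_i) applied to a discretized spike
    train, e.g. a sequence of inter-spike intervals binned to a fixed resolution."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# toy example: an irregular input train vs. a strictly periodic settled state
input_isis    = [3, 7, 2, 9, 4, 7, 3, 8, 2, 6]   # varied intervals -> high information
periodic_isis = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]   # single repeated interval -> 0 bits
print(shannon_information_bits(input_isis), shannon_information_bits(periodic_isis))
```

The drop from the first value to zero for the periodic train is the kind of "condensation" the abstract quantifies.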
Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?
Tajbakhsh, Nima; Shin, Jae Y; Gurudu, Suryakanth R; Hurst, R Todd; Kendall, Christopher B; Gotway, Michael B; Jianming Liang
2016-05-01
Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
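A generic sketch of layer-wise (progressive) fine-tuning in PyTorch, assuming a small stand-in network rather than a pre-trained medical-imaging CNN; it only illustrates the freeze-then-unfreeze mechanics discussed above, not the authors' specific architecture or schedule.

```python
import torch
import torch.nn as nn

# a small stand-in CNN; in practice the frozen part would be a pre-trained backbone
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # "early" layers
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # "late" layers
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2),                      # task-specific head
)

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

# layer-wise fine-tuning: start with only the head trainable, then progressively
# unfreeze deeper blocks (late layers first, early layers last)
set_trainable(model, False)
set_trainable(model[7], True)                     # stage 0: head only
stages = [model[3:6], model[0:3]]                 # stage 1 and stage 2 unfreeze backwards

x, y = torch.randn(4, 1, 32, 32), torch.randint(0, 2, (4,))
for stage, block in enumerate([None] + stages):
    if block is not None:
        set_trainable(block, True)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"stage {stage}: trainable params =",
          sum(p.numel() for p in model.parameters() if p.requires_grad))
```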
The Effects of Interset Rest on Adaptation to 7 Weeks of Explosive Training in Young Soccer Players
Ramirez-Campillo, Rodrigo; Andrade, David C.; Álvarez, Cristian; Henríquez-Olguín, Carlos; Martínez, Cristian; Báez-SanMartín, Eduardo; Silva-Urra, Juan; Burgos, Carlos; Izquierdo, Mikel
2014-01-01
The aim of the study was to compare the effects of plyometric training using 30, 60, or 120 s of rest between sets on explosive adaptations in young soccer players. Four groups of athletes (age 10.4 ± 2.3 y; soccer experience 3.3 ± 1.5 y) were randomly formed: control (CG; n = 15), plyometric training with 30 s (G30; n = 13), 60 s (G60; n = 14), and 120 s (G120; n = 12) of rest between training sets. Before and after intervention players were measured in jump ability, 20-m sprint time, change of direction speed (CODS), and kicking performance. The training program was applied during 7 weeks, 2 sessions per week, for a total of 840 jumps. After intervention the G30, G60 and G120 groups showed a significant (p = 0.0001 – 0.04) and small to moderate effect size (ES) improvement in the countermovement jump (ES = 0.49; 0.58; 0.55), 20 cm drop jump reactive strength index (ES = 0.81; 0.89; 0.86), CODS (ES = -1.03; -0.87; -1.04), and kicking performance (ES = 0.39; 0.49; 0.43), with no differences between treatments. The study shows that 30, 60, and 120 s of rest between sets ensure similar significant and small to moderate ES improvement in jump, CODS, and kicking performance during high-intensity short-term explosive training in young male soccer players. Key points: Replacing some soccer drills with low-volume high-intensity plyometric training would be beneficial in jumping, change of direction speed, and kicking ability in young soccer players. A rest period of 30, 60 or 120 seconds between low-volume high-intensity plyometric sets would induce significant and similar explosive adaptations during a short-term training period in young soccer players. Data from this research can be helpful for soccer trainers in choosing efficient drills and characteristics of between-sets recovery programs to enhance performances in young male soccer players. PMID:24790481
Teixeira, Ana L; Falcao, Andre O
2014-07-28
Structurally similar molecules tend to have similar properties, i.e. closer molecules in the molecular space are more likely to yield similar property values while distant molecules are more likely to yield different values. Based on this principle, we propose the use of a new method that takes into account the high dimensionality of the molecular space, predicting chemical, physical, or biological properties based on the most similar compounds with measured properties. This methodology uses ordinary kriging coupled with three different molecular similarity approaches (based on molecular descriptors, fingerprints, and atom matching), which creates an interpolation map over the molecular space that is capable of predicting properties/activities for diverse chemical data sets. The proposed method was tested on two data sets of diverse chemical compounds collected from the literature and preprocessed. One of the data sets contained dihydrofolate reductase inhibition activity data, and the second molecules for which aqueous solubility was known. The overall predictive results using kriging for both data sets comply with the results obtained in the literature using typical QSPR/QSAR approaches. However, the procedure did not involve any type of descriptor selection or even minimal information about each problem, suggesting that this approach is directly applicable to a large spectrum of problems in QSAR/QSPR. Furthermore, the predictive results improve significantly with the similarity threshold between the training and testing compounds, allowing the definition of a confidence threshold of similarity and an error estimation for each case inferred. The use of kriging for interpolation over the molecular metric space is independent of the training data set size, no reparametrizations are necessary when compounds are added or removed from the set, and increasing the size of the database will consequently improve the quality of the estimations. Finally, it is shown that this model can be used for checking the consistency of measured data and for guiding an extension of the training set by determining the regions of the molecular space for which new experimental measurements could be used to maximize the model's predictive performance.
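Ordinary kriging is closely related to Gaussian-process regression with a constant trend, so a stand-in sketch can use scikit-learn's GP regressor; the fingerprints and property values below are random placeholders for the three molecular similarity approaches used in the paper, and the predictive standard deviation plays the role of the per-compound error estimate mentioned above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.random((300, 64))                                  # stand-in molecular fingerprints
y = X[:, :8].mean(axis=1) + 0.02 * rng.normal(size=300)    # stand-in solubility values

# GP regression as a kriging stand-in: covariance decays with distance in the
# molecular metric space, so predictions interpolate between nearby compounds
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(X[:250], y[:250])

pred, std = gp.predict(X[250:], return_std=True)   # std acts as a per-compound
print(pred[:3], std[:3])                           # confidence/error estimate
```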
Stensrud, Tonje Lauritzen; Gulbrandsen, Pål; Mjaaland, Trond Arne; Skretting, Sidsel; Finset, Arnstein
2014-04-01
To test a communication skills training program teaching general practitioners (GPs) a set of six evidence-based mental health related skills. A training program was developed and tested in a pilot test-retest study with 21 GPs. Consultations were videotaped and actors used as patients. A coding scheme was created to assess the effect of training on GP behavior. Relevant utterances were categorized as examples of each of the six specified skills. The GPs' self-perceived learning needs and self-efficacy were measured with questionnaires. The mean number of GP utterances related to the six skills increased from 13.3 (SD 6.2) utterances before to 23.6 (SD 7.2) utterances after training; an increase of 77.4% (P<0.001). Effect sizes varied from 0.23 to 1.37. Skills exploring emotions, cognitions and resources, and the skill Promote coping, increased significantly. Self-perceived learning needs and self-efficacy did not change significantly. The results from this pilot test are encouraging. GPs enhanced their use on four out of six mental health related communication skills significantly, and the effects were medium to large. This training approach appears to be an efficacious approach to mental health related communication skills training in general practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The grain-size lineup: A test of a novel eyewitness identification procedure.
Horry, Ruth; Brewer, Neil; Weber, Nathan
2016-04-01
When making a memorial judgment, respondents can regulate their accuracy by adjusting the precision, or grain size, of their responses. In many circumstances, coarse-grained responses are less informative, but more likely to be accurate, than fine-grained responses. This study describes a novel eyewitness identification procedure, the grain-size lineup, in which participants eliminated any number of individuals from the lineup, creating a choice set of variable size. A decision was considered to be fine-grained if no more than 1 individual was left in the choice set or coarse-grained if more than 1 individual was left in the choice set. Participants (N = 384) watched 2 high-quality or low-quality videotaped mock crimes and then completed 4 standard simultaneous lineups or 4 grain-size lineups (2 target-present and 2 target-absent). There was some evidence of strategic regulation of grain size, as the most difficult lineup was associated with a greater proportion of coarse-grained responses than the other lineups. However, the grain-size lineup did not outperform the standard simultaneous lineup. Fine-grained suspect identifications were no more diagnostic than suspect identifications from standard lineups, whereas coarse-grained suspect identifications carried little probative value. Participants were generally reluctant to provide coarse-grained responses, which may have hampered the utility of the procedure. For a grain-size approach to be useful, participants may need to be trained or instructed to use the coarse-grained option effectively. (c) 2016 APA, all rights reserved.
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located neighboring the target class in the spectral feature space. The overall accuracies for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, trade-off coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
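A hedged sketch of the parameter-selection idea: scikit-learn's one-class SVM (a close relative of SVDD) is trained on target pixels only, and a validation set containing both target and neighbouring-window outlier pixels is used to pick the trade-off and kernel-width parameters. Data, grids, and thresholds are illustrative only, not those of the published method.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
target_train = rng.normal(0.0, 1.0, size=(300, 6))         # spectra of the target class
val_target   = rng.normal(0.0, 1.0, size=(60, 6))
val_outlier  = rng.normal(2.5, 1.0, size=(60, 6))           # neighbouring-window outliers
X_val = np.vstack([val_target, val_outlier])
y_val = np.array([1] * 60 + [-1] * 60)                      # +1 target, -1 outlier

best = None
for nu in [0.01, 0.05, 0.1, 0.2]:            # plays the role of the trade-off coefficient
    for gamma in [0.01, 0.1, 1.0]:           # plays the role of the kernel width
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(target_train)
        acc = np.mean(model.predict(X_val) == y_val)
        if best is None or acc > best[0]:
            best = (acc, nu, gamma, model)
print("validation accuracy %.2f with nu=%.2f gamma=%.2f" % best[:3])
```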
Self-regulated learning in simulation-based training: a systematic review and meta-analysis.
Brydges, Ryan; Manzone, Julian; Shanks, David; Hatala, Rose; Hamstra, Stanley J; Zendejas, Benjamin; Cook, David A
2015-04-01
Self-regulated learning (SRL) requires an active learner who has developed a set of processes for managing the achievement of learning goals. Simulation-based training is one context in which trainees can safely practise learning how to learn. The purpose of the present study was to evaluate, in the simulation-based training context, the effectiveness of interventions designed to support trainees in SRL activities. We used the social-cognitive model of SRL to guide a systematic review and meta-analysis exploring the links between instructor supervision, supports or scaffolds for SRL, and educational outcomes. We searched databases including MEDLINE and Scopus, and previous reviews, for material published until December 2011. Studies comparing simulation-based SRL interventions with another intervention for teaching health professionals were included. Reviewers worked independently and in duplicate to extract information on learners, study quality and educational outcomes. We used random-effects meta-analysis to compare the effects of supervision (instructor present or absent) and SRL educational supports (e.g. goal-setting study guides present or absent). From 11,064 articles, we included 32 studies enrolling 2482 trainees. Only eight of the 32 studies included educational supports for SRL. Compared with instructor-supervised interventions, unsupervised interventions were associated with poorer immediate post-test outcomes (pooled effect size: -0.34, p = 0.09; n = 19 studies) and negligible effects on delayed (i.e. > 1 week) retention tests (pooled effect size: 0.11, p = 0.63; n = 8 studies). Interventions including SRL supports were associated with small benefits compared with interventions without supports on both immediate post-tests (pooled effect size: 0.23, p = 0.22; n = 5 studies) and delayed retention tests (pooled effect size: 0.44, p = 0.067; n = 3 studies). Few studies in the simulation literature have designed SRL training to explicitly support trainees' capacity to self-regulate their learning. We recommend that educators and researchers shift from thinking about SRL as learning alone to thinking of SRL as comprising a shared responsibility between the trainee and the instructional designer (i.e. learning using designed supports that help prepare individuals for future learning). © 2015 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.
2012-01-01
An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP is to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant for urbanization studies but also desired by global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classifications. Here we developed global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2), and then aggregated the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone there are 174 training sites. The size of the sites ranges from 4.5 km by 4.5 km to 8.1 km by 3.6 km. The number of training samples is over six million. Therefore, we developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. Then the screening process escalates to the scene level. A similar screening process, but with a looser threshold, is applied at the scene level considering the possible variance due to site differences. We do not perform the screening process across scenes because scenes might vary due to factors such as phenology, solar-view geometry, and atmospheric conditions rather than actual land cover differences. Finally, we will compare the classification results from screened and unscreened training samples to assess the improvement achieved by cleaning up the training samples.
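A small sketch of the per-group univariate plus multivariate screening step, assuming z-scores for the univariate test and a Mahalanobis distance with a chi-square cut-off for the multivariate test; the actual statistics and thresholds of the GLS-IMP algorithm are not specified in the abstract.

```python
import numpy as np
from scipy.stats import chi2

def screen_group(features, z_cut=3.0, p_cut=0.999):
    """Flag univariate (|z| > z_cut in any band) and multivariate (Mahalanobis
    distance beyond the chi-square quantile) outliers within one training group."""
    z = np.abs((features - features.mean(0)) / features.std(0, ddof=1))
    uni_ok = (z <= z_cut).all(axis=1)
    mu = features.mean(0)
    cov_inv = np.linalg.pinv(np.cov(features, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", features - mu, cov_inv, features - mu)
    multi_ok = d2 <= chi2.ppf(p_cut, df=features.shape[1])
    return uni_ok & multi_ok

# toy usage: samples grouped by 10% impervious-cover bins, screened per group
rng = np.random.default_rng(6)
samples = rng.normal(size=(1000, 5))            # 5 spectral bands per 30 m sample
impervious_pct = rng.random(1000)
keep = np.zeros(1000, dtype=bool)
for b in range(10):
    idx = (impervious_pct >= b / 10) & (impervious_pct < (b + 1) / 10)
    keep[idx] = screen_group(samples[idx])
print("retained", keep.sum(), "of", keep.size, "training samples")
```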
Naderi, S; Yin, T; König, S
2016-09-01
A simulation study was conducted to investigate the performance of random forest (RF) and genomic BLUP (GBLUP) for genomic predictions of binary disease traits based on cow calibration groups. Training and testing sets were modified in different scenarios according to disease incidence, the quantitative-genetic background of the trait (h(2)=0.30 and h(2)=0.10), and the genomic architecture [725 quantitative trait loci (QTL) and 290 QTL, populations with high and low levels of linkage disequilibrium (LD)]. For all scenarios, 10,005 SNP (depicting a low-density 10K SNP chip) and 50,025 SNP (depicting a 50K SNP chip) were evenly spaced along 29 chromosomes. Training and testing sets included 20,000 cows (4,000 sick, 16,000 healthy, disease incidence 20%) from the last 2 generations. Initially, 4,000 sick cows were assigned to the testing set, and the remaining 16,000 healthy cows represented the training set. In the ongoing allocation schemes, the number of sick cows in the training set increased stepwise by moving 10% of the sick animals from the testing set to the training set, and vice versa. The size of the training and testing sets was kept constant. Evaluation criteria for both GBLUP and RF were the correlations between genomic breeding values and true breeding values (prediction accuracy), and the area under the receiving operating characteristic curve (AUROC). Prediction accuracy and AUROC increased for both methods and all scenarios as increasing percentages of sick cows were allocated to the training set. Highest prediction accuracies were observed for disease incidences in training sets that reflected the population disease incidence of 0.20. For this allocation scheme, the largest prediction accuracies of 0.53 for RF and of 0.51 for GBLUP, and the largest AUROC of 0.66 for RF and of 0.64 for GBLUP, were achieved using 50,025 SNP, a heritability of 0.30, and 725 QTL. Heritability decreases from 0.30 to 0.10 and QTL reduction from 725 to 290 were associated with decreasing prediction accuracy and decreasing AUROC for all scenarios. This decrease was more pronounced for RF. Also, the increase of LD had stronger effect on RF results than on GBLUP results. The highest prediction accuracy from the low LD scenario was 0.30 from RF and 0.36 from GBLUP, and increased to 0.39 for both methods in the high LD population. Random forest successfully identified important SNP in close map distance to QTL explaining a high proportion of the phenotypic trait variations. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
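The comparison of random forest with GBLUP can be approximated in a few lines: ridge regression on SNP markers is used here as a GBLUP-like stand-in (the two are closely related whole-genome shrinkage models), with simulated genotypes, a liability-threshold disease at roughly 20% incidence, and AUROC as one of the evaluation criteria. Scales, effect sizes, and parameters are toy values, not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, p, n_qtl = 2000, 800, 50
geno = rng.integers(0, 3, size=(n, p)).astype(float)         # SNP genotypes coded 0/1/2
effects = np.zeros(p); effects[:n_qtl] = rng.normal(0, 0.3, n_qtl)
liability = geno @ effects + rng.normal(0, 1.0, n)
disease = (liability > np.quantile(liability, 0.8)).astype(int)   # ~20% incidence

train, test = slice(0, 1500), slice(1500, None)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(geno[train], disease[train])
auc_rf = roc_auc_score(disease[test], rf.predict_proba(geno[test])[:, 1])

# ridge regression on all markers as a GBLUP-like whole-genome predictor
ridge = Ridge(alpha=100.0).fit(geno[train], disease[train])
auc_ridge = roc_auc_score(disease[test], ridge.predict(geno[test]))

print(f"AUROC  random forest: {auc_rf:.2f}   ridge/GBLUP-like: {auc_ridge:.2f}")
```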
NASA Astrophysics Data System (ADS)
Corucci, Linda; Masini, Andrea; Cococcioni, Marco
2011-01-01
This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method, based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method allows to achieve a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.
Toropova, Alla P; Toropov, Andrey A; Benfenati, Emilio; Puzyn, Tomasz; Leszczynska, Danuta; Leszczynski, Jerzy
2014-10-01
The development of quantitative structure-activity relationships for nanomaterials needs a representation of the molecular structure of extremely complex molecular systems. Obviously, various characteristics of a nanomaterial could impact associated biochemical endpoints. The following features of TiO2 and ZnO nanoparticles (n=42) are considered here: (i) engineered size (nm); (ii) size in water suspension (nm); (iii) size in phosphate buffered saline (PBS, nm); (iv) concentration (mg/L); and (v) zeta potential (mV). The damage to cellular membranes (units/L) is selected as an endpoint. Quantitative features-activity relationships (QFARs) are calculated by the Monte Carlo technique for three distributions of the data representing values associated with membrane damage into the training and validation sets. The obtained models are characterized by the following average statistics: 0.78
Transfer of management training from alternative perspectives.
Taylor, Paul J; Russ-Eft, Darlene F; Taylor, Hazel
2009-01-01
One hundred seven management training evaluations were meta-analyzed to compare effect sizes for the transfer of managerial training derived from different rating sources (self, superior, peer, and subordinate) and broken down by both study- and training-related variables. For studies as a whole, and interpersonal management skills training studies in particular, transfer effects based on trainees' self-ratings, and to a lesser extent ratings from their superiors, were largest and most varied across studies. In contrast, transfer effects based on peer ratings, and particularly subordinate ratings, were substantially smaller and more homogeneous. This pattern was consistent across different sources of studies, features of evaluation design, and within a subset of 14 studies that each included all 4 rating sources. Across most rating sources, transfer of training was greatest for studies conducted in nonmilitary settings, when raters were likely to have known whether the manager being rated had attended training, when criteria were targeted to training content, when training content was derived from an analysis of tasks and skill requirements, and when training included opportunities for practice. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Mapping soil landscape as spatial continua: The Neural Network Approach
NASA Astrophysics Data System (ADS)
Zhu, A.-Xing
2000-03-01
A neural network approach was developed to populate a soil similarity model that was designed to represent soil landscape as spatial continua for hydroecological modeling at watersheds of mesoscale size. The approach employs multilayer feed forward neural networks. The input to the network was data on a set of soil formative environmental factors; the output from the network was a set of similarity values to a set of prescribed soil classes. The network was trained using a conjugate gradient algorithm in combination with a simulated annealing technique to learn the relationships between a set of prescribed soils and their environmental factors. Once trained, the network was used to compute for every location in an area the similarity values of the soil to the set of prescribed soil classes. The similarity values were then used to produce detailed soil spatial information. The approach also included a Geographic Information System procedure for selecting representative training and testing samples and a process of determining the network internal structure. The approach was applied to soil mapping in a watershed, the Lubrecht Experimental Forest, in western Montana. The case study showed that the soil spatial information derived using the neural network approach reveals much greater spatial detail and has a higher quality than that derived from the conventional soil map. Implications of this detailed soil spatial information for hydroecological modeling at the watershed scale are also discussed.
Clinical Supervision of Athletic Training Students at Colleges and Universities Needs Improvement
Weidner, Thomas G.; Pipkin, Jennifer
2002-01-01
Objectives: To assess the type and amount of clinical supervision athletic training students received during clinical education. Design and Setting: An online survey was conducted with a questionnaire developed specifically for this study. Subjects: Head athletic trainers from National Collegiate Athletic Association Division I (28), Division II (34), and Division III institutions (30). Thirty-four represented Commission on the Accreditation of Allied Health Education Programs-accredited athletic training education programs, 20 represented athletic training programs in Joint Review Commission on Athletic Training candidacy, and 35 offered the internship route. Measurements: Descriptive statistics were computed. Three sets of chi-square analyses were completed to assess associations among athletic training students with first-responder qualifications, program and institution characteristics, certified athletic trainer medical coverage of moderate- and increased-risk sports, and clinical supervision. A trend analysis of students' class standing and time spent in different types of clinical supervision was also completed. The alpha level was set at < .05. Results: Most of the athletic training students (83.7%), particularly in accredited programs, had first-responder qualifications. More than half of the head athletic trainers (59.8%) indicated that athletic training students were authorized to provide medical care coverage without supervision. A minimal amount of medical care coverage of moderate- and increased-risk sports was unsupervised. No significant difference between the size of the education or athletic program and type and amount of clinical supervision was noted. Freshman athletic training students spent more time in direct clinical supervision and less time in unsupervised experience, but the opposite was true for senior students. Conclusions: Athletic training students are being utilized beyond appropriate clinical supervision and the scope of clinical education. Future research should employ methods using nonparticipant observation of clinical instructors' supervision of students as well as students' own perceptions of their clinical supervision. PMID:12937552
Electronic spectra from TDDFT and machine learning in chemical space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; Hartmann, Mia; Tapavicza, Enrico
Due to its favorable computational efficiency, time-dependent (TD) density functional theory (DFT) enables the prediction of electronic spectra in a high-throughput manner across chemical space. Its predictions, however, can be quite inaccurate. We resolve this issue with machine learning models trained on deviations of reference second-order approximate coupled-cluster (CC2) singles and doubles spectra from TDDFT counterparts, or even from the DFT gap. We applied this approach to low-lying singlet-singlet vertical electronic spectra of over 20 000 synthetically feasible small organic molecules with up to eight CONF atoms. The prediction errors decay monotonically as a function of training set size. For a training set of 10 000 molecules, CC2 excitation energies can be reproduced to within ±0.1 eV for the remaining molecules. Analysis of our spectral database via chromophore counting suggests that even higher accuracies can be achieved. Based on the evidence collected, we discuss open challenges associated with data-driven modeling of high-lying spectra and transition intensities.
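The correction scheme described above, learning the deviation of a cheap method from an expensive reference (often called Δ-learning), can be sketched in a few lines. The descriptors, data sizes, and kernel settings below are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch of the Delta-learning idea: train a model on the deviation between
# cheap (TDDFT) and expensive (CC2) excitation energies, then correct new TDDFT values.
# Descriptor and data arrays are hypothetical placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_mol, n_feat = 2000, 50                         # hypothetical molecule count and descriptor size
X = rng.normal(size=(n_mol, n_feat))             # molecular descriptors (placeholder)
e_tddft = rng.normal(5.0, 0.5, n_mol)            # cheap excitation energies (eV)
e_cc2 = e_tddft + 0.3 * np.tanh(X[:, 0]) + rng.normal(0, 0.05, n_mol)  # "reference" values

# Learn the correction Delta = E_CC2 - E_TDDFT as a function of the descriptor.
X_tr, X_te, d_tr, d_te, base_tr, base_te, ref_tr, ref_te = train_test_split(
    X, e_cc2 - e_tddft, e_tddft, e_cc2, train_size=1000, random_state=0)

model = KernelRidge(kernel="laplacian", alpha=1e-3, gamma=1e-2)
model.fit(X_tr, d_tr)

# Corrected prediction = baseline TDDFT value + learned correction.
e_pred = base_te + model.predict(X_te)
print("MAE of corrected energies (eV):", np.mean(np.abs(e_pred - ref_te)))
```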
Gros, Daniel F; Szafranski, Derek D; Shead, Sarah D
2017-03-01
Dissemination and implementation of evidence-based psychotherapies is challenging in real world clinical settings. Transdiagnostic Behavior Therapy (TBT) for affective disorders was developed with dissemination and implementation in clinical settings in mind. The present study investigated a voluntary local dissemination and implementation effort, involving 28 providers participating in a four-hour training on TBT. Providers completed immediate (n=22) and six-month follow-up (n=12) training assessments and were encouraged to collect data on their TBT patients (delivery fidelity was not investigated). Findings demonstrated that providers endorsed learning of and interest in using TBT after the training. At six months, 50% of providers reported using TBT with their patients and rated its perceived effectiveness as very good to excellent. Submitted patient outcome data evidenced medium to large effect sizes. Together, these findings provide preliminary support for the effectiveness of a real world dissemination and implementation of TBT. Published by Elsevier Ltd.
Giordani, B; Novak, B; Sikorskii, A; Bangirana, P; Nakasujja, N; Winn, B M; Boivin, M J
2015-01-01
Valid, reliable, accessible, and cost-effective computer-training approaches can be important components in scaling up educational support across resource-poor settings, such as sub-Saharan Africa. The goal of the current study was to develop a computer-based training platform, the Michigan State University Games for Entertainment and Learning laboratory's Brain Powered Games (BPG) package that would be suitable for use with at-risk children within a rural Ugandan context and then complete an initial field trial of that package. After game development was completed with the use of local stimuli and sounds to match the context of the games as closely as possible to the rural Ugandan setting, an initial field study was completed with 33 children (mean age = 8.55 ± 2.29 years, range 6-12 years of age) with HIV in rural Uganda. The Test of Variables of Attention (TOVA), CogState computer battery, and the Non-Verbal Index from the Kaufman Assessment Battery for Children, 2nd edition (KABC-II) were chosen as the outcome measures for pre- and post-intervention testing. The children received approximately 45 min of BPG training several days per week for 2 months (24 sessions). Although some improvements in test scores were evident prior to BPG training, following training, children demonstrated clinically significant changes (significant repeated-measures outcomes with moderate to large effect sizes) on specific TOVA and CogState measures reflecting processing speed, attention, visual-motor coordination, maze learning, and problem solving. Results provide preliminary support for the acceptability, feasibility, and neurocognitive benefit of BPG and its utility as a model platform for computerized cognitive training in cross-cultural low-resource settings.
NASA Astrophysics Data System (ADS)
Mardirossian, Narbe; Head-Gordon, Martin
2015-02-01
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
SIBSHIP SIZE AND YOUNG WOMEN'S TRANSITIONS TO ADULTHOOD IN INDIA.
Santhya, K G; Zavier, A J Francis
2017-11-01
In India, a substantial proportion of young people are growing up in smaller families with fewer siblings than earlier generations of young people. Studies exploring the associations between declines in sibship size and young people's life experiences are limited. Drawing on data from a sub-nationally representative study conducted in 2006-08 of over 50,000 youths in India, this paper examines the associations between surviving sibship size and young women's (age 20-24) transitions to adulthood. Young women who reported no or a single surviving sibling were categorized as those with a small surviving sibship size, and those who reported two or more surviving siblings as those with a large surviving sibship size. Bivariate and multivariate regression analyses were conducted to ascertain the relationship between sibship size and outcome indicators. Analysis was also done separately for low- and high-fertility settings. Small sibship size tended to have a positive influence in many ways on young women's chances of making successful transitions to adulthood. Young women with fewer siblings were more likely than others to report secondary school completion, participation in vocational skills training programmes, experience of gender egalitarian socialization practices, adherence to gender egalitarian norms, exercise of pre-marital agency and small family size preferences. These associations were more apparent in low- than high-fertility settings.
Kim, D H; MacKinnon, T
2018-05-01
The aim was to identify the extent to which transfer learning from deep convolutional neural networks (CNNs), pre-trained on non-medical images, can be used for automated fracture detection on plain radiographs. The top layer of the Inception v3 network was re-trained using lateral wrist radiographs to produce a model for the classification of new studies as either "fracture" or "no fracture". The model was trained on a total of 11,112 images, after an eightfold data augmentation technique, from an initial set of 1,389 radiographs (695 "fracture" and 694 "no fracture"). The training data set was split 80:10:10 into training, validation, and test groups, respectively. An additional 100 wrist radiographs, comprising 50 "fracture" and 50 "no fracture" images, were used for final testing and statistical analysis. The area under the receiver operating characteristic curve (AUC) for this test was 0.954. Setting the diagnostic cut-off at a threshold designed to maximise both sensitivity and specificity resulted in values of 0.9 and 0.88, respectively. The AUC scores for this test were comparable to the state of the art, providing proof of concept for transfer learning from CNNs in fracture detection on plain radiographs. This was achieved using only a moderate sample size. This technique is largely transferable, and therefore has many potential applications in medical imaging, which may lead to significant improvements in workflow productivity and in clinical risk reduction. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
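The general workflow of reusing an ImageNet-pretrained CNN and retraining only the final layer can be illustrated as follows. This is a generic sketch, not the authors' pipeline: the data tensors are random placeholders and resnet18 stands in for the Inception v3 network used in the study (the API assumes a recent torchvision).

```python
# A generic transfer-learning sketch: freeze a pretrained backbone and retrain only the
# final classification layer for a binary "fracture" / "no fracture" task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                    # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable top layer

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of (N, 3, 224, 224) radiograph tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random tensors standing in for augmented wrist radiographs.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))))
```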
Nishimoto, Atsuko; Kawakami, Michiyuki; Fujiwara, Toshiyuki; Hiramoto, Miho; Honaga, Kaoru; Abe, Kaoru; Mizuno, Katsuhiro; Ushiba, Junichi; Liu, Meigen
2018-01-10
Brain-machine interface training was developed for upper-extremity rehabilitation for patients with severe hemiparesis. Its clinical application, however, has been limited because of its lack of feasibility in real-world rehabilitation settings. We developed a new compact task-specific brain-machine interface system that enables task-specific training, including reach-and-grasp tasks, and studied its clinical feasibility and effectiveness for upper-extremity motor paralysis in patients with stroke. Prospective before-after study. Twenty-six patients with severe chronic hemiparetic stroke. Participants were trained with the brain-machine interface system to pick up and release pegs during 40-min sessions and 40 min of standard occupational therapy per day for 10 days. Fugl-Meyer upper-extremity motor (FMA) and Motor Activity Log-14 amount of use (MAL-AOU) scores were assessed before and after the intervention. To test its feasibility, 4 occupational therapists who operated the system for the first time assessed it with the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0. FMA and MAL-AOU scores improved significantly after brain-machine interface training, with the effect sizes being medium and large, respectively (p<0.01, d=0.55; p<0.01, d=0.88). QUEST effectiveness and safety scores showed feasibility and satisfaction in the clinical setting. Our newly developed compact brain-machine interface system is feasible for use in real-world clinical settings.
Awais, Muhammad; Palmerini, Luca; Bourke, Alan K.; Ihlen, Espen A. F.; Helbostad, Jorunn L.; Chiari, Lorenzo
2016-01-01
The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, features set, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with data in the laboratory setting highly deteriorates when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of chosen systems to window size (from 1 s to 10 s) suggesting that overall accuracy decreases with increasing window size. Finally, to evaluate the impact of the number of sensors on the performance, chosen systems are modified considering only the sensing unit worn at the lower back. The results, similarly to the multi-sensor setup, indicate substantial degradation of the performance when laboratory-trained systems are tested in the real-life setting. This degradation is higher than in the multi-sensor setup. Still, the performance provided by the single-sensor approach, when trained and tested with real data, can be acceptable (with an accuracy above 80%). PMID:27973434
Balanced VS Imbalanced Training Data: Classifying Rapideye Data with Support Vector Machines
NASA Astrophysics Data System (ADS)
Ustuner, M.; Sanli, F. B.; Abdikan, S.
2016-06-01
The accuracy of supervised image classification is highly dependent upon several factors such as the design of the training set (sample selection, composition, purity and size), the resolution of the input imagery and landscape heterogeneity. The design of the training set is still a challenging issue, since the sensitivity of the classifier algorithm at the learning stage differs for the same dataset. In this paper, the classification of RapidEye imagery with balanced and imbalanced training data for mapping crop types was addressed. Classification with imbalanced training data may result in low accuracy in some scenarios. Support Vector Machines (SVM), Maximum Likelihood (ML) and Artificial Neural Network (ANN) classifications were implemented here to classify the data. For evaluating the influence of balanced and imbalanced training data on image classification algorithms, three different training datasets were created: two balanced datasets with 70 and 100 pixels for each class of interest, and one imbalanced dataset in which each class has a different number of pixels. Results demonstrate that the ML and ANN classifications are affected by imbalanced training data, resulting in a reduction in accuracy (from 90.94% to 85.94% for ML and from 91.56% to 88.44% for ANN), while SVM is not significantly affected and even improves slightly (from 94.38% to 94.69%). Our results highlight that SVM is a very robust, consistent and effective classifier, as it performs well under both balanced and imbalanced training data. Furthermore, the training stage should be designed precisely and carefully for the needs of the adopted classifier.
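The balanced-versus-imbalanced comparison described above can be reproduced in miniature with synthetic data: the same classifier is trained once with equal per-class sample counts and once with a skewed class distribution, then scored on a common held-out set. Class sizes and SVM settings are illustrative assumptions.

```python
# Small illustration of training the same SVM on balanced vs imbalanced class samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                           n_classes=3, random_state=1)
X_pool, y_pool = X[:3000], y[:3000]      # pool to draw training pixels from
X_test, y_test = X[3000:], y[3000:]      # fixed evaluation set

def sample_training_set(per_class):
    """Draw the requested number of training samples per class from the pool."""
    idx = np.concatenate([np.flatnonzero(y_pool == c)[:n]
                          for c, n in enumerate(per_class)])
    return X_pool[idx], y_pool[idx]

for name, per_class in [("balanced", [100, 100, 100]),
                        ("imbalanced", [20, 100, 250])]:
    X_tr, y_tr = sample_training_set(per_class)
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```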
Generalization error analysis: deep convolutional neural network in mammography
NASA Astrophysics Data System (ADS)
Richter, Caleb D.; Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Cha, Kenny
2018-02-01
We conducted a study to gain understanding of the generalizability of deep convolutional neural networks (DCNNs) given their inherent capability to memorize data. We examined empirically a specific DCNN trained for classification of masses on mammograms. Using a data set of 2,454 lesions from 2,242 mammographic views, a DCNN was trained to classify masses into malignant and benign classes using transfer learning from ImageNet LSVRC-2010. We performed experiments with varying amounts of label corruption and types of pixel randomization to analyze the generalization error for the DCNN. Performance was evaluated using the area under the receiver operating characteristic curve (AUC) with an N-fold cross validation. Comparisons were made between the convergence times, the inference AUCs for both the training set and the test set of the original image patches without corruption, and the root-mean-squared difference (RMSD) in the layer weights of the DCNN trained with different amounts and methods of corruption. Our experiments observed trends which revealed that the DCNN overfitted by memorizing corrupted data. More importantly, this study improved our understanding of DCNN weight updates when learning new patterns or new labels. Although we used a specific classification task with the ImageNet as example, similar methods may be useful for analysis of the DCNN learning processes, especially those that employ transfer learning for medical image analysis where sample size is limited and overfitting risk is high.
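The label-corruption experiment described above can be mimicked on toy data: flip a fraction of the training labels, refit, and compare training and test performance. A random forest stands in here for the mammography DCNN, and all data are synthetic; the widening train/test AUC gap illustrates memorization of corrupted labels.

```python
# Toy version of a label-corruption generalization experiment.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=3000, n_features=30, random_state=0)
X_tr, y_tr, X_te, y_te = X[:2000], y[:2000], X[2000:], y[2000:]
rng = np.random.default_rng(0)

for frac in [0.0, 0.2, 0.5]:
    y_corrupt = y_tr.copy()
    flip = rng.random(len(y_corrupt)) < frac          # corrupt this fraction of labels
    y_corrupt[flip] = 1 - y_corrupt[flip]
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_corrupt)
    auc_train = roc_auc_score(y_corrupt, clf.predict_proba(X_tr)[:, 1])
    auc_test = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"corruption={frac:.1f}  train AUC={auc_train:.2f}  test AUC={auc_test:.2f}")
```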
Kubo, Keitaro; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2002-01-01
The present study examined whether resistance and stretching training programmes altered the viscoelastic properties of human tendon structures in vivo. Eight subjects completed 8 weeks (4 days per week) of resistance training which consisted of unilateral plantar flexion at 70 % of one repetition maximum with 10 repetitions per set (5 sets per day). They performed resistance training (RT) on one side and resistance training and static stretching training (RST; 10 min per day, 7 days per week) on the other side. Before and after training, the elongation of the tendon structures in the medial gastrocnemius muscle was directly measured using ultrasonography, while the subjects performed ramp isometric plantar flexion up to the voluntary maximum, followed by a ramp relaxation. The relationship between estimated muscle force (Fm) and tendon elongation (L) was fitted to a linear regression, the slope of which was defined as stiffness. The hysteresis was calculated as the ratio of the area within the Fm-L loop to the area beneath the load portion of the curve. The stiffness increased significantly by 18.8 ± 10.4 % for RT and 15.3 ± 9.3 % for RST. There was no significant difference in the relative increase of stiffness between RT and RST. The hysteresis, on the other hand, decreased 17 ± 20 % for RST, but was unchanged for RT. These results suggested that the resistance training increased the stiffness of tendon structures as well as muscle strength and size, and the stretching training affected the viscosity of tendon structures but not the elasticity. PMID:11773330
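The two quantities defined above lend themselves to a short numerical sketch: stiffness as the slope of a linear fit to the loading portion of the force-elongation (Fm-L) data, and hysteresis as the area enclosed by the loading/unloading loop divided by the area under the loading curve. The force and elongation arrays below are hypothetical, not the study's measurements.

```python
# Stiffness and hysteresis from a simulated force-elongation loop.
import numpy as np
from scipy.integrate import trapezoid

elong_load = np.linspace(0, 20, 50)                   # tendon elongation, mm (loading)
force_load = 30 * elong_load + 2 * elong_load**1.2    # estimated muscle force, N (loading)
elong_unload = elong_load[::-1]
force_unload = 28 * elong_unload + 1.5 * elong_unload**1.2   # relaxation branch

# Stiffness: slope of the linear regression of force on elongation during loading.
stiffness = np.polyfit(elong_load, force_load, 1)[0]

# Hysteresis: loop area (loading work minus unloading work) over area under loading curve.
area_load = trapezoid(force_load, elong_load)
area_unload = trapezoid(force_unload[::-1], elong_unload[::-1])
hysteresis = (area_load - area_unload) / area_load

print(f"stiffness = {stiffness:.1f} N/mm, hysteresis = {100 * hysteresis:.1f} %")
```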
NASA Technical Reports Server (NTRS)
Niebur, D.; Germond, A.
1993-01-01
This report investigates the classification of power system states using an artificial neural network model, Kohonen's self-organizing feature map. The ultimate goal of this classification is to assess power system static security in real-time. Kohonen's self-organizing feature map is an unsupervised neural network which maps N-dimensional input vectors to an array of M neurons. After learning, the synaptic weight vectors exhibit a topological organization which represents the relationship between the vectors of the training set. This learning is unsupervised, which means that the number and size of the classes are not specified beforehand. In the application developed in this report, the input vectors used as the training set are generated by off-line load-flow simulations. The learning algorithm and the results of the organization are discussed.
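A self-organizing feature map of the kind described above can be prototyped with the third-party `minisom` package (an assumption; the report used its own implementation). The state vectors below are random placeholders for off-line load-flow results.

```python
# Compact sketch of unsupervised mapping of power-system states with a Kohonen SOM.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(42)
states = rng.normal(size=(500, 12))        # N-dimensional power-system state vectors (toy)

som = MiniSom(x=8, y=8, input_len=12, sigma=1.5, learning_rate=0.5, random_seed=42)
som.random_weights_init(states)
som.train_random(states, num_iteration=5000)   # unsupervised learning of the map

# After training, each state is assigned to its best-matching neuron; the resulting
# clusters can then be inspected for their security classification.
winners = [som.winner(s) for s in states]
print("states mapped to", len(set(winners)), "distinct neurons")
```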
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. For the quantitative analysis, we focus on comparing these methods when working with high input dimensions and a limited or sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
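The modelling loop described above (repeated random splits, an L1-penalized model per split, sensitivity and specificity on the matching test set) can be schematized as follows. Features and outcomes are simulated stand-ins for metabolite and clinical data, and the regularization strength is an assumption.

```python
# Schematic repeated-split LASSO-penalized classification with sensitivity/specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 30))                     # metabolite + clinical predictors (toy)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 120) > 0).astype(int)  # responder flag

sens, spec = [], []
for seed in range(5):                              # five independent 80/40 splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=80, random_state=seed,
                                              stratify=y)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1]).ravel()
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))

print("mean sensitivity:", np.mean(sens), "mean specificity:", np.mean(spec))
```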
Short-term adaptations following Complex Training in team-sports: A meta-analysis
Martinez-Rodriguez, Alejandro; Calleja-González, Julio; Alcaraz, Pedro E.
2017-01-01
Objective The purpose of this meta-analysis was to study the short-term adaptations in sprint and vertical jump (VJ) performance following Complex Training (CT) in team-sports. CT is a resistance training method aimed at developing both strength and power, which has a direct effect on sprint and VJ. It consists of alternating heavy resistance training exercises with plyometric/power ones, set for set, in the same workout. Methods A search of electronic databases up to July 2016 (PubMed-MEDLINE, SPORTDiscus, Web of Knowledge) was conducted. Inclusion criteria: 1) at least one CT intervention group; 2) training protocols ≥4-wks; 3) sample of team-sport players; 4) sprint or VJ as an outcome variable. Effect sizes (ES) of each intervention were calculated and subgroup analyses were performed. Results A total of 9 studies (13 CT groups) met the inclusion criteria. Medium effect sizes (ES = 0.73) were obtained for pre-post improvements in sprint, and small (ES = 0.41) in VJ, following CT. Experimental groups presented better post-intervention sprint (ES = 1.01) and VJ (ES = 0.63) performance than control groups. For sprint, large ESs were exhibited in younger athletes (<20 years old; ES = 1.13); longer CT interventions (≥6 weeks; ES = 0.95); conditioning activities with intensities ≤85% 1RM (ES = 0.96); and protocols with frequencies of <3 sessions/week (ES = 0.84). Medium ESs were obtained in Division I players (ES = 0.76) and in training programs with >12 total sessions (ES = 0.74). For VJ, large ESs were found in programs with >12 total sessions (ES = 0.81). Medium ESs were obtained for under-Division I individuals (ES = 0.56); protocols with intracomplex rest intervals ≥2 min (ES = 0.55); conditioning activities with intensities ≤85% 1RM (ES = 0.64); and basketball/volleyball players (ES = 0.55). Small ESs were found for younger athletes (ES = 0.42) and interventions ≥6 weeks (ES = 0.45). Conclusions CT interventions have positive medium effects on sprint performance and small effects on VJ in team-sport athletes. This training method is a suitable option to include in season planning. PMID:28662108
Yasuda, Tomohiro; Fujita, Satoshi; Ogasawara, Riki; Sato, Yoshiaki; Abe, Takashi
2010-09-01
Single-joint resistance training with blood flow restriction (BFR) results in significant increases in arm or leg muscle size and single-joint strength. However, the effect of multi-joint BFR training on both the blood flow restricted limb and non-restricted trunk muscles remains poorly understood. To examine the impact of BFR bench press training on the hypertrophic response of non-restricted (chest) and restricted (upper-arm) muscles and on multi-joint strength, 10 young men were randomly divided into either a BFR training (BFR-T) or non-BFR training (CON-T) group. They performed bench press exercise at 30% of one repetition maximum (1-RM) (four sets, 75 total reps) twice daily, 6 days per week, for 2 weeks. During the exercise session, subjects in the BFR-T group placed elastic cuffs proximally on both arms, with incremental increases in external compression starting at 100 mmHg and ending at 160 mmHg. Before and after the training, triceps brachii and pectoralis major muscle thickness (MTH), bench press 1-RM and serum anabolic hormones were measured. Two weeks of training led to a significant increase (P<0.05) in 1-RM bench press strength in BFR-T (6%) but not in CON-T (-2%). Triceps and pectoralis major MTH increased 8% and 16% (P<0.01), respectively, in BFR-T, but not in CON-T (-1% and 2%, respectively). There were no changes in baseline concentrations of anabolic hormones in either group. These results suggest that BFR bench press training leads to significant increases in muscle size for upper arm and chest muscles and in 1-RM strength.
NASA Astrophysics Data System (ADS)
Forkert, Nils Daniel; Fiehler, Jens
2015-03-01
The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require a previous training and, until now, it is ambiguous how many subjects are required for this, which is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from using all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and effort required for data acquisition and preparation.
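The learning-curve experiment described above can be sketched with simulated patients: pool voxel-level features from an increasing number of training patients, fit a random forest, and track the Dice coefficient on a left-out patient. All data and feature definitions below are hypothetical.

```python
# Simplified learning-curve sketch: training-set size (in patients) vs Dice coefficient.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_patient(n_voxels=2000):
    """Hypothetical voxel-wise perfusion/diffusion features and binary tissue outcome."""
    X = rng.normal(size=(n_voxels, 6))
    y = (X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 1, n_voxels) > 0.5).astype(int)
    return X, y

def dice(pred, truth):
    inter = np.sum((pred == 1) & (truth == 1))
    return 2 * inter / (np.sum(pred == 1) + np.sum(truth == 1))

patients = [make_patient() for _ in range(23)]
X_test, y_test = patients[0]                        # "left-out" patient

for n_train in [1, 5, 10, 22]:
    X_tr = np.vstack([p[0] for p in patients[1:1 + n_train]])
    y_tr = np.concatenate([p[1] for p in patients[1:1 + n_train]])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"{n_train:2d} training patients -> Dice = {dice(clf.predict(X_test), y_test):.2f}")
```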
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2015-02-21
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Mardirossian, Narbe; Head-Gordon, Martin
2015-02-20
We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. This raw data comes from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Rank Order Entropy: why one metric is not enough
McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.
2011-01-01
The use of Quantitative Structure-Activity Relationship models to address problems in drug discovery has a mixed history, generally resulting from the mis-application of QSAR models that were either poorly constructed or used outside of their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r2, PRESS r2, F-tests, etc) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measure the stability of rank order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank order models created from representative data sets, descriptor sets, and modeling methods that were then assessed using Kendall Tau as a rank order metric, upon which the Shannon Entropy was evaluated as a means of quantifying rank-order stability. Random removal of data from the training set, also known as Data Truncation Analysis (DTA), was used as a means for systematically reducing the information content of each training set while examining both rank order performance and rank order stability in the face of training set data loss. The premise for DTA ROE model evaluation is that the response of a model to incremental loss of training information will be indicative of the quality and sufficiency of its training set, learning method, and descriptor types to cover a particular domain of applicability. This process is termed a “rank order entropy” evaluation, or ROE. By analogy with information theory, an unstable rank order model displays a high level of implicit entropy, while a QSAR rank order model which remains nearly unchanged during training set reductions would show low entropy. In this work, the ROE metric was applied to 71 data sets of different sizes, and was found to reveal more information about the behavior of the models than traditional metrics alone. Stable, or consistently performing models, did not necessarily predict rank order well. Models that performed well in rank order did not necessarily perform well in traditional metrics. In the end, it was shown that ROE metrics suggested that some QSAR models that are typically used should be discarded. ROE evaluation helps to discern which combinations of data set, descriptor set, and modeling methods lead to usable models in prioritization schemes, and provides confidence in the use of a particular model within a specific domain of applicability. PMID:21875058
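One way to operationalize the rank-order-entropy idea sketched above (the details here are an assumption, not the authors' exact procedure) is to repeatedly truncate the training set, refit, measure the Kendall tau between each truncated model's ranking of a fixed test set and the full model's ranking, and summarize the spread of those tau values with a Shannon entropy: a stable model yields a narrow, low-entropy distribution.

```python
# Toy "rank order entropy" evaluation via data truncation analysis.
import numpy as np
from scipy.stats import kendalltau, entropy
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                         # molecular descriptors (toy)
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 300)  # activity endpoint (toy)
X_tr, y_tr, X_te = X[:200], y[:200], X[200:]

full_pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)   # reference ranking

taus = []
for _ in range(200):
    keep = rng.choice(200, size=140, replace=False)    # 30% data truncation
    pred = Ridge(alpha=1.0).fit(X_tr[keep], y_tr[keep]).predict(X_te)
    taus.append(kendalltau(full_pred, pred)[0])        # rank agreement with full model

hist, _ = np.histogram(taus, bins=20, range=(-1, 1))
roe = entropy(hist / hist.sum())                       # low entropy -> stable rank order
print(f"mean tau = {np.mean(taus):.2f}, rank-order entropy = {roe:.2f}")
```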
Inverse analysis of turbidites by machine learning
NASA Astrophysics Data System (ADS)
Naruse, H.; Nakao, K.
2017-12-01
This study aims to propose a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites by using a machine-learning technique. In this method, numerical simulation was repeated under various initial conditions, which produces a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The optimization of the weight coefficients of the NN is then conducted to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning approach can potentially produce a NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. Numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small deviations from the true parameters. Compared to previous inverse modeling of turbidity currents, our methodology is superior especially in computational efficiency. Also, our methodology has advantages in extensibility and applicability to various sediment transport processes such as pyroclastic flows or debris flows.
Veturi, Yogasudha; Ritchie, Marylyn D
2018-01-01
Transcriptome-wide association studies (TWAS) have recently been employed as an approach that can draw upon the advantages of genome-wide association studies (GWAS) and gene expression studies to identify genes associated with complex traits. Unlike standard GWAS, summary level data suffices for TWAS and offers improved statistical power. Two popular TWAS methods include either (a) imputing the cis genetic component of gene expression from smaller sized studies (using multi-SNP prediction or MP) into much larger effective sample sizes afforded by GWAS - TWAS-MP or (b) using summary-based Mendelian randomization - TWAS-SMR. Although these methods have been effective at detecting functional variants, it remains unclear how extensive variability in the genetic architecture of complex traits and diseases impacts TWAS results. Our goal was to investigate the different scenarios under which these methods yielded enough power to detect significant expression-trait associations. In this study, we conducted extensive simulations based on 6000 randomly chosen, unrelated Caucasian males from Geisinger's MyCode population to compare the power to detect cis expression-trait associations (within 500 kb of a gene) using the above-described approaches. To test TWAS across varying genetic backgrounds we simulated gene expression and phenotype using different quantitative trait loci per gene and cis-expression /trait heritability under genetic models that differentiate the effect of causality from that of pleiotropy. For each gene, on a training set ranging from 100 to 1000 individuals, we either (a) estimated regression coefficients with gene expression as the response using five different methods: LASSO, elastic net, Bayesian LASSO, Bayesian spike-slab, and Bayesian ridge regression or (b) performed eQTL analysis. We then sampled with replacement 50,000, 150,000, and 300,000 individuals respectively from the testing set of the remaining 5000 individuals and conducted GWAS on each set. Subsequently, we integrated the GWAS summary statistics derived from the testing set with the weights (or eQTLs) derived from the training set to identify expression-trait associations using (a) TWAS-MP (b) TWAS-SMR (c) eQTL-based GWAS, or (d) standalone GWAS. Finally, we examined the power to detect functionally relevant genes using the different approaches under the considered simulation scenarios. In general, we observed great similarities among TWAS-MP methods although the Bayesian methods resulted in improved power in comparison to LASSO and elastic net as the trait architecture grew more complex while training sample sizes and expression heritability remained small. Finally, we observed high power under causality but very low to moderate power under pleiotropy.
Behm, David G; Faigenbaum, Avery D; Falk, Baraket; Klentrou, Panagiota
2008-06-01
Many position stands and review papers have refuted the myths associated with resistance training (RT) in children and adolescents. With proper training methods, RT for children and adolescents can be relatively safe and improve overall health. The objective of this position paper and review is to highlight research and provide recommendations in aspects of RT that have not been extensively reported in the pediatric literature. In addition to the well-documented increases in muscular strength and endurance, RT has been used to improve function in pediatric patients with cystic fibrosis and cerebral palsy, as well as pediatric burn victims. Increases in children's muscular strength have been attributed primarily to neurological adaptations due to the disproportionately higher increase in muscle strength than in muscle size. Although most studies using anthropometric measures have not shown significant muscle hypertrophy in children, more sensitive measures such as magnetic resonance imaging and ultrasound have suggested hypertrophy may occur. There is no minimum age for RT for children. However, the training and instruction must be appropriate for children and adolescents, involving a proper warm-up, cool-down, and appropriate choice of exercises. It is recommended that low- to moderate-intensity resistance exercise should be done 2-3 times/week on non-consecutive days, with 1-2 sets initially, progressing to 4 sets of 8-15 repetitions for 8-12 exercises. These exercises can include more advanced movements such as Olympic-style lifting, plyometrics, and balance training, which can enhance strength, power, co-ordination, and balance. However, specific guidelines for these more advanced techniques need to be established for youth. In conclusion, an RT program that is within a child's or adolescent's capacity and involves gradual progression under qualified instruction and supervision with appropriately sized equipment can involve more advanced or intense RT exercises, which can lead to functional (i.e., muscular strength, endurance, power, balance, and co-ordination) and health benefits.
NASA Astrophysics Data System (ADS)
Schmit, C. J.; Pritchard, J. R.
2018-03-01
Next generation radio experiments such as LOFAR, HERA, and SKA are expected to probe the Epoch of Reionization (EoR) and claim a first direct detection of the cosmic 21cm signal within the next decade. Data volumes will be enormous and can thus potentially revolutionize our understanding of the early Universe and galaxy formation. However, numerical modelling of the EoR can be prohibitively expensive for Bayesian parameter inference and how to optimally extract information from incoming data is currently unclear. Emulation techniques for fast model evaluations have recently been proposed as a way to bypass costly simulations. We consider the use of artificial neural networks as a blind emulation technique. We study the impact of training duration and training set size on the quality of the network prediction and the resulting best-fitting values of a parameter search. A direct comparison is drawn between our emulation technique and an equivalent analysis using 21CMMC. We find good predictive capabilities of our network using training sets of as low as 100 model evaluations, which is within the capabilities of fully numerical radiative transfer codes.
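The emulation idea discussed above, training a small network to reproduce the output of an expensive simulator and studying how accuracy scales with training-set size, can be sketched as follows. The "simulator" here is a placeholder function, not an EoR code, and the network architecture is an assumption.

```python
# Bare-bones neural-network emulator with varying training-set size.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def fake_simulator(theta):
    """Placeholder for a costly summary-statistic computation (3 parameters in)."""
    k = np.linspace(0.1, 1.0, 10)
    return theta[0] * np.exp(-k / theta[1]) + theta[2] * k

params = rng.uniform(0.5, 2.0, size=(600, 3))
spectra = np.array([fake_simulator(t) for t in params])

for n_train in [100, 300, 500]:
    p_tr, p_te, s_tr, s_te = train_test_split(params, spectra, train_size=n_train,
                                              random_state=0)
    emu = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                       random_state=0).fit(p_tr, s_tr)
    err = np.mean(np.abs(emu.predict(p_te) - s_te))
    print(f"training set {n_train:3d} -> mean emulation error {err:.3f}")
```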
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that scales linearly with the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
Lavado Contador, J F; Maneta, M; Schnabel, S
2006-10-01
The capability of Artificial Neural Network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easily obtained digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into the factors controlling soil moisture distribution, and to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest that the methods are efficient in forecasting soil moisture and in assessing the optimum number of field samples, and highlight the importance of the selected variables in explaining the final map obtained.
SVM2Motif—Reconstructing Overlapping DNA Sequence Motifs by Mimicking an SVM Predictor
Vidovic, Marina M. -C.; Görnitz, Nico; Müller, Klaus-Robert; Rätsch, Gunnar; Kloft, Marius
2015-01-01
Identifying discriminative motifs underlying the functionality and evolution of organisms is a major challenge in computational biology. Machine learning approaches such as support vector machines (SVMs) achieve state-of-the-art performances in genomic discrimination tasks, but—due to its black-box character—motifs underlying its decision function are largely unknown. As a remedy, positional oligomer importance matrices (POIMs) allow us to visualize the significance of position-specific subsequences. Although being a major step towards the explanation of trained SVM models, they suffer from the fact that their size grows exponentially in the length of the motif, which renders their manual inspection feasible only for comparably small motif sizes, typically k ≤ 5. In this work, we extend the work on positional oligomer importance matrices, by presenting a new machine-learning methodology, entitled motifPOIM, to extract the truly relevant motifs—regardless of their length and complexity—underlying the predictions of a trained SVM model. Our framework thereby considers the motifs as free parameters in a probabilistic model, a task which can be phrased as a non-convex optimization problem. The exponential dependence of the POIM size on the oligomer length poses a major numerical challenge, which we address by an efficient optimization framework that allows us to find possibly overlapping motifs consisting of up to hundreds of nucleotides. We demonstrate the efficacy of our approach on a synthetic data set as well as a real-world human splice site data set. PMID:26690911
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed in SPSS 18.0. The ANN models were developed in Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden Index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. The model should be further improved by research with larger sample sizes.
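A rough analogue of the 15-12-1 back-propagation network described above can be built with scikit-learn's MLPClassifier (one hidden layer of 12 neurons) on an 85:15 split. The simulated questionnaire predictors and the coefficients generating the outcome are assumptions for illustration only.

```python
# Sketch of a small feed-forward network with sensitivity, specificity, and Youden index.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(7)
X = rng.integers(0, 3, size=(358, 15)).astype(float)      # 15 questionnaire predictors (toy)
logit = 0.8 * X[:, 4] + 0.6 * X[:, 5] - 0.5 * X[:, 0] - 1.0
y = (rng.random(358) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated CHD outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0,
                                          stratify=y)
net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, net.predict(X_te), labels=[0, 1]).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} Youden index={sens + spec - 1:.2f}")
print("test AUC:", roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]))
```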
An empirical study of race times in recreational endurance runners.
Vickers, Andrew J; Vertosick, Emily A
2016-01-01
Studies of endurance running have typically involved elite athletes, small sample sizes and measures that require special expertise or equipment. We examined factors associated with race performance and explored methods for race time prediction using information routinely available to a recreational runner. An Internet survey was used to collect data from recreational endurance runners (N = 2303). The cohort was split 2:1 into a training set and validation set to create models to predict race time. Sex, age, BMI and race training were associated with mean race velocity for all race distances. The difference in velocity between males and females decreased with increasing distance. Tempo runs were more strongly associated with velocity for shorter distances, while typical weekly training mileage and interval training had similar associations with velocity for all race distances. The commonly used Riegel formula for race time prediction was well-calibrated for races up to a half-marathon, but dramatically underestimated marathon time, giving times at least 10 min too fast for half of runners. We built two models to predict marathon time. The mean squared error for Riegel was 381 compared to 228 (model based on one prior race) and 208 (model based on two prior races). Our findings can be used to inform race training and to provide more accurate race time predictions for better pacing.
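The Riegel formula referred to above predicts a time T2 over distance D2 from a known time T1 over distance D1 as T2 = T1 * (D2/D1)^1.06. A small helper and a worked example (times chosen for illustration) follow; as the study notes, this tends to be well calibrated up to the half-marathon but optimistic for the marathon in recreational runners.

```python
# The Riegel race-time formula as a helper function.
def riegel_prediction(t1_minutes, d1_km, d2_km, exponent=1.06):
    """Predict the time over distance d2_km given a finish time t1_minutes over d1_km."""
    return t1_minutes * (d2_km / d1_km) ** exponent

half_marathon_time = 105.0                      # a 1:45 half marathon, in minutes
marathon_pred = riegel_prediction(half_marathon_time, 21.0975, 42.195)
print(f"Riegel marathon prediction: {marathon_pred:.0f} minutes "
      f"({marathon_pred // 60:.0f}h {marathon_pred % 60:.0f}m)")
```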
On the statistical assessment of classifiers using DNA microarray data
Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F
2006-01-01
Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performances with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of RLS and SVM classifiers do not change when 74% of genes are used, and progressively decrease to e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis is discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
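The kind of statistical assessment discussed above, a cross-validated error rate whose significance is quantified with a permutation test, can be illustrated compactly with scikit-learn. The synthetic data below stand in for small-sample, high-dimensional expression profiles; the classifier and fold counts are assumptions.

```python
# Cross-validated accuracy plus a permutation test of its significance.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(0)
n_samples, n_genes = 47, 2000                     # 22 normal + 25 tumor, many genes (toy)
X = rng.normal(size=(n_samples, n_genes))
y = np.array([0] * 22 + [1] * 25)
X[y == 1, :50] += 0.6                             # a small block of informative genes

score, perm_scores, p_value = permutation_test_score(
    SVC(kernel="linear", C=1.0), X, y,
    cv=StratifiedKFold(n_splits=5), n_permutations=200, random_state=0)

print(f"cross-validated accuracy = {score:.2f} "
      f"(error = {1 - score:.2f}), permutation p-value = {p_value:.3f}")
```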
Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger
2016-05-01
We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
Rudolph, Ivonne; Schmidt, Thorsten; Wozniak, Tobias; Kubin, Thomas; Ruetters, Dana; Huebner, Jutta
2018-04-01
Physical activity has positive effects on cancer patients. Dancing addresses diverse bio-psycho-social aspects. Our aim was to assess the evidence on ballroom dancing and to develop the setting for a pilot project. We performed a systematic review, extracted the data and designed a pilot training programme based on standard curricula. We included cancer patients during or after therapy. Training duration was 90 min with one regular pause and individual pauses as needed. We retrieved two systematic reviews and six controlled studies. Types of dancing varied. Only one study used ballroom dancing. Dance training might improve well-being, physical fitness, fatigue and coping during and after therapy. Yet, evidence is scarce and data to derive the effect size are lacking; 27 patients and their partners took part in the pilot training. Patients and partners needed more time to learn the steps than is planned in regular ballroom classes. Participants were very satisfied with the adaptation of the training to their physical strength and appreciated training in a sheltered group. No side effects occurred. In spite of a high rate of participants reporting fatigue, 90 min of physical activity with only a few minutes of rest was manageable for all participants. Ballroom dancing may offer benefits for patients with respect to quality of life. Cancer patients prefer a sheltered training setting, and the curricula of regular ballroom classes must be adapted for cancer patients.
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
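A hedged sketch of the feature construction described above: for a residue pair (i, j), the 5 x 5 block of a pairwise correlated-mutation matrix spanning the two windows centred on i and j is flattened into a 25-value input vector. The correlation matrix here is random and boundary handling is omitted; in practice the matrix would be computed from a multiple sequence alignment.

```python
import numpy as np

def window_features(corr, i, j, half=2):
    """Flatten the (2*half+1) x (2*half+1) correlations linking the window
    around residue i to the window around residue j."""
    block = corr[i - half:i + half + 1, j - half:j + half + 1]
    return block.ravel()

L = 120                                  # sequence length (placeholder)
corr = np.random.rand(L, L)              # stand-in correlated-mutation matrix
x = window_features(corr, i=30, j=75)    # 25 inputs for the residue pair (30, 75)
print(x.shape)                           # (25,)
```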
Computer Based Training: Field Deployable Trainer and Shared Virtual Reality
NASA Technical Reports Server (NTRS)
Mullen, Terence J.
1997-01-01
Astronaut training has traditionally been conducted at specific sites with specialized facilities. Because of its size and nature, the training equipment is generally not portable. Efforts are now under way to develop training tools that can be taken to remote locations, including into orbit. Two of these efforts are the Field Deployable Trainer and Shared Virtual Reality projects. Field Deployable Trainer NASA has used the recent shuttle mission by astronaut Shannon Lucid to the Russian space station, Mir, as an opportunity to develop and test a prototype of an on-orbit computer training system. A laptop computer with a customized user interface, a set of specially prepared CDs, and video tapes were taken to the Mir by Ms. Lucid. Based upon the feedback following the launch of the Lucid flight, our team prepared materials for the next Mir visitor. Astronaut John Blaha will fly on NASA/Mir Long Duration Mission 3, set to launch in mid-September. He will take with him a customized hard disk drive and a package of compact disks containing training videos, references and maps. The FDT team continues to explore and develop new and innovative ways to conduct offsite astronaut training using personal computers. Shared Virtual Reality Training NASA's Space Flight Training Division has been investigating the use of virtual reality environments for astronaut training. Recent efforts have focused on activities requiring interaction by two or more people, called shared VR. Dr. Bowen Loftin, from the University of Houston, directs a virtual reality laboratory that conducts much of the NASA-sponsored research. I worked on a project involving the development of a virtual environment that can be used to train astronauts and others to operate a science unit called a Biological Technology Facility (BTF). Facilities like this will be used to house and control microgravity experiments on the space station. It is hoped that astronauts and instructors will ultimately be able to share common virtual environments and, using telephone links, conduct interactive training from separate locations.
NASA Astrophysics Data System (ADS)
Podder, M. S.; Majumder, C. B.
2017-11-01
An artificial neural network (ANN) model was developed to predict the phycoremediation efficiency of Chlorella pyrenoidosa for the removal of both As(III) and As(V) from synthetic wastewater, based on 49 data sets obtained from an experimental study and augmented using the CSCF technique. The data were divided into training (60%), validation (20%) and testing (20%) sets. The collected data were used to train a three-layer feed-forward network with a back propagation (BP) learning algorithm and a 4-5-1 architecture. The model used a tangent sigmoid transfer function (tansig) from the input to the hidden layer, while a linear transfer function (purelin) was used at the output layer. Comparison between experimental and model results gave a high correlation coefficient (overall R² for the ANN equal to 0.99987) for both ions and showed that the model was able to predict the phycoremediation of As(III) and As(V) from wastewater. Experimental parameters influencing the phycoremediation process, such as pH, inoculum size, contact time and initial arsenic concentration [either As(III) or As(V)], were investigated. A contact time of 168 h was mainly required for achieving equilibrium at pH 9.0 with an inoculum size of 10% (v/v). At optimum conditions, metal ion uptake increased with increasing initial metal ion concentration.
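The 4-5-1 tansig/purelin network described above maps onto a standard one-hidden-layer regressor with a tanh activation and a linear output; the following is a minimal sketch with synthetic placeholder data, using scikit-learn's MLPRegressor rather than the original back-propagation implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(1)
# columns: pH, inoculum size, contact time, initial arsenic concentration
X = rng.rand(49, 4)
y = rng.rand(49)      # removal efficiency (placeholder values)

model = MLPRegressor(hidden_layer_sizes=(5,),   # 4-5-1 topology
                     activation="tanh",         # tanh hidden units (cf. tansig)
                     solver="lbfgs", max_iter=5000, random_state=1)
model.fit(X, y)       # output layer is linear for regression (cf. purelin)
print(model.predict(X[:3]))
```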
Statistical evaluation of synchronous spike patterns extracted by frequent item set mining
Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja
2013-01-01
We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classify patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in the form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows reliable detection of active assemblies in massively parallel spike trains. PMID:24167487
de Hoyo, Moises; Gonzalo-Skok, Oliver; Sañudo, Borja; Carrascal, Claudio; Plaza-Armas, Jose R; Camacho-Candil, Fernando; Otero-Esquina, Carlos
2016-02-01
The aim of this study was to analyze the effects of 3 different low/moderate load strength training methods (full-back squat [SQ], resisted sprint with sled towing [RS], and plyometric and specific drills training [PLYO]) on sprinting, jumping, and change of direction (COD) abilities in soccer players. Thirty-two young elite male Spanish soccer players participated in the study. Subjects performed 2 specific strength training sessions per week, in addition to their normal training sessions, for 8 weeks. The full-back squat protocol consisted of 2-3 sets × 4-8 repetitions at 40-60% 1 repetition maximum (∼1.28-0.98 m·s⁻¹). The resisted sprint training comprised 6-10 sets × 20-m loaded sprints (12.6% of body mass). The plyometric and specific drills training was based on 1-3 sets × 2-3 repetitions of 8 plyometric and speed/agility exercises. Testing sessions included a countermovement jump (CMJ), a 20-m sprint (10-m split time), a 50-m (30-m split time) sprint, and a COD test (i.e., Zig-Zag test). Substantial improvements (likely to almost certainly) in CMJ (effect size [ES]: 0.50-0.57) and 30-50 m (ES: 0.45-0.84) were found in every group in comparison to pretest results. Moreover, players in the PLYO and SQ groups also showed substantial enhancements (likely to very likely) in 0-50 m (ES: 0.46-0.60). In addition, 10-20 m was also improved (very likely) in the SQ group (ES: 0.61). Between-group analyses showed that improvements in 10-20 m (ES: 0.57) and 30-50 m (ES: 0.40) were likely greater in the SQ group than in the RS group. Also, 10-20 m (ES: 0.49) was substantially better in the SQ group than in the PLYO group. In conclusion, the strength training methods used in this study seem to be effective for improving jumping and sprinting abilities, but COD might need other stimuli to achieve positive effects.
Qureshi, Asma M
2010-11-09
To determine whether training of providers participating in franchise clinic networks is associated with increased Family Planning service use among low-income urban families in Pakistan. The study uses 2001 survey data consisting of interviews with 1113 clinical and non-clinical providers working in public and private hospitals/clinics. Data analysis excludes non-clinical providers, reducing the sample size to 822. Variables for the analysis are divided into client volume and training in family planning. Regression models are used to compute the association between training and service use in franchise versus private non-franchise clinics. In franchise clinic networks, staff are 6.5 times more likely to receive family planning training (P = 0.00) relative to private non-franchises. Service use was significantly associated with training (P = 0.00), franchise affiliation (P = 0.01), providers' years of family planning experience (P = 0.02) and the number of trained staff working at government owned clinics (P = 0.00). In this setting, nurses are significantly less likely to receive training compared to doctors (P = 0.00). These findings suggest that franchises recruit and train various cadres of health workers and that training may be associated with increased service use through improvement in the quality of services.
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
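The following is a simplified sketch of the interval-selection idea, not the authors' statistical models: with illustrative normal travel-time distributions for gammas and neutrons (and assuming equal underlying rates), it scans candidate neutron acceptance intervals and keeps the widest one whose expected gamma contamination stays below a budget.

```python
import numpy as np
from scipy.stats import norm

gamma_tof = norm(loc=3.0, scale=0.5)      # gamma travel times in ns (illustrative)
neutron_tof = norm(loc=30.0, scale=8.0)   # neutron travel times in ns (illustrative)

def contamination(lo, hi):
    """Expected gamma fraction inside a candidate neutron interval,
    assuming equal underlying gamma and neutron rates."""
    g = gamma_tof.cdf(hi) - gamma_tof.cdf(lo)
    n = neutron_tof.cdf(hi) - neutron_tof.cdf(lo)
    return g / (g + n)

hi_edge = 60.0   # fixed upper edge (ns) for this illustration
candidates = [(lo, hi_edge) for lo in np.arange(5.0, 25.0, 0.5)
              if contamination(lo, hi_edge) < 1e-3]
best = max(candidates, key=lambda iv: iv[1] - iv[0])   # widest acceptable interval
print("accepted neutron interval (ns):", best,
      "contamination:", contamination(*best))
```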
Helmerhorst, Katrien O W; Riksen-Walraven, J Marianne A; Fukkink, Ruben G; Tavecchio, Louis W C; Gevers Deynoot-Schaub, Mirjam J J M
2017-01-01
Previous studies underscore the need to improve caregiver-child interactions in early child care centers. In this study we used a randomized controlled trial to examine whether a 5-week video feedback training can improve six key interactive skills of caregivers in early child care centers: Sensitive responsiveness, respect for autonomy, structuring and limit setting, verbal communication, developmental stimulation, and fostering positive peer interactions. A total of 139 caregivers from 68 early child care groups for 0- to 4-year-old children in Dutch child care centers participated in this RCT, 69 in the intervention condition and 70 in the control condition. Caregiver interactive skills during everyday interactions with the children were rated from videotape using the Caregiver Interaction Profile (CIP) scales at pretest, posttest, and follow-up 3 months after the posttest. Results at posttest indicate a significant positive training effect on all six caregiver interactive skills. Effect sizes of the CIP training range between d = 0.35 and d = 0.79. Three months after the posttest, caregivers in the intervention group still scored significantly higher on sensitive responsiveness, respect for autonomy, verbal communication, and fostering positive peer interactions than caregivers in the control group with effect sizes ranging between d = 0.47 and d = 0.70. This study shows that the quality of caregiver-child interactions can be improved for all six important caregiver skills, with a relatively short training program. Possible ways to further improve the training and to implement it in practice and education are discussed.
NASA Astrophysics Data System (ADS)
Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning
2017-03-01
The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures by training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches and whether or not data augmentation is used, are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA data set were used. An accuracy of 78% was obtained in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big data sets and to guide the visual inspection of these images.
Tchanturia, Kate; Doris, Eli; Fleming, Caroline
2014-05-01
This study aims to evaluate a novel and brief skills-based therapy for inpatients with anorexia nervosa, which addressed 'cold' and 'hot' cognitions in group format. Adult inpatients with anorexia nervosa participated in the cognitive remediation and emotion skills training groups. Participants who attended all group sessions completed patient satisfaction and self-report questionnaires. Analysis of the data showed that social anhedonia (measured by the Revised Social Anhedonia Scale) decreased significantly between pre- and post-interventions, with small effect size (d=0.39). Motivation (perceived 'importance to change' and 'ability to change') was found to have increased with small effect sizes (d=0.23 and d=0.16), but these changes did not reach statistical significance. The cognitive remediation and emotion skills training group had positive feedback from both the patients and therapists delivering this structured intervention. Improved strategies are needed both in supporting inpatients to tolerate the group therapy setting and in helping them to develop the skills necessary for participation. Further larger-scale research in this area is needed to consolidate these findings. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.
Carr, Alan; Hartnett, Dan; Brosnan, Eileen; Sharry, John
2017-09-01
Parents Plus (PP) programs are systemic, solution-focused, group-based interventions. They are designed for delivery in clinical and community settings as treatment programs for families with child-focused problems, such as behavioral difficulties, disruptive behavior disorders, and emotional disorders in young people with and without developmental disabilities. PP programs have been developed for families of preschoolers, preadolescent children, and teenagers, as well as for separated or divorced families. Seventeen evaluation studies involving over 1,000 families have shown that PP programs have a significant impact on child behavior problems, goal attainment, and parental satisfaction and stress. The effect size of 0.57 (p < .001) from a meta-analysis of 10 controlled studies for child behavior problems compares favorably with those of meta-analyses of other well-established parent training programs with large evidence bases. In controlled studies, PP programs yielded significant (p < .001) effect sizes for goal attainment (d = 1.51), parental satisfaction (d = 0.78), and parental stress reduction (d = 0.54). PP programs may be facilitated by trained front-line mental health and educational professionals. © 2016 Family Process Institute.
State of the evidence on simulation-based training for laparoscopic surgery: a systematic review.
Zendejas, Benjamin; Brydges, Ryan; Hamstra, Stanley J; Cook, David A
2013-04-01
Summarize the outcomes and best practices of simulation training for laparoscopic surgery. Simulation-based training for laparoscopic surgery has become a mainstay of surgical training. Much new evidence has accrued since previous reviews were published. We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction, knowledge, skills in a test setting (time to perform the task, process such as performance ratings, and product such as knot strength), behaviors when caring for patients, and effects on patients. We used random effects to pool effect sizes. From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.
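As a worked illustration of random-effects pooling of effect sizes (the general approach used in reviews such as this one), here is a minimal DerSimonian-Laird sketch; the per-study effect sizes and variances are invented for illustration and are not taken from the review.

```python
import numpy as np

d = np.array([1.3, 0.9, 1.1, 1.4, 0.8])       # per-study effect sizes (made up)
v = np.array([0.10, 0.08, 0.12, 0.15, 0.09])  # per-study variances (made up)

w = 1.0 / v                                   # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)       # between-study variance estimate

w_star = 1.0 / (v + tau2)                     # random-effects weights
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled ES = {d_pooled:.2f} "
      f"(95% CI {d_pooled - 1.96 * se:.2f} to {d_pooled + 1.96 * se:.2f})")
```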
Cycle training induces muscle hypertrophy and strength gain: strategies and mechanisms.
Ozaki, Hayao; Loenneke, J P; Thiebaud, R S; Abe, T
2015-03-01
Cycle training is widely performed as a major part of any exercise program seeking to improve aerobic capacity and cardiovascular health. However, the effect of cycle training on muscle size and strength gain still requires further insight, even though it is known that professional cyclists display larger muscle size compared to controls. Therefore, the purpose of this review is to discuss the effects of cycle training on muscle size and strength of the lower extremity and the possible mechanisms for increasing muscle size with cycle training. It is plausible that cycle training requires a longer period to significantly increase muscle size compared to typical resistance training due to a much slower hypertrophy rate. Cycle training induces muscle hypertrophy similarly between young and older age groups, while strength gain seems to favor older adults, which suggests that the probability of improving muscle quality appears to be higher in older adults than in young adults. For young adults, higher-intensity intermittent cycling may be required to achieve strength gains. It also appears that muscle hypertrophy induced by cycle training results from positive changes in muscle protein net balance.
Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner
2013-01-01
Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
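For orientation, the base deterministic expression of Daetwyler et al. (2010) referred to above can be written as a one-line function; this sketch shows only the unmodified form with illustrative inputs, not the paper's weighted, marker-density-dependent equation.

```python
import math

def daetwyler_accuracy(n_p, h2, me):
    """Expected accuracy of genomic breeding values (base Daetwyler et al. 2010 form):
    n_p: training set size, h2: reliability/heritability of the phenotypes,
    me: number of independent chromosome segments."""
    return math.sqrt(n_p * h2 / (n_p * h2 + me))

print(daetwyler_accuracy(n_p=5698, h2=0.9, me=1000))  # illustrative values only
```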
NASA Astrophysics Data System (ADS)
Pereira, Carina; Dighe, Manjiri; Alessio, Adam M.
2018-02-01
Various Computer Aided Diagnosis (CAD) systems have been developed that characterize thyroid nodules using the features extracted from the B-mode ultrasound images and Shear Wave Elastography images (SWE). These features, however, are not perfect predictors of malignancy. In other domains, deep learning techniques such as Convolutional Neural Networks (CNNs) have outperformed conventional feature extraction based machine learning approaches. In general, fully trained CNNs require substantial volumes of data, motivating several efforts to use transfer learning with pre-trained CNNs. In this context, we sought to compare the performance of conventional feature extraction, fully trained CNNs, and transfer learning based, pre-trained CNNs for the detection of thyroid malignancy from ultrasound images. We compared these approaches applied to a data set of 964 B-mode and SWE images from 165 patients. The data were divided into 80% training/validation and 20% testing data. The highest accuracies achieved on the testing data for the conventional feature extraction, fully trained CNN, and pre-trained CNN were 0.80, 0.75, and 0.83 respectively. In this application, classification using a pre-trained network yielded the best performance, potentially due to the relatively limited sample size and sub-optimal architecture for the fully trained CNN.
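A minimal transfer-learning sketch in the spirit of the comparison above, not the authors' exact setup: an ImageNet-pretrained CNN is frozen and used as a feature extractor, with a small trainable head for the benign/malignant decision. The backbone choice, input size and head width are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # freeze the pre-trained convolutional layers

model = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),    # small trainable head
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```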
Limited Effects of Set Shifting Training in Healthy Older Adults
Grönholm-Nyman, Petra; Soveri, Anna; Rinne, Juha O.; Ek, Emilia; Nyholm, Alexandra; Stigsdotter Neely, Anna; Laine, Matti
2017-01-01
Our ability to flexibly shift between tasks or task sets declines in older age. As this decline may have adverse effects on everyday life of elderly people, it is of interest to study whether set shifting ability can be trained, and if training effects generalize to other cognitive tasks. Here, we report a randomized controlled trial where healthy older adults trained set shifting with three different set shifting tasks. The training group (n = 17) performed adaptive set shifting training for 5 weeks with three training sessions a week (45 min/session), while the active control group (n = 16) played three different computer games for the same period. Both groups underwent extensive pre- and post-testing and a 1-year follow-up. Compared to the controls, the training group showed significant improvements on the trained tasks. Evidence for near transfer in the training group was very limited, as it was seen only on overall accuracy on an untrained computerized set shifting task. No far transfer to other cognitive functions was observed. One year later, the training group was still better on the trained tasks but the single near transfer effect had vanished. The results suggest that computerized set shifting training in the elderly shows long-lasting effects on the trained tasks but very little benefit in terms of generalization. PMID:28386226
Principal component analysis for designed experiments.
Konishi, Tomokazu
2015-01-01
Principal component analysis is used to summarize matrix data, such as that found in transcriptome, proteome or metabolome studies and in medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight to all the samples in the matrix and treats them as independent. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. Together, these introduced options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models that specify each of the axes.
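The central idea, fixing the principal axes on a designed training set and reusing them for later samples, is captured by the following minimal scikit-learn sketch with random placeholder data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_train = rng.randn(24, 500)    # designed training samples x features
X_new = rng.randn(6, 500)       # unknown samples from a later experiment

pca = PCA(n_components=3).fit(X_train)   # axes fixed by the training data only
scores_train = pca.transform(X_train)
scores_new = pca.transform(X_new)        # new samples projected onto the shared axes
print(scores_train.shape, scores_new.shape)
```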
Ozaki, Hayao; Kitada, Tomoharu; Nakagata, Takashi; Naito, Hisashi
2017-05-01
Here, we aimed to compare the effect of a combination of body mass-based resistance exercise and moderate-intensity (55% peak oxygen uptake [V˙O2peak]) walking or high-intensity (75% V˙O2peak) walking on muscle size and V˙O2peak in untrained older women. A total of 12 untrained older women (mean age 60 ± 2 years) were randomly assigned to either a moderate-intensity aerobic training group (n = 6) or a high-intensity aerobic training group (n = 6). Both groups carried out body-mass based (lower body) resistance exercises (2 sets of 10 repetitions) on 3 days/week for 8 weeks. Between these exercises, the participants in the moderate-intensity aerobic training group walked at a previously determined speed equivalent to 55% V˙O2peak, whereas those in the high-intensity aerobic training group walked at a speed equivalent to 75% V˙O2peak. Muscle thickness of the anterior aspect of the thigh and maximal isokinetic knee extension strength significantly increased in both groups (P < 0.01); these relative changes were negatively correlated with the absolute value of muscle thickness of the anterior aspect of the thigh and the relative value of maximal knee strength to body mass at pre-intervention, respectively. A significant group × time interaction was noted for V˙O2peak (P < 0.05), which increased only in the high-intensity aerobic training group. Body mass-based resistance training significantly induced muscle hypertrophy in untrained older women. In particular, lower muscle thickness before intervention was associated with greater training-induced growth. Furthermore, V˙O2peak can be increased by combined circuit training involving low-load resistance exercise and walking, particularly when a relatively high intensity of walking is maintained. Geriatr Gerontol Int 2017; 17: 779-784. © 2016 Japan Geriatrics Society.
Autogenic training to reduce anxiety in nursing students: randomized controlled trial.
Kanji, Nasim; White, Adrian; Ernst, Edzard
2006-03-01
This paper reports a study to determine the effectiveness of autogenic training in reducing anxiety in nursing students. Nursing is stressful, and nursing students also have the additional pressures and uncertainties shared with all academic students. Autogenic training is a relaxation technique consisting of six mental exercises and is aimed at relieving tension, anger and stress. Meta-analysis has found large effect sizes for autogenic training's intervention comparisons, medium effect sizes against control groups, and no effects when compared with other psychological therapies. A controlled trial with 50 nursing students found that the number of certified days off sick was reduced by autogenic training compared with no treatment, and a second trial with only 18 students reported greater improvement in Trait Anxiety, but not State Anxiety, compared with untreated controls. A randomized controlled trial with three parallel arms was completed in 1998 with 93 nursing students aged 19-49 years. The setting was a university college in the United Kingdom. The treatment group received eight weekly sessions of autogenic training, the attention control group received eight weekly sessions of laughter therapy, and the time control group received no intervention. The outcome measures were the State-Trait Anxiety Inventory, the Maslach Burnout Inventory, blood pressure and pulse rate, completed at baseline, 2 months (end of treatment), and 5, 8, and 11 months from randomization. There was a statistically significantly greater reduction of State (P<0.001) and Trait (P<0.001) Anxiety in the autogenic training group than in both other groups immediately after treatment. There were no differences between the groups for the Maslach Burnout Inventory. The autogenic training group also showed statistically significantly greater reductions immediately after treatment in systolic (P<0.01) and diastolic (P<0.05) blood pressure, and pulse rate (P<0.002), than the other two groups. In conclusion, autogenic training has at least a short-term effect in alleviating stress in nursing students.
Training forward surgical teams for deployment: the US Army Trauma Training Center.
Valdiri, Linda A; Andrews-Arce, Virginia E; Seery, Jason M
2015-04-01
Since the late 1980s, the US Army has been deploying forward surgical teams to the most intense areas of conflict to care for personnel injured in combat. The forward surgical team is a 20-person medical team that is highly mobile, extremely agile, and has relatively little need of outside support to perform its surgical mission. In order to perform this mission, however, team training and trauma training are required. The large majority of these teams do not routinely train together to provide patient care, and that training currently takes place at the US Army Trauma Training Center (ATTC). The training staff of the ATTC is a specially selected 10-person team made up of active duty personnel from the Army Medical Department assigned to the University of Miami/Jackson Memorial Hospital Ryder Trauma Center in Miami, Florida. The ATTC team of instructors trains as many as 11 forward surgical teams in 2-week rotations per year so that the teams are ready to perform their mission in a deployed setting. Since the first forward surgical team was trained at the ATTC in January 2002, more than 112 forward surgical teams and other similar-sized Department of Defense forward resuscitative and surgical units have rotated through trauma training at the Ryder Trauma Center in preparation for deployment overseas. ©2015 American Association of Critical-Care Nurses.
SkData: data sets and algorithm evaluation protocols in Python
NASA Astrophysics Data System (ADS)
Bergstra, James; Pinto, Nicolas; Cox, David D.
2015-01-01
Machine learning benchmark data sets come in all shapes and sizes, whereas classification algorithms assume sanitized input, such as (x, y) pairs with vector-valued input x and integer class label y. Researchers and practitioners know all too well how tedious it can be to get from the URL of a new data set to a NumPy ndarray suitable for e.g. pandas or sklearn. The SkData library handles that work for a growing number of benchmark data sets (small and large) so that one-off in-house scripts for downloading and parsing data sets can be replaced with library code that is reliable, community-tested, and documented. The SkData library also introduces an open-ended formalization of training and testing protocols that facilitates direct comparison with published research. This paper describes the usage and architecture of the SkData library.
Team-training in healthcare: a narrative synthesis of the literature.
Weaver, Sallie J; Dy, Sydney M; Rosen, Michael A
2014-05-01
Patients are safer and receive higher quality care when providers work as a highly effective team. Investment in optimising healthcare teamwork has swelled in the last 10 years. Consequently, evidence regarding the effectiveness for these interventions has also grown rapidly. We provide an updated review concerning the current state of team-training science and practice in acute care settings. A PubMed search for review articles examining team-training interventions in acute care settings published between 2000 and 2012 was conducted. Following identification of relevant reviews with searches terminating in 2008 and 2010, PubMed and PSNet were searched for additional primary studies published in 2011 and 2012. Primary outcomes included patient outcomes and quality indices. Secondary outcomes included teamwork behaviours, knowledge and attitudes. Both simulation and classroom-based team-training interventions can improve teamwork processes (eg, communication, coordination and cooperation), and implementation has been associated with improvements in patient safety outcomes. Thirteen studies published between 2011 and 2012 reported statistically significant changes in teamwork behaviours, processes or emergent states and 10 reported significant improvement in clinical care processes or patient outcomes, including mortality and morbidity. Effects were reported across a range of clinical contexts. Larger effect sizes were reported for bundled team-training interventions that included tools and organisational changes to support sustainment and transfer of teamwork competencies into daily practice. Overall, moderate-to-high-quality evidence suggests team-training can positively impact healthcare team processes and patient outcomes. Additionally, toolkits are available to support intervention development and implementation. Evidence suggests bundled team-training interventions and implementation strategies that embed effective teamwork as a foundation for other improvement efforts may offer greatest impact on patient outcomes.
Learning relevant features of data with multi-scale tensor networks
NASA Astrophysics Data System (ADS)
Miles Stoudenmire, E.
2018-07-01
Inspired by coarse-graining approaches used in physics, we show how similar algorithms can be adapted for data. The resulting algorithms are based on layered tree tensor networks and scale linearly with both the dimension of the input and the training set size. Computing most of the layers with an unsupervised algorithm, then optimizing just the top layer for supervised classification of the MNIST and fashion MNIST data sets gives very good results. We also discuss mixing a prior guess for supervised weights together with an unsupervised representation of the data, yielding a smaller number of features nevertheless able to give good performance.
Cho, Youngsuk; Je, Sangmo; Yoon, Yoo Sang; Roh, Hye Rin; Chang, Chulho; Kang, Hyunggoo; Lim, Taeho
2016-07-04
Students largely provide feedback to one another when the instructor facilitates peer feedback rather than teaching directly in group training. The number of students in a group affects the learning of students in group training. We aimed to investigate whether a larger group size increases students' test scores on a post-training test with instructor-facilitated peer feedback after video-guided basic life support (BLS) refresher training. Students' one-rescuer adult BLS skills were assessed by a 2-min checklist-based test 1 year after the initial training. A cluster randomized controlled trial was conducted to evaluate the effect of the number of students in a group on BLS refresher training. Participants included 115 final-year medical students undergoing their emergency medicine clerkship. The median number of students was 8 in the large groups and 4 in the standard groups. The primary outcome was to examine group differences in post-training test scores after video-guided BLS training. Secondary outcomes included the feedback time, number of feedback topics, and results of end-of-training evaluation questionnaires. Scores on the post-training test increased over three consecutive tests with instructor-led peer feedback, but did not differ between the large and standard groups. The feedback time was longer and the number of feedback topics generated by students was higher in the standard groups compared to the large groups on the first and second tests. The end-of-training questionnaire revealed that the students in large groups preferred a smaller group size compared to their actual group size. In this BLS refresher training, instructor-led group feedback increased the test score after tutorial video-guided BLS learning, irrespective of the group size. A smaller group size allowed more participation in peer feedback.
Effects of Goal Setting on Performance and Job Satisfaction
ERIC Educational Resources Information Center
Ivancevich, John M.
1976-01-01
Studied the effect of goal-setting training on the performance and job satisfaction of sales personnel. One group was trained in participative goal setting; one group was trained in assigned goal setting; and one group received no training. Both trained groups showed temporary improvements in performance and job satisfaction. For availability see…
The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition
Zeman, Astrid; Obst, Oliver; Brooks, Kevin R.; Rich, Anina N.
2013-01-01
Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections. PMID:23457510
A deep learning model observer for use in alternative forced choice virtual clinical trials
NASA Astrophysics Data System (ADS)
Alnowami, M.; Mills, G.; Awis, M.; Elangovanr, P.; Patel, M.; Halling-Brown, M.; Young, K. C.; Dance, D. R.; Wells, K.
2018-03-01
Virtual clinical trials (VCTs) represent an alternative assessment paradigm that overcomes issues of dose, high cost and delay encountered in conventional clinical trials for breast cancer screening. However, fully utilizing the potential benefits of VCTs requires a machine-based observer that can rapidly and realistically process large numbers of experimental conditions. To address this, a Deep Learning Model Observer (DLMO) was developed and trained to identify lesion targets from normal tissue in small (200 x 200 pixel) image segments, as used in Alternative Forced Choice (AFC) studies. The proposed network consists of 5 convolutional layers with 2x2 kernels and ReLU (Rectified Linear Unit) activations, followed by max pooling with size equal to the size of the final feature maps, and three dense layers. The class output weights from the final fully connected dense layer are used to consider sets of n images in an n-AFC paradigm to determine the image most likely to contain a target. To examine the DLMO's performance on clinical data, a training set of 2814 normal and 2814 biopsy-confirmed malignant mass targets was used. This produced a sensitivity of 0.90 and a specificity of 0.92 when presented with a test data set of 800 previously unseen clinical images. To examine the DLMO's minimum detectable contrast, a second dataset of 630 simulated backgrounds and 630 images with simulated lesion and spherical targets (4 mm and 6 mm diameter) produced contrast thresholds equivalent to or better than human observer performance for spherical targets, and comparable (12% difference) for lesion targets.
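A hedged sketch of the described architecture (five 2x2-kernel convolutional layers with ReLU, pooling over the final feature maps, three dense layers, softmax) together with the n-AFC selection step; filter counts, dense-layer widths and the Keras framework are assumptions, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dlmo(input_shape=(200, 200, 1), n_classes=2):
    m = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (8, 16, 32, 64, 64):                 # five conv layers, 2x2 kernels
        m.add(layers.Conv2D(filters, kernel_size=2, activation="relu"))
    m.add(layers.GlobalMaxPooling2D())                  # pool over the final feature maps
    m.add(layers.Dense(64, activation="relu"))
    m.add(layers.Dense(32, activation="relu"))
    m.add(layers.Dense(n_classes, activation="softmax"))
    return m

model = build_dlmo()

def n_afc_choice(model, images):
    """Return the index of the image most likely to contain the target (n-AFC)."""
    p = model.predict(np.stack(images), verbose=0)[:, 1]   # target-class scores
    return int(np.argmax(p))
```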
Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo
2017-01-01
Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis, while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
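One detail worth illustrating is the patient-wise split mentioned above, i.e. keeping patches from the same patient out of both training and test sets; the following is a minimal sketch with random placeholder features, using scikit-learn's GroupShuffleSplit (an assumed stand-in, not necessarily the authors' procedure).

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(274, 20)              # one feature vector per field of view (placeholder)
y = rng.randint(0, 2, 274)          # cancer / no-cancer labels (placeholder)
patients = rng.randint(0, 7, 274)   # patient id for each patch

# hold out ~30% of the patients (groups), so no patient spans both sets
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patients))

clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
print("held-out accuracy:", clf.score(X[test_idx], y[test_idx]))
```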
NASA Astrophysics Data System (ADS)
Grudinin, Sergei; Kadukova, Maria; Eisenbarth, Andreas; Marillet, Simon; Cazals, Frédéric
2016-09-01
The 2015 D3R Grand Challenge provided an opportunity to test our new model for the binding free energy of small molecules, as well as to assess our protocol to predict binding poses for protein-ligand complexes. Our pose predictions were ranked 3-9 for the HSP90 dataset, depending on the assessment metric. For the MAP4K dataset the ranks are very dispersed and equal to 2-35, depending on the assessment metric, which does not provide any insight into the accuracy of the method. The main success of our pose prediction protocol was the re-scoring stage using the recently developed Convex-PL potential. We make a thorough analysis of our docking predictions made with AutoDock Vina and discuss the effect of the choice of rigid receptor templates, the number of flexible residues in the binding pocket, the binding pocket size, and the benefits of re-scoring. However, the main challenge was to predict experimentally determined binding affinities for two blind test sets. Our affinity prediction model consisted of two terms: a pairwise-additive enthalpy and a non-pairwise-additive entropy. We trained the free parameters of the model with a regularized regression using affinity and structural data from the PDBBind database. Our model performed very well on the training set but failed on the two test sets. We explain the drawbacks and pitfalls of our model, in particular in terms of the relative coverage of the test set by the training set and the dynamical properties missing from crystal structures, and discuss different routes to improve it.
Word game bingo: a behavioral treatment package for improving textual responding to sight words.
Kirby, K C; Holborn, S W; Bushby, H T
1981-01-01
The efficacy of word game bingo for the acquisition and retention of sight word reading was tested with six third-grade students identified as deficient in reading skills. The design was a modified multiple baseline in which treatment was implemented over 3 of 4 word sets and terminated on earlier sets when commencing treatment on later sets. Four sets of bingo cards were constructed on 7 × 9 cm paper divided into 25 equal-sized boxes. Sight words of each set were randomly placed into 24 of these boxes (the center box was marked "free"). Bingo winners were given tokens which were traded weekly for reinforcing activities. Noticeable improvements occurred for the word sets receiving the game treatment (sets A to C). Mean percentage points of improvement from baseline to treatment were approximately 30%. Terminal levels of correct responding exceeded 90%. Several variations of the game were suggested for future research, and word game bingo was advocated as an effective behavioral technique for teachers to train sight word reading. PMID:7298541
The double stigma of obesity and serious mental illnesses: promoting health and recovery.
Mizock, Lauren
2012-12-01
This article contrasts the traditional medical approach and size acceptance perspectives on obesity among people with serious mental illnesses. Higher incidences of obesity among populations with serious mental illnesses have been identified. In response, a recent initiative in mental health has urged providers to address the obesity rates among populations with mental illnesses by monitoring weight, prescribing weight loss medication, and recommending bariatric surgery. However, literature is emerging with regards to the double stigma experienced by individuals with obesity and a mental illness. Therefore, the traditional focus on weight loss can benefit from a size acceptance approach to focus on health promotion and avoid stigmatizing size. Citations of theoretical and behavioral health literature on the experiences of individuals with mental illnesses and obesity are presented. Recommendations for interventions, training, and future research related to obesity and mental illnesses are provided. Implications are suggested for a size acceptance approach to interventions for individuals in recovery from mental illnesses to promote health at every size within mental health and medical settings.
Study of CT image texture using deep learning techniques
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Fan, Jiahua; Chevalier, David
2018-03-01
For CT imaging, reduction of radiation dose while improving or maintaining image quality (IQ) is currently a very active research and development topic. Iterative Reconstruction (IR) approaches have been suggested to offer a better IQ-to-dose ratio than conventional Filtered Back Projection (FBP) reconstruction. However, it has been widely reported that CT image texture from IR often differs from that of FBP. Researchers have proposed different figures of merit to quantify the texture from different reconstruction methods, but the field still lacks a practical and robust method for texture description. This work applied deep learning methods to CT image texture analysis. Multiple dose scans of a 20 cm diameter cylindrical water phantom were performed on a Revolution CT scanner (GE Healthcare, Waukesha) and the images were reconstructed with FBP and four different IR reconstruction settings. The training images generated were randomly allotted (80:20) to a training and validation set. An independent test set of 256-512 images/class was collected with the same scan and reconstruction settings. Multiple deep learning (DL) networks with convolution, ReLU activation, max-pooling, fully-connected, global average pooling and softmax activation layers were investigated. The impact of different image patch sizes for training was investigated. Original pixel data as well as normalized image data were evaluated. DL models were reliably able to classify CT image texture with accuracy up to 99%. The results show that deep learning techniques can characterize CT image texture, and suggest that CT IR techniques may help lower the radiation dose compared to FBP.
Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L
2018-06-07
We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time consuming and operator dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.
Lee, David; Park, Sang-Hoon; Lee, Sang-Goog
2017-10-07
In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
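The processing chain described above can be approximated with standard tools: PCA on wavelet features, a GMM universal background model fitted by EM, a per-trial supervector built from the model's component means, and an SVM on the supervectors. The sketch below is a simplified stand-in (it uses posterior-weighted mean deviations rather than the full adaptation of the original method), and the dimensions, component counts and random data are assumptions.

```python
# Simplified sketch of the pipeline described above: dimensionality reduction of
# wavelet features with PCA, a GMM universal background model fitted by EM, a
# per-trial "supervector" built by stacking posterior-weighted component mean
# shifts, and an SVM on the supervectors.  Component counts, dimensions and the
# random data are assumptions; the original method's adaptation details are omitted.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_frames, n_feat = 60, 40, 32          # trials x time frames x wavelet features
X = rng.normal(size=(n_trials, n_frames, n_feat))
y = rng.integers(0, 2, size=n_trials)            # two motor-imagery classes

pca = PCA(n_components=8).fit(X.reshape(-1, n_feat))
Z = pca.transform(X.reshape(-1, n_feat)).reshape(n_trials, n_frames, -1)

ubm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
ubm.fit(Z.reshape(-1, Z.shape[-1]))              # EM on pooled frames = the UBM

def supervector(frames):
    """Stack posterior-weighted mean shifts of each UBM component for one trial."""
    post = ubm.predict_proba(frames)             # (n_frames, n_components)
    weights = post.sum(axis=0, keepdims=True).T + 1e-8
    means = post.T @ frames / weights            # per-component weighted means
    return (means - ubm.means_).ravel()          # deviation from the UBM means

S = np.array([supervector(trial) for trial in Z])
clf = SVC(kernel="linear").fit(S, y)
print("training accuracy:", clf.score(S, y))
```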
Issues in development, evaluation, and use of the NASA Preflight Adaptation Trainer (PAT)
NASA Technical Reports Server (NTRS)
Lane, Norman E.; Kennedy, Robert S.
1988-01-01
The Preflight Adaptation Trainer (PAT) is intended to reduce or alleviate space adaptation syndrome by providing opportunities for portions of that adaptation to occur under normal gravity conditions prior to space flight. Since the adaptation aspects of the PAT objectives involve modification not only of the behavior of the trainee, but also of the sensorimotor skills which underlie behavior generation, the training objectives of the PAT are defined in terms of four mechanisms: familiarization, demonstration, training and adaptation. These mechanisms serve as structural reference points for evaluation, drive the content and organization of the training procedures, and help to define the roles of the PAT instructors and operators. It was determined that three psychometric properties are most critical for PAT evaluation: reliability, sensitivity, and relevance. It is cause for concern that the number of measures available to examine PAT effects exceeds the number that can be properly studied with the available sample sizes; special attention will be required in selecting the candidate measure set. The issues in PAT use and application within a training system context are addressed by linking the three training-related mechanisms of familiarization, demonstration and training to the fourth mechanism, adaptation.
Burcal, Christopher J; Trier, Alejandra Y; Wikstrom, Erik A
2017-09-01
Both balance training and selected interventions meant to target sensory structures (STARS) have been shown to be effective at restoring deficits associated with chronic ankle instability (CAI). Clinicians often use multiple treatment modalities in patients with CAI. However, evidence for combined intervention effectiveness in CAI patients remains limited. To determine if augmenting a balance-training protocol with STARS (BTS) results in greater improvements than balance training (BT) alone in those with CAI. Randomized controlled trial. Research laboratory. 24 CAI participants (age 21.3 ± 2.0 y; height 169.8 ± 12.9 cm; mass 72.5 ± 22.2 kg) were randomized into 2 groups: BT and BTS. Participants completed a 4-week progression-based balance-training protocol consisting of three 20-min sessions per week. The experimental group also received a 5-min set of STARS treatments consisting of calf stretching, plantar massage, ankle joint mobilizations, and ankle joint traction before each balance-training session. Outcomes included self-assessed disability, Star Excursion Balance Test reach distance, and time-to-boundary calculated from static balance trials. All outcomes were assessed before, and 24 hours and 1 week after, protocol completion. Self-assessed disability was also captured 1 month after the intervention. No significant group differences were identified (P > .10). Both groups demonstrated improvements in all outcome categories after the interventions (P < .10), many of which were retained at the 1-week posttest (P < .10). Although 90% CIs include zero, effect sizes favor BTS. Similarly, only the BTS group exceeded the minimal detectable change for time-to-boundary outcomes. Although BTS was statistically no more effective than BT, the exceeded minimal detectable change scores and favorable effect sizes suggest that a 4-week progressive BTS program may be more effective at improving self-assessed disability and postural control in CAI patients than balance training in isolation.
Latimer-Cheung, Amy E; Arbour-Nicitopoulos, Kelly P; Brawley, Lawrence R; Gray, Casey; Justine Wilson, A; Prapavessis, Harry; Tomasone, Jennifer R; Wolfe, Dalton L; Martin Ginis, Kathleen A
2013-08-01
The majority of people with spinal cord injury (SCI) do not engage in sufficient leisure-time physical activity (LTPA) to attain fitness benefits; however, many have good intentions to be active. This paper describes two pilot interventions targeting people with SCI who are insufficiently active but intend to be active (i.e., "intenders"). Study 1 examined the effects of a single, telephone-based counseling session on self-regulatory efficacy, intentions, and action plans for LTPA among seven men and women with paraplegia or tetraplegia. Study 2 examined the effects of a home-based strength-training session, delivered by a peer and a fitness trainer, on strength-training task self-efficacy, intentions, action plans, and behavior. Participants were 11 men and women with paraplegia. The counseling session (Study 1) yielded medium- to large-sized increases in participants' confidence to set LTPA goals and intentions to be active. The home visit (Study 2) produced medium- to large-sized increases in task self-efficacy, barrier self-efficacy, intentions, action planning, and strength-training behavior from baseline to 4 weeks after the visit. Study 1 findings provide preliminary evidence that a single counseling session can impact key determinants of LTPA among intenders with SCI. Study 2 findings demonstrate the potential utility of a peer-mediated, home-based strength training session for positively influencing social cognitions and strength-training behavior. Together, these studies provide evidence and resources for intervention strategies to promote LTPA among intenders with SCI, a population for whom LTPA interventions and resources are scarcely available.
NASA Astrophysics Data System (ADS)
Buck, J. A.; Underhill, P. R.; Morelli, J.; Krause, T. W.
2017-02-01
Degradation of nuclear steam generator (SG) tubes and support structures can result in a loss of reactor efficiency. Regular in-service inspection, by conventional eddy current testing (ECT), permits detection of cracks, measurement of wall loss, and identification of other SG tube degradation modes. However, ECT is challenged by overlapping degradation modes, such as might occur for SG tube fretting accompanied by tube offset within a corroding ferromagnetic support structure. Pulsed eddy current (PEC) is an emerging technology examined here for inspection of Alloy-800 SG tubes and associated carbon steel drilled support structures. Support structure hole size was varied to simulate uniform corrosion, while the SG tube was offset relative to the hole axis. PEC measurements were performed using a single driver with an 8 pick-up coil configuration in the presence of flat-bottom rectangular frets as an overlapping degradation mode. A modified principal component analysis (MPCA) was performed on the time-voltage data in order to reduce data dimensionality. The MPCA scores were then used to train a support vector machine (SVM) that simultaneously targeted four independent parameters: support structure hole size, tube off-centering in two dimensions, and fret depth. The support vector machine was trained, tested, and validated on experimental data. Results were compared with a previously developed artificial neural network (ANN) trained on the same data. Estimates of tube position showed comparable results between the two machine learning tools. However, the ANN produced better estimates of hole inner diameter and fret depth. The better results from the ANN analysis were attributed to challenges associated with the SVM when non-constant variance is present in the data.
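The analysis chain above (dimensionality reduction of the time-voltage signals followed by an SVM estimating four continuous parameters) can be sketched with standard components. Ordinary PCA stands in for the paper's modified PCA, one support vector regressor per target replaces the single multi-target SVM, and the coil/time-sample counts and synthetic data are assumptions.

```python
# Sketch of the analysis chain above using standard tools: PCA scores computed
# from time-voltage PEC signals, then support vector regression trained to
# estimate four parameters (hole size, two offsets, fret depth).  Ordinary PCA
# stands in for the paper's modified PCA, one SVR per target replaces the
# paper's single multi-target SVM, and the data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_meas, n_samples = 200, 8 * 50          # 8 pick-up coils x 50 time samples (assumed)
signals = rng.normal(size=(n_meas, n_samples))
targets = rng.normal(size=(n_meas, 4))   # [hole ID, x offset, y offset, fret depth]

X_train, X_test, y_train, y_test = train_test_split(signals, targets, test_size=0.2,
                                                    random_state=0)
model = make_pipeline(
    PCA(n_components=10),                            # dimensionality reduction
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0))  # one SVR per target parameter
)
model.fit(X_train, y_train)
print("test R^2 (averaged over targets):", model.score(X_test, y_test))
```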
Seitz, Laurent B; Reyes, Alvaro; Tran, Tai T; Saez de Villarreal, Eduardo; Haff, G Gregory
2014-12-01
Although lower-body strength is correlated with sprint performance, whether increases in lower-body strength transfer positively to sprint performance remains unclear. This meta-analysis determined whether increases in lower-body strength (measured with the free-weight back squat exercise) transfer positively to sprint performance, and identified the effects of various subject characteristics and resistance-training variables on the magnitude of sprint improvement. A computerized search was conducted in ADONIS, ERIC, SPORTDiscus, EBSCOhost, Google Scholar, MEDLINE and PubMed databases, and references of original studies and reviews were searched for further relevant studies. The analysis comprised 510 subjects and 85 effect sizes (ESs), nested within 26 experimental and 11 control groups from 15 studies. There is a transfer between increases in lower-body strength and sprint performance as indicated by a very large significant correlation (r = -0.77; p = 0.0001) between squat strength ES and sprint ES. Additionally, the magnitude of sprint improvement is affected by the level of practice (p = 0.03) and body mass (r = 0.35; p = 0.011) of the subject, the frequency of resistance-training sessions per week (r = 0.50; p = 0.001) and the rest interval between sets of resistance-training exercises (r = -0.47; p ≤ 0.001). Conversely, the magnitude of sprint improvement is not affected by the athlete's age (p = 0.86) or height (p = 0.08), the resistance-training methods used throughout the training intervention (p = 0.06), the average load intensity [% of 1 repetition maximum (RM)] used during the resistance-training sessions (p = 0.34), the training program duration (p = 0.16), the number of exercises per session (p = 0.16), the number of sets per exercise (p = 0.06), or the number of repetitions per set (p = 0.48). Increases in lower-body strength transfer positively to sprint performance. The magnitude of sprint improvement is affected by numerous subject characteristics and resistance-training variables, but the large difference in the number of ESs available should be taken into consideration. Overall, the reported improvement in sprint performance (sprint ES = -0.87, mean sprint improvement = 3.11%) resulting from resistance training is of practical relevance for coaches and athletes in sport activities requiring high levels of speed.
Automated 3D Phenotype Analysis Using Data Mining
Plyusnin, Ilya; Evans, Alistair R.; Karme, Aleksis; Gionis, Aristides; Jernvall, Jukka
2008-01-01
The ability to analyze and classify three-dimensional (3D) biological morphology has lagged behind the analysis of other biological data types such as gene sequences. Here, we introduce the techniques of data mining to the study of 3D biological shapes to bring the analyses of phenomes closer to the efficiency of studying genomes. We compiled five training sets of highly variable morphologies of mammalian teeth from the MorphoBrowser database. Samples were labeled either by dietary class or by conventional dental types (e.g. carnassial, selenodont). We automatically extracted a multitude of topological attributes using Geographic Information Systems (GIS)-like procedures that were then used in several combinations of feature selection schemes and probabilistic classification models to build and optimize classifiers for predicting the labels of the training sets. In terms of classification accuracy, computational time and size of the feature sets used, non-repeated best-first search combined with 1-nearest neighbor classifier was the best approach. However, several other classification models combined with the same searching scheme proved practical. The current study represents a first step in the automatic analysis of 3D phenotypes, which will be increasingly valuable with the future increase in 3D morphology and phenomics databases. PMID:18320060
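The best-performing approach reported above, a best-first feature search wrapped around a 1-nearest-neighbor classifier, can be approximated with a greedy forward selector as sketched below; the selector, the number of retained features and the random stand-ins for the GIS-like topographic attributes are assumptions.

```python
# Sketch of the winning approach described above: greedy forward feature search
# wrapped around a 1-nearest-neighbor classifier.  scikit-learn's sequential
# selector approximates best-first search; the topographic attributes here are
# just random stand-ins for the GIS-like features extracted from tooth surfaces.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 30))                  # 120 teeth x 30 topographic attributes
y = rng.integers(0, 4, size=120)                # four dietary classes (assumed)

knn = KNeighborsClassifier(n_neighbors=1)
selector = SequentialFeatureSelector(knn, n_features_to_select=8,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)

acc = cross_val_score(knn, X_sel, y, cv=5).mean()
print("selected features:", np.flatnonzero(selector.get_support()))
print("cross-validated accuracy:", round(acc, 3))
```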
Automated human skull landmarking with 2D Gabor wavelets
NASA Astrophysics Data System (ADS)
de Jong, Markus A.; Gül, Atilla; de Gijt, Jan Pieter; Koudstaal, Maarten J.; Kayser, Manfred; Wolvius, Eppo B.; Böhringer, Stefan
2018-05-01
Landmarking of CT scans is an important step in the alignment of skulls that is key in surgery planning, pre-/post-surgery comparisons, and morphometric studies. We present a novel method for automatically locating anatomical landmarks on the surface of cone beam CT-based image models of human skulls using 2D Gabor wavelets and ensemble learning. The algorithm is validated via human inter- and intra-rater comparisons on a set of 39 scans and a skull superimposition experiment with established surgery planning software (Maxilim). Automatic landmarking achieves an accuracy of 1–2 mm, compared with a gold standard derived from human raters, for a subset of landmarks around the nose area, the eye sockets, and the lower jaw, which is competitive with or surpasses inter-rater variability. The well-performing landmark subsets allow for the automation of skull superimposition in clinical applications. Our approach delivers accurate results, has modest training requirements (training set size of 30–40 items) and is generic, so that landmark sets can be easily expanded or modified to accommodate shifting landmark interests, which are important requirements for the landmarking of larger cohorts.
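The kind of 2D Gabor feature such landmarking methods rely on, a "jet" of filter responses at a candidate landmark position, can be sketched directly in NumPy; the kernel parameters, orientations and wavelengths below are assumptions and the image is synthetic.

```python
# Sketch of the kind of 2D Gabor feature ("jet") such landmarking methods use:
# a bank of Gabor kernels at several orientations and wavelengths is applied to
# an image patch and the response magnitudes at a candidate landmark are
# collected as its descriptor.  Kernel parameters are assumptions.
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, wavelength=8.0, psi=0.0):
    """Real part of a 2D Gabor kernel."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

def gabor_jet(image, row, col, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
              wavelengths=(4.0, 8.0, 16.0), ksize=21):
    """Gabor responses of the patch centred on (row, col), one value per kernel."""
    half = ksize // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    jet = []
    for wl in wavelengths:
        for th in thetas:
            kernel = gabor_kernel(ksize, theta=th, wavelength=wl)
            jet.append(float(np.abs(np.sum(patch * kernel))))
    return np.array(jet)

# toy usage: descriptor of one candidate landmark on a synthetic 2D projection
image = np.random.default_rng(3).normal(size=(128, 128))
print(gabor_jet(image, 64, 64).shape)   # (len(wavelengths) * len(thetas),) = (12,)
```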
Functional polymorphisms associated with human muscle size and strength.
Thompson, Paul D; Moyna, Niall; Seip, Richard; Price, Thomas; Clarkson, Priscilla; Angelopoulos, Theodore; Gordon, Paul; Pescatello, Linda; Visich, Paul; Zoeller, Robert; Devaney, Joseph M; Gordish, Heather; Bilbie, Stephen; Hoffman, Eric P
2004-07-01
Skeletal muscle is critically important to human performance and health, but little is known of the genetic factors influencing muscle size, strength, and their response to exercise training. The Functional Single Nucleotide Polymorphisms (SNP) Associated with Muscle Size and Strength, or FAMuSS, Study is a multicenter, NIH-funded program to examine the influence of gene polymorphisms on skeletal muscle size and strength before and after resistance exercise training. One thousand men and women, aged 18-40 yr, will train their nondominant arm for 12 wk. Skeletal muscle size (magnetic resonance imaging) and isometric and dynamic strength will be measured before and after training. Individuals whose baseline values or response to training deviate by ≥ 1.5 SD will be defined as outliers and examined for genetic variants. Initially, candidate genes previously associated with muscle performance will be examined; ultimately, the study will attempt to identify additional genes associated with these traits. FAMuSS should help identify genetic factors associated with muscle performance and the response to exercise training. Such insight should contribute to our ability to predict the individual response to exercise training, but may also contribute to a better understanding of muscle physiology, to identifying individuals who are susceptible to muscle loss with environmental challenge, and to developing pharmacologic agents capable of preserving muscle size and function.
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sensitive enough to estimate weak influences; it supports the simultaneous analysis of multiple influences; and it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by a neural network model of leaky integrate-and-fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Lori; Goetchius, Elizabeth; Crowell, Brent; Hackney, Kyle; Wickwire, Jason; Ploutz-Snyder, Robert; Snyder, Scott
2012-01-01
Background: Known incompatibilities exist between resistance and aerobic training. Of particular importance are findings that concurrent resistance and aerobic training reduces the effectiveness of the resistance training and limits skeletal muscle adaptations (example: Dudley & Djamil, 1985). Numerous unloading studies have documented the effectiveness of resistance training alone for the maintenance of skeletal muscle size and strength. However, the practical applications of those studies are limited because long-duration crew members perform both aerobic and resistance exercise throughout missions/spaceflight. To date, such integrated training on the International Space Station (ISS) has not been fully effective in the maintenance of skeletal muscle function. Purpose: The purpose of this study was to evaluate the efficacy of high intensity concurrent resistance and aerobic training for the maintenance of cardiovascular fitness and skeletal muscle strength, power and endurance over 14 days of strict bed rest. Methods: Nine subjects (8 male and 1 female; 34.5 ± 8.2 years) underwent 14 days of bed rest with concurrent training. Resistance and aerobic training were integrated as shown in Table 1. Days that included 2 exercise sessions had a 4-8 hour rest between exercise bouts. The resistance training consisted of 3 sets of 12 repetitions of squat, heel raise, leg press and hamstring curl exercises. Aerobic exercise consisted of periodized interval training that included 30 sec, 2 min and 4 min intervals alternating by day with continuous aerobic exercise.
ERIC Educational Resources Information Center
Haverland, Edgar M.
The report describes a project designed to facilitate the transfer and utilization of training technology by developing a model for evaluating training approaches or innovations in relation to the requirements, resources, and constraints of specific training settings. The model consists of two parallel sets of open-ended questions--one set…
Snijders, T; Smeets, J S J; van Kranenburg, J; Kies, A K; van Loon, L J C; Verdijk, L B
2016-02-01
Muscle fibre hypertrophy is accompanied by an increase in myonuclear number, an increase in myonuclear domain size or both. It has been suggested that increases in myonuclear domain size precede myonuclear accretion and subsequent muscle fibre hypertrophy during prolonged exercise training. In this study, we assessed the changes in muscle fibre size, myonuclear and satellite cell content throughout 12 weeks of resistance-type exercise training in young men. Twenty-two young men (23 ± 1 year) were assigned to a progressive, 12-week resistance-type exercise training programme (3 sessions per week). Muscle biopsies from the vastus lateralis muscle were taken before and after 2, 4, 8 and 12 weeks of exercise training. Muscle fibre size, myonuclear content, myonuclear domain size and satellite cell content were assessed by immunohistochemistry. Type I and type II muscle fibre size increased gradually throughout the 12 weeks of training (type I: 18 ± 5%, type II: 41 ± 6%, P < 0.01). Myonuclear content increased significantly over time in both the type I (P < 0.01) and type II (P < 0.001) muscle fibres. No changes in type I and type II myonuclear domain size were observed at any time point throughout the intervention. Satellite cell content increased significantly over time in both type I and type II muscle fibres (P < 0.001). Increases in myonuclear domain size do not appear to drive myonuclear accretion and muscle fibre hypertrophy during prolonged resistance-type exercise training in vivo in humans. © 2015 Scandinavian Physiological Society. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Hsieh, Bieng-Zih; Lewis, Charles; Lin, Zsay-Shing
2005-04-01
The purpose of this study is to construct a fuzzy lithology system from well logs to identify the formation lithology of a groundwater aquifer system, in order to better apply conventional well logging interpretation in hydro-geologic studies, because the well log responses of aquifers are sometimes different from those of conventional oil and gas reservoirs. The input variables for this system are the gamma-ray log reading, the separation between the spherically focused resistivity and the deep very-enhanced resistivity curves, and the borehole compensated sonic log reading. The output variable is groundwater formation lithology. All linguistic variables are based on five linguistic terms with a trapezoidal membership function. In this study, 50 data sets are divided into 40 training sets and 10 testing sets, used respectively to construct the fuzzy lithology system and to validate its predictive ability. The rule base containing 12 fuzzy lithology rules is developed from the training data sets, and the rule strengths are weighted. A Mamdani inference system and the bisector-of-area defuzzification method are used for fuzzy inference and defuzzification. The success rates of training and prediction were both 90%, with correlations for the training and testing sets of 0.925 and 0.928, respectively. Well logs and core data from a clastic aquifer (depths 100-198 m) in the Shui-Lin area of west-central Taiwan are used for testing the system's construction. Comparison of results from core analysis, well logging and the fuzzy lithology system indicates that even though the well logging method can easily define a permeable sand formation, distinguishing between silts and sands and determining grain size variation in sands is more subjective. These shortcomings can be improved by a fuzzy lithology system that is able to yield more objective decisions than some conventional methods of log interpretation.
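The inference machinery named above, trapezoidal membership functions, Mamdani (min-max) rule firing and bisector-of-area defuzzification, can be sketched in a few lines of NumPy. The single gamma-ray input, the sand/shale output terms and all term boundaries below are illustrative assumptions, not the paper's 12-rule base.

```python
# Minimal sketch of the inference building blocks named above: trapezoidal
# membership functions, Mamdani (min) rule firing with max aggregation, and
# bisector-of-area defuzzification.  The single input (gamma ray), the output
# "sand/shale" terms and all term boundaries are illustrative assumptions, not
# the paper's 12-rule base.
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function evaluated at points x."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (d - x) / (d - c + 1e-12)), 0.0, 1.0)

lith = np.linspace(0.0, 1.0, 501)                 # output universe: 0 = sand, 1 = shale
sand_mf = trapmf(lith, -0.1, 0.0, 0.2, 0.4)
shale_mf = trapmf(lith, 0.6, 0.8, 1.0, 1.1)

def infer(gamma_ray):
    # rule 1: IF GR is low  THEN lithology is sand
    # rule 2: IF GR is high THEN lithology is shale
    gr_low = trapmf(np.array([gamma_ray]), 0, 0, 40, 70)[0]
    gr_high = trapmf(np.array([gamma_ray]), 60, 90, 200, 200)[0]
    aggregated = np.maximum(np.minimum(gr_low, sand_mf),      # Mamdani: clip + max
                            np.minimum(gr_high, shale_mf))
    # bisector of area: the point splitting the aggregated area into equal halves
    cum = np.cumsum(aggregated)
    return lith[np.searchsorted(cum, cum[-1] / 2.0)]

print("GR = 35 API ->", round(infer(35.0), 2))    # near 0: sand-like
print("GR = 120 API ->", round(infer(120.0), 2))  # near 1: shale-like
```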
Dellaserra, Carla L; Gao, Yong; Ransdell, Lynda
2014-02-01
Integrated technology (IT), which includes accelerometers, global positioning systems (GPSs), and heart rate monitors, has been used frequently in public health. More recently, IT data have been used in sports settings to assess training and performance demands. However, the impact of IT in sports settings is yet to be evaluated, particularly in field-based team sports. This narrative-qualitative review provides an overview of the emerging impact of IT in sports settings. Twenty electronic databases (e.g., Medline, SPORTdiscus, and ScienceDirect), print publications (e.g., Signal Processing Magazine and Catapult Innovations news releases), and internet resources were searched using different combinations of the keywords accelerometers, heart rate monitors, GPS, sport training, and field-based sports for relevant articles published from 1990 to the present. A total of 114 publications were identified, and the 39 that examined a field-based team sport using a form of IT were analyzed. The uses of IT can be divided into 4 categories: (a) quantifying movement patterns (n = 22), (b) assessing the differences between demands of training and competition (n = 12), (c) measuring physiological and metabolic responses (n = 16), and (d) determining a valid definition for velocity and a sprint effort (n = 8). Most studies used elite adult male athletes as participants and analyzed the sports of Australian Rules football, field hockey, cricket, and soccer, with sample sizes between 5 and 20 participants. The limitations of IT in a sports setting include scalability issues, cost, and the inability to receive signals within indoor environments. Integrated technology can contribute to significant improvements in the preparation, training, and recovery aspects of field-based team sports. Future research should focus on using IT with female athlete populations and developing resources to use IT indoors to further enhance individual and team performances.
Performing at the Top of One's Musical Game
Hatfield, Johannes L.
2016-01-01
The purpose of the present mixed method study was to investigate personal benefits, perceptions, and the effect of a 15-week sport psychological skills training program adapted for musicians. The program was individually tailored for six music performance students with the objective of facilitating the participants' instrumental practice and performance. The participants learnt techniques such as goal setting, attentional focus, arousal regulation, imagery, and acceptance training/self-talk. Zimmerman's (1989) cyclical model of self-regulated learning was applied as a theoretical frame for the intervention. The present study's mixed-method approach (i.e., quan + QUAL) included effect size, semi-structured interviews, a research log, and practice diaries of the participants (Creswell, 2009). Thematic analysis revealed that participants had little or no experience concerning planning and goal setting in regard to instrumental practice. Concentration, volition, and physical pain were additional issues that the participants struggled with at the time of pre-intervention. The study found that psychological skills training (with special emphasis on planning and goal setting) facilitated cyclical self-regulated learning patterns in the participants. In essence, the intervention was found to facilitate the participants' concentration, self-observation, self-efficacy, and coping in the face of failure. The use of practice journals facilitated the participants' self-observation, self-evaluation, and awareness of instrumental practice. Finally, the psychological skills intervention reduced participants' worry and anxiety in performance situations. An 8-month follow-up interview revealed that the participants were still actively applying psychological skills. PMID:27679586
Classification of pulmonary emphysema from chest CT scans using integral geometry descriptors
NASA Astrophysics Data System (ADS)
van Rikxoort, E. M.; Goldin, J. G.; Galperin-Aizenberg, M.; Brown, M. S.
2011-03-01
To gain insight into the underlying pathways of emphysema and monitor the effect of treatment, methods to quantify and phenotype the different types of emphysema from chest CT scans are of crucial importance. Current standard measures rely on density thresholds for individual voxels, which is influenced by inspiration level and does not take into account the spatial relationship between voxels. Measures based on texture analysis do take the interrelation between voxels into account and therefore might be useful for distinguishing different types of emphysema. In this study, we propose to use Minkowski functionals combined with rotation invariant Gaussian features to distinguish between healthy and emphysematous tissue and classify three different types of emphysema. Minkowski functionals characterize binary images in terms of geometry and topology. In 3D, four Minkowski functionals are defined. By varying the threshold and size of neighborhood around a voxel, a set of Minkowski functionals can be defined for each voxel. Ten chest CT scans with 1810 annotated regions were used to train the method. A set of 108 features was calculated for each training sample from which 10 features were selected to be most informative. A linear discriminant classifier was trained to classify each voxel in the lungs into a subtype of emphysema or normal lung. The method was applied to an independent test set of 30 chest CT scans with varying amounts and types of emphysema with 4347 annotated regions of interest. The method is shown to perform well, with an overall accuracy of 95%.
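The per-voxel feature idea can be sketched as follows: threshold a local 3D neighbourhood at several attenuation values and compute simple Minkowski-style descriptors on the binary patch, then classify the voxel with linear discriminant analysis. Only two of the four 3D functionals (volume and surface area) are computed here, the rotation-invariant Gaussian features are omitted, and the thresholds, patch size, labels and data are assumptions.

```python
# Sketch of the per-voxel feature idea above: threshold a local 3D neighbourhood
# at several HU values and compute simple Minkowski-style descriptors on the
# binary patch (here only volume and surface area; mean breadth, Euler number
# and the Gaussian features are omitted), then classify the voxel with linear
# discriminant analysis.  Thresholds, patch size and labels are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def binary_descriptors(patch_bool):
    """Volume (voxel count) and exposed-face surface area of a binary 3D patch."""
    volume = int(patch_bool.sum())
    padded = np.pad(patch_bool, 1)
    surface = 0
    for axis in range(3):
        diff = np.diff(padded.astype(np.int8), axis=axis)
        surface += int(np.abs(diff).sum())          # faces between 0 and 1 voxels
    return volume, surface

def voxel_features(volume_hu, centre, radius=4, thresholds=(-950, -910, -870)):
    z, y, x = centre
    patch = volume_hu[z-radius:z+radius+1, y-radius:y+radius+1, x-radius:x+radius+1]
    feats = []
    for t in thresholds:
        feats.extend(binary_descriptors(patch < t))  # emphysema = low attenuation
    return feats

rng = np.random.default_rng(4)
ct = rng.normal(-850, 80, size=(40, 40, 40))          # toy HU volume
centres = [(rng.integers(5, 35), rng.integers(5, 35), rng.integers(5, 35))
           for _ in range(200)]
X = np.array([voxel_features(ct, c) for c in centres])
y = rng.integers(0, 4, size=len(centres))             # normal lung + 3 emphysema subtypes
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```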
A selection of giant radio sources from NVSS
Proctor, D. D.
2016-06-01
Results of the application of pattern-recognition techniques to the problem of identifying giant radio sources (GRSs) from the data in the NVSS catalog are presented, and issues affecting the process are explored. Decision-tree pattern-recognition software was applied to training-set source pairs developed from known NVSS large-angular-size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRSs for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20 arcmin and a minimum component area of 1.87 square arcmin at the 1.4 mJy level. The importance of comparing the resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 ± 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher-ranked sources were visually screened, and a table of over 1600 candidates, including morphological annotation, is presented. These systems include doubles and triples, wide-angle tail and narrow-angle tail, S- or Z-shaped systems, and core-jets and resolved cores. In conclusion, while some resolved-lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.
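The reported "probability of correctly ranking a randomly selected (GRS, non-GRS) pair" is the quantity estimated by the area under the ROC curve, so the evaluation can be sketched with a decision-tree classifier scored by ROC AUC on a class-imbalanced set of source-pair features; the feature names, counts and synthetic catalogue below are assumptions.

```python
# Sketch of the evaluation quantity above: the probability that a classifier
# ranks a random (GRS, non-GRS) pair correctly is exactly what the ROC AUC
# estimates.  A decision tree is trained on a heavily imbalanced synthetic
# catalogue of source-pair features (separation, component areas, fluxes, ...);
# feature names and counts are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_pairs, n_giants = 5000, 50                  # imbalanced class ratio, as in the text
X = rng.normal(size=(n_pairs, 6))             # e.g. separation, areas, fluxes, ratios
y = np.zeros(n_pairs, dtype=int)
giants = rng.choice(n_pairs, size=n_giants, replace=False)
y[giants] = 1
X[giants] += 1.5                              # make the rare class partly separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3,
                                          random_state=0)
tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
tree.fit(X_tr, y_tr)
scores = tree.predict_proba(X_te)[:, 1]       # ranking score for each candidate pair
print("P(correctly ranking a random GRS/non-GRS pair) ~ AUC =",
      round(roc_auc_score(y_te, scores), 3))
```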
Training set extension for SVM ensemble in P300-speller with familiar face paradigm.
Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou
2018-03-27
P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a collected small training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.
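The extension idea can be sketched as follows: corresponding epochs from two stimulation sequences are superposed (averaged) to create additional, less noisy training examples, which are then pooled with the originals before training a small SVM ensemble. The epoch shapes, pairing rule and synthetic data are assumptions.

```python
# Sketch of the training-set extension idea described above: corresponding
# epochs from two stimulation sequences are superposed (averaged) to create
# additional training examples before training an SVM ensemble.  Epoch shapes,
# the pairing rule and the synthetic data are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_seq, n_epochs, n_feat = 2, 120, 50
# epochs[s, i] = feature vector of epoch i in sequence s; labels shared across sequences
epochs = rng.normal(size=(n_seq, n_epochs, n_feat))
labels = rng.integers(0, 2, size=n_epochs)           # 1 = target (P300), 0 = non-target

# original small training set: all epochs from both sequences
X_orig = epochs.reshape(-1, n_feat)
y_orig = np.tile(labels, n_seq)

# extension: average each pair of corresponding epochs from the two sequences
X_ext = epochs.mean(axis=0)                          # (n_epochs, n_feat)
y_ext = labels

X_train = np.vstack([X_orig, X_ext])
y_train = np.concatenate([y_orig, y_ext])

# small SVM ensemble trained on bootstrap resamples of the extended set
ensemble = []
for k in range(5):
    idx = rng.choice(len(X_train), size=len(X_train), replace=True)
    ensemble.append(SVC(kernel="linear").fit(X_train[idx], y_train[idx]))
votes = np.mean([clf.decision_function(X_orig) for clf in ensemble], axis=0)
print("ensemble decision scores for first 5 original epochs:", np.round(votes[:5], 2))
```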
Firm Size, Ownership, Training Duration and Training Evaluation Practices
ERIC Educational Resources Information Center
Asadullah, Muhammad Ali; Peretti, Jean Marie; Ali, Arain Ghulam; Bourgain, Marina
2015-01-01
Purpose: The purpose of this paper was to test the mediating role of training duration in relationship between firm characteristics and training evaluation practices. In this paper, the authors also investigated if this mediating effect differs with respect to the size of the firm. Design/methodology/approach: The authors collected data from 260…
Verstynen, Timothy; Phillips, Jeff; Braun, Emily; Workman, Brett; Schunn, Christian; Schneider, Walter
2012-01-01
Many everyday skills are learned by binding otherwise independent actions into a unified sequence of responses across days or weeks of practice. Here we looked at how the dynamics of action planning and response binding change across such long timescales. Subjects (N = 23) were trained on a bimanual version of the serial reaction time task (32-item sequence) for two weeks (10 days total). Response times and accuracy both showed improvement with time, but appeared to be learned at different rates. Changes in response speed across training were associated with dynamic changes in response time variability, with faster learners expanding their variability during the early training days and then contracting response variability late in training. Using a novel measure of response chunking, we found that individual responses became temporally correlated across trials and asymptoted to set sizes of approximately 7 bound responses at the end of the first week of training. Finally, we used a state-space model of the response planning process to look at how predictive (i.e., response anticipation) and error-corrective (i.e., post-error slowing) processes correlated with learning rates for speed, accuracy and chunking. This analysis yielded non-monotonic association patterns between the state-space model parameters and learning rates, suggesting that different parts of the response planning process are relevant at different stages of long-term learning. These findings highlight the dynamic modulation of response speed, variability, accuracy and chunking as multiple movements become bound together into a larger set of responses during sequence learning. PMID:23056630
Edmunds, Sarah; Stephenson, Duncan; Clow, Angela
2013-01-01
Workplaces have potential as a setting for physical activity promotion but evidence of the effectiveness of intervention programmes in small and medium sized enterprises is limited. This paper reports the impact of an intervention which trained existing employees to promote physical activity to their colleagues. Eighty-nine previously low-active employees from 17 small and medium sized organisations participated. A mixed methods evaluation design was used. Quantitative data were collected at baseline and 6 months later using an online questionnaire. Qualitative data from a series of 6 focus groups were analysed. Repeated measures t-tests showed significant increases over time in physical activity, general health rating, satisfaction with life and positive mood states. There were significant decreases in body mass index (BMI), perceived stress, negative mood states and presenteeism. There was no change in absenteeism. Analysis of focus group data provided further insight into the impact of the intervention. Five major themes emerged: awareness of physical activity; sustaining physical activity behaviour change; improved health and well-being; enhanced social networks; and embedding physical activity in the workplace culture. This study shows it is feasible and effective to train employees in small and medium sized enterprises to support their colleagues in physical activity behaviour change.
Barcellona, Massimo G; Morrissey, Matthew C
2016-04-01
The commonly used open kinetic chain knee extensor (OKCKE) exercise loads the sagittal restraints to knee anterior tibial translation. To investigate the effect of different loads of OKCKE resistance training on anterior knee laxity (AKL) in the uninjured knee. Non-clinical trial. Randomization into one of three supervised training groups occurred with training 3 times per week for 12 weeks. Subjects in the LOW and HIGH groups performed OKCKE resistance training at loads of 2 sets of 20 repetition maximum (RM) and 20 sets of 2RM, respectively. Subjects in the isokinetic training group (ISOK) performed isokinetic OKCKE resistance training using 2 sets of 20 maximal efforts. AKL was measured using the KT2000 arthrometer with concurrent measurement of lateral hamstrings muscle activity at baseline, 6 weeks and 12 weeks. Twenty-six subjects participated (LOW n = 9, HIGH n = 10, ISOK n = 7). The main finding from this study is that a 12-week OKCKE resistance training programme at loads of 20 sets of 2RM leads to an increase in manual maximal AKL. OKCKE resistance training at high loads (20 sets of 2RM) increases AKL, while low load OKCKE resistance training (2 sets of 20RM) and isokinetic OKCKE resistance training at 2 sets of 20RM do not. Copyright © 2015 Elsevier Ltd. All rights reserved.
Portfolio of automated trading systems: complexity and learning set size issues.
Raudys, Sarunas
2013-03-01
In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
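The portfolio architecture described above can be sketched in three steps: cluster the N profit/loss series into correlated blocks, form a 1/N expert inside each block, and fuse the experts with a regularized mean-variance rule. The block count, the shrinkage strength and the synthetic P/L series below are assumptions.

```python
# Sketch of the portfolio architecture described above: the N trading systems'
# profit/loss series are clustered into correlated blocks, each block becomes a
# 1/N "expert agent", and the experts are fused with a regularized mean-variance
# rule.  Block count, shrinkage strength and the synthetic P/L data are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
N, L = 40, 250                                     # trading systems x daily P/L history
pnl = rng.normal(0.0005, 0.01, size=(L, N))
pnl[:, 20:] += 0.01 * rng.normal(size=(L, 1))      # give the last 20 systems a shared component

# 1) cluster systems by correlation distance into a small number of blocks
corr = np.corrcoef(pnl, rowvar=False)
dist = np.clip(1.0 - corr, 0.0, None)
link = linkage(dist[np.triu_indices(N, k=1)], method="average")
blocks = fcluster(link, t=4, criterion="maxclust")  # 4 blocks (assumed)

# 2) each block is a 1/N expert: its return series is the block-average P/L
experts = np.column_stack([pnl[:, blocks == b].mean(axis=1)
                           for b in np.unique(blocks)])

# 3) regularized mean-variance fusion of the expert agents
mu = experts.mean(axis=0)
cov = np.cov(experts, rowvar=False)
lam = 0.1 * np.trace(cov) / cov.shape[0]            # shrinkage toward identity (assumed)
w = np.linalg.solve(cov + lam * np.eye(len(mu)), mu)
w /= np.abs(w).sum()                                # normalize gross exposure
print("fusion weights per expert block:", np.round(w, 3))
```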
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model comprises quantitative structure-activity relationships (QSARs) based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizibility, atomic radical superdelocalizibility, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. The model, modified after the test-set evaluation to include this QSAR, correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
The Effects of Aquatic Plyometric Training on Repeated Jumps, Drop Jumps and Muscle Damage.
Jurado-Lavanant, A; Alvero-Cruz, J R; Pareja-Blanco, F; Melero-Romero, C; Rodríguez-Rosell, D; Fernandez-Garcia, J C
2015-09-22
The purpose of this study was to compare the effects of land-based vs. aquatic-based plyometric training programs on drop jump and repeated jump performance and muscle damage. Sixty-five male students were randomly assigned to one of 3 groups: aquatic plyometric training group (APT), plyometric training group (PT) and control group (CG). Both experimental groups trained twice a week for 10 weeks, performing the same number of sets and total jumps. The following variables were measured prior to, halfway through and after the training programs: creatine kinase (CK) concentration, maximal height during a drop jump from heights of 30 cm (DJ30) and 50 cm (DJ50), and mean height during a repeated vertical jump test (RJ). The training program resulted in a significant increase (P<0.01-0.001) in RJ, DJ30, and DJ50 for PT, whereas neither APT nor CG reached any significant improvement. APT showed likely and possible improvements in DJ30 and DJ50, respectively. A greater intra-group effect size in CK was found for PT compared with APT. In conclusion, although APT seems to be a safe alternative method for reducing the stress produced on the musculoskeletal system by plyometric training, PT produced greater gains in reactive jump performance than APT. © Georg Thieme Verlag KG Stuttgart · New York.
McKenna, James E.
2005-01-01
Diversity and fish productivity are important measures of the health and status of aquatic systems. Being able to predict the values of these indices as a function of environmental variables would be valuable to management. Diversity and productivity have been related to environmental conditions by multiple linear regression and discriminant analysis, but such methods have several shortcomings. In an effort to predict fish species diversity and estimate salmonid production for streams in the eastern basin of Lake Ontario, I constructed neural networks and trained them on a data set containing abiotic information and either fish diversity or juvenile salmonid abundance. Twenty percent of the original data were retained as a test data set and used in the training. The ability to extend these neural networks to conditions throughout the streams was tested with data not involved in the network training. The resulting neural networks were able to predict the number of salmonids with more than 84% accuracy and diversity with more than 73% accuracy, which was far superior to the performance of multiple regression. The networks also identified the environmental variables with the greatest predictive power, namely, those describing water movement, stream size, and water chemistry. Thirteen input variables were used to predict diversity and 17 to predict salmonid abundance.
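The modelling setup described above can be sketched with a small feedforward network trained on abiotic stream descriptors, with 20% of the sites held out and a multiple linear regression fitted for comparison; the predictor names and synthetic data are placeholders.

```python
# Sketch of the described setup: a small feedforward network trained on abiotic
# stream descriptors to predict fish species diversity, with 20% of the data
# held out, and a multiple linear regression fitted for comparison.  The
# predictor names and the synthetic data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n_sites = 150
X = rng.normal(size=(n_sites, 6))        # e.g. flow, gradient, width, depth, pH, alkalinity
diversity = 2 + X[:, 0] - 0.5 * X[:, 2] ** 2 + 0.3 * rng.normal(size=n_sites)

X_tr, X_te, y_tr, y_te = train_test_split(X, diversity, test_size=0.2, random_state=0)

nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
lm = LinearRegression().fit(X_tr, y_tr)

print("neural network test R^2:     ", round(nn.score(X_te, y_te), 3))
print("multiple regression test R^2:", round(lm.score(X_te, y_te), 3))
```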
Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut
2009-01-01
Background: Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx™ Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Methods: Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. Results: The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. Conclusions: The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be “at risk” using the clinical factors. PMID:20144324
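A multivariate risk model of this kind can be sketched with a regression on a handful of fasting biomarkers, reporting the area under the ROC curve on both the training and the sequestered validation set. Logistic regression below is only a stand-in for the proprietary PreDx DRS model, and the seven biomarker columns are synthetic.

```python
# Sketch of how a multivariate diabetes-risk model of this kind can be trained
# and validated: a regression model on a handful of fasting biomarkers, with the
# area under the ROC curve compared between the training and sequestered
# validation sets.  Logistic regression is a stand-in for the proprietary PreDx
# DRS model, and the seven biomarker columns here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_subjects, n_biomarkers = 799, 7            # ~202 converters + 597 nonconverters
X = rng.normal(size=(n_subjects, n_biomarkers))
y = np.zeros(n_subjects, dtype=int)
y[:202] = 1                                  # converters
X[y == 1] += 0.6                             # give converters a shifted biomarker profile

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, stratify=y,
                                          random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("training AUC:  ", round(roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1]), 3))
print("validation AUC:", round(roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]), 3))
```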
Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera.
Spoliansky, Roii; Edan, Yael; Parmet, Yisrael; Halachmi, Ilan
2016-09-01
Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
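Steps (1) through (5) above, followed by the regression stage, can be compressed into a short sketch; scipy.ndimage operations stand in for the paper's image-processing code, the synthetic depth frames and placeholder topographic features are assumptions, and a plain linear regression replaces the paper's fitted model.

```python
# Compressed sketch of steps (1)-(5) above plus the regression stage, using
# scipy.ndimage in place of the paper's exact image-processing code: denoise the
# depth frame, keep the largest connected object (the cow), rotate it to align
# with the x-axis, resize to 150x200 and normalize, then regress BCS from simple
# topographic features.  The synthetic depth frames, the feature choice and the
# linear model are assumptions.
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LinearRegression

def preprocess(depth_frame):
    # (1) restoration: median filter removes speckle noise
    clean = ndimage.median_filter(depth_frame, size=3)
    # (2) object recognition/separation: largest connected foreground region
    mask = clean > clean.mean()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    cow = labels == (np.argmax(sizes) + 1)
    # (4) rotation: align the object's principal axis with the x-axis
    ys, xs = np.nonzero(cow)
    angle = 0.5 * np.degrees(np.arctan2(2 * np.cov(xs, ys)[0, 1],
                                        np.var(xs) - np.var(ys)))
    rotated = ndimage.rotate(np.where(cow, clean, 0.0), angle, reshape=False)
    # (5) cropping and normalization to a fixed 150 x 200 frame in [0, 1]
    resized = ndimage.zoom(rotated, (150 / rotated.shape[0], 200 / rotated.shape[1]))
    return (resized - resized.min()) / (np.ptp(resized) + 1e-9)

def features(img):
    # placeholder topographic features (depth statistics of the back end)
    return [img.mean(), img.std(), np.percentile(img, 90), (img > 0.5).mean()]

rng = np.random.default_rng(9)
frames = [rng.normal(1.5, 0.1, size=(240, 320)) for _ in range(30)]
X = np.array([features(preprocess(f)) for f in frames])
bcs = rng.uniform(2.0, 4.0, size=len(frames))      # manual BCS references (synthetic)
model = LinearRegression().fit(X, bcs)
print("mean absolute error on training data:",
      round(np.abs(model.predict(X) - bcs).mean(), 3))
```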
Accelerated Training for Large Feedforward Neural Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary but nevertheless important enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
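The core primitive used above, computing the product of the Hessian with an arbitrary vector without ever forming the Hessian, is what the RBackprop algorithm provides. In modern automatic differentiation the same product can be obtained with a double backward pass, as sketched below on a tiny feedforward network; this illustrates only the primitive, not the paper's scaled variable metric update.

```python
# Sketch of the core primitive used above: computing the Hessian-vector product
# H·v of a network's loss without forming the Hessian, which is the quantity the
# RBackprop procedure provides.  Here it is obtained with the standard double-
# backward trick on a tiny feedforward network.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
x, y = torch.randn(16, 4), torch.randn(16, 1)
loss = nn.functional.mse_loss(net(x), y)

params = list(net.parameters())
grads = torch.autograd.grad(loss, params, create_graph=True)   # first backward pass

# arbitrary direction v with the same shapes as the parameters
v = [torch.randn_like(p) for p in params]

# second backward pass: d/dtheta (grad(L) . v) = H v
grad_dot_v = sum((g * vi).sum() for g, vi in zip(grads, v))
hvp = torch.autograd.grad(grad_dot_v, params)

print("||Hv|| per parameter tensor:", [round(h.norm().item(), 4) for h in hvp])
```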
Hides, Julie A; Walsh, Jazmin C; Smith, Melinda M Franettovich; Mendis, M Dilani
2017-07-01
Low back pain (LBP) and lower limb injuries are common among Australian Football League (AFL) players. Smaller size of 1 key trunk muscle, the lumbar multifidus (MF), has been associated with LBP and injuries in footballers. The size of the MF muscle has been shown to be modifiable with supervised motor-control training programs. Among AFL players, supervised motor-control training has also been shown to reduce the incidence of lower limb injuries and was associated with increased player availability for games. However, the effectiveness of a self-managed MF exercise program is unknown. To investigate the effect of self-managed exercises and fitness and strength training on MF muscle size in AFL players with or without current LBP. Cross-sectional study. Professional AFL context. Complete data were available for 242 players from 6 elite AFL clubs. Information related to the presence of LBP and history of injury was collected at the start of the preseason. At the end of the preseason, data were collected regarding performance of MF exercises as well as fitness and strength training. Ultrasound imaging of the MF muscle was conducted at the start and end of the preseason. Size of the MF muscles. An interaction effect was found between performance of MF exercises and time (F = 13.89, P ≤ .001). Retention of MF muscle size was greatest in players who practiced the MF exercises during the preseason (F = 4.77, P = .03). Increased adherence to fitness and strength training was associated with retained MF muscle size over the preseason (F = 5.35, P = .02). Increased adherence to a self-administered MF exercise program and to fitness and strength training was effective in maintaining the size of the MF muscle in the preseason.
A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine
Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini
2013-01-01
A new machine learning method referred to as F-score_ELM was proposed to classify the lying and truth-telling using the electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses from these subjects. Then, a recently-developed classifier called extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of the hidden nodes of ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models including the training and testing time, sensitivity and specificity from the training and testing sets, as well as network size. The experimental results showed that the number of the hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
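The two ingredients combined above can be sketched briefly: the F-score of each feature (between-class separation over within-class scatter) for ranking and selecting features, and an extreme learning machine whose random hidden layer is fixed and whose output weights are solved with a pseudoinverse. The grid search over hidden-node and feature counts is collapsed to a single setting, and the EEG-derived features are synthetic.

```python
# Minimal sketch of the two ingredients combined above: an F-score per feature
# (between-class separation over within-class scatter) used to rank and select
# features, and an extreme learning machine whose random hidden layer is fixed
# and whose output weights are solved by a pseudoinverse.  The grid search over
# (hidden nodes, selected features) is collapsed to one setting, and the
# EEG-derived features here are synthetic.
import numpy as np

def f_scores(X, y):
    """F-score of each column for a binary label vector."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

class ELM:
    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y         # output weights via pseudoinverse
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta > 0.5).astype(int)

rng = np.random.default_rng(10)
X = rng.normal(size=(56, 31))                     # 28 guilty + 28 innocent, 31 features
y = np.repeat([1, 0], 28)
X[y == 1, :5] += 1.0                              # make a few features informative

keep = np.argsort(f_scores(X, y))[::-1][:10]      # top-10 features by F-score
model = ELM(n_hidden=30).fit(X[:, keep], y)
print("training accuracy:", (model.predict(X[:, keep]) == y).mean())
```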
Roig-Casasús, Sergio; María Blasco, José; López-Bueno, Laura; Blasco-Igual, María Clara
2017-03-01
Sensorimotor training has proven to be an efficient approach for recovering balance control following total knee replacement (TKR). The purpose of this trial was to evaluate the influence of specific balance-targeted training using a dynamometric platform on the overall state of balance in older adults undergoing TKR. This was a randomized controlled clinical trial conducted at a university hospital rehabilitation unit. Patients meeting the inclusion criteria were randomly assigned to a control group or an experimental group. Both groups participated in the same 4-week postoperative rehabilitation training protocol. Participants in the experimental group performed additional balance training with a dynamometric platform consisting of tests related to stability challenges, weight-shifting, and moving to the limits of stability. The primary outcome measure was the overall state of balance rated according to the Berg Balance Scale. Secondary outcomes in terms of balance were the Timed Up and Go Test, Functional Reach Test, and Romberg open and closed-eyes tests. Data processing included between-group analysis of covariance, minimal detectable change assessment for the primary outcome measure, and effect size estimation. Confidence intervals (CIs) were set at 95%. Forty-three participants meeting the inclusion criteria and having signed the informed consent were randomly assigned to 2 groups. Thirty-seven completed the training (86.1%). Significant between-group differences in balance performance were found as measured with the Berg Balance Scale (P = .03) and Functional Reach Test (P = .04) with a CI = 95%. Significant differences were not recorded for the Timed Up and Go Test or Romberg open and closed-eyes tests (P > .05). Furthermore, Cohen's effect size resulted in a value of d = 0.97, suggesting a high practical significance of the trial. According to the Berg Balance Scale and Functional Reach Test, participants with TKR who have followed a 4-week training program using a dynamometric platform improved balance performance to a higher extent than a control group training without such a device. The inclusion of this instrument in the functional training protocol may be beneficial for recovering balance following TKR.
Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer
2006-01-01
Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.
SKILLED BIMANUAL TRAINING DRIVES MOTOR CORTEX PLASTICITY IN CHILDREN WITH UNILATERAL CEREBRAL PALSY
Friel, Kathleen M.; Kuo, Hsing-Ching; Fuller, Jason; Ferre, Claudio L.; Brandão, Marina; Carmel, Jason B.; Bleyenheuft, Yannick; Gowatsky, Jaimie L.; Stanford, Arielle D.; Rowny, Stefan B.; Luber, Bruce; Bassi, Bruce; Murphy, David LK; Lisanby, Sarah H.; Gordon, Andrew M.
2015-01-01
Background Intensive bimanual therapy can improve hand function in children with unilateral spastic cerebral palsy (USCP). We compared the effects of structured bimanual skill training vs. unstructured bimanual practice on motor outcomes and motor map plasticity in children with USCP. Objective We hypothesized that structured skill training would produce greater motor map plasticity than unstructured practice. Methods Twenty children with USCP (average age 9.5 years; 12 males) received therapy in a day-camp setting, 6 h/day, 5 days/week, for 3 weeks. In structured skill training (n=10), children performed progressively more difficult movements and practiced functional goals. In unstructured practice (n=10), children engaged in bimanual activities but did not practice skillful movements or functional goals. We used the Assisting Hand Assessment (AHA), Jebsen-Taylor test of Hand Function (JTTHF) and Canadian Occupational Performance Measure (COPM) to measure hand function. We used single-pulse transcranial magnetic stimulation (TMS) to map the representation of first dorsal interosseous (FDI) and flexor carpi radialis (FCR) muscles bilaterally. Results Both groups showed significant improvements in bimanual hand use (AHA; p<0.05) and hand dexterity (JTTHF; p<0.001). However, only the structured skill group showed increases in the size of the affected hand motor map and amplitudes of motor evoked potentials (p<0.01). Most of the children who showed the largest functional improvements (COPM) also had the largest changes in map size. Conclusions These findings uncover a dichotomy of plasticity: the unstructured practice group improved hand function but did not show changes in motor maps. Skill training is important for driving motor cortex plasticity in children with USCP. PMID:26867559
NASA Technical Reports Server (NTRS)
Laughlin, Mitzi S.; Murray, Jocelyn D.; Lee, Lesley R.; Wear, Mary L.; Van Baalen, Mary
2017-01-01
During a spacewalk, designated as extravehicular activity (EVA), an astronaut ventures from the protective environment of the spacecraft into the vacuum of space. EVAs are among the most challenging tasks during a mission, as they are complex and place the astronaut in a highly stressful environment dependent on the spacesuit for survival. Due to the complexity of EVA, NASA has conducted various training programs on Earth to mimic the environment of space and to practice maneuvers in a more controlled and forgiving environment. However, rewards offset the risks of EVA, as some of the greatest accomplishments in the space program were accomplished during EVA, such as the Apollo moonwalks and the Hubble Space Telescope repair missions. Water has become the environment of choice for EVA training on Earth, using neutral buoyancy as a substitute for microgravity. During EVA training, an astronaut wears a modified version of the spacesuit adapted for working in water. This high fidelity suit allows the astronaut to move in the water while performing tasks on full-sized mockups of space vehicles, telescopes, and satellites. During the early Gemini missions, several EVA objectives were much more difficult than planned and required additional time. Later missions demonstrated that "complex (EVA) tasks were feasible when restraints maintained body position and underwater simulation training ensured a high success probability".1,2 EVA training has evolved from controlling body positioning to perform basic tasks to complex maintenance of the Hubble Space Telescope and construction of the International Space Station (ISS). Today, preparation is centered at special facilities built specifically for EVA training, such as the Neutral Buoyancy Laboratory (NBL) at NASA's Johnson Space Center ([JSC], Houston) and the Hydrolab at the Gagarin Cosmonaut Training Centre ([GCTC], Star City, outside Moscow). Underwater training for an EVA is also considered hazardous duty for NASA astronauts. This activity places astronauts at risk for decompression sickness and barotrauma as well as various musculoskeletal disorders from working in the spacesuit. The medical, operational and research communities over the years have requested access to EVA training data to better understand the risks. As a result of these requests, epidemiologists within the Lifetime Surveillance of Astronaut Health (LSAH) team have compiled records from numerous EVA training venues to quantify the exposure to EVA training. The EVA Suit Exposure Tracker (EVA SET) dataset is a compilation of ground-based training activities using the extravehicular mobility unit (EMU) in neutrally buoyant pools to enhance EVA performance on orbit. These data can be used by the current ISS program and future exploration missions by informing physicians, researchers, and operational personnel on the risks of EVA training in order that future suit and mission designs incorporate greater safety. The purpose of this technical report is to document briefly the various facilities where NASA astronauts have performed EVA training while describing in detail the EVA training records used to generate the EVA SET dataset.
A comparative study of two hazard handling training methods for novice drivers.
Wang, Y B; Zhang, W; Salvendy, G
2010-10-01
The effectiveness of two hazard perception training methods, simulation-based error training (SET) and video-based guided error training (VGET), for novice drivers' hazard handling performance was tested, compared, and analyzed. Thirty-two novice drivers participated in the hazard perception training. Half of the participants were trained using SET by making errors and/or experiencing accidents while driving with a desktop simulator. The other half were trained using VGET by watching prerecorded video clips of errors and accidents that were made by other people. The two groups had exposure to equal numbers of errors for each training scenario. All the participants were tested and evaluated for hazard handling on a full cockpit driving simulator one week after training. Hazard handling performance and hazard response were measured in this transfer test. Both hazard handling performance scores and hazard response distances were significantly better for the SET group than the VGET group. Furthermore, the SET group had more metacognitive activities and intrinsic motivation. SET also seemed more effective in changing participants' confidence, but the result did not reach the significance level. SET exhibited a higher training effectiveness of hazard response and handling than VGET in the simulated transfer test. The superiority of SET might benefit from the higher levels of metacognition and intrinsic motivation during training, which was observed in the experiment. Future research should be conducted to assess whether the advantages of error training are still effective under real road conditions.
Janet, Jon Paul; Kulik, Heather J
2017-11-22
Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
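As a rough illustration of the kind of feature selection compared above, the hedged sketch below assumes a NumPy matrix X of RAC descriptors and a target vector y (e.g., spin-splitting energies). LassoCV and random-forest importances are generic stand-ins, not the authors' exact protocol or hyperparameters.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

def select_features(X, y, top_k=30, seed=0):
    """Return descriptor indices kept by LASSO and by random-forest importance."""
    Xs = StandardScaler().fit_transform(X)                       # standardize descriptors
    lasso = LassoCV(cv=5, random_state=seed).fit(Xs, y)
    lasso_idx = np.flatnonzero(np.abs(lasso.coef_) > 1e-8)       # sparse LASSO subset
    rf = RandomForestRegressor(n_estimators=300, random_state=seed).fit(X, y)
    rf_idx = np.argsort(rf.feature_importances_)[::-1][:top_k]   # top-k by importance
    return lasso_idx, rf_idx
```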
Analysis of classifiers performance for classification of potential microcalcification
NASA Astrophysics Data System (ADS)
M. N., Arun K.; Sheshadri, H. S.
2013-07-01
Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis. Mammography is a screening tool used for early detection of breast cancer. About 10-30% of cases are missed during routine screening, as it is difficult for radiologists to make accurate analyses of the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer. It has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms, and histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny, vary in size, shape, and distribution, and may be closely connected to surrounding tissue. Classifying individual potential MCs with traditional classifiers is a major challenge because mammogram processing at this stage generates data sets with an unequal amount of information for the two classes (i.e., MC and not-MC). Most existing state-of-the-art classification approaches are developed under the assumption that the underlying training set is evenly distributed, and they face a severe bias problem when the training set is highly imbalanced. This paper addresses this issue by using classifiers that handle imbalanced data sets. We also compare the performance of classifiers used in the classification of potential MCs.
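The specific classifiers compared in the paper are not reproduced here; as one common illustration of handling an imbalanced MC/not-MC training set, the hedged sketch below weights classes inversely to their frequency in an SVM and scores with balanced accuracy. X and y are placeholder feature and label arrays, not the study's data.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def balanced_vs_plain(X, y):
    """Compare a plain SVM with a class-weighted SVM on an imbalanced MC/not-MC set,
    scoring with balanced accuracy so the majority class cannot dominate the metric."""
    plain    = SVC(kernel="rbf", gamma="scale")
    weighted = SVC(kernel="rbf", gamma="scale", class_weight="balanced")
    score = lambda clf: cross_val_score(clf, X, y, cv=5,
                                        scoring="balanced_accuracy").mean()
    return {"plain": score(plain), "class_weighted": score(weighted)}
```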
Measuring sperm backflow following female orgasm: a new method
King, Robert; Dempsey, Maria; Valentine, Katherine A.
2016-01-01
Background Human female orgasm remains a vexed question in the field, although there is credible evidence of cryptic female choice that has many hallmarks of orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, oxytocin-mediated sperm retention mechanisms seem to be at work in terms of ultimate function (differential sperm retention), while the proximate mechanism (rapid transport or cervical tenting) remains unresolved. Method A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced in combination with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results The (simulated) sperm flowback was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001, Cohen's d=3.97, effect size r=0.89, indicating a large effect. Conclusions This method could allow females to test an aspect of sexual response that has been linked to lowered fertility in a home setting with minimal training. It needs to be replicated with a larger sample size. PMID:27799082
Neural Network-Based Sensor Validation for Turboshaft Engines
NASA Technical Reports Server (NTRS)
Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei
1998-01-01
Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
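The report describes an auto-associative network that compresses the sensor vector through a bottleneck and flags sensors whose residuals are too large. The sketch below is a generic stand-in using scikit-learn's MLPRegressor; the layer sizes, residual threshold, and substitution logic are illustrative assumptions, not the T700 implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def train_autoassociator(X, bottleneck=3):
    """Train an auto-associative net (inputs -> bottleneck -> inputs) on healthy sensor data."""
    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(8, bottleneck, 8),
                       activation="tanh", max_iter=5000, random_state=0)
    net.fit(scaler.transform(X), scaler.transform(X))   # learn to reproduce its own input
    return scaler, net

def check_sensors(scaler, net, x, threshold=3.0):
    """Flag sensors whose residual (measured minus estimated, in standardized units)
    exceeds a threshold; the network estimate can then substitute for a failed reading."""
    xs = scaler.transform(x.reshape(1, -1))
    est = net.predict(xs)
    residual = np.abs(xs - est).ravel()
    return residual > threshold, scaler.inverse_transform(est).ravel()
```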
Wham, George S.; Saunders, Ruth; Mensch, James
2010-01-01
Abstract Context: Research suggests that appropriate medical care for interscholastic athletes is frequently lacking. However, few investigators have examined factors related to care. Objective: To examine medical care provided by interscholastic athletics programs and to identify factors associated with variations in provision of care. Design: Cross-sectional study. Setting: Mailed and e-mailed survey. Patients or Other Participants: One hundred sixty-six South Carolina high schools. Intervention(s): The 132-item Appropriate Medical Care Assessment Tool (AMCAT) was developed and pilot tested. It included 119 items assessing medical care based on the Appropriate Medical Care for Secondary School-Age Athletes (AMCSSAA) Consensus Statement and Monograph (test-retest reliability: r = 0.89). Also included were items assessing potential influences on medical care. Presence, source, and number of athletic trainers; school size; distance to nearest medical center; public or private status; sports medicine supply budget; and varsity football regional championships served as explanatory variables, whereas the school setting, region of state, and rate of free or reduced lunch qualifiers served as control variables. Main Outcome Measure(s): The Appropriate Care Index (ACI) score from the AMCAT provided a quantitative measure of medical care and served as the response variable. The ACI score was determined based on a school's response to items relating to AMCSSAA guidelines. Results: Regression analysis revealed associations with ACI score for athletic training services and sports medicine supply budget (both P < .001) when controlling for the setting, region, and rate of free or reduced lunch qualifiers. These 2 variables accounted for 30% of the variance in ACI score (R2 = 0.302). Post hoc analysis showed differences between ACI score based on the source of the athletic trainer and the size of the sports medicine supply budget. Conclusions: The AMCAT offers an evaluation of medical care provided by interscholastic athletics programs. In South Carolina schools, athletic training services and the sports medicine supply budget were associated with higher levels of medical care. These results offer guidance for improving the medical care provided for interscholastic athletes. PMID:20064052
Fogg, Louis; Ocampo, Edith V; Acosta, Diana I
2016-01-01
Background Parent training programs are traditionally delivered in face-to-face formats and require trained facilitators and weekly parent attendance. Implementing face-to-face sessions is challenging in busy primary care settings, and many barriers exist for parents to attend these sessions. Tablet-based delivery of parent training offers an alternative to face-to-face delivery that makes parent training programs easier to deliver in primary care settings and more convenient and accessible to parents. We adapted the group-based Chicago Parent Program (CPP) to be delivered as a self-administered, tablet-based program called the ezParent program. Objective The purpose of this study was to (1) assess the feasibility of the ezParent program by examining parent satisfaction with the program and the percent of modules completed, (2) test the efficacy of the ezParent program by examining its effects compared with a control condition for improving parenting and child behavior in a sample of low-income ethnic minority parents of young children recruited from a primary care setting, and (3) compare program completion and efficacy with prior studies of the group-based CPP. Methods The study used a two-group randomized controlled trial (RCT) design with repeated measures follow-up. Subjects (n=79) were randomly assigned to an intervention or attention control condition. Data collection was at baseline and 12 and 24 weeks post baseline. Parents were recruited from a large, urban, primary care pediatric clinic. ezParent module completion was calculated as the percentage of the six modules completed by the intervention group parents. Attendance in the group-based CPP was calculated as the percentage of attendance at sessions 1 through 10. Satisfaction data were summarized using item frequencies. Parent and child data were analyzed using a repeated measures analysis of variance (RM-ANOVA) with simple contrasts to determine if there were significant intervention effects on the outcome measures. Effect sizes for between-group comparisons were calculated for all outcome variables and compared with CPP group-based archival data. Results The ezParent module completion rate was 85.4% (34.2/40; 95% confidence interval [CI] = 78.4%-93.7%) and was significantly greater (P<.05) than face-to-face CPP group session attendance (135.2/267, 50.6%; 95% CI = 46.8%-55.6%). ezParent participants reported the program as very helpful (35/40, 88.0%) and said they would highly recommend the program (33/40, 82.1%) to another parent. ezParent participants showed greater improvements in parenting warmth (F1,77 = 4.82, P<.05) from time 1 to 3. No other significant differences were found. Cohen's d effect sizes for intervention group improvements in parenting warmth, use of corporal punishment, follow-through, parenting stress, and intensity of child behavior problems were comparable to or greater than those of the group-based CPP. Conclusions Data from this study indicate the feasibility and acceptability of the ezParent program in a low-income, ethnic minority population of parents and comparable effect sizes with face-to-face delivery for parents. PMID:27098111
Single- vs. Multiple-Set Strength Training in Women.
ERIC Educational Resources Information Center
Schlumberger, Andreas; Stec, Justyna; Schmidtbleicher, Dietmar
2001-01-01
Compared the effects of single- and multiple-set strength training in women with basic experience in resistance training. Both training groups had significant strength improvements in leg extension. In the seated bench press, only the three-set group showed a significant increase in maximal strength. There were higher strength gains overall in the…
NASA Astrophysics Data System (ADS)
Su, Lihong
In remote sensing communities, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM algorithms, the SVM classification decision function is fully determined by support vectors, which compose a subset of the training sets. In this regard, a solution to optimize SVM learning is to efficiently reduce training sets. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% using the proposed approach. Maximum likelihood classification (MLC) also is applied on the reduced training sets. The results show that MLC can also maintain the classification accuracy. This implies that the most informative data instances can be retained by this approach.
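A minimal sketch of the idea of reducing a training set with agglomerative clustering before SVM learning: each class is clustered and one representative per cluster is kept. The keep_fraction value echoes the 34% figure above but is otherwise arbitrary, and the clustering settings and choice of representative are assumptions rather than the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import SVC

def reduce_training_set(X, y, keep_fraction=0.34):
    """Cluster each class separately and keep one representative per cluster,
    shrinking the training set before SVM training."""
    keep_X, keep_y = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        n_clusters = max(1, int(len(Xc) * keep_fraction))
        cluster_ids = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(Xc)
        for c in range(n_clusters):
            members = Xc[cluster_ids == c]
            centroid = members.mean(axis=0)
            # keep the member closest to its cluster centroid
            keep_X.append(members[np.argmin(np.linalg.norm(members - centroid, axis=1))])
            keep_y.append(label)
    return np.array(keep_X), np.array(keep_y)

# Usage: X_small, y_small = reduce_training_set(X_train, y_train)
#        clf = SVC(kernel="rbf", gamma="scale").fit(X_small, y_small)
```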
Handwritten word preprocessing for database adaptation
NASA Astrophysics Data System (ADS)
Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic
2013-01-01
Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level,...). Since this is not often the case in real-world scenarios, classification performance can be affected when novel data is presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization are considered. The advantage of such approach is that we can re-use the existing recognition system trained on controlled data. We conduct several experiments with the Rimes 2011 word database and with a real-world database. We adapt either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Accuracy of data set adaptation is increased by 2% to 3% in absolute value over no adaptation.
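One simple form of the pixel-intensity normalization mentioned above is matching each word image's grey-level mean and spread to reference statistics computed on the other set. The sketch below is a generic illustration under that assumption; it is not the authors' preprocessing pipeline, and stroke-thickness normalization is not shown.

```python
import numpy as np

def match_intensity(image, ref_mean, ref_std):
    """Shift and scale a word image's grey levels so its statistics match a reference set."""
    img = image.astype(float)
    std = img.std() or 1.0                      # guard against flat images
    out = (img - img.mean()) / std * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: compute ref_mean, ref_std from the training database pixels, then adapt each test image:
# adapted = match_intensity(test_image, ref_mean=train_pixels.mean(), ref_std=train_pixels.std())
```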
Applying deep neural networks to HEP job classification
NASA Astrophysics Data System (ADS)
Wang, L.; Shi, J.; Yan, X.
2015-12-01
The cluster of the IHEP computing center is a middle-sized computing system providing 10,000 CPU cores, 5 PB of disk storage, and 40 GB/s of IO throughput. Its 1000+ users come from a variety of HEP experiments. In such a system, job classification is an indispensable task. Although an experienced administrator can classify a HEP job by its IO pattern, it is impractical to classify millions of jobs manually. We present how to solve this problem with deep neural networks in a supervised learning way. First, we built a training data set of 320K samples using an IO-pattern collection agent and a semi-automatic process of sample labelling. Then we implemented and trained DNN models with Torch. During model training, several meta-parameters were tuned with cross-validation. Test results show that a 5-hidden-layer DNN model achieves 96% precision on the classification task, outperforming a linear model by 8% in precision.
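The authors trained their DNNs with Torch; as a rough, hedged stand-in, the sketch below builds a five-hidden-layer network on standardized IO-pattern features with scikit-learn. The layer widths, the placeholder names X_jobs and y_labels, and the cross-validation call are illustrative assumptions, not the paper's configuration.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def build_job_classifier():
    """Five hidden layers on IO-pattern features; widths are illustrative only."""
    return make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(256, 128, 64, 64, 32),
                      activation="relu", max_iter=200, random_state=0))

# Meta-parameters (layer widths, learning rate, ...) would be tuned with cross-validation:
# scores = cross_val_score(build_job_classifier(), X_jobs, y_labels, cv=5)
```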
Boughner, Robert L; Papini, Mauricio R
2008-05-01
Results from a variety of independently run experiments suggest that latent inhibition (LI) and the partial reinforcement extinction effect (PREE) share underlying mechanisms. Experiment 1 tested this LI=PREE hypothesis by training the same set of rats in situations involving both nonreinforced preexposure to the conditioned stimulus (LI stage) and partial reinforcement training (PREE stage). Control groups were also included to assess both LI and the PREE. The results demonstrated a significant, but negative correlation between the size of the LI effect and that of the PREE. Experiment 2 extended this analysis to the effects on LI and the PREE of the anxiolytic benzodiazepine chlordiazepoxide (5 mg/kg, i.p.). Whereas chlordiazepoxide had no effect on LI, it delayed the onset of the PREE. No evidence in support of the LI=PREE hypothesis was obtained when these two learning phenomena were compared within the same experiment and under the same general conditions of training.
Comparing supervised learning techniques on the task of physical activity recognition.
Dalton, A; OLaighin, G
2013-01-01
The objective of this study was to compare the performance of base-level and meta-level classifiers on the task of physical activity recognition. Five wireless kinematic sensors were attached to each subject (n = 25) while they completed a range of basic physical activities in a controlled laboratory setting. Subjects were then asked to carry out similar self-annotated physical activities in a random order and in an unsupervised environment. A combination of time-domain and frequency-domain features was extracted from the sensor data, including the first four central moments, zero-crossing rate, average magnitude, sensor cross-correlation, sensor auto-correlation, spectral entropy and dominant frequency components. A reduced feature set was generated using a wrapper subset evaluation technique with a linear forward search, and this feature set was employed for classifier comparison. The meta-level classifier AdaBoostM1 with C4.5 Graft as its base-level classifier achieved an overall accuracy of 95%. Equal-sized datasets of subject-independent and subject-dependent data were used to train this classifier, and high recognition rates could be achieved without the need for user-specific training. Furthermore, it was found that an accuracy of 88% could be achieved using data from the ankle and wrist sensors only.
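A hedged sketch of the kind of time- and frequency-domain features listed above, for a single sensor-axis window; the 50 Hz sampling rate and the exact estimators (mean-crossing rate, periodogram-based spectral entropy) are assumptions rather than the study's implementation.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import periodogram

def window_features(signal, fs=50.0):
    """First four central moments, zero-crossing rate, average magnitude,
    spectral entropy and dominant frequency for one sensor-axis window."""
    x = np.asarray(signal, dtype=float)
    feats = [x.mean(), x.var(), skew(x), kurtosis(x)]
    signs = np.signbit(x - x.mean()).astype(int)
    feats.append(np.mean(np.diff(signs) != 0))        # mean-crossing rate
    feats.append(np.mean(np.abs(x)))                  # average magnitude
    f, pxx = periodogram(x, fs=fs)
    p = pxx / (pxx.sum() + 1e-12)
    feats.append(-np.sum(p * np.log2(p + 1e-12)))     # spectral entropy
    feats.append(f[np.argmax(pxx)])                   # dominant frequency (Hz)
    return np.array(feats)
```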
Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Guinard, S.; Landrieu, L.
2017-05-01
We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale data sets.
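The full method integrates segments into a CRF, which is not reproduced here; the sketch below only illustrates the simpler aggregation step described above, averaging noisy per-point class probabilities within each segment and assigning the segment-level argmax. Array shapes and names are assumptions.

```python
import numpy as np

def aggregate_segment_predictions(point_probs, segment_ids):
    """Average per-point class probabilities inside each segment and give every
    point the segment-level argmax, yielding a higher-confidence labelling."""
    point_probs = np.asarray(point_probs)      # shape (n_points, n_classes)
    segment_ids = np.asarray(segment_ids)      # shape (n_points,)
    labels = np.empty(len(segment_ids), dtype=int)
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        labels[mask] = point_probs[mask].mean(axis=0).argmax()
    return labels
```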
Thinking Outside of Outpatient: Underutilized Settings for Psychotherapy Education.
Blumenshine, Philip; Lenet, Alison E; Havel, Lauren K; Arbuckle, Melissa R; Cabaniss, Deborah L
2017-02-01
Although psychiatry residents are expected to achieve competency in conducting psychotherapy during their training, it is unclear how psychotherapy teaching is integrated across diverse clinical settings. Between January and March 2015, 177 psychiatry residency training directors were sent a survey asking about psychotherapy training practices in their programs, as well as perceived barriers to psychotherapy teaching. Eighty-two training directors (44%) completed the survey. While 95% indicated that psychotherapy was a formal learning objective for outpatient clinic rotations, 50% or fewer noted that psychotherapy was a learning objective in other settings. Most program directors would like to see psychotherapy training included (particularly supportive psychotherapy and cognitive behavioral therapy) in inpatient (82%) and consultation-liaison (57%) settings. The most common barriers identified to teaching psychotherapy in these settings were time and perceived inadequate staff training and interest. Non-outpatient rotations appear to be an underutilized setting for psychotherapy teaching.
Prediction of conformationally dependent atomic multipole moments in carbohydrates
Cardamone, Salvatore; Popelier, Paul L A
2015-01-01
The conformational flexibility of carbohydrates is challenging within the field of computational chemistry. This flexibility causes the electron density to change, which leads to fluctuating atomic multipole moments. Quantum Chemical Topology (QCT) allows for the partitioning of an “atom in a molecule,” thus localizing electron density to finite atomic domains, which permits the unambiguous evaluation of atomic multipole moments. By selecting an ensemble of physically realistic conformers of a chemical system, one evaluates the various multipole moments at defined points in configuration space. The subsequent implementation of the machine learning method kriging delivers the evaluation of an analytical function, which smoothly interpolates between these points. This allows for the prediction of atomic multipole moments at new points in conformational space, not trained for but within prediction range. In this work, we demonstrate that the carbohydrates erythrose and threose are amenable to the above methodology. We investigate how kriging models respond when the training ensemble incorporates multiple energy minima and their environment in conformational space. Additionally, we evaluate the gains in predictive capacity of our models as the size of the training ensemble increases. We believe this approach to be entirely novel within the field of carbohydrates. For a modest training set size of 600, more than 90% of the external test configurations have an error in the total (predicted) electrostatic energy (relative to ab initio) of maximum 1 kJ mol−1 for open chains and just over 90% an error of maximum 4 kJ mol−1 for rings. © 2015 Wiley Periodicals, Inc. PMID:26547500
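As a rough illustration of kriging in this setting, the sketch below fits a Gaussian-process model mapping conformational descriptors to one atomic multipole-moment component using scikit-learn; the kernel choice and descriptor format are assumptions, not the authors' kriging machinery.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train_kriging(conformer_features, multipole_moment):
    """Kriging (Gaussian process) model mapping conformational descriptors
    (e.g., internal coordinates) to one atomic multipole-moment component."""
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-6)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(np.asarray(conformer_features), np.asarray(multipole_moment))
    return gp

# Usage: one model per atom and moment component; prediction at unseen conformations:
# moment_pred, moment_std = gp.predict(new_conformer_features, return_std=True)
```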
NASA Astrophysics Data System (ADS)
Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming
2017-12-01
Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and effectively handle different types of input information to perform large-scale geostatistical modelling.
Specific Stimuli Induce Specific Adaptations: Sensorimotor Training vs. Reactive Balance Training
Freyler, Kathrin; Krause, Anne; Gollhofer, Albert; Ritzmann, Ramona
2016-01-01
Typically, balance training has been used as an intervention paradigm either as static or as reactive balance training. Possible differences in functional outcomes between the two modalities have not been profoundly studied. The objective of the study was to investigate the specificity of neuromuscular adaptations in response to two balance intervention modalities within test and intervention paradigms containing characteristics of both profiles: classical sensorimotor training (SMT) referring to a static ledger pivoting around the ankle joint vs. reactive balance training (RBT) using externally applied perturbations to deteriorate body equilibrium. Thirty-eight subjects were assigned to either SMT or RBT. Before and after four weeks of intervention training, postural sway and electromyographic activities of shank and thigh muscles were recorded and co-contraction indices (CCI) were calculated. We argue that specificity of training interventions could be transferred into corresponding test settings containing properties of SMT and RBT, respectively. The results revealed that i) postural sway was reduced in both intervention groups in all test paradigms; magnitude of changes and effect sizes differed dependent on the paradigm: when training and paradigm coincided most, effects were augmented (P<0.05). ii) These specificities were accompanied by segmental modulations in the amount of CCI, with a greater reduction within the CCI of thigh muscles after RBT compared to the shank muscles after SMT (P<0.05). The results clearly indicate the relationship between test and intervention specificity in balance performance. Hence, specific training modalities of postural control cause multi-segmental and context-specific adaptations, depending upon the characteristics of the trained postural strategy. In relation to fall prevention, perturbation training could serve as an extension to SMT to include the proximal segment, and thus the control of structures near to the body’s centre of mass, into training. PMID:27911944
Akanno, E C; Schenkel, F S; Sargolzaei, M; Friendship, R M; Robinson, J A B
2014-10-01
Genetic improvement of pigs in tropical developing countries has focused on imported exotic populations, which have been subjected to intensive selection with attendant high population-wide linkage disequilibrium (LD). Presently, indigenous pig populations with limited selection and low LD are being considered for improvement. Given that the infrastructure for genetic improvement using conventional BLUP selection methods is lacking, a genome-wide selection (GS) program was proposed for developing countries. A simulation study was conducted to evaluate the option of using a 60K SNP panel and the observed amount of LD in the exotic and indigenous pig populations. Several scenarios were evaluated, including different sizes and structures of training and validation populations, different selection methods, and the long-term accuracy of GS in different population/breeding structures and traits. The training set included a previously selected exotic population, an unselected indigenous population and their crossbreds. Traits studied included number born alive (NBA), average daily gain (ADG) and back fat thickness (BFT). The ridge regression method was used to train the prediction model. The results showed that accuracies of genomic breeding values (GBVs) in the range of 0.30 (NBA) to 0.86 (BFT) in the validation population are expected if high-density marker panels are utilized. The GS method improved the accuracy of breeding values more than the pedigree-based approach for traits with low heritability and in young animals with no performance data. Crossbred training populations performed better than purebreds when validation was in populations with a similar or different structure from the training set. Genome-wide selection holds promise for genetic improvement of pigs in the tropics. © 2014 Blackwell Verlag GmbH.
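A minimal sketch of ridge-regression genomic prediction as used above, assuming a SNP genotype matrix coded 0/1/2 and a phenotype vector; the shrinkage parameter and the correlation-based accuracy are generic choices, and the simulated population structures of the study are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def genomic_prediction_accuracy(snp_matrix, phenotypes, alpha=1000.0):
    """Ridge regression on SNP genotypes (coded 0/1/2); accuracy is taken as the
    correlation between cross-validated genomic predictions and phenotypes."""
    X = np.asarray(snp_matrix, dtype=float)
    X -= X.mean(axis=0)                                  # centre marker genotypes
    gebv = cross_val_predict(Ridge(alpha=alpha), X, phenotypes, cv=5)
    return np.corrcoef(gebv, phenotypes)[0, 1]
```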
Haque, M Muksitul; Holder, Lawrence B; Skinner, Michael K
2015-01-01
Environmentally induced epigenetic transgenerational inheritance of disease and phenotypic variation involves germline-transmitted epimutations. The primary epimutations identified involve altered differential DNA methylation regions (DMRs). Different environmental toxicants have been shown to promote exposure-specific (i.e., toxicant-specific) signatures of germline epimutations. Analysis of genomic features associated with these epimutations identified low-density CpG regions (<3 CpG / 100bp) termed CpG deserts and a number of unique DNA sequence motifs. The rat genome was annotated for these and additional relevant features. The objective of the current study was to use a machine learning computational approach to predict all potential epimutations in the genome. A number of previously identified sperm epimutations were used as training sets. A novel machine learning approach using a sequential combination of Active Learning and Imbalance Class Learner analysis was developed. The transgenerational sperm epimutation analysis identified approximately 50K individual sites with a 1 kb mean size and 3,233 regions that had a minimum of three adjacent sites with a mean size of 3.5 kb. A select number of the most relevant genomic features were identified, with low-density CpG deserts being among the most critical of the features selected. A similar independent analysis with transgenerational somatic cell epimutation training sets identified a smaller number (1,503) of genome-wide predicted regions and differences in genomic feature contributions. The predicted genome-wide germline (sperm) epimutations were found to be distinct from the predicted somatic cell epimutations. Validation of the genome-wide germline predicted sites used two recently identified transgenerational sperm epimutation signature sets from the F3 generation of dichlorodiphenyltrichloroethane (DDT) and methoxychlor (MXC) exposure lineages. Analysis of this positive validation data set showed a 100% prediction accuracy for all the DDT-MXC sperm epimutations. Observations further elucidate the genomic features associated with transgenerational germline epimutations and identify a genome-wide set of potential epimutations that can be used to facilitate identification of epigenetic diagnostics for ancestral environmental exposures and disease susceptibility.
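The study's sequential combination of Active Learning and an Imbalance Class Learner is not reproduced here; the sketch below shows only a generic uncertainty-sampling loop with a class-weighted learner. All names, the batch sizes, and the binary-label assumption are illustrative, and X and y are placeholder NumPy arrays of genomic features and labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning_loop(X, y, n_start=50, n_rounds=10, batch=25, seed=0):
    """Uncertainty-sampling active learning with a class-weighted learner:
    start from a small labelled pool and repeatedly add the least-certain sites."""
    rng = np.random.default_rng(seed)
    labelled = list(rng.choice(len(X), size=n_start, replace=False))
    for _ in range(n_rounds):
        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                     random_state=seed).fit(X[labelled], y[labelled])
        proba = clf.predict_proba(X)[:, 1]
        uncertainty = np.abs(proba - 0.5)          # closest to 0.5 = least certain
        candidates = np.argsort(uncertainty)
        already = set(labelled)
        labelled.extend([i for i in candidates if i not in already][:batch])
    return clf, labelled
```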
HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION
NASA Technical Reports Server (NTRS)
Spirkovska, L.
1994-01-01
Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, providing noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture, Thus, for 2-D transformation invariance, the network needs only to be trained on just one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited to the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30Mb). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C language, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computer running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
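A hedged sketch of the coarse-coding idea described above, representing a large binary image as several overlapping but offset coarser images; the block size, offsets, and OR-pooling are illustrative assumptions rather than HONTIOR's exact scheme.

```python
import numpy as np

def coarse_code(image, factor=8, offsets=((0, 0), (4, 0), (0, 4), (4, 4))):
    """Represent a large binary image as several offset, lower-resolution copies:
    each coarse pixel is the OR of a factor x factor block, taken at shifted origins."""
    coded = []
    for dy, dx in offsets:
        shifted = image[dy:, dx:]
        h = (shifted.shape[0] // factor) * factor
        w = (shifted.shape[1] // factor) * factor
        blocks = shifted[:h, :w].reshape(h // factor, factor, w // factor, factor)
        coded.append(blocks.any(axis=(1, 3)))      # OR-pool each block
    return coded
```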
Establishing Fire Safety Skills Using Behavioral Skills Training
ERIC Educational Resources Information Center
Houvouras, Andrew J., IV; Harvey, Mark T.
2014-01-01
The use of behavioral skills training (BST) to educate 3 adolescent boys on the risks of lighters and fire setting was evaluated using in situ assessment in a school setting. Two participants had a history of fire setting. After training, all participants adhered to established rules: (a) avoid a deactivated lighter, (b) leave the training area,…
Li, Xiaomeng; Yang, Zhuo
2017-01-01
As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets, so great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in HSR transport capacity. In order to enhance the utilization efficiency of train-sets, this paper proposes a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach, consisting of segment generation and segment combination, was designed to solve this model. To verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a 174-trip train diagram. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (two-stage), and one train-set can be saved while fulfilling the same transportation tasks. The proposed approach is faster and more stable than traditional ones; with it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933
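The paper's two-stage segment-generation/combination approach is not reproduced here; as a much simpler baseline illustrating the connection-time objective, the sketch below greedily connects each trip to the most recently freed compatible train-set. The trip tuple format and the turnaround parameter are hypothetical.

```python
def greedy_circulation(trips, min_turnaround=20):
    """Greedy sketch of train-set circulation: process trips by departure time and connect
    each one to the latest-ready compatible train-set, keeping connection times short.
    Each trip is (origin, destination, departure_min, arrival_min)."""
    trainsets = []            # one (ready_time, ready_station) entry per train-set in use
    total_connection = 0
    for origin, dest, dep, arr in sorted(trips, key=lambda t: t[2]):
        # train-sets already waiting at the origin station and turned around in time
        candidates = [(i, rt) for i, (rt, st) in enumerate(trainsets)
                      if st == origin and rt + min_turnaround <= dep]
        if candidates:
            i, ready = max(candidates, key=lambda c: c[1])   # tightest feasible connection
            total_connection += dep - ready
            trainsets[i] = (arr, dest)
        else:
            trainsets.append((arr, dest))                    # dispatch a new train-set
    return len(trainsets), total_connection
```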
Albrecht, Johanna S; Bubenzer-Busch, Sarah; Gallien, Anne; Knospe, Eva Lotte; Gaber, Tilman J; Zepf, Florian D
2017-01-01
The aim of this approach was to conduct a structured electroencephalography-based neurofeedback training program for children and adolescents with attention-deficit hyperactivity disorder (ADHD) using slow cortical potentials with an intensive first (almost daily sessions) and second phase of training (two sessions per week) and to assess aspects of attentional performance. A total of 24 young patients with ADHD participated in the 20-session training program. During phase I of training (2 weeks, 10 sessions), participants were trained on weekdays. During phase II, neurofeedback training occurred twice per week (5 weeks). The patients' inattention problems were measured at three assessment time points before (pre, T0) and after (post, T1) the training and at a 6-month follow-up (T2); the assessments included neuropsychological tests (Alertness and Divided Attention subtests of the Test for Attentional Performance; Sustained Attention Dots and Shifting Attentional Set subtests of the Amsterdam Neuropsychological Test) and questionnaire data (inattention subscales of the so-called Fremdbeurteilungsbogen für Hyperkinetische Störungen and Child Behavior Checklist/4-18 [CBCL/4-18]). All data were analyzed retrospectively. The mean auditive reaction time in a Divided Attention task decreased significantly from T0 to T1 (medium effect), which was persistent over time and also found for a T0-T2 comparison (larger effects). In the Sustained Attention Dots task, the mean reaction time was reduced from T0-T1 and T1-T2 (small effects), whereas in the Shifting Attentional Set task, patients were able to increase the number of trials from T1-T2 and significantly diminished the number of errors (T1-T2 & T0-T2, large effects). First positive but very small effects and preliminary results regarding different parameters of attentional performance were detected in young individuals with ADHD. The limitations of the obtained preliminary data are the rather small sample size, the lack of a control group/a placebo condition and the open-label approach because of the clinical setting and retrospective analysis. The value of the current approach lies in providing pilot data for future studies involving larger samples.
Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P
2017-05-01
There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. The objectives were (1) to develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to a live porcine model; and (3) to compare training outcomes of the SET curriculum group and a chief resident group. University. This prospective single-blinded, randomized, controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skills in a live anesthetized porcine model in the OR were the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of the SET curriculum group were also compared to a group of 12 chief surgery residents. The SET curriculum group demonstrated superior psychomotor skills in a live porcine model (56 [47-62] versus 44 [38-53], P<.05) and superior nontechnical skills (41 [38-45] versus 31 [24-40], P<.01) compared with the conventional training group. The SET curriculum group and conventional training group demonstrated equivalent knowledge (14 [12-15] versus 13 [11-15], P = .47). The SET curriculum group demonstrated equivalent psychomotor skills in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). The SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P<.05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P<.01) compared with the chief resident group. Completion of the SET curriculum resulted in superior training outcomes compared with conventional surgery training. Implementation of the SET curriculum can standardize training for an advanced minimally invasive procedure and can ensure that comprehensive proficiency milestones are met before exposure to patient care. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
Blended Training for Combat Medics
NASA Technical Reports Server (NTRS)
Fowlkes, Jennifer; Dickinson, Sandra; Lazarus, Todd
2010-01-01
Bleeding from extremity wounds is the number one cause of preventable death on the battlefield and current research stresses the importance of training in preparing every Soldier to use tourniquets. HapMed is designed to provide tourniquet application training to combat medics and Soldiers using a blended training solution encompassing information, demonstration, practice, and feedback. The system combines an instrumented manikin arm, PDA, and computer. The manikin arm provides several training options including stand-alone, hands-on skills training in which soldiers can experience the actual torque required to staunch bleeding from an extremity wound and be timed on tourniquet application. This is more realistic than using a block of wood to act as a limb, which is often how training is conducted today. Combining the manikin arm with the PDA allows instructors to provide scenario based training. In a classroom or field setting, an instructor can specify wound variables such as location, casualty size, and whether the wound is a tough bleed. The PDA also allows more detailed feedback to be provided. Finally, combining the manikin arm with game-based technologies, the third component, provides opportunities to build knowledge and to practice battlefield decision making. Not only do soldiers learn how to apply a tourniquet, but when to apply a tourniquet in combat. The purpose of the paper is to describe the learning science underlying the design of HapMed, illustrate the training system and ways it is being expanded to encompass other critical life-saving tasks, and report on feedback received from instructors and trainees at military training and simulation centers.
Handwriting training in Parkinson’s disease: A trade-off between size, speed and fluency
Nackaerts, Evelien; Broeder, Sanne; Pereira, Marcelo P.; Swinnen, Stephan P.; Vandenberghe, Wim; Nieuwboer, Alice; Heremans, Elke
2017-01-01
Background In previous work, we found that intensive amplitude training successfully improved micrographia in Parkinson’s disease (PD). Handwriting abnormalities in PD also express themselves in stroke duration and writing fluency. It is currently unknown whether training changes these dysgraphic features. Objective To determine the differential effects of amplitude training on various hallmarks of handwriting abnormalities in PD. Methods We randomized 38 right-handed subjects in early to mid-stage of PD into an experimental group (n = 18), receiving training focused at improving writing size during 30 minutes/day, five days/week for six weeks, and a placebo group (n = 20), receiving stretch and relaxation exercises at equal intensity. Writing skills were assessed using a touch-sensitive tablet pre- and post-training, and after a six-week retention period. Tests encompassed a transfer task, evaluating trained and untrained sequences, and an automatization task, comparing single- and dual-task handwriting. Outcome parameters were stroke duration (s), writing velocity (cm/s) and normalized jerk (i.e. fluency). Results In contrast to the reported positive effects of training on writing size, the current results showed increases in stroke duration and normalized jerk after amplitude training, which were absent in the placebo group. These increases remained after the six-week retention period. In contrast, velocity remained unchanged throughout the study. Conclusion While intensive amplitude training is beneficial to improve writing size in PD, it comes at a cost as fluency and stroke duration deteriorated after training. The findings imply that PD patients can redistribute movement priorities after training within a compromised motor system. PMID:29272301
Sex Comparison of Knee Extensor Size, Strength and Fatigue Adaptation to Sprint Interval Training.
Bagley, Liam; Al-Shanti, Nasser; Bradburn, Steven; Baig, Osamah; Slevin, Mark; McPhee, Jamie S
2018-03-12
Regular sprint interval training (SIT) improves whole-body aerobic capacity and muscle oxidative potential, but very little is known about knee extensor anabolic or fatigue resistance adaptations, or whether effects are similar for males and females. The purpose of this study was to compare sex-related differences in knee extensor size, torque-velocity relationship and fatigability adaptations to 12 weeks of SIT. Sixteen males and fifteen females (mean (SEM) age: 41 (±2.5) yrs) completed measurements of total body composition assessed by DXA, quadriceps muscle cross-sectional area (CSAQ) assessed by MRI, the knee extensor torque-velocity relationship (covering 0-240°/sec) and fatigue resistance, which was measured as the decline in torque from the first to the last of 60 repeated concentric knee extensions performed at 180°/sec. SIT consisted of 4 x 20-second sprints on a cycle ergometer set at an initial power output of 175% of power at VO2max, three times per week for 12 weeks. CSAQ increased by 5% (p=0.023) and fatigue resistance improved by 4.8% (p=0.048), with no sex differences in these adaptations (sex comparisons: p=0.140 and p=0.282, respectively). Knee extensor isometric and concentric torque was unaffected by SIT in both males and females (p>0.05 for all velocities). Twelve weeks of SIT, totalling 4 minutes of very intense cycling per week, significantly increased fatigue resistance and CSAQ similarly in males and females, but did not significantly increase torque in either sex. These results suggest that SIT is a time-effective training modality for males and females to increase leg muscle size and fatigue resistance.
Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation
2010-01-01
… classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two … NCV procedure that eliminates overlap between test sets altogether. The procedure samples k disjoint test sets that will be used for evaluation … (propLabeled ∗ S) nodes from trainPool; inferenceSet = network − trainSet; F = F ∪ <trainSet, testSet, inferenceSet>; end for; output: F. NCV addresses …
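The snippet above is fragmentary, but the core idea of NCV is recoverable: build k folds whose test sets are disjoint, each paired with a labeled training set drawn from the remaining pool and an inference set defined as the network minus the training set. The following Python sketch is a minimal reconstruction of that sampling loop under those assumptions; the parameter names (prop_labeled, train_pool) merely echo the fragment and are not the paper's exact variables.

```python
import random

def network_cross_validation(nodes, k=10, prop_labeled=0.5, seed=0):
    """Sketch of NCV-style sampling: k disjoint test sets, each paired with a
    training set drawn from the remaining pool and an inference set = network - trainSet."""
    rng = random.Random(seed)
    nodes = list(nodes)
    rng.shuffle(nodes)
    fold_size = len(nodes) // k
    folds = []
    for i in range(k):
        test_set = set(nodes[i * fold_size:(i + 1) * fold_size])   # disjoint across folds
        train_pool = [n for n in nodes if n not in test_set]
        train_set = set(rng.sample(train_pool, int(prop_labeled * len(train_pool))))
        inference_set = set(nodes) - train_set                      # nodes used for collective inference
        folds.append((train_set, test_set, inference_set))
    return folds

folds = network_cross_validation(range(100), k=5)
print(len(folds), [len(t) for _, t, _ in folds])   # 5 disjoint test sets of equal size
```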
1993-06-01
IFF subsystem in size, weight, cabling requirements, and provides the same audio feedback to the gunner. The Training Set Guided Missile was the...brought to the HIS site by an instructor who in no way interfered with the test or coached them during the test. The Southwest Asia veterans were brought...and experience group [F((, 14) = 8.04, p<.05]. The High Experience Group had a higher kill rate in MrPPO than in MDPP4, whereas this was reversed for
Influence of mono-axis random vibration on reading activity.
Bhiwapurkar, M K; Saran, V H; Harsha, S P; Goel, V K; Berg, Mats
2010-01-01
Recent studies on train passengers' activities found that many passengers were engaged in some form of work, e.g., reading and writing, while traveling by train. A majority of the passengers reported that their activities were disturbed by vibrations or motions while traveling. A laboratory study was therefore set up to study how low-frequency random vibrations influence the difficulty of reading. The study involved 18 healthy male subjects aged 23 to 32 yr. Random vibrations were applied in the frequency range 1-10 Hz at 0.5, 1.0 and 1.5 m/s(2) rms amplitude along three directions (longitudinal, lateral and vertical). The effect of vibration on reading activity was investigated using a word chain presented in two font types (Times New Roman and Arial) and three font sizes (10, 12 and 14 points) for each type. Subjects performed reading tasks under two sitting positions (with backrest support and leaning over a table). Judgments of perceived difficulty to read were rated using a 7-point discomfort scale. The results show that reading difficulty increases with increasing vibration magnitude and was greatest in the longitudinal direction while leaning over the table. Compared with Times New Roman, subjects perceived less difficulty with Arial for all font sizes under all vibration magnitudes.
Smartphone-Based System for Learning and Inferring Hearing Aid Settings.
Aldaz, Gabriel; Puria, Sunil; Leifer, Larry J
2016-10-01
Previous research has shown that hearing aid wearers can successfully self-train their instruments' gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the "untrained system," that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The "trained system" first learned each individual's preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings. Determine whether user study participants showed a preference for trained over untrained system settings. An experimental within-participants study. Participants used a prototype hearing system-comprising two hearing aids, Android smartphone, and body-worn gateway device-for ∼6 weeks. Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. Participants were fitted and instructed to perform daily comparisons of settings ("listening evaluations") through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone-including environmental sound classification, sound level, and location-to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system ("trained settings") to those suggested by the hearing aids' untrained system ("untrained settings"). We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. 
Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone. American Academy of Audiology
How well does multiple OCR error correction generalize?
NASA Astrophysics Data System (ADS)
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
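As a rough illustration of the setup described above (not the authors' implementation), the sketch below trains a linear-chain CRF over OCR hypothesis tokens using simple lexical and token-recurrence features. It assumes the sklearn-crfsuite package; the feature names, toy lexicon, and toy sequence are illustrative only.

```python
# Minimal sketch: CRF-based token correction with lexical/recurrence features.
import sklearn_crfsuite

def token_features(tokens, i, lexicon, counts):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "in_lexicon": tok.lower() in lexicon,              # lexical feature
        "recurs_in_doc": counts.get(tok.lower(), 0) > 1,   # recurrence of hypothesis tokens
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

def featurize(sentence, lexicon):
    counts = {}
    for t in sentence:
        counts[t.lower()] = counts.get(t.lower(), 0) + 1
    return [token_features(sentence, i, lexicon, counts) for i in range(len(sentence))]

lexicon = {"the", "training", "set"}
X = [featurize(["teh", "training", "set"], lexicon)]   # OCR hypothesis tokens
y = [["the", "training", "set"]]                       # corrected tokens as labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```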
The Size and Scope of Collegiate Athletic Training Facilities and Staffing.
Gallucci, Andrew R; Petersen, Jeffrey C
2017-08-01
Athletic training facilities have been described in terms of general design concepts and from operational perspectives. However, the size and scope of athletic training facilities, along with staffing at different levels of intercollegiate competition, have not been quantified. To define the size and scope of athletic training facilities and staffing levels at various levels of intercollegiate competition. To determine if differences existed in facilities (eg, number of facilities, size of facilities) and staffing (eg, full time, part time) based on the level of intercollegiate competition. Cross-sectional study. Web-based survey. Athletic trainers (ATs) who were knowledgeable about the size and scope of athletic training programs. Athletic training facility size in square footage; the AT's overall facility satisfaction; athletic training facility component spaces, including satellite facilities, game-day facilities, offices, and storage areas; and staffing levels, including full-time ATs, part-time ATs, and undergraduate students. The survey was completed by 478 ATs (response rate = 38.7%) from all levels of competition. Sample means for facilities were 3124.7 ± 4425 ft² (290.3 ± 411 m²) for the central athletic training facility, 1013 ± 1521 ft² (94 ± 141 m²) for satellite athletic training facilities, 1272 ± 1334 ft² (118 ± 124 m²) for game-day athletic training facilities, 388 ± 575 ft² (36 ± 53 m²) for athletic training offices, and 424 ± 884 ft² (39 ± 82 m²) for storage space. Sample staffing means were 3.8 ± 2.5 full-time ATs, 1.6 ± 2.5 part-time ATs, 25 ± 17.6 athletic training students, and 6.8 ± 7.2 work-study students. Division I schools had greater resources in multiple categories (P < .001). Differences among other levels of competition were not as well defined. Expansion or renovation of facilities in recent years was common, and almost half of ATs reported that upgrades have been approved for the near future. This study provides benchmark descriptive data on athletic training staffing and facilities. The results (1) suggest that the ATs were satisfied with their facilities and (2) highlight the differences in resources among competition levels.
Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E
2013-01-01
Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way, and solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
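The representativeness check mentioned above can be illustrated with a short sketch: project the full candidate pool and the selected training set into principal-component space and compare their coverage. This is not the U-optimal selection itself; the descriptor matrix and the selection below are synthetic stand-ins.

```python
# Sketch of a PCA-based representativeness check for a selected training set.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
candidates = rng.normal(size=(4534, 12))      # e.g. 12 molecular descriptors per solute (synthetic)
selected_idx = rng.choice(len(candidates), size=50, replace=False)  # stand-in for the U-optimal pick

pca = PCA(n_components=2).fit(candidates)
cand_pc = pca.transform(candidates)
sel_pc = cand_pc[selected_idx]

# Compare the spread of the training set with the full chemical space on each PC.
for pc in range(2):
    print(f"PC{pc + 1}: candidates span [{cand_pc[:, pc].min():.2f}, {cand_pc[:, pc].max():.2f}], "
          f"selected span [{sel_pc[:, pc].min():.2f}, {sel_pc[:, pc].max():.2f}]")
```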
Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach
NASA Astrophysics Data System (ADS)
Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.
2014-12-01
Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.
Dörrenbächer, Sandra; Müller, Philipp M.; Tröger, Johannes; Kray, Jutta
2014-01-01
Although motivational reinforcers are often used to enhance the attractiveness of trainings of cognitive control in children, little is known about how such motivational manipulations of the setting contribute to separate gains in motivation and cognitive-control performance. Here we provide a framework for systematically investigating the impact of a motivational video-game setting on the training motivation, the task performance, and the transfer success in a task-switching training in middle-aged children (8–11 years of age). We manipulated both the type of training (low-demanding/single-task training vs. high-demanding/task-switching training) as well as the motivational setting (low-motivational/without video-game elements vs. high-motivational/with video-game elements) separately from another. The results indicated that the addition of game elements to a training setting enhanced the intrinsic interest in task practice, independently of the cognitive demands placed by the training type. In the task-switching group, the high-motivational training setting led to an additional enhancement of task and switching performance during the training phase right from the outset. These motivation-induced benefits projected onto the switching performance in a switching situation different from the trained one (near-transfer measurement). However, in structurally dissimilar cognitive tasks (far-transfer measurement), the motivational gains only transferred to the response dynamics (speed of processing). Hence, the motivational setting clearly had a positive impact on the training motivation and on the paradigm-specific task-switching abilities; it did not, however, consistently generalize on broad cognitive processes. These findings shed new light on the conflation of motivation and cognition in childhood and may help to refine guidelines for designing adequate training interventions. PMID:25431564
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for assigning putative members to this family are also of limited use because of the low sequence similarity among its members. Consequently, machine learning methods become a viable alternative for their prediction, using sequence- and structure-derived features as input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias predictions towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability, without classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised k-means clustering algorithm to create diversified clusters of input patterns and built a diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a k-nearest neighbour algorithm (which produced greater specificity) to achieve better predictive performance than either base classifier alone. The performance of models trained on the k-means-preprocessed training set was far better than that of models trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set, and a sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results establish that diversifying the training set improves the performance of predictive models through superior generalization ability, and that balancing the training set improves prediction accuracy. For smaller data sets, unsupervised k-means-based sampling can be more effective at increasing generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
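The sampling idea described above can be sketched in a few lines: cluster each class with k-means, then draw an equal number of instances from every cluster so the resulting training set is both diversified and balanced. The feature matrix, class labels, and cluster/sample counts below are synthetic assumptions, not the paper's data.

```python
# Minimal sketch of k-means-based diversified and balanced training-set construction.
import numpy as np
from sklearn.cluster import KMeans

def diversified_balanced_sample(X, y, n_clusters=5, per_cluster=20, seed=0):
    rng = np.random.default_rng(seed)
    train_idx = []
    for cls in np.unique(y):
        cls_idx = np.where(y == cls)[0]
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X[cls_idx])
        for c in range(n_clusters):
            members = cls_idx[labels == c]
            take = min(per_cluster, len(members))           # equal draw from each cluster
            train_idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(train_idx)

X = np.random.default_rng(1).normal(size=(600, 10))
y = np.array([0] * 450 + [1] * 150)                         # imbalanced two-class problem
idx = diversified_balanced_sample(X, y)
print(len(idx), np.bincount(y[idx]))                        # roughly equal counts per class
```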
Novel maximum-margin training algorithms for supervised neural networks.
Ludwig, Oswaldo; Nunes, Urbano
2010-06-01
This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while avoiding the complexity involved in solving the constrained optimization problem usually required in SVM training. In fact, all the training methods proposed in this paper have time and space complexities of O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output in which the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
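The general idea of backpropagating a margin-based objective through both MLP layers can be illustrated with a simplified sketch. This is not MMGDX itself; it uses a plain hinge-style margin loss in PyTorch on synthetic data, purely to show how the margin objective drives joint training of the hidden and output layers.

```python
# Simplified illustration of margin-driven MLP training (not the paper's MMGDX).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 8)
y = torch.sign(X[:, 0] + 0.5 * X[:, 1]).unsqueeze(1)     # labels in {-1, +1}

mlp = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for epoch in range(200):
    out = mlp(X)
    # hinge-style margin loss: push y * f(x) beyond a unit margin
    loss = torch.clamp(1.0 - y * out, min=0.0).mean()
    opt.zero_grad()
    loss.backward()          # gradient flows through output and hidden layers jointly
    opt.step()

acc = (mlp(X).sign() == y).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```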
Nelissen, Ellen; Ersdal, Hege; Mduma, Estomih; Evjen-Olsen, Bjørg; Twisk, Jos; Broerse, Jacqueline; van Roosmalen, Jos; Stekelenburg, Jelle
2017-09-11
Postpartum haemorrhage (PPH) is a major cause of maternal mortality. Prevention and adequate treatment are therefore important. However, most births in low-resource settings are not attended by skilled providers, and the knowledge and skills of the healthcare workers who are available are low. Simulation-based training effectively improves knowledge and simulated skills, but the effectiveness of training on clinical behaviour and patient outcome is not yet fully understood. The aim of this study was to assess the effect of obstetric simulation-based training on the incidence of PPH and on clinical performance of basic delivery skills and management of PPH. A prospective educational intervention study was performed in a rural referral hospital in Tanzania. Sixteen research assistants observed all births with a gestational age of more than 28 weeks from May 2011 to June 2013. In March 2012, a half-day obstetric simulation-based training in management of PPH was introduced. Observations before and after training were compared. The main outcome measures were incidence of PPH (500-1000 ml and >1000 ml), use and timing of administration of uterotonic drugs, removal of placenta by controlled cord traction, uterine massage, examination of the placenta, management of PPH (>500 ml), and maternal and neonatal mortality at 24 h. A total of 3622 births before and 5824 births after the intervention were included. The incidence of PPH (500-1000 ml) was significantly reduced from 2.1% to 1.3% after training (effect size Cohen's d = 0.07). The proportions of women who received oxytocin (87.8%), removal of placenta by controlled cord traction (96.5%), and uterine massage after birth (93.0%) significantly increased after training (to 91.7%, 98.8%, and 99.0%, respectively). The proportion of women who received oxytocin as part of management of PPH increased significantly (before training 43.0%, after training 61.2%). Other skills in management of PPH improved (uterine massage, examination of the birth canal, bimanual uterine compression), but these changes were not statistically significant. The introduction of obstetric simulation-based training was associated with a 38% reduction in the incidence of PPH and improved clinical performance of basic delivery skills and management of PPH.
The UXO Classification Demonstration at San Luis Obispo, CA
2010-09-01
2.17.2 Active Learning Training and Test Set … optimized algorithm by applying it to only the unlabeled data in the test set. SIG also used active learning [12]. Active learning, an alternative approach for constructing a training set, is used in conjunction with either supervised or semi…
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
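One of the rational division methods named above, Kennard-Stone, is easy to sketch: seed the training set with the two most distant samples, then repeatedly add the remaining sample whose minimum distance to the already-selected set is largest. The data below are synthetic; descriptor scaling and distance metric choices are left out for brevity.

```python
# Sketch of the Kennard-Stone rational training/test split.
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_train):
    d = cdist(X, X)
    selected = list(np.unravel_index(np.argmax(d), d.shape))     # two most distant points
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_train:
        # for each remaining sample, distance to its closest already-selected sample
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        pick = remaining[int(np.argmax(min_d))]
        selected.append(pick)
        remaining.remove(pick)
    return selected, remaining                                   # training indices, test indices

X = np.random.default_rng(0).normal(size=(100, 5))
train_idx, test_idx = kennard_stone(X, n_train=80)
print(len(train_idx), len(test_idx))
```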
Barclift, Songhai C; Brown, Elizabeth J; Finnegan, Sean C; Cohen, Elena R; Klink, Kathleen
2016-05-01
Background The Teaching Health Center Graduate Medical Education (THCGME) program is an Affordable Care Act funding initiative designed to expand primary care residency training in community-based ambulatory settings. Statute suggests, but does not require, training in underserved settings. Residents who train in underserved settings are more likely to go on to practice in similar settings, and graduates more often than not practice near where they have trained. Objective The objective of this study was to describe and quantify federally designated clinical continuity training sites of the THCGME program. Methods Geographic locations of the training sites were collected and characterized as Health Professional Shortage Area, Medically Underserved Area, Population, or rural areas, and were compared with the distribution of Centers for Medicare and Medicaid Services (CMS)-funded training positions. Results More than half of the teaching health centers (57%) are located in states that are in the 4 quintiles with the lowest CMS-funded resident-to-population ratio. Of the 109 training sites identified, more than 70% are located in federally designated high-need areas. Conclusions The THCGME program is a model that funds residency training in community-based ambulatory settings. Statute suggests, but does not explicitly require, that training take place in underserved settings. Because the majority of the 109 clinical training sites of the 60 funded programs in 2014-2015 are located in federally designated underserved locations, the THCGME program deserves further study as a model to improve primary care distribution into high-need communities.
Shoepe, Todd C; Ramirez, David A; Almstedt, Hawley C
2010-01-01
Elastic bands added to traditional free-weight techniques have become a part of suggested training routines in recent years. Because of the variable loading patterns of elastic bands (i.e., greater stretch produces greater resistance), it is necessary to quantify the exact loading patterns of bands to identify the volume and intensity of training. The purpose of this study was to determine the length vs. tension properties of multiple sizes of a set of commonly used elastic bands to quantify the resistance that would be applied to free-weight plus elastic bench presses (BP) and squats (SQ). Five elastic bands of varying thickness were affixed to an overhead support beam. Dumbbells of varying weights were progressively added to the free end while the linear deformation was recorded with each subsequent weight increment. The resistance was plotted as a factor of linear deformation, and best-fit nonlinear logarithmic regression equations were then matched to the data. For both the BP and SQ loading conditions and all band thicknesses tested, R values were greater than 0.9623. These data suggest that differences in load exist as a result of the thickness of the elastic band, attachment technique, and type of exercise being performed. Facilities should adopt their own form of loading quantification to match their unique set of circumstances when acquiring, researching, and implementing elastic band and free-weight exercises into the training programs.
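The fitting step described above can be illustrated with a short sketch: a logarithmic model of band tension versus linear deformation fitted with scipy. The data points and the exact model form are made-up stand-ins, not the study's measurements or published equations.

```python
# Hedged sketch of fitting a logarithmic length-tension curve for an elastic band.
import numpy as np
from scipy.optimize import curve_fit

def log_model(deformation_cm, a, b):
    # simple logarithmic form; the paper's exact regression equations are not reproduced here
    return a * np.log(deformation_cm) + b

deformation = np.array([10, 20, 30, 45, 60, 80], dtype=float)   # cm of stretch (illustrative)
tension = np.array([4.5, 8.0, 10.5, 13.0, 15.0, 17.2])          # kg of resistance (illustrative)

params, _ = curve_fit(log_model, deformation, tension)
pred = log_model(deformation, *params)
r = np.corrcoef(tension, pred)[0, 1]
print(f"a={params[0]:.2f}, b={params[1]:.2f}, R={r:.4f}")
```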
Gene function prediction based on the Gene Ontology hierarchical structure.
Cheng, Liangxi; Lin, Hongfei; Hu, Yuncui; Wang, Jian; Yang, Zhihao
2014-01-01
The information in the Gene Ontology annotation is helpful in the explanation of life science phenomena, and can provide great support for research in the biomedical field. The use of the Gene Ontology is gradually affecting the way people store and understand bioinformatic data. To facilitate the prediction of gene functions with the aid of text mining methods and existing resources, we transform the task into a multi-label top-down classification problem and develop a method that uses the hierarchical relationships in the Gene Ontology structure to relieve the quantitative imbalance of positive and negative training samples. Meanwhile, the method enhances the discriminating ability of classifiers by retaining and highlighting the key training samples. Additionally, the top-down classifier based on a tree structure takes the relationships among target classes into consideration and thus resolves the incompatibility between the classification results and the Gene Ontology structure. Our experiment on the Gene Ontology annotation corpus achieves an F-value of 50.7% (precision: 52.7%, recall: 48.9%). The experimental results demonstrate that when the size of the training set is small, it can be expanded via topological propagation of associated documents between the parent and child nodes in the tree structure. The top-down classification model applies to sets of texts in an ontology structure or with a hierarchical relationship.
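The top-down idea can be sketched with one binary classifier per node in an ontology-like tree, where a document is only passed to a node's children when the node itself predicts positive, keeping predictions consistent with the hierarchy. The tiny tree, features, and labels below are illustrative assumptions, not the authors' corpus or model.

```python
# Minimal sketch of top-down hierarchical classification over a small term tree.
import numpy as np
from sklearn.linear_model import LogisticRegression

children = {"root": ["GO:A", "GO:B"], "GO:A": ["GO:A1"], "GO:B": [], "GO:A1": []}
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
labels = {  # per-node binary training labels (synthetic, hierarchy-consistent)
    "GO:A": (X[:, 0] > 0).astype(int),
    "GO:A1": ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int),
    "GO:B": (X[:, 0] <= 0).astype(int),
}
clf = {node: LogisticRegression().fit(X, y) for node, y in labels.items()}

def predict_top_down(x):
    predicted, frontier = [], ["root"]
    while frontier:
        node = frontier.pop()
        for child in children[node]:
            if clf[child].predict(x.reshape(1, -1))[0] == 1:
                predicted.append(child)
                frontier.append(child)      # descend only through positive parents
    return predicted

print(predict_top_down(X[0]))
```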
Van Rie, A; Fitzgerald, D; Kabuya, G; Van Deun, A; Tabala, M; Jarret, N; Behets, F; Bahati, E
2008-03-01
Sputum smear microscopy is the main and often only laboratory technique used for the diagnosis of tuberculosis in resource-poor countries, making quality assurance (QA) of smear microscopy an important activity. We evaluated the effects of a 5-day refresher training course for laboratory technicians and the distribution of new microscopes on the quality of smear microscopy in 13 primary health care laboratories in Kinshasa, Democratic Republic of Congo. The 2002 external QA guidelines for acid-fast bacillus smear microscopy were implemented, and blinded rechecking of the slides was performed before and 9 months after the training course and microscope distribution. We observed that the on-site checklist was highly time-consuming but could be tailored to capture frequent problems. Random blinded rechecking by the lot QA system method decreased the number of slides to be reviewed. Most laboratories needed further investigation for possible unacceptable performance, even according to the least-stringent interpretation. We conclude that the 2002 external QA guidelines are feasible for implementation in resource-poor settings, that the efficiency of external QA can be increased by selecting sample size parameters and interpretation criteria that take into account the local working conditions, and that greater attention should be paid to the provision of timely feedback and correction of the causes of substandard performance at poorly performing laboratories.
Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images
NASA Astrophysics Data System (ADS)
Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan
2012-02-01
Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.
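The regression-forest idea behind the approach above can be illustrated in one dimension: each sample votes for the position of a target structure via a regressed offset, and the votes are aggregated. This sketch is not the authors' RRF implementation; the 1-D "scan" features and landmark are synthetic, whereas the paper regresses 3-D bounding-box offsets from CT context features.

```python
# Hedged sketch of regression-forest localization by offset voting.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_voxels, true_landmark = 500, 320.0
positions = rng.uniform(0, 1000, size=n_voxels)
features = np.stack([positions / 1000.0,
                     np.exp(-np.abs(positions - true_landmark) / 100.0)], axis=1)
offsets = true_landmark - positions              # regression target: offset to the landmark

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(features, offsets)

# At test time every voxel casts a vote: its own position plus the predicted offset.
votes = positions + forest.predict(features)
print(f"estimated landmark position: {votes.mean():.1f} (true {true_landmark})")
```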
Recognition Using Hybrid Classifiers.
Osadchy, Margarita; Keren, Daniel; Raviv, Dolev
2016-04-01
A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
Competitive STDP Learning of Overlapping Spatial Patterns.
Krunglevicius, Dalius
2015-08-01
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition with a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern-size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
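For readers unfamiliar with STDP, the basic pair-based update is easy to sketch: the weight change depends exponentially on the time difference between pre- and postsynaptic spikes, potentiating when the presynaptic spike precedes the postsynaptic one and depressing otherwise. The amplitudes and time constants below are assumed illustrative values, and this is not the letter's full competitive network.

```python
# Illustrative pair-based STDP weight update.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # learning amplitudes (assumed values)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed values)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt >= 0:                                     # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)        # post before pre -> depression

w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (60, 70)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
    print(f"pre={t_pre} ms, post={t_post} ms, w={w:.4f}")
```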
Extracting physicochemical features to predict protein secondary structure.
Huang, Yin-Fu; Chen, Shu-Ying
2013-01-01
We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features: conformation parameters, net charges, hydrophobicity, and side chain mass. First, the SVM with the optimal window size and the optimal parameters of the kernel function is found. Then, we train the SVM using the PSSM profiles generated from PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, we use a filter to refine the predicted results from the trained SVM. For all the performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all the measures are higher than those of the SVMpsi method and the SVMfreq method. This validates that considering these physicochemical features in predicting protein secondary structure exhibits better performance.
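The windowed-feature idea can be shown with a simplified sketch: for each residue, concatenate the PSSM rows in a window centered on it and train an SVM on the resulting vectors. The PSSM, labels, and window size below are synthetic stand-ins, and the physicochemical features and the post-hoc filter are omitted.

```python
# Simplified sketch of sliding-window PSSM features feeding an SVM classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
seq_len, n_aa, window = 120, 20, 13
pssm = rng.normal(size=(seq_len, n_aa))                # one 20-dim PSSM row per residue (synthetic)
labels = rng.integers(0, 3, size=seq_len)              # H / E / C secondary-structure classes (synthetic)

half = window // 2
padded = np.vstack([np.zeros((half, n_aa)), pssm, np.zeros((half, n_aa))])
X = np.array([padded[i:i + window].ravel() for i in range(seq_len)])

svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print("training Q3-style accuracy:", (svm.predict(X) == labels).mean())
```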
A topical haemoglobin spray for oxygenating pressure ulcers: a pilot study.
Tickle, Joy
2015-03-01
The effect of pressure ulcers on patient quality of life has been recognised as a real problem for many years, and the need for robust and effective management of pressure ulcers is now a prominent national health-care issue. Myriad different interventions exist for the treatment of pressure ulcers, including clinically effective dressings and pressure-relieving devices, yet many pressure ulcers still do not heal and often become chronic wounds. This is the second in a series of articles (Norris, 2014) discussing the clinical evaluation of a topical oxygen therapy in practice. It describes a small evaluation involving 18 patients with pressure ulcers. The study set out to determine the effect of a topical oxygen therapy on wound size. The therapy comprises a canister that sprays pure haemoglobin in a water solution into or onto the wound. The haemoglobin spray needs to be used at least once every 3 days, does not require training in its use and can be used in any care setting. Overall, results identified wound healing progression in all 18 wounds and wound size reduction in 17 of the 18 wounds.
Ng, Kenney; Steinhubl, Steven R; deFilippi, Christopher; Dey, Sanjoy; Stewart, Walter F
2016-11-01
Using electronic health records data to predict events and onset of diseases is increasingly common. Relatively little is known, however, about the tradeoffs between data requirements and model utility. We examined the performance of machine learning models trained to detect prediagnostic heart failure in primary care patients using longitudinal electronic health records data. Model performance was assessed in relation to data requirements defined by the prediction window length (time before clinical diagnosis), the observation window length (duration of observation before the prediction window), the number of different data domains (data diversity), the number of patient records in the training data set (data quantity), and the density of patient encounters (data density). A total of 1684 incident heart failure cases and 13,525 sex-, age-category-, and clinic-matched controls were used for modeling. Model performance improved as (1) the prediction window length decreases, especially when <2 years; (2) the observation window length increases but then levels off after 2 years; (3) the training data set size increases but then levels off after 4000 patients; (4) more diverse data types are used, but, in order, the combination of diagnosis, medication order, and hospitalization data was most important; and (5) data were confined to patients who had ≥10 phone or face-to-face encounters in 2 years. These empirical findings suggest possible guidelines for the minimum amount and type of data needed to train effective disease onset predictive models using longitudinal electronic health records data. © 2016 American Heart Association, Inc.
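The windowing scheme described above can be illustrated with a toy sketch: features are drawn only from an observation window that ends where the prediction window begins, and the label records whether the diagnosis falls inside the prediction window. The column names, dates, and codes below are assumptions, not the study's actual schema.

```python
# Sketch of observation-window features and prediction-window labels from encounter data.
import pandas as pd

encounters = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2012-01-10", "2013-06-01", "2014-02-01",
                            "2012-03-15", "2013-09-20"]),
    "code": ["hypertension", "diuretic_rx", "heart_failure", "diabetes", "statin_rx"],
})

index_date = pd.Timestamp("2014-06-01")
prediction_window = pd.DateOffset(years=1)   # look for the diagnosis in the year before the index date
observation_window = pd.DateOffset(years=2)  # use the 2 years before the prediction window as features

pred_start = index_date - prediction_window
obs_start = pred_start - observation_window

for pid, grp in encounters.groupby("patient_id"):
    label = ((grp["date"] >= pred_start) & (grp["code"] == "heart_failure")).any()
    feats = grp[(grp["date"] >= obs_start) & (grp["date"] < pred_start)]["code"].tolist()
    print(pid, "label:", bool(label), "observation-window codes:", feats)
```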
Deep learning for galaxy surface brightness profile fitting
NASA Astrophysics Data System (ADS)
Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.; Domínguez Sánchez, H.; Dimauro, P.
2018-03-01
Numerous ongoing and future large area surveys (e.g. Dark Energy Survey, EUCLID, Large Synoptic Survey Telescope, Wide Field Infrared Survey Telescope) will increase by several orders of magnitude the volume of data that can be exploited for galaxy morphology studies. The full potential of these surveys can be unlocked only with the development of automated, fast, and reliable analysis methods. In this paper, we present DeepLeGATo, a new method for 2-D photometric galaxy profile modelling, based on convolutional neural networks. Our code is trained and validated on analytic profiles (HST/CANDELS F160W filter) and it is able to retrieve the full set of parameters of one-component Sérsic models: total magnitude, effective radius, Sérsic index, and axis ratio. We show detailed comparisons between our code and GALFIT. On simulated data, our method is more accurate than GALFIT and ∼3000 times faster on GPU (∼50 times when running on the same CPU). On real data, DeepLeGATo trained on simulations behaves similarly to GALFIT on isolated galaxies. With a fast domain adaptation step made with 0.1-0.8 per cent of the size of the training set, our code can easily reproduce the results obtained with GALFIT even on crowded regions. DeepLeGATo does not require any human intervention beyond the training step, rendering it much more automated than traditional profiling methods. The development of this method for more complex models (two-component galaxies, variable point spread function, dense sky regions) could constitute a fundamental tool in the era of big data in astronomy.
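The general setup, a convolutional network regressing the four profile parameters from an image stamp, can be sketched as below. The architecture, stamp size, and synthetic data are assumptions for illustration, not DeepLeGATo's actual network or training set.

```python
# Illustrative CNN regression of Sérsic-like parameters from galaxy image stamps.
import numpy as np
import tensorflow as tf

X = np.random.rand(64, 64, 64, 1).astype("float32")   # synthetic 64x64 stamps
y = np.random.rand(64, 4).astype("float32")           # 4 normalized parameters per stamp

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),                          # regression head: one output per parameter
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1]))
```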
Maroto-Izquierdo, Sergio; García-López, David; de Paz, José A
2017-12-01
The aim of the study was to analyse the effects of 6 weeks (15 sessions) of flywheel resistance training with eccentric overload (FRTEO) on different functional and anatomical variables in professional handball players. Twenty-nine athletes were recruited and randomly divided into two groups. The experimental group (EXP, n = 15) carried out 15 sessions of FRTEO in the leg-press exercise, with 4 sets of 7 repetitions at maximum concentric effort. The control group (CON, n = 14) performed the same number of training sessions including 4 sets of 7 maximum repetitions (7RM) using a weight-stack leg-press machine. The measured outcomes included maximal dynamic strength (1RM), muscle power at different submaximal loads (PO), vertical jump height (CMJ and SJ), 20 m sprint time (20 m), T-test time (T-test), and vastus lateralis muscle (VL) thickness. The EXP group showed substantially greater improvement (p < 0.05-0.001) in PO, CMJ, 20 m, T-test and VL compared to the CON group. Moreover, athletes from the EXP group showed significant improvements in all the variables measured: 1RM (ES = 0.72), PO (ES = 0.42-0.83), CMJ (ES = 0.61), SJ (ES = 0.54), 20 m (ES = 1.45), T-test (ES = 1.44), and VL (ES = 0.63-1.64). Since handball requires repeated short, explosive efforts such as accelerations and decelerations during sprints with changes of direction, these results suggest that FRTEO induces functional and anatomical changes that improve performance in well-trained professional handball players.
Park, Bo Youn; Kim, Sujin; Cho, Yang Seok
2018-02-01
The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.
Guay, Stéphane; Goncalves, Jane; Boyer, Richard
2016-08-01
Workplace violence can lead to serious consequences for victims, organizations, and society. Most workplace violence prevention programs aim to train staff to better recognize and safely manage at-risk situations. The Omega education and training program was developed in Canada in 1999, and has since been used to teach healthcare and mental health workers the skills needed to effectively intervene in situations of aggression. The present study was designed to assess the impact of Omega on employee psychological distress, confidence in coping, and perceived exposure to violence. This program was offered to 105 employees in a psychiatric hospital in Montreal, Canada. Eighty-nine of them accepted to participate. Questionnaires were completed before the training, after a short period of time (M = 109 days) and at follow-up (M = 441 days). Repeated-measures ANOVAs and Cohen's d effect sizes were calculated. Results demonstrated statistically significant improvements in short-term and follow-up posttest scores of psychological distress, confidence in coping, and in levels of exposure to violence. This study is one of very few to demonstrate the positive impact of this training program. Further research is needed to understand how to improve the effectiveness of the program, especially among participants resistant to change.
Liberman, Keliane; Forti, Louis N; Beyer, Ingo; Bautmans, Ivan
2017-01-01
This systematic review reports the most recent literature regarding the effects of physical exercise on muscle strength, body composition, physical functioning and inflammation in older adults. All articles were assessed for methodological quality and, where possible, effect size was calculated. Thirty-four articles were included: four involving frail older adults, 24 involving healthy older adults, and five involving older adults with a specific disease; one reported on both frail and nonfrail participants. Several types of exercise were used: resistance training, aerobic training, combined resistance and aerobic training, and others. In frail older persons, moderate-to-large beneficial exercise effects were noted on inflammation, muscle strength and physical functioning. In healthy older persons, the effects of resistance training (the most frequently investigated) on inflammation or muscle strength can be influenced by the exercise modalities (intensity and rest interval between sets). Muscle strength was the most frequently used outcome measure, with moderate-to-large effects obtained regardless of the exercise intervention studied. Similar effects were found in patients with specific diseases. Exercise has moderate-to-large effects on muscle strength, body composition, physical functioning and inflammation in older adults. Future studies should focus on the influence of specific exercise modalities and target the frail population more.
Writing and reading training effects on font type and size preferences by students with low vision.
Atasavun Uysal, Songül; Düger, Tülin
2012-06-01
The effect of writing and reading training on preferred font type and size in low-vision students was evaluated in 35 children. An ophthalmologist confirmed low vision according to ICD-10-CM. Children identified the font type and size they could best read. The writing subtest of the Jebsen-Taylor Hand Function Test, the amount read in 1 min., and legibility as measured by the number of readable written letters were used in evaluating the children. A writing and reading treatment program was conducted, beginning with the child's preferred font type and size, for 3 months, 2 days per week, for 45 min. per day at the child's school. Before treatment, the most preferred font type was Verdana; after treatment, the preferred font type and size changed. Students gained reading and writing speed after training, but their writing legibility was not significantly better. Training might affect the preferred font type and size of students with low vision. Surprisingly, serif and sans-serif fonts were preferred about equally after treatment.
Effect of Sling Exercise Training on Balance in Patients with Stroke: A Meta-Analysis
Peng, Qiyuan; Chen, Jingjie; Zou, Yucong; Liu, Gang
2016-01-01
Objective This study aims to evaluate the effect of sling exercise training (SET) on balance in patients with stroke. Methods PubMed, Cochrane Library, Ovid LWW, CBM, CNKI, WanFang, and VIP databases were searched for randomized controlled trials of the effect of SET on balance in patients with stroke. The study design and participants were subjected to metrological analysis. The Berg Balance Scale (BBS), Barthel index (BI), and Fugl-Meyer Assessment (FMA) were used as independent parameters for evaluating balance function, activities of daily living (ADL), and motor function after stroke, respectively, and were subjected to meta-analysis with RevMan 5.3 software. Results Nine studies with 460 participants were analyzed. The meta-analysis showed that SET combined with conventional rehabilitation was superior to conventional rehabilitation alone, with increased BBS (WMD = 3.81, 95% CI [0.15, 7.48], P = 0.04), BI (WMD = 12.98, 95% CI [8.39, 17.56], P < 0.00001), and FMA (SMD = 0.76, 95% CI [0.41, 1.11], P < 0.0001) scores. Conclusion Based on limited evidence from 9 trials, SET combined with conventional rehabilitation was superior to conventional rehabilitation alone, with increased BBS, BI and FMA scores, so SET can improve balance function after stroke; however, these findings should be interpreted with caution owing to limitations of the included trials, such as small sample sizes and risk of bias. Therefore, more multi-center, large-sample randomized controlled trials are needed to confirm its clinical applications. PMID:27727288
Aerodynamic drag on intermodal railcars
NASA Astrophysics Data System (ADS)
Kinghorn, Philip; Maynes, Daniel
2014-11-01
The aerodynamic drag associated with transport of commodities by rail is becoming increasingly important as the cost of diesel fuel increases. This study aims to increase the efficiency of intermodal cargo trains by reducing the aerodynamic drag on the load-carrying cars. For intermodal railcars a significant amount of aerodynamic drag results from the large gaps that often occur between loads and the pressure drag caused by the resulting flow separation. In the present study aerodynamic drag data have been obtained through wind tunnel testing on 1/29 scale models to understand the savings that may be realized by judicious modification to the size of the intermodal containers. The experiments were performed in the BYU low speed wind tunnel and the test track utilizes two leading locomotives followed by a set of five articulated well cars with double-stacked containers. The drag on a representative mid-train car is measured using an isolated load cell balance and the wind tunnel speed is varied from 20 to 100 mph. We characterize the effect that the gap distance between the containers and the container size have on the aerodynamic drag of this representative rail car and investigate methods to reduce the gap distance.
Exercise-training intervention studies in competitive swimming.
Aspenes, Stian Thoresen; Karlsen, Trine
2012-06-01
Competitive swimming has a long history and is currently one of the largest Olympic sports, with 16 pool events. Several aspects separate swimming from most other sports such as (i) the prone position; (ii) simultaneous use of arms and legs for propulsion; (iii) water immersion (i.e. hydrostatic pressure on thorax and controlled respiration); (iv) propulsive forces that are applied against a fluctuant element; and (v) minimal influence of equipment on performance. Competitive swimmers are suggested to have specific anthropometrical features compared with other athletes, but are nevertheless dependent on physiological adaptations to enhance their performance. Swimmers thus engage in large volumes of training in the pool and on dry land. Strength training of various forms is widely used, and the energetic systems are addressed by aerobic and anaerobic swimming training. The aim of the current review was to report results from controlled exercise training trials within competitive swimming. From a structured literature search we found 17 controlled intervention studies that covered strength or resistance training, assisted sprint swimming, arms-only training, leg-kick training, respiratory muscle training, training the energy delivery systems and combined interventions across the aforementioned categories. Nine of the included studies were randomized controlled trials. Among the included studies we found indications that heavy strength training on dry land (one to five repetitions maximum with pull-downs for three sets with maximal effort in the concentric phase) or sprint swimming with resistance towards propulsion (maximal pushing with the arms against fixed points or pulling a perforated bowl) may be efficient for enhanced performance, and may also have positive effects on stroke mechanics. The largest effect size (ES) on swimming performance was found in 50 m freestyle after a dry-land strength training regimen of maximum six repetitions across three sets in relevant muscle groups (ES 1.05), and after a regimen of resisted- and assisted-sprint training with elastic surgical tubes (ES 1.21). Secondly, several studies suggest that high training volumes do not pose any immediate advantage over lower volumes (with higher intensity) for swim performance. Overall, very few studies were eligible for the current review although the search strategy was broad and fairly liberal. The included studies predominantly involved freestyle swimming and, overall, there seem to be more questions than answers within intervention-based competitive swimming research. We believe that this review may encourage other researchers to pursue the interesting topics within the physiology of competitive swimming.
Carotid-cardiac baroreflex response and LBNP tolerance following resistance training
NASA Technical Reports Server (NTRS)
Tatro, D. L.; Dudley, G. A.; Convertino, V. A.
1992-01-01
The purpose of this study was to examine the effect of lower body resistance training on cardiovascular control mechanisms and blood pressure maintenance during an orthostatic challenge. Lower body negative pressure (LBNP) tolerance, carotid-cardiac baroreflex function (using neck chamber pressure), and calf compliance were measured in eight healthy males before and after 19 wk of knee extension and leg press training. Resistance training sessions consisted of four or five sets of 6-12 repetitions of each exercise, performed two times per week. Training increased strength 25 +/- 3 (SE) percent (P = 0.0003) and 31 +/- 6 percent (P = 0.0004), respectively, for the leg press and knee extension exercises. Average fiber size in biopsy samples of m. vastus lateralis increased 21 +/- 5 percent (P = 0.0014). Resistance training had no significant effect on LBNP tolerance. However, calf compliance decreased in five of the seven subjects measured, with the group average changing from 4.4 +/- 0.6 ml.mm Hg-1 to 3.9 +/- 0.3 ml.mm Hg-1 (P = 0.3826). The stimulus-response relationship of the carotid-cardiac baroreflex response shifted to the left on the carotid pressure axis as indicated by a reduction of 6 mm Hg in baseline systolic blood pressure (P = 0.0471). In addition, maximum slope increased from 5.4 +/- 1.3 ms.mm Hg-1 before training to 6.6 +/- 1.6 ms.mm Hg-1 after training (P = 0.0141). Our results suggest the possibility that high resistance, lower extremity exercise training can cause a chronic increase in sensitivity and resetting of the carotid-cardiac baroreflex.
Potteiger, Kelly; Pitney, William A; Cappaert, Thomas A; Wolfe, Angela
2017-12-01
Environmental sustainability is a critical concern in health care. Similar to other professions, the practice of athletic training necessitates the use of a large quantity of natural and manufactured resources. To examine the perceptions of the waste produced by the practice of athletic training and the green practices currently used by athletic trainers (ATs) to combat this waste. Mixed-methods study. Field setting. A total of 442 ATs completed the study. Sixteen individuals participated in the qualitative portion. Data from sections 2 and 3 of the Athletic Training Environmental Impact Survey were analyzed. Focus groups and individual interviews were used to determine participants' views of waste and the efforts used to combat waste. Descriptive statistics were used to examine types of waste. Independent t tests, χ2 tests, and 1-way analyses of variance were calculated to identify any differences between the knowledge and use of green techniques. Interviews and focus groups were transcribed verbatim and analyzed inductively. Participants reported moderate knowledge of green techniques (3.18 ± 0.53 on a 5-point Likert scale). Fifty-eight percent (n = 260) of survey participants perceived that a substantial amount of waste was produced by the practice of athletic training. Ninety-two percent (n = 408) admitted they thought about the waste produced in their daily practice. The types of waste reported most frequently were plastics (n = 111, 29%), water (n = 88, 23%), and paper for administrative use (n = 81, 21%). Fifty-two percent (n = 234) agreed this waste directly affected the environment. The qualitative aspect of the study reinforced recognition of the large amount of waste produced by the practice of athletic training. Types of conservation practices used by ATs were also explored. Participants reported concern regarding the waste produced by athletic training. The amount of waste varies depending on practice size and setting. Future researchers should use direct measures to determine the amount of waste created by the practice of athletic training.
Researching the Size and Scope of Online Usage in the Vocational Education and Training Sector.
ERIC Educational Resources Information Center
Hill, Robyn; Malone, Peter; Markham, Selby; Sharma, Renu; Sheard, Judithe; Young, Graeme
The size and scope of online usage in Australia's vocational education and training sector were examined in a four-stage study that included numerous data collection activities, including the following: a literature review; interviews with 85 institutes; interviews with 10 training organizations and 20 organizations using online learning;…
Suleiman, Abdulqadir M; Svendsen, Kristin V H
2015-12-01
Goal-oriented communication of the risk of hazards is necessary in order to reduce the risk of workers' exposure to chemicals. Adequate training of workers and enterprise priority setting are essential elements. Cleaning enterprises have many challenges, and the existing paradigms influence the risk levels of these enterprises. Information on organization and enterprises' prioritization in training programs was gathered from cleaning enterprises. A measure of enterprises' conceptual level of importance of chemical health hazards and a model for working out a risk index (RI) indicating enterprises' conceptual risk level were established and used to categorize the enterprises. In 72.3% of cases, training takes place concurrently with task performance, and in 67.4% experienced workers conduct the training. There is a disparity between employers' opinions of workers' competence levels and reality. A lower conceptual level of importance was observed for cleaning enterprises of different sizes compared with regional safety delegates and occupational hygienists. Risk index values show no difference in risk level between small and large enterprises. Training of cleaning workers lacks the prerequisites for suitability and effectiveness to counter risks of chemical health hazards. There is a dereliction of duty by management in the sector, resulting in a lack of competence among the cleaning workers. Instituting an acceptable, easily attainable safety competence level for cleaners would contribute to risk reduction, and enforcement of attainment of that competence level would be a positive step.
Abaïdia, Abd-Elbasset; Delecroix, Barthélémy; Leduc, Cédric; Lamblin, Julien; McCall, Alan; Baquet, Georges; Dupont, Grégory
2017-01-01
Abaïdia, A-E, Delecroix, B, Leduc, C, Lamblin, J, McCall, A, Baquet, G, and Dupont, G. Effects of a strength training session after an exercise inducing muscle damage on recovery kinetics. J Strength Cond Res 31(1): 115-125, 2017-The purpose of this study was to investigate the effects of an upper-limb strength training session the day after an exercise inducing muscle damage on recovery of performance. In a randomized crossover design, subjects performed, on 2 separate occasions (passive vs. active recovery conditions), a single-leg exercise (dominant in one condition and nondominant in the other condition) consisting of 5 sets of 15 eccentric contractions of the knee flexors; the recovery condition was applied the day after the exercise. Active recovery consisted of performing an upper-body strength training session the day after the exercise. Creatine kinase, hamstring strength, and muscle soreness were assessed immediately and 20, 24, and 48 hours after exercise-induced muscle damage. The upper-body strength session after the muscle-damaging exercise accelerated the recovery of slow concentric force (effect size = 0.65; 90% confidence interval = -0.06 to 1.32), but did not affect the recovery kinetics for the other outcomes. The addition of an upper-body strength training session the day after muscle-damaging activity does not negatively affect the recovery kinetics. Upper-body strength training may be programmed the day after a competition.
STACCATO: a novel solution to supernova photometric classification with biased training sets
NASA Astrophysics Data System (ADS)
Revsbech, E. A.; Trotta, R.; van Dyk, D. A.
2018-01-01
We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SNe's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. - a diffusion map combined with a random forest classifier - to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
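The augmentation idea can be illustrated in a few lines: fit a GP to a light curve and draw synthetic curves from its posterior, generating more copies for propensity-score subgroups that are under-represented in the training set. The sketch below uses scikit-learn rather than the authors' code; the kernel choice, time grid, and light-curve values are placeholder assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def augment_light_curve(t, flux, n_synthetic=5, seed=0):
    """Fit a GP to one light curve and draw synthetic curves from its posterior.

    t, flux : observation times (days) and fluxes of a single training-set SN
    Returns an array of shape (n_grid, n_synthetic) of synthetic light curves.
    """
    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t.reshape(-1, 1), flux)
    t_grid = np.linspace(t.min(), t.max(), 100).reshape(-1, 1)
    return gp.sample_y(t_grid, n_samples=n_synthetic, random_state=seed)

# Hypothetical sparse light curve; in an augmentation scheme like the one
# described above, under-represented propensity-score groups would receive
# more synthetic copies than well-represented ones.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 15))
flux = np.exp(-((t - 40.0) / 15.0) ** 2) + rng.normal(0.0, 0.05, t.size)
print(augment_light_curve(t, flux).shape)   # -> (100, 5)
```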
Kikuchi, Naoki; Yoshida, Shou; Okuyama, Mizuki; Nakazato, Koichi
2016-08-01
Kikuchi, N, Yoshida, S, Okuyama, M, and Nakazato, K. The effect of high-intensity interval cycling sprints subsequent to arm-curl exercise on upper-body muscle strength and hypertrophy. J Strength Cond Res 30(8): 2318-2323, 2016-The purpose of this study was to examine whether lower limb sprint interval training (SIT) after arm resistance training (RT) influences the training response of arm muscle strength and hypertrophy. Twenty men participated in this study. We divided subjects into an RT group (n = 6) and a concurrent training group (CT, n = 6). The RT program was designed to induce muscular hypertrophy (3 sets × 10 repetitions [reps] at 80% 1 repetition maximum [1RM] of arm-curl exercise) and was performed on an 8-week training schedule 3 times per week on nonconsecutive days. Subjects assigned to the CT group performed the identical strength training protocol plus modified SIT (4 sets of 30-s maximal effort, separated by 4-min 30-s rest intervals) on the same day. Pretest and posttest maximal oxygen consumption (V̇O2max), muscle cross-sectional area (CSA), and 1RM were measured. A significant increase in V̇O2max from pretest to posttest was observed in the CT group (p = 0.010, effect size [ES] = 1.84) but not in the RT group (p = 0.559, ES = 0.35). A significant increase in CSA from pretest to posttest was observed in the RT group (p = 0.030, ES = 1.49) but not in the CT group (p = 0.110, ES = 1.01). A significant increase in 1RM from pretest to posttest was observed in the RT group (p = 0.021, ES = 1.57) but not in the CT group (p = 0.065, ES = 1.19). In conclusion, our data indicate that concurrent lower limb SIT interferes with arm muscle hypertrophy and strength.
Short Term Motor-Skill Acquisition Improves with Size of Self-Controlled Virtual Hands
Ossmy, Ori; Mukamel, Roy
2017-01-01
Visual feedback in general, and from the body in particular, is known to influence the performance of motor skills in humans. However, it is unclear how the acquisition of motor skills depends on specific visual feedback parameters such as the size of performing effector. Here, 21 healthy subjects physically trained to perform sequences of finger movements with their right hand. Through the use of 3D Virtual Reality devices, visual feedback during training consisted of virtual hands presented on the screen, tracking subject’s hand movements in real time. Importantly, the setup allowed us to manipulate the size of the displayed virtual hands across experimental conditions. We found that performance gains increase with the size of virtual hands. In contrast, when subjects trained by mere observation (i.e., in the absence of physical movement), manipulating the size of the virtual hand did not significantly affect subsequent performance gains. These results demonstrate that when it comes to short-term motor skill learning, the size of visual feedback matters. Furthermore, these results suggest that highest performance gains in individual subjects are achieved when the size of the virtual hand matches their real hand size. These results may have implications for optimizing motor training schemes. PMID:28056023
Fink, Julius; Schoenfeld, Brad J; Kikuchi, Naoki; Nakazato, Koichi
2018-05-01
We investigated the effects of 2 different resistance training (RT) protocols on muscle hypertrophy and strength. The first group (N.=8) performed a single drop set (DS) and the second group (N.=8) performed 3 sets of conventional RT (normal set, NS). Eight young men in each group completed 6 weeks of RT. Muscle hypertrophy was assessed via magnetic resonance imaging (MRI) and strength via 12 repetition maximum tests before and after the 6 weeks. Acute stress markers such as muscle thickness (MT), blood lactate (BL), maximal voluntary contraction (MVC), heart rate (HR) and rating of perceived exertion (RPE) were measured before and after one bout of RT. Both groups showed significant increases in triceps muscle cross-sectional area (CSA) (10.0±3.7%, effect size [ES] = 0.47 for DS and 5.1±2.1%, ES = 0.25 for NS). Strength increased in both groups (16.1±12.1%, ES = 0.88 for DS and 25.2±17.5%, ES = 1.34 for NS). Acute pre/post measurements for one bout of RT showed significant changes in MT (18.3±5.8%, P<0.001) and MVC (-13.3±7.1, P<0.05) in the DS group only, and a significant difference (P<0.01) in RPE was observed between groups (7.7±1.5 for DS and 5.3±1.4 for NS). Superior muscle gains might be achieved with a single set of DS compared to 3 sets of conventional RT, probably due to the higher stress experienced in the DS protocol.
Physicians' exodus: why medical graduates leave Austria or do not work in clinical practice.
Scharer, Sebastian; Freitag, Andreas
2015-05-01
Austria has the highest number of medical graduates of all Organisation for Economic Co-operation and Development (OECD) countries in relation to its population size, but over 30% choose not to pursue a career as physicians in the country. This article describes under- and postgraduate medical education in Austria and analyses reasons for the exodus of physicians. In Austria, medicine is a 5- or 6-year degree offered at four public and two private medical schools. Medical graduates have to complete training in general medicine or a speciality to attain a licence to practice. While not compulsory for speciality training, board certification in general medicine has often been regarded as a prerequisite for access to speciality training posts. Unstructured postgraduate training curricula, large amounts of administrative tasks, low basic salaries and long working hours provide incentives for medical graduates to move abroad or to work in a non-clinical setting. The scope of current reforms, such as the establishment of a new medical faculty and the implementation of a common trunk, is possibly insufficient to address the issue. Extensive reforms regarding occupational conditions and the structure of postgraduate medical education are necessary to avoid a further exodus of junior doctors.
Krasny-Pacini, Agata; Chevignard, Mathilde; Evans, Jonathan
2014-01-01
To determine if Goal Management Training (GMT) is effective for the rehabilitation of executive functions following brain injury when administered alone or in combination with other interventions. Systematic review, with quality appraisal specific to executive functions research and calculation of effect sizes. Twelve studies were included. Four studies were "proof-of-principle" studies, testing the potential effectiveness of GMT, and eight were rehabilitation studies. Effectiveness was greater when GMT was combined with other interventions. The most effective interventions appeared to be those combining GMT with: Problem Solving Therapy; personal goal setting; external cueing or prompting to apply GMT to the current task; personal homework to increase patients' commitment and training intensity; and ecological and daily life training activities rather than paper-and-pencil, office-type tasks. The level of support for GMT was higher for studies measuring outcome in terms of increases in participation in everyday activities rather than on measures of executive impairment. Comprehensive rehabilitation programs incorporating GMT, but integrating other approaches, are effective in executive function rehabilitation following brain injury in adults. There is insufficient evidence to support the use of GMT as a stand-alone intervention.
McMaster, Daniel Travis; Gill, Nicholas; Cronin, John; McGuigan, Michael
2013-05-01
Strength and power are crucial components to excelling in all contact sports, and understanding how a player's strength and power levels fluctuate in response to various resistance training loads is of great interest, as it will inevitably dictate the loading parameters throughout a competitive season. This is a systematic review of training, maintenance and detraining studies, focusing on the development, retention and decay rates of strength and power measures in elite rugby union, rugby league and American football players. A literature search using MEDLINE, EBSCO Host, Google Scholar, IngentaConnect, Ovid LWW, ProQuest Central, ScienceDirect Journals, SPORTDiscus and Wiley InterScience was conducted. References were also identified from other review articles and relevant textbooks. From 300 articles, 27 met the inclusion criteria and were retained for further analysis. STUDY QUALITY: Study quality was assessed via a modified 20-point scale created to evaluate research conducted in athletic-based training environments. The mean ± standard deviation (SD) quality rating of the included studies was 16.2 ± 1.9; the rating system revealed that the quality of future studies can be improved by randomly allocating subjects to training groups, providing greater description and detail of the interventions, and including control groups where possible. Percent change, effect size (ES = [Post-Xmean - Pre-Xmean]/Pre-SD) calculations and SDs were used to assess the magnitude and spread of strength and power changes in the included studies. The studies were grouped according to (1) mean intensity relative volume (IRV = sets × repetitions × intensity); (2) weekly training frequency per muscle group; and (3) detraining duration. IRV is the product of the number of sets, repetitions and intensity performed during a training set and session. The effects of weekly training frequencies were assessed by normalizing the percent change values to represent the weekly changes in strength and power. During the IRV analysis, the percent change values were normalized to represent the percent change per training session. The long-term periodized training effects (12, 24 and 48 months) on strength and power were also investigated. Across the 27 studies (n = 1,015), 234 percent change and 230 ES calculations were performed. IRVs of 11-30 (i.e., 3-6 sets of 4-10 repetitions at 74-88% one-repetition maximum [1RM]) elicited strength and power increases of 0.42% and 0.07% per training session, respectively. The following weekly strength changes were observed for two, three and four training sessions per muscle region/week: 0.9%, 1.8% and 1.3%, respectively. Similarly, the weekly power changes for two, three and four training sessions per muscle group/week were 0.1%, 0.3% and 0.7%, respectively. Mean decreases of 14.5% (ES = -0.64) and 0.4 (ES = -0.10) were observed in strength and power across mean detraining periods of 7.2 ± 5.8 and 7.6 ± 5.1 weeks, respectively. The long-term training studies found strength increases of 7.1 ± 1.0% (ES = 0.55), 8.5 ± 3.3% (ES = 0.81) and 12.5 ± 6.8% (ES = 1.39) over 12, 24 and 48 months, respectively; they also found power increases of 14.6% (ES = 1.30) and 12.2% (ES = 1.06) at 24 and 48 months. Based on current findings, training frequencies of two to four resistance training sessions per muscle group/week can be prescribed to develop upper and lower body strength and power.
IRVs ranging from 11 to 30 (i.e., 3-6 sets of 4-10 repetitions at 70-88% 1RM) can be prescribed in a periodized manner to retain power and develop strength in the upper and lower body. Strength levels can be maintained for up to 3 weeks of detraining, but decay rates will increase thereafter (i.e., 5-16 weeks). The effect of explosive-ballistic training and detraining on pure power development and decay in elite rugby and American football players remains inconclusive. The long-term effects of periodized resistance training programmes on strength and power seem to follow the law of diminishing returns: as training exposure increases beyond 12-24 months, adaptation rates are reduced.
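A minimal sketch of the two quantities the review relies on, the pre-SD-standardized effect size and the intensity relative volume, with hypothetical numbers:

```python
def effect_size(pre_mean, post_mean, pre_sd):
    """ES = (post-training mean - pre-training mean) / pre-training SD."""
    return (post_mean - pre_mean) / pre_sd

def intensity_relative_volume(sets, reps, intensity):
    """IRV = sets x repetitions x intensity (intensity as a fraction of 1RM)."""
    return sets * reps * intensity

# Hypothetical session: 4 sets of 6 reps at 80% 1RM -> IRV = 19.2, inside the
# 11-30 band the review associates with strength and power gains per session.
print(intensity_relative_volume(4, 6, 0.80))

# Hypothetical squat strength: pre 150 +/- 20 kg, post 162 kg -> ES = 0.6
print(effect_size(150.0, 162.0, 20.0))
```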
Resistance Training: Physiological Responses and Adaptations (Part 2 of 4).
ERIC Educational Resources Information Center
Fleck, Stephen J.; Kraemer, William J.
1988-01-01
Resistance training causes a variety of physiological reactions, including changes in muscle size, connective tissue size, and bone mineral content. This article summarizes data from a variety of studies and research. (JL)
49 CFR 232.213 - Extended haul trains.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF TRANSPORTATION BRAKE SYSTEM SAFETY STANDARDS FOR FREIGHT AND OTHER NON-PASSENGER TRAINS AND... extended haul trains will originate and a description of the trains that will be operated as extended haul.... (5) The train shall have no more than one pick-up and one set-out en route, except for the set-out of...
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator); Knowlton, D. J.; Dean, M. E.
1981-01-01
A set of training statistics for the 30-meter-resolution simulated thematic mapper (TMS) MSS data was generated based on land use/land cover classes. In addition to this supervised data set, a nonsupervised multicluster block of training statistics is being defined in order to compare the classification results and evaluate the effect of the different training selection methods on classification performance. Two test data sets were used to evaluate the classifications of the TMS data: one defined using a stratified sampling procedure incorporating a grid system with dimensions of 50 lines by 50 columns, and another based on an analyst-supervised set of test fields. Training statistics were generated from the supervised training data set, and a per-point Gaussian maximum likelihood classification of the 1979 TMS data was obtained. The August 1980 MSS data was radiometrically adjusted. The SAR data was redigitized and the SAR imagery was qualitatively analyzed.
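A per-point Gaussian maximum likelihood classifier of the kind described above assigns each pixel to the class whose training-statistics mean vector and covariance matrix give the highest Gaussian log-likelihood. The sketch below is a generic illustration under that assumption, not the original analysis software; the two-band class statistics are hypothetical.

```python
import numpy as np

def gaussian_ml_classify(pixels, class_means, class_covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels      : (n_pixels, n_bands) array of multispectral values
    class_means : list of (n_bands,) mean vectors from the training statistics
    class_covs  : list of (n_bands, n_bands) covariance matrices
    """
    scores = []
    for mu, cov in zip(class_means, class_covs):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)   # Mahalanobis distances
        scores.append(-0.5 * (mahal + np.log(np.linalg.det(cov))))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Hypothetical 2-band, 2-class training statistics
means = [np.array([40.0, 60.0]), np.array([90.0, 30.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 16.0]
pixels = np.array([[42.0, 58.0], [88.0, 33.0]])
print(gaussian_ml_classify(pixels, means, covs))  # -> [0 1]
```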
Child health in low-resource settings: pathways through UK paediatric training.
Goenka, Anu; Magnus, Dan; Rehman, Tanya; Williams, Bhanu; Long, Andrew; Allen, Steve J
2013-11-01
UK doctors training in paediatrics benefit from experience of child health in low-resource settings. Institutions in low-resource settings reciprocally benefit from hosting UK trainees. A wide variety of opportunities exist for trainees working in low-resource settings including clinical work, research and the development of transferable skills in management, education and training. This article explores a range of pathways for UK trainees to develop experience in low-resource settings. It is important for trainees to start planning a robust rationale early for global child health activities via established pathways, in the interests of their own professional development as well as UK service provision. In the future, run-through paediatric training may include core elements of global child health, as well as designated 'tracks' for those wishing to develop their career in global child health further. Hands-on experience in low-resource settings is a critical component of these training initiatives.
Storkel, Holly L; Bontempo, Daniel E; Pak, Natalie S
2014-10-01
In this study, the authors investigated adult word learning to determine how neighborhood density and practice across phonologically related training sets influence online learning from input during training versus offline memory evolution during no-training gaps. Sixty-one adults were randomly assigned to learn low- or high-density nonwords. Within each density condition, participants were trained on one set of words and then were trained on a second set of words, consisting of phonological neighbors of the first set. Learning was measured in a picture-naming test. Data were analyzed using multilevel modeling and spline regression. Steep learning during input was observed, with new words from dense neighborhoods and new words that were neighbors of recently learned words (i.e., second-set words) being learned better than other words. In terms of memory evolution, large and significant forgetting was observed during 1-week gaps in training. Effects of density and practice during memory evolution were opposite of those during input. Specifically, forgetting was greater for high-density and second-set words than for low-density and first-set words. High phonological similarity, regardless of source (i.e., known words or recent training), appears to facilitate online learning from input but seems to impede offline memory evolution.
Money for nothing? The net costs of medical training.
Barros, Pedro P; Machado, Sara R
2010-09-01
One of the stages of medical training is the residency programme. Hosting institutions often claim compensation for the training provided. How much should this compensation be? According to our results, given the benefits arising from having residents among the house staff, no transfer (either tuition fee or subsidy) should be set to compensate the hosting institution for providing medical training. This paper quantifies the net costs of medical training, defined as the training costs over and above the wage paid. We jointly consider two effects. On the one hand, residents take extra time and resources from both the hosting institution and the supervisor. On the other hand, residents can be regarded as a less expensive substitute for nurses and/or graduate physicians in the production of health care, both in primary care centres and hospitals. The net effect can be either positive or negative. We use the fact that residents, in Portugal, are centrally allocated to National Health Service hospitals to treat them as a fixed exogenous production factor. The data used come from Portuguese hospitals and primary care centres. Cost function estimates point to a small negative marginal impact of residents on hospitals' (-0.02%) and primary care centres' (-0.9%) costs. Nonetheless, there is a positive relation between size and cost for the very large hospitals and primary care centres. Our approach to estimating residents' costs controls for other teaching activities hospitals might have (namely undergraduate Medical Schools). Overall, the net costs of medical training appear to be quite small.
Rohde, Katja; Papiorek, Sarah; Lunau, Klaus
2013-03-01
Differences in the concentration of pigments as well as their composition and spatial arrangement cause intraspecific variation in the spectral signature of flowers. Known colour preferences and requirements for flower-constant foraging bees predict different responses to colour variability. In experimental settings, we simulated small variations of unicoloured petals and variations in the spatial arrangement of colours within tricoloured petals using artificial flowers and studied their impact on the colour choices of bumblebees and honeybees. Workers were trained to artificial flowers of a given colour and then given the simultaneous choice between three test colours: either the training colour, one colour of lower and one of higher spectral purity, or the training colour, one colour of lower and one of higher dominant wavelength; in all cases the perceptual contrast between the training colour and the additional test colours was similarly small. Bees preferred artificial test flowers which resembled the training colour, with the exception that they preferred test colours with higher spectral purity over trained colours. Testing the behaviour of bees at artificial flowers displaying a centripetal or centrifugal arrangement of three equally sized colours with small differences in spectral purity, bees did not prefer any type of artificial flower, but preferentially chose the most spectrally pure area for the first antenna contact at both types of artificial flowers. Our results indicate that innate preferences for flower colours of high spectral purity in pollinators might exert selective pressure on the evolution of flower colours.
ERIC Educational Resources Information Center
Anoka-Hennepin Technical Coll., Minneapolis, MN.
This set of two training outlines and one basic skills set list is designed for a machine tool technology program developed during a project to retrain defense industry workers at risk of job loss or dislocation because of conversion of the defense industry. The first troubleshooting training outline lists the categories of problems that develop…
Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary
2012-01-01
We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation of the blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities. PMID:23227019
More is More: The Relationship between Vocabulary Size and Word Extension
ERIC Educational Resources Information Center
Thom, Emily E.; Sandhofer, Catherine M.
2009-01-01
This study experimentally tested the relationship between children's lexicon size and their ability to learn new words within the domain of color. We manipulated the size of 25 20-month-olds' color lexicons by training them with two, four, or six different color words over the course of eight training sessions. We subsequently tested children's…
ERIC Educational Resources Information Center
Coetzer, Alan; Redmond, Janice; Sharafizad, Jalleh
2012-01-01
Employees in small and medium-sized enterprises (SMEs) form part of a "disadvantaged" group within the workforce that receives less access to training and development (T&D) than employees in large firms. Prior research into reasons for the relatively low levels of employee participation in training and development has typically…
Beyer, Kyle S; Fukuda, David H; Boone, Carleigh H; Wells, Adam J; Townsend, Jeremy R; Jajtner, Adam R; Gonzalez, Adam M; Fragala, Maren S; Hoffman, Jay R; Stout, Jeffrey R
2016-05-01
Short-term unilateral resistance training results in cross education of strength without changes in muscle size, activation, or endocrine response. J Strength Cond Res 30(5): 1213-1223, 2016-The purpose of this study was to assess the cross education of strength and changes in the underlying mechanisms (muscle size, activation, and hormonal response) after a 4-week unilateral resistance training (URT) program. A group of 9 untrained men completed a 4-week URT program on the dominant leg (DOM), with cross education measured in the nondominant leg (NON), and was compared with a control group (n = 8, CON). Unilateral isometric force (PKF), leg press (LP) and leg extension (LE) strength, muscle size (by ultrasonography) and activation (by electromyography) of the rectus femoris and vastus lateralis, and the hormonal response (testosterone, growth hormone, insulin, and insulin-like growth factor-1) were tested pretraining and posttraining. Group × time interactions were present for PKF, LP, LE, and muscle size in DOM and for LP in NON. In all interactions, the URT group improved significantly more than CON. There was a significant acute hormonal response to URT, but no chronic adaptation after the 4-week training program. Four weeks of URT resulted in an increase in strength and size of the trained musculature, and cross education of strength in the untrained musculature, which may occur without detectable changes in muscle size, activation, or the acute hormonal response.
Smartphone-Based System for Learning and Inferring Hearing Aid Settings
Aldaz, Gabriel; Puria, Sunil; Leifer, Larry J.
2017-01-01
Background Previous research has shown that hearing aid wearers can successfully self-train their instruments’ gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the “untrained system,” that is, the manufacturer’s algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The “trained system” first learned each individual’s preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). Purpose To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings. Determine whether user study participants showed a preference for trained over untrained system settings. Research Design An experimental within-participants study. Participants used a prototype hearing system—comprising two hearing aids, Android smartphone, and body-worn gateway device—for ~6 weeks. Study Sample Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. Intervention Participants were fitted and instructed to perform daily comparisons of settings (“listening evaluations”) through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone—including environmental sound classification, sound level, and location—to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system (“trained settings”) to those suggested by the hearing aids’ untrained system (“untrained settings”). Data Collection and Analysis We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Results Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. Conclusions The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. 
Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone. PMID:27718350
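The abstract does not specify HALIC's inference engine, but the trained model can be thought of as a classifier mapping context features (sound class, level, time, location) to the wearer's preferred setting. The sketch below uses a decision tree as an illustrative stand-in; the feature encoding and logged values are hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical logged listening evaluations: [sound_class, level_dB, hour_of_day]
# sound_class: 0 = quiet, 1 = speech, 2 = speech-in-noise, 3 = music
X = [[0, 45, 9], [2, 72, 18], [2, 75, 19], [3, 68, 21], [1, 60, 12], [0, 50, 8]]
# Preferred setting chosen by the wearer in each situation:
# 0 = omni + NR off, 1 = omni + NR on, 2 = directional + NR off, 3 = directional + NR on
y = [0, 3, 3, 2, 1, 0]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Predict the setting for a new context: speech in noise, 74 dB, 8 pm
print(model.predict([[2, 74, 20]]))
```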
Gordt, Katharina; Gerhardy, Thomas; Najafi, Bijan; Schwenk, Michael
2018-01-01
Wearable sensors (WS) can accurately measure body motion and provide interactive feedback for supporting motor learning. This review aims to summarize current evidence for the effectiveness of WS training for improving balance, gait and functional performance. A systematic literature search was performed in PubMed, Cochrane, Web of Science, and CINAHL. Randomized controlled trials (RCTs) using a WS exercise program were included. Study quality was examined by the PEDro scale. Meta-analyses were conducted to estimate the effects of WS balance training on the most frequently reported outcome parameters. Eight RCTs were included (Parkinson n = 2, stroke n = 1, Parkinson/stroke n = 1, peripheral neuropathy n = 2, frail older adults n = 1, healthy older adults n = 1). The sample size ranged from n = 20 to 40. Three types of training paradigms were used: (1) static steady-state balance training, (2) dynamic steady-state balance training, which includes gait training, and (3) proactive balance training. RCTs either used one type of training paradigm (type 2: n = 1, type 3: n = 3) or combined different types of training paradigms within their intervention (type 1 and 2: n = 2; all types: n = 2). The meta-analyses revealed significant overall effects of WS training on static steady-state balance outcomes including mediolateral (eyes open: Hedges' g = 0.82, CI: 0.43-1.21; eyes closed: g = 0.57, CI: 0.14-0.99) and anterior-posterior sway (eyes open: g = 0.55, CI: 0.01-1.10; eyes closed: g = 0.44, CI: 0.02-0.86). No effects on habitual gait speed were found in the meta-analysis (g = -0.19, CI: -0.68 to 0.29). Two RCTs reported significant improvements for selected gait variables including single support time, and fast gait speed. One study identified effects on proactive balance (Alternate Step Test), but no effects were found for the Timed Up and Go test and the Berg Balance Scale. Two studies reported positive results on feasibility and usability. Only one study was performed in an unsupervised setting. This review provides evidence for a positive effect of WS training on static steady-state balance in studies with usual care controls and studies with conventional balance training controls. Specific gait parameters and proactive balance measures may also be improved by WS training, yet limited evidence is available. Heterogeneous training paradigms, small sample sizes, and short intervention durations limit the validity of our findings. Larger studies are required for estimating the true potential of WS technology. © 2017 S. Karger AG, Basel.
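Hedges' g, the pooled effect metric reported above, is a bias-corrected standardized mean difference. A minimal sketch, with hypothetical sway outcomes for a WS-training group and a control group:

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)        # small-sample correction factor
    return j * d

# Hypothetical mediolateral sway improvement: training 2.1 +/- 0.8 vs. control
# 1.5 +/- 0.7 (n = 20 per group)
print(round(hedges_g(2.1, 0.8, 20, 1.5, 0.7, 20), 2))
```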
An Observational Study of Group Waterpipe Use in a Natural Environment
2014-01-01
Introduction: To date research on tobacco smoking with a waterpipe (hookah, narghile, and shisha) has focused primarily on the individual user in a laboratory setting. Yet, waterpipe tobacco smoking is often a social practice that occurs in cafés, homes, and other natural settings. This observational study examined the behavior of waterpipe tobacco smokers and the social and contextual features of waterpipe use among groups in their natural environment. Methods: Trained observers visited urban waterpipe cafés on multiple occasions during an 8-month period. Observations of 241 individual users in naturally formed groups were made on smoking topography (puff frequency, duration, and interpuff interval [IPI]) and engagement in other activities (e.g., food and drink consumption, other tobacco use, and media viewing). Results: Most users were male in group sizes of 3–4 persons, on average, and each table had 1 waterpipe, on average. The predominant social features during observational periods were conversation and nonalcoholic drinking. Greater puff number was associated with smaller group sizes and more waterpipes per group, while longer IPIs were associated with larger group sizes and fewer waterpipes per group. Additionally, greater puff frequency was observed during media viewing and in the absence of other tobacco use. Conclusions: Overall, the results suggest that waterpipe smoking behavior is affected by group size and by certain social activities. Discussion focuses on how these findings enhance our understanding of factors that may influence exposure to waterpipe tobacco smoke toxicants in naturalistic environments. PMID:23943842
Four Weeks of Nordic Hamstring Exercise Reduce Muscle Injury Risk Factors in Young Adults.
Ribeiro-Alvares, João Breno; Marques, Vanessa B; Vaz, Marco A; Baroni, Bruno M
2018-05-01
Ribeiro-Alvares, JB, Marques, VB, Vaz, MA, and Baroni, BM. Four weeks of Nordic hamstring exercise reduce muscle injury risk factors in young adults. J Strength Cond Res 32(5): 1254-1262, 2018-The Nordic hamstring exercise (NHE) is a field-based exercise designed for knee-flexor eccentric strengthening, aimed at prevention of muscle strains. However, the possible effects of NHE programs on other hamstring injury risk factors remain unclear. The purpose of this study was to investigate the effects of a NHE training program on multiple hamstring injury risk factors. Twenty physically active young adults were allocated into 2 equal-sized groups: control group (CG) and training group (TG). The TG was engaged in a 4-week NHE program, twice a week, 3 sets of 6-10 repetitions, while the CG received no exercise intervention. Knee flexor and extensor strength were assessed through isokinetic dynamometry, biceps femoris long head muscle architecture through ultrasound images, and hamstring flexibility through the sit-and-reach test. The results showed that CG subjects had no significant change in any outcome. TG presented higher percent changes than CG for hamstring isometric peak torque (9%; effect size [ES] = 0.27), eccentric peak torque (13%; ES = 0.60), eccentric work (18%; ES = 0.86), and functional hamstring-to-quadriceps torque ratio (13%; ES = 0.80). The NHE program also led to increased fascicle length (22%; ES = 2.77) and reduced pennation angle (-17%; ES = 1.27) in the biceps femoris long head of the TG, without significant changes in muscle thickness. In conclusion, a short-term NHE training program (4 weeks; 8 training sessions) counteracts multiple hamstring injury risk factors in physically active young adults.
Van Scoy, Lauren Jodi; Watson-Martin, Elizabeth; Bohr, Tiffany A; Levi, Benjamin H; Green, Michael J
2018-04-01
Discussing end-of-life issues with patients is an essential role for chaplains. Few tools are available to help chaplains-in-training develop end-of-life communication skills. This study aimed to determine whether playing an end-of-life conversation game increases the confidence of chaplains-in-training in discussing end-of-life issues with patients. We used a convergent mixed methods design. Chaplains-in-training played the end-of-life conversation game twice over 2 weeks. For each game, pre- and postgame questionnaires measured confidence discussing end-of-life issues with patients and emotional affect. Between games, chaplains-in-training discussed end-of-life issues with an inpatient. One week after game 2, chaplains-in-training were individually interviewed. Quantitative data were analyzed using descriptive statistics and Wilcoxon rank-sum t tests. Content analysis identified interview themes. Quantitative and qualitative data sets were then integrated using a joint display. Twenty-three chaplains-in-training (52% female; 87% Caucasian; 70% in year 1 of training) completed the study. Confidence scores (scale: 15-75; 75 = very confident) increased significantly after each game, increasing by 10.0 points from pregame 1 to postgame 2 (P < .001). Positive affect subscale scores also increased significantly after each game, and shyness subscale scores decreased significantly after each game. Content analysis found that chaplains-in-training considered the game a positive, useful experience and reported that playing twice was beneficial (not redundant). The mixed methods analysis suggests that an end-of-life conversation game is a useful tool that can increase the confidence of chaplains-in-training in initiating end-of-life discussions with patients. A larger sample size is needed to confirm these findings.
Gießing, Jürgen; Fisher, James; Steele, James; Rothe, Frank; Raubold, Kristin; Eichmann, Björn
2016-03-01
This study examined low-volume resistance training (RT) in trained participants with and without advanced training methods. Trained participants (RT experience 4±3 years) were randomised to groups performing single-set RT: ssRM (N.=21) performing repetitions to a self-determined repetition maximum (RM), ssMMF (N.=30) performing repetitions to momentary muscular failure (MMF), and ssRP (N.=28) performing repetitions to a self-determined RM using a rest-pause (RP) method. Each group performed supervised RT twice/week for 10 weeks. Outcomes included maximal isometric strength and body composition using bioelectrical impedance analysis. The ssRM group did not significantly improve in any outcome. The ssMMF and ssRP groups both significantly improved strength (p < 0.05). The magnitude of changes was examined between groups using effect size (ES). Strength ESs were considered large for ssMMF (0.91 to 1.57) and ranged from small to large for ssRP (0.42 to 1.06). Body composition data revealed significant improvements (P<0.05) in muscle and fat mass and percentages for the whole body, upper limbs and trunk for ssMMF, but only the upper limbs for ssRP. Body composition ESs ranged from moderate to large for ssMMF (0.56 to 1.27) and from small to moderate for ssRP (0.28 to 0.52). ssMMF also significantly improved (P<0.05) total abdominal fat and increased intracellular water, with moderate ESs (-0.62 and 0.56, respectively). Training to a self-determined RM is not efficacious for trained participants. Training to MMF produces the greatest improvements in strength and body composition; however, RP-style training does offer some benefit.
Issues in the Development and Evaluation of Cross-Cultural Training in a Business Setting.
ERIC Educational Resources Information Center
Broadbooks, Wendy J.
Issues in the development and evaluation of cross-cultural training in a business setting were investigated. Cross-cultural training and cross-cultural evaluation were defined as training and evaluation of training that involve the interaction of participants from two or more different countries. Two evaluations of a management development-type…
ERIC Educational Resources Information Center
Quintino, Luisa
An evaluation was made of the training needs of the small and medium-sized enterprises (SMEs) in Portugal, Spain, Greece, and Italy and the potential of open, distance, flexible, and multimedia learning to meet those needs. The methodology included contacts with training providers, governmental institutions, and SMEs and circulation of…
O'Keefe, Kaitlin A; Shafir, Shira C; Shoaf, Kimberley I
2013-01-01
Local health departments (LHDs) must have sufficient numbers of staff functioning in an epidemiologic role with proper education, training, and skills to protect the health of the communities they serve. This pilot study was designed to describe the composition, training, and competency level of LHD staff and examine the hypothesis that potential disparities exist between LHDs serving different-sized populations. Cross-sectional surveys were conducted with directors and epidemiologic staff from a sample of 100 LHDs serving jurisdictions of varied sizes. Questionnaires included inquiries regarding staff composition, education, training, and measures of competency modeled on previously conducted studies by the Council of State and Territorial Epidemiologists. The number of epidemiologic staff, academic degree distribution, epidemiologic training, and both director and staff confidence in task competencies were calculated for each LHD size stratum. Disparities in measurements were observed in LHDs serving different-sized populations. LHDs serving small populations reported a smaller average number of epidemiologic staff than those serving larger jurisdictions. As the size of the population served increased, the percentages of staff and directors holding bachelor's and master's degrees increased, while those holding RN degrees decreased. A higher degree of perceived staff competency in most task categories was reported in LHDs serving larger populations. LHDs serving smaller populations reported fewer epidemiologic staff and therefore might benefit from additional resources. Differences observed in staff education, training, and competencies suggest that enhanced epidemiologic training might be particularly needed in LHDs serving smaller populations. Results can be used as a baseline for future research aimed at identifying areas where training and personnel resources might be particularly needed to increase the capabilities of LHDs.
Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan
2017-09-01
While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to the common TiO2 rutile phase; the kinetic stability of one of them is further confirmed by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.
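Simultaneous energy/force training of a NN potential is commonly implemented as a weighted sum of energy and force errors, with forces obtained by differentiating the predicted energy. The sketch below is a generic PyTorch illustration under that assumption, not the SSW-NN implementation; the descriptor, network size, and training data are placeholders, and for brevity the "force" is taken as the gradient with respect to the descriptor rather than the atomic positions.

```python
import torch
import torch.nn as nn

# Toy atom-centered network: maps a per-atom descriptor vector to an atomic energy.
# A real PES fit would use symmetry-function or similar structural descriptors.
class AtomicEnergyNet(nn.Module):
    def __init__(self, n_desc=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_desc, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, desc):                 # desc: (n_atoms, n_desc)
        return self.net(desc).sum()          # total energy = sum of atomic energies

def energy_force_loss(model, desc, e_ref, f_ref, w_f=0.1):
    """Joint loss on energy and 'forces' (negative gradient of predicted energy)."""
    desc = desc.clone().requires_grad_(True)
    e_pred = model(desc)
    f_pred = -torch.autograd.grad(e_pred, desc, create_graph=True)[0]
    return (e_pred - e_ref) ** 2 + w_f * ((f_pred - f_ref) ** 2).mean()

# Hypothetical training sample standing in for SSW-generated global data
torch.manual_seed(0)
model = AtomicEnergyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
desc, e_ref, f_ref = torch.randn(6, 8), torch.tensor(-3.2), torch.randn(6, 8)
for _ in range(200):
    opt.zero_grad()
    loss = energy_force_loss(model, desc, e_ref, f_ref)
    loss.backward()
    opt.step()
print(loss.item())
```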
Mobile learning for HIV/AIDS healthcare worker training in resource-limited settings.
Zolfo, Maria; Iglesias, David; Kiyan, Carlos; Echevarria, Juan; Fucay, Luis; Llacsahuanga, Ellar; de Waard, Inge; Suàrez, Victor; Llaque, Walter Castillo; Lynen, Lutgarde
2010-09-08
We present an innovative approach to healthcare worker (HCW) training using mobile phones as a personal learning environment. Twenty physicians used individual Smartphones (Nokia N95 and iPhone), each equipped with a portable solar charger. Doctors worked in urban and peri-urban HIV/AIDS clinics in Peru, where almost 70% of the nation's HIV patients in need are on treatment. A set of 3D learning scenarios simulating interactive clinical cases was developed and adapted to the Smartphones for a continuing medical education program lasting 3 months. A mobile educational platform supporting learning events tracked participant learning progress. A discussion forum accessible via mobile connected participants to a group of HIV specialists available for back-up of the medical information. Learning outcomes were verified through mobile quizzes using multiple choice questions at the end of each module. In December 2009, a mid-term evaluation was conducted, targeting both technical feasibility and user satisfaction. It also highlighted user perception of the program and the technical challenges encountered using mobile devices for lifelong learning. With a response rate of 90% (18/20 questionnaires returned), the overall satisfaction with using mobile tools was generally greater for the iPhone. Access to Skype and Facebook, screen/keyboard size, and image quality were cited as more troublesome for the Nokia N95 compared to the iPhone. Training, supervision and clinical mentoring of health workers are the cornerstone of the scaling-up process of HIV/AIDS care in resource-limited settings (RLSs). Educational modules on mobile phones can give HCWs the flexibility to access learning content anywhere. However, a lack of software interoperability and the high investment cost of purchasing Smartphones could limit the widespread use of such mLearning programs in RLSs.
Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K
2018-04-01
Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
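As an illustration of the modelling pipeline described above, and not the study's actual code, a Cox model can be fitted on the 70% training split and scored by concordance on the held-out 30%; the file and column names below are placeholders.

```python
# Hedged sketch: Cox survival model on a training split, c-index on the test split.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.model_selection import train_test_split

df = pd.read_csv("pdac_features.csv")          # assumed file: survival + CA19-9 + texture features
train, test = train_test_split(df, test_size=0.3, random_state=0)

cph = CoxPHFitter(penalizer=0.1)               # mild regularization for a small sample
cph.fit(train, duration_col="os_months", event_col="death")

risk = cph.predict_partial_hazard(test)        # higher hazard = shorter expected survival
c_index = concordance_index(test["os_months"], -risk, test["death"])
print(f"test c-index: {c_index:.2f}")
```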
Predicting Kenya Short Rains Using the Indian Ocean SST
NASA Astrophysics Data System (ADS)
Peng, X.; Albertson, J. D.; Steinschneider, S.
2017-12-01
Rainfall over Eastern Africa is characterized by a typical bimodal monsoon system. The literature has shown that the monsoon system is closely connected with large-scale atmospheric motion, which is believed to be driven by sea surface temperature anomalies (SSTA). Therefore, we may make use of the predictability of SSTA in estimating the future East African monsoon. In this study, we attempted to predict the Kenya short rains (October, November and December rainfall) based on the Indian Ocean SSTA. The Least Absolute Shrinkage and Selection Operator (LASSO) regression is used to avoid over-fitting issues. Models for different lead times are trained using a 28-year training set (1979-2006) and are tested using a 10-year test set (2007-2016). Satisfactory prediction skill is achieved at relatively long lead times (i.e., 8 and 10 months) in terms of correlation coefficient and sign accuracy. Unlike some of the previous work, the prediction models are obtained from a data-driven method. A limited number of predictors is selected for each model and can be used to understand the underlying physical connection. Still, further investigation is needed since the sampling variability issue cannot be excluded due to the limited sample size.
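A minimal sketch of this kind of LASSO prediction follows; the file names, preprocessing and cross-validated penalty selection are assumptions standing in for whatever tuning the authors used.

```python
# Illustration: L1-penalized regression of OND rainfall on lagged Indian Ocean SSTA.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n_years, n_sst_gridpoints) SSTA at a given lead time; y: (n_years,) OND rainfall
X_train, y_train = np.load("ssta_train.npy"), np.load("rain_train.npy")
X_test, y_test = np.load("ssta_test.npy"), np.load("rain_test.npy")

model = make_pipeline(StandardScaler(),
                      LassoCV(cv=5, max_iter=50000))   # CV picks the penalty strength
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("correlation:", np.corrcoef(pred, y_test)[0, 1])
print("sign accuracy:", np.mean(np.sign(pred - y_train.mean())
                                == np.sign(y_test - y_train.mean())))
```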
Training a whole-book LSTM-based recognizer with an optimal training set
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2018-04-01
Despite recent progress in OCR technologies, whole-book recognition is still a challenging task, in particular for old and historical books, where unknown font faces or the low quality of paper and print add to the challenge. Therefore, pre-trained recognizers and generic methods do not usually perform up to the required standards, and the performance usually degrades for larger scale recognition tasks, such as the recognition of an entire book. Such reportedly low error-rate methods turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform any identical network that is trained with randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively compared to an identical network that is trained on a set of 60 randomly selected pages of the book.
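The training-set selection step can be sketched as follows; this is an illustration under assumed names (the feature extraction, cluster count and the LSTM training itself are not shown), not the authors' implementation.

```python
# Sketch: cluster all sub-word images and keep one representative per cluster,
# so the recognizer sees each recurring shape once instead of random pages.
import numpy as np
from sklearn.cluster import KMeans

def select_training_subwords(features, images, labels, n_clusters=2000):
    """features: (n_subwords, d) descriptors; returns representative images/labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue
        # pick the sample nearest the cluster centre as the stand-in for the centre
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])
    return [images[i] for i in reps], [labels[i] for i in reps]
```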
Zinner, Christoph; Heilemann, Ilka; Kjendlie, Per-Ludvik; Holmberg, Hans-Christer; Mester, Joachim
2010-01-01
Training volume in swimming is usually very high when compared to the relatively short competition time. High-intensity interval training (HIIT) has been demonstrated to improve performance in a relatively short training period. The main purpose of the present study was to examine the effects of a 5-week HIIT versus high-volume training (HVT) in 9–11-year-old swimmers on competition performance, 100 and 2,000 m time (T100 m and T2,000 m), VO2peak and rate of maximal lactate accumulation (Lacmax). In a 5-week crossover study, 26 competitive swimmers with a mean (SD) age of 11.5 ± 1.4 years performed a training period of HIIT and HVT. Competition (P < 0.01; effect size = 0.48) and T2,000 m (P = 0.04; effect size = 0.21) performance increased following HIIT. No changes were found in T100 m (P = 0.20). Lacmax increased following HIIT (P < 0.01; effect size = 0.43) and decreased after HVT (P < 0.01; effect size = 0.51). VO2peak increased following both interventions (P < 0.05; effect sizes = 0.46–0.57). The increases in competition performance, T2,000 m, Lacmax and VO2peak following HIIT were achieved in significantly less training time (~2 h/week). PMID:20683609
Engel, Benjamin D; Ludington, William B; Marshall, Wallace F
2009-10-05
The assembly and maintenance of eukaryotic flagella are regulated by intraflagellar transport (IFT), the bidirectional traffic of IFT particles (recently renamed IFT trains) within the flagellum. We previously proposed the balance-point length control model, which predicted that the frequency of train transport should decrease as a function of flagellar length, thus modulating the length-dependent flagellar assembly rate. However, this model was challenged by the differential interference contrast microscopy observation that IFT frequency is length independent. Using total internal reflection fluorescence microscopy to quantify protein traffic during the regeneration of Chlamydomonas reinhardtii flagella, we determined that anterograde IFT trains in short flagella are composed of more kinesin-associated protein and IFT27 proteins than trains in long flagella. This length-dependent remodeling of train size is consistent with the kinetics of flagellar regeneration and supports a revised balance-point model of flagellar length control in which the size of anterograde IFT trains tunes the rate of flagellar assembly.
Timon, Rafael; Collado-Mateo, Daniel; Olcina, Guillermo; Gusi, Narcis
2016-03-01
Previous studies have demonstrated positive effects of acute vibration exercise on concentric strength and power, but few have observed the effects of vibration exposure on resistance training. The aim of this study was to verify the effects of whole body vibration applied to the chest via the hands on bench press resistance training in trained and untrained individuals. Nineteen participants (10 recreationally trained bodybuilders and 9 untrained students) performed two randomized sessions of resistance training on separate days. Each strength session consisted of 3 bench press sets with a load of 75% 1RM to failure in each set, with 2 minutes' rest between sets. All subjects performed the same strength training either with 30 seconds of vibration exposure (12 Hz, 4 mm) immediately before each bench press set or without vibration. Number of total repetitions, kinematic parameters, blood lactate and perceived exertion were analyzed. In the untrained group, vibration exposure caused a significant increase in the mean velocity (from 0.36±0.02 to 0.39±0.03 m/s) and acceleration (from 0.75±0.10 to 0.86±0.09 m/s2), as well as a decrease in perceived effort (from 8±0.57 to 7.35±0.47) in the first bench press set, but no change was observed in the third bench press set. In the recreationally trained bodybuilders, vibration exposure did not cause any improvement in the performance of bench press resistance training. These results suggest that vibration exposure applied just before the bench press exercise could be a good practice to be implemented by untrained individuals in resistance training.
Bayesian convolutional neural network based MRI brain extraction on nonhuman primates.
Zhao, Gengyan; Liu, Fang; Oler, Jonathan A; Meyerand, Mary E; Kalin, Ned H; Birn, Rasmus M
2018-07-15
Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining deep Bayesian convolutional neural network (CNN) and fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. Then, fully connected 3D CRF is used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. A better performance against all the compared methods was verified by statistical tests (all p-values < 10⁻⁴, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all the 100 subjects. The behavior of the uncertainty was also studied, which shows the uncertainty increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training set and the testing set increases. Copyright © 2018 Elsevier Inc. All rights reserved.
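The Monte Carlo dropout step that yields the uncertainty map can be illustrated as below; this is a generic PyTorch sketch assuming a binary (brain/non-brain) softmax output, not the Bayesian SegNet implementation itself.

```python
# Illustrative MC dropout at test time: repeated stochastic forward passes give a
# mean probability map and a per-voxel spread that serves as the uncertainty map.
import torch

def mc_dropout_predict(model, volume, n_samples=20):
    """volume: input tensor; returns mean foreground probability and its std-dev."""
    model.eval()
    for m in model.modules():                 # keep dropout layers active at test time
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(volume), dim=1)[:, 1]
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)   # std acts as the uncertainty estimate
```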
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.
Ionescu, Catalin; Papava, Dragos; Olaru, Vlad; Sminchisescu, Cristian
2014-07-01
We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.
ERIC Educational Resources Information Center
Pukkinen, Tommi; Romijn, Clemens; Elson-Rogers, Sarah
There are three main parts to this report of a study that used case studies to showcase the different approaches used to encourage more continuing training within small and medium-sized enterprises (SMEs) across the European Union (EU). Section 1 discusses the importance of funding training in SMEs and highlights the various types of funding…
Morrow, S A; Bates, P E
1987-01-01
This study examined the effectiveness of three sets of school-based instructional materials and community training on acquisition and generalization of a community laundry skill by nine students with severe handicaps. School-based instruction involved artificial materials (pictures), simulated materials (cardboard replica of a community washing machine), and natural materials (modified home model washing machine). Generalization assessments were conducted at two different community laundromats, on two machines represented fully by the school-based instructional materials and two machines not represented fully by these materials. After three phases of school-based instruction, the students were provided ten community training trials in one laundromat setting and a final assessment was conducted in both the trained and untrained community settings. A multiple probe design across students was used to evaluate the effectiveness of the three types of school instruction and community training. After systematic training, most of the students increased their laundry performance with all three sets of school-based materials; however, generalization of these acquired skills was limited in the two community settings. Direct training in one of the community settings resulted in more efficient acquisition of the laundry skills and enhanced generalization to the untrained laundromat setting for most of the students. Results of this study are discussed in regard to the issue of school versus community-based instruction and recommendations are made for future research in this area.
Muscle synergy space: learning model to create an optimal muscle synergy
Alnajjar, Fady; Wojtara, Tytus; Kimura, Hidenori; Shimoda, Shingo
2013-01-01
Muscle redundancy allows the central nervous system (CNS) to choose a suitable combination of muscles from a number of options. This flexibility in muscle combinations allows for efficient behaviors to be generated in daily life. The computational mechanism of choosing muscle combinations, however, remains a long-standing challenge. One effective method of choosing muscle combinations is to create a set containing the muscle combinations of only efficient behaviors, and then to choose combinations from that set. The notion of muscle synergy, which was introduced to divide muscle activations into a lower-dimensional synergy space and time-dependent variables, is a suitable tool relevant to the discussion of this issue. The synergy space defines the suitable combinations of muscles, and time-dependent variables vary in lower-dimensional space to control behaviors. In this study, we investigated the mechanism the CNS may use to define the appropriate region and size of the synergy space when performing skilled behavior. Two indices were introduced in this study, one is the synergy stability index (SSI) that indicates the region of the synergy space, the other is the synergy coordination index (SCI) that indicates the size of the synergy space. The results on automatic posture response experiments show that SSI and SCI are positively correlated with the balance skill of the participants, and they are tunable by behavior training. These results suggest that the CNS has the ability to create optimal sets of efficient behaviors by optimizing the size of the synergy space at the appropriate region through interacting with the environment. PMID:24133444
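For readers unfamiliar with the synergy-space decomposition mentioned above, one common way to obtain it is non-negative matrix factorisation of EMG envelopes. The sketch below is only illustrative (file name and number of synergies are assumptions), and the paper's own extraction method and SSI/SCI indices are not reproduced.

```python
# Illustration: factorise muscle activations into a synergy space (W) and
# time-dependent activation coefficients (H).
import numpy as np
from sklearn.decomposition import NMF

emg = np.load("emg_envelopes.npy")        # (n_muscles, n_time) rectified, smoothed EMG
model = NMF(n_components=4, init="nndsvd", max_iter=1000, random_state=0)
W = model.fit_transform(emg)              # (n_muscles, n_synergies): the synergy space
H = model.components_                     # (n_synergies, n_time): time-dependent variables
```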
ERIC Educational Resources Information Center
Frost, Jørgen; Ottem, Ernst; Hagtvet, Bente E.; Snow, Catherine E.
2016-01-01
In the present study, 81 Norwegian students were taught the meaning of words by the Word Generation (WG) method and 51 Norwegian students were taught by an approach inspired by the Thinking Schools (TS) concept. Two sets of words were used: a set of words to be trained and a set of non-trained control words. The two teaching methods yielded no…
ERIC Educational Resources Information Center
Rahyuda, Agoes Ganesha; Soltani, Ebrahim; Syed, Jawad
2018-01-01
Based on a review of the literature on post-training transfer interventions, this paper offers a conceptual model that elucidates potential mechanisms through which two types of post-training transfer intervention (relapse prevention and proximal plus distal goal setting) influence the transfer of training. We explain how the application of…
Neuromuscular adaptations induced by adjacent joint training.
Ema, R; Saito, I; Akagi, R
2018-03-01
Effects of resistance training are well known to be specific to tasks that are involved during training. However, it remains unclear whether neuromuscular adaptations are induced after adjacent joint training. This study examined the effects of hip flexion training on maximal and explosive knee extension strength and neuromuscular performance of the rectus femoris (RF, hip flexor, and knee extensor) compared with the effects of knee extension training. Thirty-seven untrained young men were randomly assigned to hip flexion training, knee extension training, or a control group. Participants in the training groups completed 4 weeks of isometric hip flexion or knee extension training. Standardized differences in the mean change between the training groups and control group were interpreted as an effect size, and the substantial effect was assumed to be ≥0.20 of the between-participant standard deviation at baseline. Both types of training resulted in substantial increases in maximal (hip flexion training group: 6.2% ± 10.1%, effect size = 0.25; knee extension training group: 20.8% ± 9.9%, effect size = 1.11) and explosive isometric knee extension torques and muscle thickness of the RF in the proximal and distal regions. Improvements in strength were accompanied by substantial enhancements in voluntary activation, which was determined using the twitch interpolation technique and RF activation. Differences in training effects on explosive torques and neural variables between the two training groups were trivial. Our findings indicate that hip flexion training results in substantial neuromuscular adaptations during knee extensions similar to those induced by knee extension training. © 2017 The Authors. Scandinavian Journal of Medicine & Science In Sports Published by John Wiley & Sons Ltd.
Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B
2013-12-01
Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apple (Malus domestica Borkh.) samples of the cvs 'Aroma' and 'Holsteiner Cox' were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CVSSC = 13%. The model showed consistently low errors and bias (PLS/EN: R(2) cal = 0.60/0.60; SEC = 0.88/0.88°Brix; Biascal = 0.00/0.00; R(2) val = 0.33/0.44; SEP = 1.14/1.03; Biasval = 0.04/0.03). However, prediction of acidity and of SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance of these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models. The implication is that by using such 'efficient' sampling methods for obtaining an initial sample of fruit that represents the variability of the population, and for sub-sampling to form training and test sets, it should be possible to use relatively small sample sizes to develop spectral predictions of fruit quality. Using feature selection and elastic net appears to improve the SSC model performance in terms of R(2), RMSECV and RMSEP for 'Aroma' apples. © 2013 Society of Chemical Industry.
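The elastic-net calibration step can be sketched as follows; file names and wavelength handling are placeholders, and the paper's PLS and ridge comparisons, band selection and 'smooth fractionator' splitting are not reproduced here.

```python
# Hedged sketch: elastic-net calibration of SSC from visible-NIR spectra.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

# X: absorbance at 400-1100 nm per apple; y: reference SSC (degrees Brix)
X_train, y_train, X_test, y_test = [np.load(f) for f in
    ("spec_train.npy", "ssc_train.npy", "spec_test.npy", "ssc_test.npy")]

enet = make_pipeline(StandardScaler(),
                     ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=50000))
enet.fit(X_train, y_train)

pred = enet.predict(X_test)
print("R2_val:", r2_score(y_test, pred))
print("SEP (Brix):", np.sqrt(mean_squared_error(y_test, pred)))
```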
An ecological evaluation of the metabolic benefits due to robot-assisted gait training.
Peri, E; Biffi, E; Maghini, C; Marzorati, M; Diella, E; Pedrocchi, A; Turconi, A C; Reni, G
2015-08-01
Cerebral palsy (CP), one of the most common neurological disorders in childhood, affects individuals' motor skills and muscle actions. This results in elevated heart rate and rate of oxygen uptake during sub-maximal exercise, thus indicating a mean energy expenditure higher than in healthy subjects. Rehabilitation, which currently also involves robot-based devices, may have an impact on these aspects as well. In this study, an ecological setting has been proposed to evaluate the energy expenditure of 4 children with CP before and after a robot-assisted gait training. Although the small sample size makes it difficult to draw general conclusions, the results presented here are promising. Indeed, children showed an increasing trend in energy expenditure per minute and a decreasing trend in energy expenditure per step, in accordance with the control group. These data suggest a metabolic benefit of the treatment that may increase the locomotion efficiency of disabled children.
Thonse, Umesh; Behere, Rishikesh V; Frommann, Nicole; Sharma, Psvn
2018-01-01
Social cognition refers to mental operations involved in processing of social cues and includes the domains of emotion processing, Theory of Mind (ToM), social perception, social knowledge and attributional bias. Significant deficits in ToM, emotion perception and social perception have been demonstrated in schizophrenia which can have an impact on socio-occupational functioning. Intervention modules for social cognition have demonstrated moderate effect sizes for improving emotion identification and discrimination. We describe the Indian version of the Training of Affect Recognition (TAR) program and a pilot study to demonstrate the feasibility of administering this intervention program in the Indian population. We also discuss the cultural sensibilities in adopting an intervention program for the Indian setting. To the best of our knowledge this is the first intervention program for social cognition for use in persons with schizophrenia in India. Copyright © 2017 Elsevier B.V. All rights reserved.
Comparison of molecular breeding values based on within- and across-breed training in beef cattle.
Kachman, Stephen D; Spangler, Matthew L; Bennett, Gary L; Hanford, Kathryn J; Kuehn, Larry A; Snelling, Warren M; Thallman, R Mark; Saatchi, Mahdi; Garrick, Dorian J; Schnabel, Robert D; Taylor, Jeremy F; Pollak, E John
2013-08-16
Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set.
Dog and human inflammatory bowel disease rely on overlapping yet distinct dysbiosis networks.
Vázquez-Baeza, Yoshiki; Hyde, Embriette R; Suchodolski, Jan S; Knight, Rob
2016-10-03
Inflammatory bowel disease (IBD) is an autoimmune condition that is difficult to diagnose, and animal models of this disease have questionable human relevance [1]. Here, we show that the dysbiosis network underlying IBD in dogs differs from that in humans, with some bacteria such as Fusobacterium switching roles between the two species (as Bacteroides fragilis switches roles between humans and mice) [2]. For example, a dysbiosis index trained on humans fails when applied to dogs, but a dog-specific dysbiosis index achieves high correlations with the overall dog microbial community diversity patterns. In addition, a random forest classifier trained on dog-specific samples achieves high discriminatory power, even when using stool samples rather than the mucosal biopsies required for high discriminatory power in humans [2]. These relationships were not detected in previously published dog IBD data sets due to their limited sample size and statistical power [3]. Taken together, these results reveal the need to train host-specific dysbiosis networks and point the way towards a generalized understanding of IBD across different mammalian models.
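A host-specific classifier of the sort described above can be sketched in a few lines; the feature table, labels and cross-validation scheme below are assumptions for illustration, not the published analysis.

```python
# Illustration: random forest on dog stool microbiome profiles to separate IBD
# from healthy samples, scored by cross-validated AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("dog_otu_table.npy")      # samples x taxa relative abundances
y = np.load("dog_ibd_labels.npy")     # 1 = IBD, 0 = healthy

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```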
A New Experiment on Bengali Character Recognition
NASA Astrophysics Data System (ADS)
Barman, Sumana; Bhattacharyya, Debnath; Jeon, Seung-Whan; Kim, Tai-Hoon; Kim, Haeng-Kon
This paper presents a method that uses a view-based approach in a Bangla Optical Character Recognition (OCR) system, providing a reduced data set to the ANN classification engine rather than relying on traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a backpropagation artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla with a view-based approach. The methodology presented here assumes that the OCR pre-processor has supplied the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted into greyscale and then to binary images; these images are then scaled to fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed by extracting the characteristic points, which in this case are simply a series of 0s and 1s of fixed length. Finally, an artificial neural network is chosen for the training and classification process.
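As a hedged illustration of such a pipeline (not the authors' network or feature set), a small multilayer perceptron trained by backpropagation can classify the fixed-length binary feature vectors; file names and layer sizes below are placeholders.

```python
# Illustration: backpropagation-trained ANN on binary character feature vectors.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X = np.load("bangla_features.npy")   # (n_samples, n_bits) 0/1 feature vectors
y = np.load("bangla_labels.npy")     # character class per sample

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), solver="adam",
                    max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```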
Young workers in the construction industry and initial OSH-training when entering work life.
Holte, Kari Anne; Kjestveit, Kari
2012-01-01
Studies have found that young workers are at risk for injuries. The risk for accidents is high within construction, indicating that young workers may be especially vulnerable in this industry. In Norway, it is possible to enter the construction industry as a full time worker at the age of 18. The aim of this paper was to explore how young construction workers are received at their workplace with regard to OHS training. The study was designed as a qualitative case study. Each case consisted of a young worker or apprentice (< 25 years), a colleague, the immediate superior, the OHS manager, and a safety representative in the company. The interviews were recorded and analyzed through content analysis. The results showed that there were differences between large and small companies, where large companies had more formalized routines and systems for receiving and training young workers. These routines were, however, dependent more on requirements set by legislators and contractors than on company size, since the legislation has different requirements with an impact on OHS.
Modeling initiation trains based on HMX and TATB
NASA Astrophysics Data System (ADS)
Drake, R. C.; Maisey, M.
2017-01-01
There will always be a requirement to reduce the size of initiation trains. However, as the size is reduced the performance characteristics can be compromised. A detailed science-based understanding of the processes (ignition and growth to detonation) which determine the performance characteristics is required to enable compact and robust initiation trains to be designed. To assess the use of numerical models in the design of initiation trains a modeling study has been undertaken, with the aim of understanding the initiation of TATB and HMX charges by a confined, surface mounted detonator. The effect of detonator diameter and detonator confinement on the formation of dead zones in the acceptor explosives has been studied. The size of dead zones can be reduced by increasing the diameter of the detonator and by increasing the impedance of the confinement. The implications for the design of initiation trains are discussed.
The personal assistance workforce: trends in supply and demand.
Kaye, H Stephen; Chapman, Susan; Newcomer, Robert J; Harrington, Charlene
2006-01-01
The workforce providing noninstitutional personal assistance and home health services tripled between 1989 and 2004, according to U.S. survey data, growing at a much faster rate than the population needing such services. During the same period, Medicaid spending for such services increased dramatically, while both workforce size and spending for similar services in institutional settings remained relatively stable. Low wage levels for personal assistance workers, which have fallen behind those of comparable occupations; scarce health benefits; and high job turnover rates highlight the need for greater attention to ensuring a stable and well-trained workforce to meet growing demand.
Machine learning for many-body physics: The case of the Anderson impurity model
Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; ...
2014-10-31
We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
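One way to reproduce the spirit of this study, though not its actual impurity-solver data or learning machine, is to regress the truncated Legendre coefficients of the Green's function on the model parameters and track the test error as the training set grows; everything below (file names, kernel, hyperparameters) is an assumed setup.

```python
# Sketch: learn a map from Anderson-model parameters to Legendre coefficients of G,
# and examine how the error depends on the training-set size.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X = np.load("aim_parameters.npy")   # e.g. (U, eps_d, V, T) per sample
Y = np.load("legendre_coeffs.npy")  # truncated Legendre expansion of the Green's function

for n_train in (100, 500, 2000):
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, train_size=n_train, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X_tr, Y_tr)
    print(n_train, mean_absolute_error(Y_te, model.predict(X_te)))
```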
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
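A minimal sketch of such a decoder follows, with assumed sizes and without the fault-tolerance analysis or the training-data generation discussed above.

```python
# Toy recurrent decoder: read repeated stabilizer-measurement rounds, output a
# distribution over correction operators.
import torch
import torch.nn as nn

class SyndromeDecoder(nn.Module):
    def __init__(self, n_stabilizers, n_corrections, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_stabilizers, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_corrections)

    def forward(self, syndromes):            # (batch, rounds, n_stabilizers)
        _, h = self.rnn(syndromes)
        return self.head(h[-1])              # logits over candidate corrections

decoder = SyndromeDecoder(n_stabilizers=24, n_corrections=2 ** 5)
loss_fn = nn.CrossEntropyLoss()              # trained on (syndrome history, correction) pairs
```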
Developing a Neural Network to Act as a Noise Filter
1992-10-02
[Scanned-report excerpt garbled in extraction; the recoverable fragment is a table captioned "Architectures that provided the best results using a training set of 5 samples (no bias)", listing layer sizes, learning parameters, absolute and RMS errors, number of training cycles, and training time for the neural network.]
Effects of draught load exercise and training on calcium homeostasis in horses.
Vervuert, I; Coenen, M; Zamhöfer, J
2005-01-01
This study was conducted to investigate the effects of draught load exercise on calcium (Ca) homeostasis in young horses. Five 2-year-old untrained Standardbred horses were studied in a 4-month training programme. All exercise workouts were performed on a treadmill at a 6% incline and with a constant draught load of 40 kg (0.44 kN). The training programme started with a standardized exercise test (SET 1; six incremental steps of 5 min duration each, first step 1.38 m/s, stepwise increase by 0.56 m/s). A training programme was then initiated which consisted of low-speed exercise sessions (LSE; constant velocity at 1.67 m/s for 60 min, 48 training sessions in total). After the 16th and 48th LSE sessions, SETs (SET 2: middle of training period, SET 3: finishing training period) were performed again under the identical test protocol of SET 1. Blood samples for blood lactate, plasma total Ca, blood ionized calcium (Ca(2+)), blood pH, plasma inorganic phosphorus (P(i)) and plasma intact parathyroid hormone (PTH) were collected before, during and after SETs, and before and after the first, 16th, 32nd and 48th LSE sessions. During SETs there was a decrease in ionized Ca(2+) and a rise in lactate, P(i) and intact PTH. The LSEs resulted in an increase in pH and P(i), whereas lactate, ionized Ca(2+), total Ca and intact PTH were not affected. No changes in Ca metabolism were detected in the course of training. Results of this study suggest that the type of exercise influences Ca homeostasis and intact PTH response, but that these effects are not influenced in the course of the training period.
In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging
Ibrahim, Mohd Firdaus; Ahmad Sa’ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon
2016-01-01
The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system was able to identify the irregularity of mango shape and its estimated mass. A group of images of mangoes of different size and shape was used as the database set. Some important features such as length, height, centroid and perimeter were extracted from each image. Fourier descriptors and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mangoes. The volume from the water displacement method was compared with the volume estimated by image processing using a paired t-test and the Bland-Altman method. The two measurements were not significantly different (P > 0.05). The average correct classification for shape was 98% for a training set composed of 180 mangoes. The data were validated with another testing set consisting of 140 mangoes, which gave a success rate of 92%. The same set was used for evaluating the performance of mass estimation. The average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass. PMID:27801799
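One of the shape features named above, the Fourier descriptor of a closed contour, can be illustrated as below; the contour extraction, the remaining size-shape parameters and the disk-method mass estimate are not shown, and the normalisation choices are assumptions.

```python
# Illustration: translation-, scale- and rotation-insensitive Fourier descriptors
# of a closed boundary, computed from its complex representation.
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """contour: (N, 2) array of boundary points ordered around the fruit."""
    z = contour[:, 0] + 1j * contour[:, 1]       # complex representation of the boundary
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                                # drop DC term -> translation invariance
    mags = np.abs(coeffs)                        # magnitudes discard rotation/start point
    mags = mags / mags[1]                        # divide by first harmonic -> scale invariance
    return mags[1:1 + n_coeffs]
```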
Skidmore, Elizabeth R.; Butters, Meryl; Whyte, Ellen; Grattan, Emily; Shen, Jennifer; Terhorst, Lauren
2016-01-01
Objective To examine the effects of direct skill training and guided training for promoting independence after stroke. Design Single-blind randomized pilot study. Setting Inpatient rehabilitation facility. Participants Forty-three participants in inpatient rehabilitation with acute stroke and cognitive impairments. Interventions Participants were randomized to receive direct skill training (n=22, 10 sessions as adjunct to usual inpatient rehabilitation) or guided training (n=21, same dose). Main Outcome Measure The Functional Independence Measure assessed independence at baseline, rehabilitation discharge, and months 3, 6, and 12. Results Linear mixed models (random intercept, other effects fixed) revealed a significant intervention by time interaction (F4,150=5.11, p<0.001), a significant main effect of time (F4,150=49.25, p<0.001), and a significant effect of stroke severity (F1,150=34.46, p<.001). There was no main effect of intervention (F1,150=0.07, p=0.79). Change in Functional Independence Measures scores was greater for the DIRECT group at rehabilitation discharge (effect size of between group differences, d=0.28) and greater for the GUIDE group at months 3 (d=0.16), 6 (d=0.39), and 12 (d=0.53). The difference between groups in mean 12 month change scores was 10.57 points. Conclusions Guided training, provided in addition to usual care, offered a small advantage in the recovery of independence, relative to direct skill training. Future studies examining guided training in combination with other potentially potent intervention elements may further advise best practices in rehabilitation for individuals with cognitive impairments after acute stroke. PMID:27794487
Bernard, Jean-Baptiste; Arunkumar, Amit; Chung, Susana T L
2012-08-01
In a previous study, Chung, Legge, and Cheung (2004) showed that training using repeated presentation of trigrams (sequences of three random letters) resulted in an increase in the size of the visual span (number of letters recognized in a glance) and reading speed in the normal periphery. In this study, we asked whether we could optimize the benefit of trigram training on reading speed by using trigrams more specific to the reading task (i.e., trigrams frequently used in the English language) and presenting them according to their frequencies of occurrence in normal English usage and observers' performance. Averaged across seven observers, our training paradigm (4 days of training) increased the size of the visual span by 6.44 bits, with an accompanied 63.6% increase in the maximum reading speed, compared with the values before training. However, these benefits were not statistically different from those of Chung, Legge, and Cheung (2004) using a random-trigram training paradigm. Our findings confirm the possibility of increasing the size of the visual span and reading speed in the normal periphery with perceptual learning, and suggest that the benefits of training on letter recognition and maximum reading speed may not be linked to the types of letter strings presented during training. Copyright © 2012 Elsevier Ltd. All rights reserved.
Heusser, Peter; Eberhard, Sabine; Berger, Bettina; Weinzirl, Johannes; Orlow, Pascale
2014-06-16
Integrative medicine (IM) integrates evidence-based Complementary and Alternative Medicine (CAM) with conventional medicine (CON). Medical schools offer basic CAM electives but in postgraduate medical training (PGMT) little has been done for the integration of CAM. An exception to this is anthroposophic medicine (AM), a western form of CAM based on CON, offering an individualized holistic IM approach. AM hospitals are part of the public healthcare systems in Germany and Switzerland and train AM in PGMT. We performed the first quality evaluation of the subjectively perceived quality of this PGMT. An anonymous full survey of all 214 trainers (TR) and 240 trainees (TE) in all 15 AM hospitals in Germany and Switzerland, using the ETHZ questionnaire for annual national PGMT assessments in Switzerland (CH) and Germany (D), complemented by a module for AM. Data analysis included Cronbach's alpha to assess internal consistency questionnaire scales, 2-tailed Pearson correlation of specific quality dimensions of PGMT and department size, 2-tailed Wilcoxon Matched-Pair test for dependent variables and 2-tailed Mann-Whitney U-test for independent variables to calculate group differences. The level of significance was set at p < 0.05. Return rates were: D: TE 89/215 (41.39%), TR 78/184 (42.39%); CH: TE 19/25 (76%), TR 22/30 (73.33%). Cronbach's alpha values for TE scales were >0.8 or >0.9, and >0.7 to >0.5 for TR scales. Swiss hospitals surpassed German ones significantly in Global Satisfaction with AM (TR and TE); Clinical Competency training in CON (TE) and AM (TE, TR), Error Management, Culture of Decision Making, Evidence-based Medicine, and Clinical Competency in internal medicine CON and AM (TE). When the comparison was restricted to departments of comparable size, differences remained significant for Clinical Competencies in AM (TE, TR), and Culture of Decision Making (TE). CON received better grades than AM in Global Satisfaction and Clinical Competency. Quality of PGMT depended on department size, working conditions and structural training features. The lower quality of PGMT in German hospitals can be attributed to larger departments, more difficult working conditions, and less favorable structural features for PGMT in AM, possibly also in relation to increased financial pressure.
Team Training and Retention of Skills Acquired Above Real Time Training on a Flight Simulator
NASA Technical Reports Server (NTRS)
Ali, Syed Friasat; Guckenberger, Dutch; Crane, Peter; Rossi, Marcia; Williams, Mayard; Williams, Jason; Archer, Matt
2000-01-01
Above Real-Time Training (ARTT) is the training acquired on a real time simulator when it is modified to present events at a faster pace than normal. The experiments related to training of pilots performed by NASA engineers (Kolf in 1973, Hoey in 1976) and others (Guckenberger, Crane and their associates in the nineties) have shown that in comparison with the real time training (RTT), ARTT provides the following benefits: increased rate of skill acquisition, reduced simulator and aircraft training time, and more effective training for emergency procedures. Two sets of experiments have been performed; they are reported in professional conferences and the respective papers are included in this report. The retention of effects of ARTT has been studied in the first set of experiments and the use of ARTT as top-off training has been examined in the second set of experiments. In ARTT, the pace of events was 1.5 times the pace in RTT. In both sets of experiments, university students were trained to perform an aerial gunnery task. The training unit was equipped with a joystick and a throttle. The student acted as a nose gunner in a hypothetical two place attack aircraft. The flight simulation software was installed on a Universal Distributed Interactive Simulator platform supplied by ECC International of Orlando, Florida. In the first set of experiments, two training programs, RTT or ARTT, were used. Students were then tested in real time on more demanding scenarios: either immediately after training or two days later. The effects of ARTT did not decrease over a two day retention interval and ARTT was more time efficient than real time training. Therefore, equal test performance could be achieved with less clock-time spent in the simulator. In the second set of experiments, three training programs, RTT, ARTT or RARTT, were used. In RTT, students received 36 minutes of real time training. In ARTT, students received 36 minutes of above real time training. In RARTT, students received 18 minutes of real time training and 18 minutes of above real time training as top-off training. Students were then tested in real time on more demanding scenarios. The use of ARTT as top-off training after RTT offered better training than RTT alone or ARTT alone. It is, however, suggested that a similar experiment be conducted on a relatively more complex task with a larger sample of participants. Within the proposed duration of the research effort, the setting up of experiments and trial runs on using ARTT for team training were also scheduled, but they could not be accomplished due to extraordinary challenges faced in developing the required software configuration. Team training is, however, scheduled in a future study sponsored by NASA at Tuskegee University.
What shape do UK trainees want their training to be? Results of a cross-sectional study
Harries, Rhiannon L; Rashid, Mustafa; Smitham, Peter; Vesey, Alex; McGregor, Richard; Scheeres, Karl; Bailey, Jon; Sohaib, Syed Mohammed Afzal; Prior, Matthew; Frost, Jonathan; Al-Deeb, Walid; Kugathasan, Gana; Gokani, Vimal J
2016-01-01
Objectives The British Government is acting on recommendations to overhaul postgraduate training to meet the needs of the changing population, to produce generalist doctors undergoing shorter broad-based training (Greenaway Review). Only 45 doctors in training were involved in the consultation process. This study aims to obtain a focused perspective on the proposed reforms by doctors in training from across specialities. Design Prospective, questionnaire-based cross-sectional study. Setting/participants Following validation, a 31-item electronic questionnaire was distributed via trainee organisations and Postgraduate Local Education and Training Board (LETB) mailing lists. Throughout the 10-week study period, the survey was publicised on several social media platforms. Results Of the 3603 demographically representative respondents, 69% knew about proposed changes. Of the respondents, 73% expressed a desire to specialise, with 54% keen to provide general emergency cover. A small proportion (12%) stated that current training pathway length is too long, although 86% felt that it is impossible to achieve independent practitioner-level proficiency in a shorter period of time than is currently required. Opinions regarding credentialing were mixed, but tended towards disagreement. The vast majority (97%) felt credentialing should not be funded by doctors in training. Respondents preferred longer placement lengths with increasing career progression. Doctors in training value early generalised training (65%), with suggestions for further improvement. Conclusions This is the first large-scale cross-specialty study regarding the Shape of Training Review. Although there are recommendations which trainees support, it is clear that one size does not fit all. Most trainees are keen to provide a specialist service on an emergency generalist background. Credentialing is a contentious issue; however, we believe removing aspects from curricula into post-Certificate of Completion of Training (CCT) credentialing programmes with shortened specialty training routes only degrades the current consultant expertise, and does not serve the population. Educational needs, not political winds, should drive changes in postgraduate medical education and all stakeholders should be involved. PMID:27855084
García-López, David; de Paz, José A
2017-01-01
The aim of the study was to analyse the effects of 6 weeks (15 sessions) of flywheel resistance training with eccentric-overload (FRTEO) on different functional and anatomical variables in professional handball players. Twenty-nine athletes were recruited and randomly divided into two groups. The experimental group (EXP, n = 15) carried out 15 sessions of FRTEO in the leg-press exercise, with 4 sets of 7 repetitions at a maximum-concentric effort. The control group (CON, n = 14) performed the same number of training sessions including 4 sets of 7 maximum repetitions (7RM) using a weight-stack leg-press machine. The measured variables included maximal dynamic strength (1RM), muscle power at different submaximal loads (PO), vertical jump height (CMJ and SJ), 20 m sprint time (20 m), T-test time (T-test), and Vastus-Lateralis muscle (VL) thickness. The results of the EXP group showed a substantially better improvement (p < 0.05-0.001) in PO, CMJ, 20 m, T-test and VL, compared to the CON group. Moreover, athletes from the EXP group showed significant improvements in all the variables measured: 1RM (ES = 0.72), PO (ES = 0.42 - 0.83), CMJ (ES = 0.61), SJ (ES = 0.54), 20 m (ES = 1.45), T-test (ES = 1.44), and VL (ES = 0.63 - 1.64). Since handball requires repeated short, explosive efforts such as accelerations and decelerations during sprints with changes of direction, these results suggest that FRTEO produces functional and anatomical changes that improve performance in well-trained professional handball players. PMID:29339993
Experiences with global trigger tool reviews in five Danish hospitals: an implementation study
von Plessen, Christian; Kodal, Anne Marie; Anhøj, Jacob
2012-01-01
Objectives To describe experiences with the implementation of global trigger tool (GTT) reviews in five Danish hospitals and to suggest ways to improve the performance of GTT review teams. Design Retrospective observational study. Setting The measurement and monitoring of harms are crucial to campaigns to improve the safety of patients. Increasingly, teams use the GTT to review patient records and measure harms in English and non-English-speaking countries. Meanwhile, it is not clear how the method performs in such diverse settings. Participants Review teams from five Danish pilot hospitals of the national Danish Safer Hospital Programme. Primary and secondary outcome measures We collected harm rates, background and anecdotal information and reported patient safety incidents (PSIs) from five pilot hospitals currently participating in the Danish Safer Hospital Programme. Experienced reviewers categorised harms by type. We plotted harm rates as run charts and applied rules for the detection of patterns of non-random variation. Results The hospitals differed in size but had similar patient populations and activity. PSIs varied between 3 and 12 per 1000 patient-days. The average harm rate for all hospitals was 60 per 1000 patient-days, ranging from 34 to 84. The percentage of harmed patients was 25, ranging from 18 to 33. Overall, 96% of harms were temporary. Infections, pressure ulcers, procedure-related problems and gastrointestinal problems were common. Teams reported differences in training and review procedures, such as the role of the secondary reviewer. Conclusions We found substantial variation in harm rates. Differences in training, review procedures and documentation in patient records probably contributed to these variations. Training reviewers as teams, specifying the roles of the different reviewers, and maintaining training records and a database of review findings may improve the application of the GTT. PMID:23065451
Enabling phenotypic big data with PheNorm.
Yu, Sheng; Ma, Yumeng; Gronsbell, Jessica; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Liao, Katherine P; Cai, Tianxi
2018-01-01
Electronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training. The most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating characteristic curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification. We validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference. The accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
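A minimal sketch of the core idea, assuming a single surrogate count feature (ICD code count) and a healthcare-utilization proxy; the published PheNorm normalization, denoising and feature-combination steps are more involved, and all data and names below are hypothetical.

```python
# Sketch of the PheNorm idea: normalize a surrogate count feature so that cases
# and controls resemble a two-component Gaussian mixture, then use the mixture
# posterior as an unsupervised phenotype score (no expert labels needed).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
counts_neg = rng.poisson(0.5, size=800)                 # simulated ICD counts, controls
counts_pos = rng.poisson(8.0, size=200)                 # simulated ICD counts, cases
counts = np.concatenate([counts_neg, counts_pos])
notes = rng.integers(5, 50, size=counts.size)           # healthcare-utilization proxy

x = (np.log1p(counts) / np.log1p(notes)).reshape(-1, 1) # normalized surrogate feature

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
high = np.argmax(gmm.means_.ravel())                    # component with larger mean = phenotype
score = gmm.predict_proba(x)[:, high]                   # unsupervised phenotype probability
print("mean score, cases vs controls:",
      score[800:].mean().round(2), score[:800].mean().round(2))
```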
NASA Astrophysics Data System (ADS)
Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Wei, Jun; Chughtai, Aamer; Sundaram, Baskaran; Hadjiiski, Lubomir M.; Patel, Smita; Kazerooni, Ella A.
2013-04-01
A 3D multiscale intensity homogeneity transformation (MIHT) method was developed to reduce false positives (FPs) in our previously developed CAD system for pulmonary embolism (PE) detection. In MIHT, the voxel intensity of a PE candidate region was transformed to an intensity homogeneity value (IHV) with respect to the local median intensity. The IHVs were calculated at multiple scales (MIHVs) to measure the intensity homogeneity, taking into account vessels of different sizes and different degrees of occlusion. Seven new features including the entropy, gradient, and moments that characterized the intensity distributions of the candidate regions were derived from the MIHVs and combined with the previously designed features that described the shape and intensity of PE candidates for the training of a linear classifier to reduce the FPs. A total of 59 CTPA PE cases were collected from our patient files (UM set) with IRB approval, and 69 cases were obtained from the PIOPED II data set with access permission. In total, 595 and 800 PEs were identified as the reference standard by experienced thoracic radiologists in the UM and PIOPED II sets, respectively. FROC analysis was used for performance evaluation. Compared with our previous CAD system, at a test sensitivity of 80%, the new method reduced the FP rate from 18.9 to 14.1/scan for the PIOPED II set when the classifier was trained with the UM set and from 22.6 to 16.0/scan vice versa. The improvement was statistically significant (p<0.05) by JAFROC analysis. This study demonstrated that the MIHT method is effective in reducing FPs and improving the performance of the CAD system.
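The abstract does not give the exact IHV formula; the sketch below assumes a simple difference-from-local-median form computed at several window sizes, purely to illustrate the multiscale comparison of a voxel with its local median.

```python
# Sketch of a multiscale intensity homogeneity transformation: compare each
# voxel with the local median at several window sizes. The difference-from-
# median form below is an illustrative assumption, not the authors' formula.
import numpy as np
from scipy.ndimage import median_filter

def multiscale_ihv(volume, scales=(3, 5, 9)):
    """Return one homogeneity map per scale for a 3D CT volume."""
    maps = []
    for w in scales:
        local_median = median_filter(volume, size=w)
        maps.append(volume - local_median)      # homogeneous vessel interiors -> values near 0
    return np.stack(maps, axis=0)

# Toy example: a bright 3D "vessel" with a darker (occluded) segment.
vol = np.zeros((32, 32, 32))
vol[16, 16, :] = 200.0
vol[16, 16, 10:14] = 80.0                       # candidate PE region
ihv = multiscale_ihv(vol)
print(ihv.shape, ihv[:, 16, 16, 12])            # (3, 32, 32, 32) and the per-scale IHVs
```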
Systematic review of skills transfer after surgical simulation-based training.
Dawe, S R; Pena, G N; Windsor, J A; Broeders, J A J L; Cregan, P C; Hewett, P J; Maddern, G J
2014-08-01
Simulation-based training assumes that skills are directly transferable to the patient-based setting, but few studies have correlated simulated performance with surgical performance. A systematic search strategy was undertaken to find studies published since the last systematic review, published in 2007. Inclusion of articles was determined using a predetermined protocol, independent assessment by two reviewers and a final consensus decision. Studies that reported on the use of surgical simulation-based training and assessed the transferability of the acquired skills to a patient-based setting were included. Twenty-seven randomized clinical trials and seven non-randomized comparative studies were included. Fourteen studies investigated laparoscopic procedures, 13 endoscopic procedures and seven other procedures. These studies provided strong evidence that participants who reached proficiency in simulation-based training performed better in the patient-based setting than their counterparts who did not have simulation-based training. Simulation-based training was equally as effective as patient-based training for colonoscopy, laparoscopic camera navigation and endoscopic sinus surgery in the patient-based setting. These studies strengthen the evidence that simulation-based training, as part of a structured programme and incorporating predetermined proficiency levels, results in skills transfer to the operative setting. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.
2013-01-01
Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
Štirn, Igor; Carruthers, Jamie; Šibila, Marko; Pori, Primož
2017-02-01
In the present study, the effect of frequent, immediate, augmented feedback on the increase of throwing velocity was investigated. The increase in throwing velocity of a handball set shot was compared between training with and without knowledge of results. Fifty female and seventy-three male physical education students were assigned randomly to the experimental or control group. All participants performed two series of ten set shots with maximal effort twice a week for six weeks. The experimental group received information regarding throwing velocity measured by a radar gun immediately after every shot, whereas the control group did not receive any feedback. Measurements of maximal throwing velocity of an ordinary handball and a heavy ball were performed before and after the training period and compared. Participants who received feedback on results attained an almost four times greater relative increase in the velocity of the normal ball (size 2) than those who trained without feedback (8.1 ± 3.6% vs. 2.7 ± 2.9%). The velocity increases were smaller but still differed significantly between groups for throws using the heavy ball (5.1 ± 4.2% and 2.5 ± 5.8% for the experimental and control group, respectively). Apart from the experimental group throwing the normal ball, no gender differences in velocity change were found. The results confirmed that training oriented towards an increase in throwing velocity becomes significantly more effective when frequent knowledge of results is provided.
Hebisz, Rafal; Borkowski, Jacek; Zatoń, Marek
2016-01-01
Abstract The aim of this study was to determine differences in glycolytic metabolite concentrations and work output in response to an all-out interval training session in 23 cyclists with at least 2 years of interval training experience (E) and those inexperienced (IE) in this form of training. The intervention involved subsequent sets of maximal intensity exercise on a cycle ergometer. Each set comprised four 30 s repetitions interspersed with 90 s recovery periods; sets were repeated when blood pH returned to 7.3. Measurements of post-exercise hydrogen (H+) and lactate ion (LA-) concentrations and work output were taken. The experienced cyclists performed significantly more sets of maximal efforts than the inexperienced athletes (5.8 ± 1.2 vs. 4.3 ± 0.9 sets, respectively). Work output decreased in each subsequent set in the IE group and only in the last set in the E group. Distribution of power output changed only in the E group; power decreased in the initial repetitions of each set, only to increase in the final repetitions. H+ concentration decreased in the third, penultimate, and last sets in the E group and in each subsequent set in the IE group. LA- decreased in the last set in both groups. In conclusion, the experienced cyclists were able to repeatedly induce elevated levels of lactic acidosis. Power output distribution changed with decreased acid–base imbalance. In this way, this group could compensate for a decreased anaerobic metabolism. The above factors allowed cyclists experienced in interval training to perform more sets of maximal exercise without a decrease in power output compared with inexperienced cyclists. PMID:28149346
An accelerated training method for back propagation networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O. (Inventor)
1993-01-01
The principal objective is to provide a training procedure for a feed forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting, from the set of input data, a set of features that can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
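A sketch of the underlying idea: project the training inputs onto their leading singular vectors before feeding them to the network, so the network trains on a smaller, decorrelated representation. The patented procedure may differ in detail; the data below are made up.

```python
# Find orthogonal directions that capture most of the spread of the training
# inputs (via SVD of the centered input matrix) and train the network on
# those projections instead of the raw inputs. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64))          # 500 training vectors, 64 raw features
X_centered = X - X.mean(axis=0)

# Right singular vectors, ordered by singular value = directions of maximal spread.
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
k = 8                                   # keep the k strongest directions
features = X_centered @ Vt[:k].T        # reduced, decorrelated training inputs

print(features.shape)                   # (500, 8): smaller inputs, faster training
```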
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny
2018-02-01
We propose a cross-domain, multi-task transfer learning framework to transfer knowledge learned from non-medical images by a deep convolutional neural network (DCNN) to a medical image recognition task while improving the generalization by multi-task learning of auxiliary tasks. A first-stage cross-domain transfer learning was initiated from an ImageNet-trained DCNN to a mammography-trained DCNN. A total of 19,632 regions of interest (ROIs) from 2,454 mass lesions were collected from two imaging modalities, digitized screen-film mammography (SFM) and full-field digital mammography (DM), and split into training and test sets. In the multi-task transfer learning, the DCNN learned the mass classification task simultaneously from the training sets of SFM and DM. The best transfer network for mammography was selected from three transfer networks with different numbers of convolutional layers frozen. The performance of single-task and multi-task transfer learning on an independent SFM test set in terms of the area under the receiver operating characteristic curve (AUC) was 0.78+/-0.02 and 0.82+/-0.02, respectively. In the second-stage cross-domain transfer learning, a set of 12,680 ROIs from 317 mass lesions on DBT were split into validation and independent test sets. We first studied the data requirements for the first-stage mammography-trained DCNN by varying the mammography training data from 1% to 100% and evaluated its learning on the DBT validation set in inference mode. We found that the entire available mammography set provided the best generalization. The DBT validation set was then used to train only the last four fully connected layers, resulting in an AUC of 0.90+/-0.04 on the independent DBT test set.
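The frozen-layer transfer pattern described above (reuse the convolutional layers of a previously trained network and retrain only the final fully connected layers on the new data) can be sketched in PyTorch; the toy architecture, tensor sizes, and data below are placeholders, not the authors' DCNN.

```python
# Sketch of frozen-layer transfer learning: freeze the transferred convolutional
# feature extractor and optimize only the fully connected head on the new data.
import torch
import torch.nn as nn

class ToyDCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(             # convolutional feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(           # fully connected head
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ToyDCNN()                                  # imagine weights from the mammography stage
for p in model.features.parameters():              # freeze the transferred conv layers
    p.requires_grad = False

# Only the unfrozen head is passed to the optimizer.
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
x = torch.randn(4, 1, 64, 64)                      # four fake 64x64 ROIs
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```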
Wallert, John; Tomasoni, Mattia; Madison, Guy; Held, Claes
2017-07-05
Machine learning algorithms hold potential for improved prediction of all-cause mortality in cardiovascular patients, yet have not previously been developed with high-quality population data. This study compared four popular machine learning algorithms trained on unselected, nation-wide population data from Sweden to solve the binary classification problem of predicting survival versus non-survival 2 years after first myocardial infarction (MI). This prospective national registry study for prognostic accuracy validation of predictive models used data from 51,943 complete first MI cases as registered during 6 years (2006-2011) in the national quality register SWEDEHEART/RIKS-HIA (90% coverage of all MIs in Sweden) with follow-up in the Cause of Death register (> 99% coverage). Primary outcome was AUROC (C-statistic) performance of each model on the untouched test set (40% of cases) after model development on the training set (60% of cases) with the full (39) predictor set. Model AUROCs were bootstrapped and compared, correcting the P-values for multiple comparisons with the Bonferroni method. Secondary outcomes were derived when varying sample size (1-100% of total) and predictor sets (39, 10, and 5) for each model. Analyses were repeated on 79,869 completed cases after multivariable imputation of predictors. A Support Vector Machine with a radial basis kernel developed on 39 predictors had the highest complete cases performance on the test set (AUROC = 0.845, PPV = 0.280, NPV = 0.966) outperforming Boosted C5.0 (0.845 vs. 0.841, P = 0.028) but not significantly higher than Logistic Regression or Random Forest. Models converged to the point of algorithm indifference with increased sample size and predictors. Using the top five predictors also produced good classifiers. Imputed analyses had slightly higher performance. Improved mortality prediction at hospital discharge after first MI is important for identifying high-risk individuals eligible for intensified treatment and care. All models performed accurately and similarly and because of the superior national coverage, the best model can potentially be used to better differentiate new patients, allowing for improved targeting of limited resources. Future research should focus on further model development and investigate possibilities for implementation.
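As a rough illustration of the bootstrap comparison of test-set AUROCs described above, the sketch below uses synthetic data and generic scikit-learn models as stand-ins for the registry data and the tuned classifiers; only the predictor count (39) is taken from the abstract.

```python
# Compare two classifiers' test-set AUROC with a bootstrap confidence interval.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=39, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_svm = svm.predict_proba(X_te)[:, 1]
p_lr = lr.predict_proba(X_te)[:, 1]

rng = np.random.default_rng(0)
diffs = []
for _ in range(500):                                  # resample the test set with replacement
    idx = rng.integers(0, len(y_te), len(y_te))
    diffs.append(roc_auc_score(y_te[idx], p_svm[idx]) -
                 roc_auc_score(y_te[idx], p_lr[idx]))
diffs = np.array(diffs)
print("AUROC difference (SVM - LR): %.3f [%.3f, %.3f]" %
      (diffs.mean(), *np.percentile(diffs, [2.5, 97.5])))
```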
Choice numeracy and physicians-in-training performance: the case of Medicare Part D
Hanoch, Yaniv; Miron-Shatz, Talya; Cole, Helen; Himmelstein, Mary; Federman, Alex D.
2017-01-01
In choosing a prescription plan, Medicare beneficiaries in the US usually face over 50 options. Many have turned to their physicians for help with this complex task. However, exactly how well physicians navigate information on Part D plans is still an open question. In this study, we explored this unanswered question by examining the effect of choice-set size and numeracy levels on a physician-in-training’s ability to choose appropriate Medicare drug plans. Consistent with our hypotheses, larger choice sets correlated significantly with fewer correct answers, and higher numeracy levels were associated with more correct answers. Hence, our data further highlight the role of numeracy in financial- and health-related decision making, and also raise concerns about physicians’ ability to help patients choose the optimal Part D plan. PMID:20658834
Computer-Aided Diagnosis Of Leukemic Blood Cells
NASA Astrophysics Data System (ADS)
Gunter, U.; Harms, H.; Haucke, M.; Aus, H. M.; ter Meulen, V.
1982-11-01
In a first clinical test, computer programs are being used to diagnose leukemias. The data collected include blood samples from patients suffering from acute myelomonocytic-, acute monocytic- and acute promyelocytic, myeloblastic, prolymphocytic, chronic lymphocytic leukemias and leukemic transformed immunocytoma. The proper differentiation of the leukemic cells is essential because the therapy depends on the type of leukemia. The algorithms analyse the fine chromatin texture and distribution in the nuclei as well as size and shape parameters from the cells and nuclei. Cells with similar nuclei from different leukemias can be distinguished from each other by analyzing the cell cytoplasm images. Recognition of these subtle differences in the cells requires an image sampling rate of 15-30 pixels/micron. The results for the entire data set correlate directly to established hematological parameters and support the previously published initial training set.
NASA Astrophysics Data System (ADS)
Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.
2018-07-01
We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via the density of galaxies in a field when using the photometric redshifts obtained with a non-representative training set.
NASA Astrophysics Data System (ADS)
Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.
2018-04-01
We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r-band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via the density of galaxies in a field when using the photometric redshifts obtained with a non-representative training set.
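The "cumulative distribution function and a Monte Carlo process" step in the two records above can be illustrated as inverse-transform sampling: draw one redshift per galaxy from its photo-z probability distribution so that the stacked draws follow the full estimated N(z). The Gaussian PDFs below are synthetic stand-ins for ANNz2/GPz outputs.

```python
# Inverse-transform (CDF + Monte Carlo) sampling of per-galaxy photo-z PDFs.
import numpy as np

rng = np.random.default_rng(0)
z_grid = np.linspace(0.0, 1.5, 301)

# Fake per-galaxy photo-z PDFs: Gaussians with varying centres and widths.
z_point = rng.uniform(0.2, 1.0, size=1000)
sigma = rng.uniform(0.03, 0.10, size=1000)
pdfs = np.exp(-0.5 * ((z_grid - z_point[:, None]) / sigma[:, None]) ** 2)
pdfs /= np.trapz(pdfs, z_grid, axis=1)[:, None]

# Build each CDF and invert it at a uniform random draw.
cdfs = np.cumsum(pdfs, axis=1)
cdfs /= cdfs[:, -1:]
u = rng.uniform(size=1000)
z_mc = np.array([np.interp(ui, cdf, z_grid) for ui, cdf in zip(u, cdfs)])

print("point-estimate mean z: %.3f, Monte Carlo sample mean z: %.3f"
      % (z_point.mean(), z_mc.mean()))
```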
Data Programming: Creating Large Training Sets, Quickly.
Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher
2016-12-01
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
Data Programming: Creating Large Training Sets, Quickly
Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher
2018-01-01
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can “denoise” the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable. PMID:29872252
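The labeling-function interface described in the two data programming records above can be sketched as follows. The fixed, hand-set accuracy weights stand in for the generative model that data programming actually learns from the label matrix alone, so this illustrates the interface rather than the published algorithm; all functions and documents are hypothetical.

```python
# Several noisy, possibly conflicting heuristics vote on each example, and
# their votes are combined into a probabilistic training label.
import numpy as np

def lf_contains_spouse(text):          # labeling functions return +1, -1, or 0 (abstain)
    return 1 if "spouse" in text else 0

def lf_contains_wife_or_husband(text):
    return 1 if ("wife" in text or "husband" in text) else 0

def lf_contains_colleague(text):
    return -1 if "colleague" in text else 0

docs = [
    "Ann and her spouse Bob moved to Oslo.",
    "Bob introduced his colleague Carol.",
    "Dana thanked her husband at the ceremony.",
    "Eve met Frank at a conference.",
]
lfs = [lf_contains_spouse, lf_contains_wife_or_husband, lf_contains_colleague]
L = np.array([[lf(d) for lf in lfs] for d in docs])    # label matrix, shape (n_docs, n_lfs)

# Assumed (not learned) accuracies; data programming would estimate these.
acc = np.array([0.9, 0.8, 0.7])
w = np.log(acc / (1 - acc))                            # log-odds weights
prob_positive = 1 / (1 + np.exp(-(L @ w)))             # soft labels for a downstream model
print(np.round(prob_positive, 2))
```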
Task Analysis of Tactical Leadership Skills for Bradley Infantry Fighting Vehicle Leaders
1986-10-01
The Bradley Leader Trainer is conceptualized as a device or set of devices that can be used to teach Bradley leaders to perform their full set of...experts. The task list was examined to determine critical training requirements, requirements for training device support of this training, and...
NASA Astrophysics Data System (ADS)
Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon
2015-04-01
Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is limited to those sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models commonly can be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternate modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a bootstrap approach. The basis of bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the remaining data. As a new SOM is created up to 30 times for each value under scrutiny, it forms the basis for a stochastic framework from which residuals are used to evaluate error statistics and model bias. The proposed method is suitable for estimating soil geotechnical properties, revealing and quantifying relationships between geotechnical variables and particle size distribution that are not properly captured by linear multivariate statistical approaches.
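A sketch of the leave-one-out idea with a SOM, using the third-party minisom package and synthetic particle-size data: train a map on samples containing both particle-size and geotechnical values, then estimate a held-out geotechnical value from the best-matching codebook vector found using the particle-size dimensions only. The paper repeats the SOM construction up to 30 times per held-out value; this sketch does a single pass.

```python
# Leave-one-out estimation of a geotechnical value from a SOM codebook.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
psd = rng.dirichlet(np.ones(4), size=100)               # 4 particle-size fractions
cohesion = 5 + 20 * psd[:, 0] + rng.normal(0, 1, 100)   # fake geotechnical target
data = np.column_stack([psd, cohesion])

errors = []
for i in range(len(data)):                              # leave-one-out loop
    train = np.delete(data, i, axis=0)
    som = MiniSom(6, 6, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(train, 500)
    codebook = som.get_weights().reshape(-1, data.shape[1])
    # Best-matching unit using the PSD dimensions only, then read off cohesion.
    bmu = np.argmin(np.linalg.norm(codebook[:, :4] - data[i, :4], axis=1))
    errors.append(codebook[bmu, 4] - data[i, 4])

print("bias %.2f, RMSE %.2f"
      % (np.mean(errors), np.sqrt(np.mean(np.square(errors)))))
```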
Henricks, Walter H; Karcher, Donald S; Harrison, James H; Sinard, John H; Riben, Michael W; Boyer, Philip J; Plath, Sue; Thompson, Arlene; Pantanowitz, Liron
2017-01-01
Recognition of the importance of informatics to the practice of pathology has surged. Training residents in pathology informatics has been a daunting task for most residency programs in the United States because faculty often lacks experience and training resources. Nevertheless, developing resident competence in informatics is essential for the future of pathology as a specialty. To develop and deliver a pathology informatics curriculum and instructional framework that guides pathology residency programs in training residents in critical pathology informatics knowledge and skills, and meets Accreditation Council for Graduate Medical Education Informatics Milestones. The College of American Pathologists, Association of Pathology Chairs, and Association for Pathology Informatics formed a partnership and expert work group to identify critical pathology informatics training outcomes and to create a highly adaptable curriculum and instructional approach, supported by a multiyear change management strategy. Pathology Informatics Essentials for Residents (PIER) is a rigorous approach for educating all pathology residents in important pathology informatics knowledge and skills. PIER includes an instructional resource guide and toolkit for incorporating informatics training into residency programs that vary in needs, size, settings, and resources. PIER is available at http://www.apcprods.org/PIER (accessed April 6, 2016). PIER is an important contribution to informatics training in pathology residency programs. PIER introduces pathology trainees to broadly useful informatics concepts and tools that are relevant to practice. PIER provides residency program directors with a means to implement a standardized informatics training curriculum, to adapt the approach to local program needs, and to evaluate resident performance and progress over time.
Resistance training using eccentric overload induces early adaptations in skeletal muscle size.
Norrbrand, Lena; Fluckey, James D; Pozzo, Marco; Tesch, Per A
2008-02-01
Fifteen healthy men performed a 5-week training program comprising four sets of seven unilateral, coupled concentric-eccentric knee extensions 2-3 times weekly. While eight men were assigned to training using a weight stack (WS) machine, seven men trained using a flywheel (FW) device, which inherently provides variable resistance and allows for eccentric overload. The design of these apparatuses ensured similar knee extensor muscle use and range of motion. Before and after training, maximal isometric force (MVC) was measured in tasks non-specific to the training modes. Volume of all individual quadriceps muscles was determined by magnetic resonance imaging. Performance across the 12 exercise sessions was measured using the inherent features of the devices. Whereas MVC increased (P < 0.05) at all angles measured in FW, such a change was less consistent in WS. There was a marked increase (P < 0.05) in task-specific performance (i.e., load lifted) in WS. Average work showed a non-significant 8.7% increase in FW. Quadriceps muscle volume increased (P < 0.025) in both groups after training. Although the more than twofold greater hypertrophy evident in FW (6.2%) was not statistically greater than that shown in WS (3.0%), all four individual quadriceps muscles of FW showed increased (P < 0.025) volume whereas in WS only m. rectus femoris was increased (P < 0.025). Collectively the results of this study suggest more robust muscular adaptations following flywheel than weight stack resistance exercise, supporting the idea that eccentric overload offers a potent stimulus essential to optimize the benefits of resistance exercise.
Principles to Consider in Defining New Directions in Internal Medicine Training and Certification
Turner, Barbara J; Centor, Robert M; Rosenthal, Gary E
2006-01-01
SGIM endorses seven principles related to current thinking about internal medicine training: 1) internal medicine requires a full three years of residency training before subspecialization; 2) internal medicine residency programs must dramatically increase support for training in the ambulatory setting and offer equivalent opportunities for training in both inpatient and outpatient medicine; 3) in settings where adequate support and time are devoted to ambulatory training, the third year of residency could offer an opportunity to develop further expertise or mastery in a specific type or setting of care; 4) further certification in specific specialties within internal medicine requires the completion of an approved fellowship program; 5) areas of mastery in internal medicine can be demonstrated through modified board certification and recertification examinations; 6) certification processes throughout internal medicine should focus increasingly on demonstration of clinical competence through adherence to validated standards of care within and across practice settings; and 7) regardless of the setting in which General Internists practice, we should unite to promote the critical role that this specialty serves in patient care. PMID:16637826
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2001-01-01
Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.
Less is more: Sampling chemical space with active learning
NASA Astrophysics Data System (ADS)
Smith, Justin S.; Nebgen, Ben; Lubbers, Nicholas; Isayev, Olexandr; Roitberg, Adrian E.
2018-06-01
The development of accurate and transferable machine learning (ML) potentials for predicting molecular energetics is a challenging task. The process of data generation to train such ML potentials is a task neither well understood nor researched in detail. In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials. It is based on the concept of active learning (AL) via Query by Committee (QBC), which uses the disagreement between an ensemble of ML potentials to infer the reliability of the ensemble's prediction. QBC allows the presented AL algorithm to automatically sample regions of chemical space where the ML potential fails to accurately predict the potential energy. AL improves the overall fitness of ANAKIN-ME (ANI) deep learning potentials in rigorous test cases by mitigating human biases in deciding what new training data to use. AL also reduces the training set size to a fraction of the data required when using naive random sampling techniques. To provide validation of our AL approach, we develop the COmprehensive Machine-learning Potential (COMP6) benchmark (publicly available on GitHub) which contains a diverse set of organic molecules. Active learning-based ANI potentials outperform the original random sampled ANI-1 potential with only 10% of the data, while the final active learning-based model vastly outperforms ANI-1 on the COMP6 benchmark after training to only 25% of the data. Finally, we show that our proposed AL technique develops a universal ANI potential (ANI-1x) that provides accurate energy and force predictions on the entire COMP6 benchmark. This universal ML potential achieves a level of accuracy on par with the best ML potentials for single molecules or materials, while remaining applicable to the general class of organic molecules composed of the elements CHNO.
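A toy sketch of active learning via query by committee: an ensemble is trained on the current data, the candidates with the largest committee disagreement are added to the training set, and the loop repeats. A 1D regression problem and random forests stand in for the ANI ensemble of neural network potentials; all functions and parameters are assumptions for illustration.

```python
# Query by committee: sample where the ensemble members disagree most.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.1 * x ** 2               # "true" potential surface

X_pool = np.linspace(-3, 3, 600).reshape(-1, 1)          # unlabeled candidate pool
train_idx = list(rng.choice(len(X_pool), size=20, replace=False))

for step in range(5):
    X_tr = X_pool[train_idx]
    y_tr = f(X_tr).ravel()
    committee = [RandomForestRegressor(n_estimators=30, random_state=s).fit(X_tr, y_tr)
                 for s in range(4)]
    preds = np.stack([m.predict(X_pool) for m in committee])
    disagreement = preds.std(axis=0)                      # committee spread per candidate
    disagreement[train_idx] = -np.inf                     # do not re-select known points
    new = np.argsort(disagreement)[-10:]                  # query the 10 most uncertain
    train_idx.extend(new.tolist())
    print(f"step {step}: training set size = {len(train_idx)}")
```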
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Audiovisual Interval Size Estimation Is Associated with Early Musical Training
Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134
Giaquinto, Marcus
2017-02-19
How can we acquire a grasp of cardinal numbers, even the first very small positive cardinal numbers, given that they are abstract mathematical entities? That problem of cognitive access is the main focus of this paper. All the major rival views about the nature and existence of cardinal numbers face difficulties; and the view most consonant with our normal thought and talk about numbers, the view that cardinal numbers are sizes of sets, runs into the cognitive access problem. The source of the problem is the plausible assumption that cognitive access to something requires causal contact with it. It is argued that this assumption is in fact wrong, and that in this and similar cases, we should accept that a certain recognize-and-distinguish capacity is sufficient for cognitive access. We can then go on to solve the cognitive access problem, and thereby support the set-size view of cardinal numbers, by paying attention to empirical findings about basic number abilities. To this end, some selected studies of infants, pre-school children and a trained chimpanzee are briefly discussed.This article is part of a discussion meeting issue 'The origins of numerical abilities'. © 2017 The Author(s).
Higher-Order Neural Networks Applied to 2D and 3D Object Recognition
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1994-01-01
A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images of varying sizes taken by different sensors.
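A sketch of a spatial pyramid pooling layer that produces a fixed-length feature vector for arbitrary input sizes, which is what lets the fully connected head accept images of any resolution; the surrounding layers and pooling levels are assumptions for illustration, not the paper's architecture.

```python
# Spatial pyramid pooling: max-pool the last feature map on 1x1, 2x2 and 4x4
# grids and concatenate, giving a fixed-length vector regardless of H and W.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                                 # x: (N, C, H, W), any H, W
        pooled = [F.adaptive_max_pool2d(x, out).flatten(1) for out in self.levels]
        return torch.cat(pooled, dim=1)                   # (N, C * sum(l * l))

conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
spp = SPP()
head = nn.Linear(8 * (1 + 4 + 16), 1)                     # quality score

for h, w in [(96, 128), (160, 160)]:                      # different input sizes
    feats = spp(conv(torch.randn(2, 3, h, w)))
    print(feats.shape, head(feats).shape)                 # always (2, 168) -> (2, 1)
```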
Organizational Correlates of Management Training Interests.
ERIC Educational Resources Information Center
Tills, Marvin
A study was made of a sample of Wisconsin manufacturing firms and a subsample of firms in different size categories to determine organizational correlates of management training interests. Correlations were sought between characteristics of firms (ownership, relationship to parent company, size of employment, market orientation, growth trends,…
Space Shuttle inflatable training articles
NASA Technical Reports Server (NTRS)
West, M. L.
1984-01-01
The design, development, construction, and testing of the Long Duration Exposure Facility inflatable and the space telescope training articles are discussed. While these articles are of similar nature, materials, and construction, they vary in size and present different problems with regard to size, shape, gross/net lift, and balance.
[Sarcopenia intervention with progressive resistance training and protein nutritional supplements].
Palop Montoro, M Victoria; Párraga Montilla, Juan Antonio; Lozano Aguilera, Emilio; Arteaga Checa, Milagros
2015-04-01
Aging is accompanied by changes in body composition, among them a progressive reduction in muscle mass, which may contribute to the development of functional limitations in older people and in which lifestyle plays a particularly important role. The objective was to test the effectiveness of progressive resistance training, protein nutritional supplements, and both interventions combined in the treatment of sarcopenia. A review of the literature was performed in the Medline, ScienceDirect, CINAHL, ISI WOK and PEDro databases by combining Medical Subject Headings (MeSH) descriptors concerning sarcopenia, progressive resistance training, protein supplements and seniors. A total of 147 studies were found in which resistance exercise was performed in 45-60 minute sessions, 2-3 times a week, with 3-4 sets of 8 repetitions at progressively increasing intensity. This exercise resulted in increased muscle mass and strength, and increased skeletal muscle protein synthesis and muscle fiber size. Nutritional supplements such as beta-hydroxy-beta-methylbutyrate, leucine and essential amino acids produced gains in muscle mass. All supplements increased strength, especially when combined with resistance exercise. The combination of progressive resistance training and dietary protein, including in the form of nutritional supplements, strengthens the impact that each of these interventions can have on the treatment of sarcopenia in the elderly. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Common constraints limit Korean and English character recognition in peripheral vision.
He, Yingchen; Kwon, MiYoung; Legge, Gordon E
2018-01-01
The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition.
Common constraints limit Korean and English character recognition in peripheral vision
He, Yingchen; Kwon, MiYoung; Legge, Gordon E.
2018-01-01
The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition. PMID:29327041
An open trial of a comprehensive anger treatment program on an outpatient sample.
Fuller, J Ryan; Digiuseppe, Raymond; O'Leary, Siobhan; Fountain, Tina; Lang, Colleen
2010-07-01
This pilot study was designed to investigate the efficacy of a cognitive behavioral treatment for anger. Twelve adult outpatients (5 men and 7 women) completed sixteen 2-hour group sessions. Participants were diagnosed with 29 Axis I and 34 Axis II disorders with high rates of comorbidity. Empirically supported techniques of skills training, cognitive restructuring, and relaxation were utilized. In this protocol, cognitive restructuring emphasized the use of the ABC model to understand anger episodes and the Rational Emotive Behavior Therapy (REBT) techniques of disputing irrational beliefs and rehearsing rational coping statements, but additional cognitive techniques were used, e.g. self-instructional training (SIT). Skills training included problem-solving and assertiveness. Relaxation training consisted of paced respiration. Motivational interviewing, imaginal exposure with coping, and relapse prevention were also included. Significant improvements were found from pre- to post-treatment on the following measures: the Trait Anger Scale of the State-Trait Anger Expression Inventory-II; Anger Disorder Scale total scores; idiosyncratic anger measurements of situational intensity and symptom severity; and the Beck Depression Inventory-II. In order to extend the significant research findings of this pilot study, future investigations should involve larger sample sizes, populations drawn from various settings, and contact control groups.
Cognitive rehabilitation in schizophrenia: a quantitative analysis of controlled studies.
Krabbendam, Lydia; Aleman, André
2003-09-01
Cognitive rehabilitation is now recognized as an important tool in the treatment of schizophrenia, and findings in this area are emerging rapidly. There is a need for a systematic review of the effects of the different training programs. To review quantitatively the controlled studies on cognitive rehabilitation in schizophrenia for the effect of training on performance on tasks other than those practiced in the training procedure. A meta-analysis was conducted on 12 controlled studies of cognitive rehabilitation in schizophrenia taking into account the effects of type of rehabilitation approach (rehearsal or strategy learning) and duration of training. The mean weighted effect size was 0.45, with a 95% confidence interval from 0.26 to 0.64. Effect sizes differed slightly, depending on rehabilitation approach, in favor of strategy learning, but this difference did not reach statistical significance. Duration of training did not influence effect size. Cognitive rehabilitation can improve task performance in patients with schizophrenia and this effect is apparent on tasks outside those practiced during the training procedure. Future studies should include more real-world outcomes and perform longitudinal evaluations.
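The pooled estimate and confidence interval quoted above are the standard inverse-variance weighted (fixed-effect) summary; the small sketch below shows the arithmetic with made-up per-study values, not the studies actually included in the review.

```python
# Fixed-effect meta-analysis arithmetic: inverse-variance weighted mean effect
# size and its 95% confidence interval.
import numpy as np

d = np.array([0.30, 0.55, 0.40, 0.62, 0.35])     # per-study effect sizes (Cohen's d)
var = np.array([0.04, 0.06, 0.05, 0.08, 0.03])   # per-study sampling variances

w = 1.0 / var                                    # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled d = {pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```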
Comparison of molecular breeding values based on within- and across-breed training in beef cattle
2013-01-01
Background Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Methods Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. Results With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Conclusions Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set. PMID:23953034
Target discrimination method for SAR images based on semisupervised co-training
NASA Astrophysics Data System (ADS)
Wang, Yan; Du, Lan; Dai, Hui
2018-01-01
Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need a large number of labeled training samples, whose acquisition is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the extracted two feature sets based on the co-training algorithm. Finally, the trained classifiers are exploited to classify the test data. The experimental results on real SAR image data not only validate the effectiveness of the proposed method compared with the traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which only uses one feature set.
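The co-training loop described in this abstract can be sketched as follows. This is an illustrative outline only: the Lincoln-feature extraction, the two-view split, the SVM settings, and the confidence-based selection rule are placeholders rather than the authors' implementation.

```python
# Sketch of two-view SVM co-training (illustrative; not the paper's implementation).
import numpy as np
from sklearn.svm import SVC

def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab, rounds=10, per_round=20):
    """Grow the labeled pool each round with the most confident pseudo-labels."""
    X1, X2, y = X1_lab.copy(), X2_lab.copy(), np.asarray(y_lab).copy()
    U1, U2 = X1_unlab.copy(), X2_unlab.copy()
    for _ in range(rounds):
        clf1 = SVC(probability=True).fit(X1, y)   # classifier on feature view 1
        clf2 = SVC(probability=True).fit(X2, y)   # classifier on feature view 2
        if len(U1) == 0:
            break
        p1, p2 = clf1.predict_proba(U1), clf2.predict_proba(U2)
        conf1, conf2 = p1.max(axis=1), p2.max(axis=1)
        pick = np.argsort(np.maximum(conf1, conf2))[-per_round:]
        # Pseudo-label each picked sample using whichever view is more confident.
        pseudo = np.where(conf1[pick] >= conf2[pick],
                          clf1.classes_[p1.argmax(axis=1)[pick]],
                          clf2.classes_[p2.argmax(axis=1)[pick]])
        X1 = np.vstack([X1, U1[pick]]); X2 = np.vstack([X2, U2[pick]])
        y = np.concatenate([y, pseudo])
        keep = np.setdiff1d(np.arange(len(U1)), pick)
        U1, U2 = U1[keep], U2[keep]
    return clf1, clf2
```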
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
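A minimal sketch of the inter-stage negative selection idea is given below. It is not the paper's detector: a shallow decision tree stands in for each cascade stage, and the scoring threshold and per-stage negative budget are assumptions.

```python
# Illustrative cascade training with selection of informative false positives.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_cascade(X_pos, X_neg_pool, n_stages=3, negs_per_stage=None):
    negs_per_stage = negs_per_stage or len(X_pos)     # keep stages roughly balanced
    stages, X_neg = [], X_neg_pool[:negs_per_stage]
    for _ in range(n_stages):
        X = np.vstack([X_pos, X_neg])
        y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
        clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
        stages.append(clf)
        # False positives of this stage = pool negatives still scored as positive.
        scores = clf.predict_proba(X_neg_pool)[:, 1]
        fp = np.where(scores > 0.5)[0]
        if len(fp) == 0:
            break
        # Feed the most "informative" (highest-scoring) false positives to the next stage.
        X_neg = X_neg_pool[fp[np.argsort(scores[fp])[-negs_per_stage:]]]
    return stages
```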
Belief state representation in the dopamine system.
Babayan, Benedicte M; Uchida, Naoshige; Gershman, Samuel J
2018-05-14
Learning to predict future outcomes is critical for driving appropriate behaviors. Reinforcement learning (RL) models have successfully accounted for such learning, relying on reward prediction errors (RPEs) signaled by midbrain dopamine neurons. It has been proposed that when sensory data provide only ambiguous information about which state an animal is in, it can predict reward based on a set of probabilities assigned to hypothetical states (called the belief state). Here we examine how dopamine RPEs and subsequent learning are regulated under state uncertainty. Mice are first trained in a task with two potential states defined by different reward amounts. During testing, intermediate-sized rewards are given in rare trials. Dopamine activity is a non-monotonic function of reward size, consistent with RL models operating on belief states. Furthermore, the magnitude of dopamine responses quantitatively predicts changes in behavior. These results establish the critical role of state inference in RL.
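A toy calculation can illustrate how an RL model operating on a belief state yields a non-monotonic prediction error as a function of reward size; the reward means, likelihood width, and flat prior below are arbitrary choices for illustration, not the fitted model from the study.

```python
# Illustrative belief-state reward prediction error with two hidden states.
import numpy as np

means = np.array([1.0, 8.0])      # expected reward in the "small" and "big" state (arbitrary units)
sigma = 1.5                       # assumed width of the reward likelihood

def belief(reward):
    like = np.exp(-0.5 * ((reward - means) / sigma) ** 2)
    return like / like.sum()      # posterior over the two states (flat prior)

def rpe(reward):
    value = belief(reward) @ means   # belief-weighted value estimate
    return reward - value            # reward prediction error

# Intermediate rewards pull the belief between states, so the RPE rises then falls.
for r in [1, 2, 4, 6, 8]:
    print(f"reward={r}  RPE={rpe(r):+.2f}")
```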
Hermassi, Souhail; Chelly, Mohamed Souhaiel; Fieseler, Georg; Bartels, Thomas; Schulze, Stephan; Delank, Karl-Stefan; Shephard, Roy J; Schwesig, René
2017-09-01
Background Team handball is an intense ball sport with specific requirements on technical skills, tactical understanding, and physical performance. The ability of handball players to develop explosive efforts (e.g. sprinting, jumping, changing direction) is crucial to success. Objective The purpose of this pilot study was to examine the effects of an in-season high-intensity strength training program on the physical performance of elite handball players. Materials and methods Twenty-two handball players (a single national-level Tunisian team) were randomly assigned to a control group (CG; n = 10) or a training group (TG; n = 12). At the beginning of the pilot study, all subjects performed a battery of motor tests: one repetition maximum (1-RM) half-squat test, a repeated sprint test [6 × (2 × 15 m) shuttle sprints], squat jumps, counter movement jumps (CMJ), and the Yo-Yo intermittent recovery test level 1. The TG additionally performed a maximal leg strength program twice a week for 10 weeks immediately before engaging in regular handball training. Each strength training session included half-squat exercises to strengthen the lower limbs (80-95% of 1-RM, 1-3 repetitions, 3-6 sets, 3-4 min rest between sets). The control group underwent no additional strength training. The motor test battery was repeated at the end of the study interventions. Results In the TG, 3 parameters (maximal strength of lower limb: η² = 0.74; CMJ: η² = 0.70, and RSA best time: η² = 0.25) showed significant improvements, with large effect sizes (e.g. CMJ: d = 3.77). A reduction in performance for these same 3 parameters was observed in the CG (d = -0.24). Conclusions The results support our hypothesis that additional strength training twice a week enhances the maximal strength of the lower limbs and jumping or repeated sprinting performance. There was no evidence of shuttle sprints ahead of regular training compromising players' speed and endurance capacities. © Georg Thieme Verlag KG Stuttgart · New York.
DMT-TAFM: a data mining tool for technical analysis of futures market
NASA Astrophysics Data System (ADS)
Stepanov, Vladimir; Sathaye, Archana
2002-03-01
Technical analysis of financial markets describes many patterns of market behavior. For practical use, all these descriptions need to be adjusted for each particular trading session. In this paper, we develop a data mining tool for technical analysis of the futures markets (DMT-TAFM), which dynamically generates rules based on the notion of price pattern similarity. The tool consists of three main components. The first component provides visualization of data series on a chart with different ranges, scales, and chart sizes and types. The second component constructs pattern descriptions using sets of polynomials. The third component specifies the training set for mining, defines the similarity notion, and searches for a set of similar patterns. DMT-TAFM is useful for preparing the data and then revealing and systematizing statistical information about similar patterns found in any type of historical price series. We performed experiments with our tool on three decades of trading data for one hundred types of futures. Our results for this data set show that we can prove or disprove many well-known patterns based on real data, as well as reveal new ones, and use the set of relatively consistent patterns found during data mining to develop better futures trading strategies.
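The pattern-description and similarity-search components can be illustrated with a simple sketch. The polynomial degree, window length, normalization, and distance measure below are assumptions, since the tool's exact descriptors and similarity notion are not specified in the abstract.

```python
# Minimal sketch of polynomial pattern description and similarity search (illustrative).
import numpy as np

def describe(window, degree=3):
    """Fit a low-order polynomial to a normalized price window; coefficients are the pattern."""
    x = np.linspace(0.0, 1.0, len(window))
    w = (window - window.mean()) / (window.std() + 1e-9)
    return np.polyfit(x, w, degree)

def similar_patterns(prices, query, win=30, degree=3, top_k=5):
    q = describe(query, degree)
    scores = []
    for start in range(len(prices) - win):
        c = describe(prices[start:start + win], degree)
        scores.append((np.linalg.norm(c - q), start))   # distance in coefficient space
    return sorted(scores)[:top_k]                        # (distance, start index) of best matches

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000))                # synthetic price series
print(similar_patterns(series, series[-30:]))            # patterns similar to the latest window
```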
Ling, C Y M; Mak, W W S
2012-03-01
The present study examined the effectiveness of three staff training elements: psychoeducation (PE) on autism, introduction of functional behavioural analysis (FBA) and emotional management (EM), on frontline staff's reactions to the challenging behaviours of children with autism in Hong Kong special education settings. A sample of 311 frontline staff in educational settings was recruited to one of three conditions: control, PE-FBA and PE-FBA-EM groups. A total of 175 participants completed all three sets of questionnaires during pre-training, immediate post-training and 1-month follow-up. Findings showed that the one-session staff training workshop increased staff knowledge of autism and perceived efficacy but decreased helping behavioural intention. In spite of the limited effectiveness of a one-session staff training workshop, continued staff training is still necessary for the improvement of service quality. Further exploration of how to change staff's emotional responses is important. © 2011 The Authors. Journal of Intellectual Disability Research © 2011 Blackwell Publishing Ltd.
Wood crib fire free burning test in ISO room
NASA Astrophysics Data System (ADS)
Qiang, Xu; Griffin, Greg; Bradbury, Glenn; Dowling, Vince
2006-04-01
As part of research into the application potential of water mist fire suppression systems for fire fighting in train luggage carriages, a series of experiments was conducted in an ISO room on wood crib fires with and without water mist actuation. The results of the free burn tests without water mist suppression are used as a reference in evaluating the efficiency of the water mist suppression system. As part of the free burn testing, several tests were done under the hood of the ISO room to calibrate the size of the crib fire, and these tests can also be used to analyze the wall effect in room fire hazard. In these free burning experiments, wood cribs of four sizes were tested under the hood. The temperature of the crib fire, the heat flux around the fire, and the gas concentrations in the hood of the ISO room were measured, and two sets of thermal imaging systems were used to obtain the temperature distribution and the typical shape of the free burning flames. From the experiments, the radiation intensity at specific positions around the fire, the effective heat of combustion, the mass loss, the oxygen consumption rate, the typical structure of the flame, and the self-extinguishment time were obtained for each crib size.
NASA Astrophysics Data System (ADS)
Li, Manchun; Ma, Lei; Blaschke, Thomas; Cheng, Liang; Tiede, Dirk
2016-07-01
Geographic Object-Based Image Analysis (GEOBIA) is becoming more prevalent in remote sensing classification, especially for high-resolution imagery. Many supervised classification approaches are applied to objects rather than pixels, and several studies have been conducted to evaluate the performance of such supervised classification techniques in GEOBIA. However, these studies did not systematically investigate all relevant factors affecting the classification (segmentation scale, training set size, feature selection and mixed objects). In this study, statistical methods and visual inspection were used to compare these factors systematically in two agricultural case studies in China. The results indicate that Random Forest (RF) and Support Vector Machines (SVM) are highly suitable for GEOBIA classifications in agricultural areas and confirm the expected general tendency, namely that the overall accuracies decline with increasing segmentation scale. All other investigated methods except RF and SVM are more prone to lower accuracies owing to broken objects at fine scales. In contrast to some previous studies, the RF classifier yielded the best results and the k-nearest neighbor classifier the worst, in most cases. Likewise, the RF and Decision Tree classifiers are the most robust with or without feature selection. The results of the training sample analyses indicated that RF and AdaBoost.M1 possess a superior generalization capability, except when dealing with small training sample sizes. Furthermore, the classification accuracies were directly related to the homogeneity/heterogeneity of the segmented objects for all classifiers. Finally, it was suggested that RF should be considered in most cases for agricultural mapping.
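The comparison of classifier accuracy against training set size can be reproduced in miniature on synthetic object features. This sketch only shows the experimental pattern (RF and SVM trained on the same object-level feature table at several training sizes); it uses none of the study's imagery, segmentation, or feature set.

```python
# Illustrative RF vs. SVM accuracy as a function of training set size (synthetic features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=30, n_informative=12,
                           n_classes=5, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n_train in [100, 300, 1000]:                      # growing training set sizes
    Xt, yt = X_pool[:n_train], y_pool[:n_train]
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xt, yt)
    svm = SVC(kernel="rbf", C=10, gamma="scale").fit(Xt, yt)
    print(n_train, round(rf.score(X_test, y_test), 3), round(svm.score(X_test, y_test), 3))
```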
Spillane, Mike; Schwarz, Neil; Willoughby, Darryn S
2014-12-01
This study determined the effects of heavy resistance training and peri-exercise ergogenic multi-ingredient nutritional supplement ingestion on blood and skeletal markers of muscle protein synthesis (MPS), body composition, and muscle performance. Twenty-four college-age males were randomly assigned to either a multi-ingredient SizeOn Maximum Performance (SIZE) or protein/carbohydrate/creatine (PCC) comparator supplement group in a double-blind fashion. Body composition and muscle performance were assessed, and venous blood samples and muscle biopsies were obtained before and after 6 weeks of resistance training and supplementation. Data were analyzed by 2-way ANOVA (p ≤ 0.05). Total body mass, body water, and fat mass were not differentially affected (p > 0.05). However, fat-free mass was significantly increased in both groups in response to resistance training (p = 0.037). Lower-body muscle strength (p = 0.029) and endurance (p = 0.027) were significantly increased with resistance training, but not supplementation (p > 0.05). Serum insulin, IGF-1, GH, and cortisol were not differentially affected (p > 0.05). Muscle creatine content was significantly increased in both groups from supplementation (p = 0.044). Total muscle protein (p = 0.038), MHC 1 (p = 0.041), MHC 2A (p = 0.029), total IRS-1 (p = 0.041), and total Akt (p = 0.011) were increased from resistance training, but not supplementation. In response to heavy resistance training when compared to PCC, the peri-exercise ingestion of SIZE did not preferentially improve body composition, muscle performance, or markers indicative of MPS. Key points: In response to 42 days of heavy resistance training and either SizeOn Maximum Performance or protein/carbohydrate/creatine supplementation, similar increases in muscle mass and strength occurred in both groups; however, the increases were not different between supplement groups. The supplementation of SizeOn Maximum Performance had no preferential effect on augmenting serum insulin, IGF-1, and GH, or on decreasing cortisol. While resistance training was effective in increasing total creatine content in skeletal muscle, myofibrillar protein, and the content of total IRS-1 and Akt, this was not preferentially due to SizeOn Maximum Performance supplementation. At the daily dose of 50 g, SizeOn Maximum Performance supplementation for 42 days combined with resistance training does not increase muscle mass and strength through an ability to elevate serum hormones and growth factors, or to augment skeletal muscle signaling pathway markers indicative of muscle protein synthesis, when compared with an equivalent daily dose of protein/carbohydrate/creatine.
Falcone, John L; Middleton, Donald B
2013-01-01
The Accreditation Council for Graduate Medical Education (ACGME) sets residency performance standards for the American Board of Family Medicine Certification Examination. The aims of this study are to describe the compliance of residency programs with ACGME standards and to determine whether residency pass rates depend on program size and location. In this retrospective cohort study, residency performance from 2007 to 2011 was compared with the ACGME performance standards. Simple linear regression was performed to see whether program pass rates were dependent on program size. Regional differences in performance were compared with χ(2) tests, using an α level of 0.05. Of 429 total residency programs, 205 (47.8%) violated the ACGME performance standards. Linear regression showed that program pass rates were positively correlated with and dependent on program size (P < .001). The median pass rate per state was 86.4% (interquartile range, 82.0-90.8). χ(2) tests showed that states in the West performed higher than the other 3 US Census Bureau Regions (all P < .001). Approximately half of the family medicine training programs do not meet the ACGME examination performance standards. Pass rates are associated with residency program size, and regional variation occurs. These findings have the potential to affect ACGME policy and residency program application patterns.
Effects of training set selection on pain recognition via facial expressions
NASA Astrophysics Data System (ADS)
Shier, Warren A.; Yanushkevich, Svetlana N.
2016-07-01
This paper presents an approach to pain expression classification based on Gabor energy filters with Support Vector Machines (SVMs), followed by an analysis of the effects of training set variations on the system's classification rate. This approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images, hand labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, the subjects' pain intensity levels have been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of the experiments show that Gabor energy filters with SVMs provide results comparable or superior to previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of the effects of intra-class skew, or changing the number of images per subject, shows that both completely removing and over-representing poor quality subjects in the training set has little effect on the overall accuracy of the system. This result suggests that poor quality subjects could be removed from the training set to save offline training time and that SVM is robust not only to outliers in training data, but also to significant amounts of poor quality data mixed into the training sets.
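A sketch of the Gabor-energy-plus-SVM pipeline is shown below. The filter-bank frequencies and orientations, the pooling into mean/standard-deviation statistics, and the three-way pain grouping are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative Gabor-energy features feeding an SVM pain-intensity classifier.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_energy_features(img, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Pool Gabor energy (quadrature-pair magnitude) over a small filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orient)
            energy = np.sqrt(real ** 2 + imag ** 2)
            feats.extend([energy.mean(), energy.std()])
    return np.array(feats)

# Hypothetical usage, assuming grayscale face images and labels in {no, weak, strong}:
# X = np.array([gabor_energy_features(im) for im in face_images])
# clf = SVC(kernel="rbf").fit(X, pain_labels)
```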
International standards for programmes of training in intensive care medicine in Europe.
2011-03-01
To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.
Lai, Agnes Y.; Mui, Moses W.; Wan, Alice; Stewart, Sunita M.; Yew, Carol; Lam, Tai-hing; Chan, Sophia S.
2016-01-01
Evidence-based practice and capacity-building approaches are essential for large-scale health promotion interventions. However, there are few models in the literature to guide and evaluate training of social service workers in community settings. This paper presents the development and evaluation of the “train-the-trainer” workshop (TTT) for the first large scale, community-based, family intervention projects, entitled “Happy Family Kitchen Project” (HFK) under the FAMILY project, a Hong Kong Jockey Club Initiative for a Harmonious Society. The workshop aimed to enhance social workers’ competence and performance in applying positive psychology constructs in their family interventions under HFK to improve family well-being of the community they served. The two-day TTT was developed and implemented by a multidisciplinary team in partnership with community agencies to 50 social workers (64% women). It focused on the enhancement of knowledge, attitude, and practice of five specific positive psychology themes, which were the basis for the subsequent development of the 23 family interventions for 1419 participants. Acceptability and applicability were enhanced by completing a needs assessment prior to the training. The TTT was evaluated by trainees’ reactions to the training content and design, changes in learners (trainees) and benefits to the service organizations. Focus group interviews to evaluate the workshop at three months after the training, and questionnaire survey at pre-training, immediately after, six months, one year and two years after training were conducted. There were statistically significant increases with large to moderate effect size in perceived knowledge, self-efficacy and practice after training, which sustained to 2-year follow-up. Furthermore, there were statistically significant improvements in family communication and well-being of the participants in the HFK interventions they implemented after training. This paper offers a practical example of development, implementation and model-based evaluation of training programs, which may be helpful to others seeking to develop such programs in diverse communities. PMID:26808541
Acute effects of verbal feedback on upper-body performance in elite athletes.
Argus, Christos K; Gill, Nicholas D; Keogh, Justin Wl; Hopkins, Will G
2011-12-01
Argus, CK, Gill, ND, Keogh, JWL, and Hopkins, WG. Acute effects of verbal feedback on upper-body performance in elite athletes. J Strength Cond Res 25(12): 3282-3287, 2011-Improved training quality has the potential to enhance training adaptations. Previous research suggests that receiving feedback improves single-effort maximal strength and power tasks, but whether quality of a training session with repeated efforts can be improved remains unclear. The purpose of this investigation was to determine the effects of verbal feedback on upper-body performance in a resistance training session consisting of multiple sets and repetitions in well-trained athletes. Nine elite rugby union athletes were assessed using the bench throw exercise on 4 separate occasions each separated by 7 days. Each athlete completed 2 sessions consisting of 3 sets of 4 repetitions of the bench throw with feedback provided after each repetition and 2 identical sessions where no feedback was provided after each repetition. When feedback was received, there was a small increase of 1.8% (90% confidence limits, ±2.7%) and 1.3% (±0.7%) in mean peak power and velocity when averaged over the 3 sets. When individual sets were compared, there was a tendency toward the improvements in mean peak power being greater in the second and third sets. These results indicate that providing verbal feedback produced acute improvements in upper-body power output of well-trained athletes. The benefits of feedback may be greatest in the latter sets of training and could improve training quality and result in greater long-term adaptation.
A DMA-train for precision measurement of sub-10 nm aerosol dynamics
NASA Astrophysics Data System (ADS)
Stolzenburg, Dominik; Steiner, Gerhard; Winkler, Paul M.
2017-05-01
Measurements of aerosol dynamics in the sub-10 nm size range are crucially important for quantifying the impact of new particle formation onto the global budget of cloud condensation nuclei. Here we present the development and characterization of a differential mobility analyzer train (DMA-train), operating six DMAs in parallel for high-time-resolution particle-size-distribution measurements below 10 nm. The DMAs are operated at six different but fixed voltages and hence sizes, together with six state-of-the-art condensation particle counters (CPCs). Two Airmodus A10 particle size magnifiers (PSM) are used for channels below 2.5 nm while sizes above 2.5 nm are detected by TSI 3776 butanol-based or TSI 3788 water-based CPCs. We report the transfer functions and characteristics of six identical Grimm S-DMAs as well as the calibration of a butanol-based TSI model 3776 CPC, a water-based TSI model 3788 CPC and an Airmodus A10 PSM. We find cutoff diameters similar to those reported in the literature. The performance of the DMA-train is tested with a rapidly changing aerosol of a tungsten oxide particle generator during warmup. Additionally we report a measurement of new particle formation taken during a nucleation event in the CLOUD chamber experiment at CERN. We find that the DMA-train is able to bridge the gap between currently well-established measurement techniques in the cluster-particle transition regime, providing high time resolution and accurate size information of neutral and charged particles even at atmospheric particle concentrations.
Effect of creatine supplementation and drop-set resistance training in untrained aging adults.
Johannsmeyer, Sarah; Candow, Darren G; Brahms, C Markus; Michel, Deborah; Zello, Gordon A
2016-10-01
To investigate the effects of creatine supplementation and drop-set resistance training in untrained aging adults. Participants were randomized to one of two groups: Creatine (CR: n=14, 7 females, 7 males; 58.0±3.0 yrs, 0.1 g/kg/day of creatine + 0.1 g/kg/day of maltodextrin) or Placebo (PLA: n=17, 7 females, 10 males; age: 57.6±5.0 yrs, 0.2 g/kg/day of maltodextrin) during 12 weeks of drop-set resistance training (3 days/week; 2 sets of leg press, chest press, hack squat and lat pull-down exercises performed to muscle fatigue at 80% baseline 1-repetition maximum [1-RM] immediately followed by repetitions to muscle fatigue at 30% baseline 1-RM). Prior to and following training and supplementation, assessments were made for body composition, muscle strength, muscle endurance, tasks of functionality, muscle protein catabolism and diet. Drop-set resistance training improved muscle mass, muscle strength, muscle endurance and tasks of functionality (p<0.05). The addition of creatine to drop-set resistance training significantly increased body mass (p=0.002) and muscle mass (p=0.007) compared to placebo. Males on creatine increased muscle strength (lat pull-down only) to a greater extent than females on creatine (p=0.005). Creatine enabled males to resistance train at a greater capacity over time compared to males on placebo (p=0.049) and females on creatine (p=0.012). Males on creatine (p=0.019) and females on placebo (p=0.014) decreased 3-MH compared to females on creatine. The addition of creatine to drop-set resistance training augments the gains in muscle mass from resistance training alone. Creatine is more effective in untrained aging males compared to untrained aging females. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for the specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics, and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10 with first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric for atlas selection. Analytical insights lead to valid guiding principles on fusion set size design.
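The surrogate model described above can be illustrated with a toy simulation: oracle relevance is taken to be a monotone function of the image-similarity surrogate plus additive noise, and the overlap between the surrogate's and the oracle's top-k atlas picks is estimated. The monotone map, noise level, and fusion size here are arbitrary, not the fitted values from the study.

```python
# Toy simulation of the surrogate-vs-oracle atlas selection model (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_atlases, fusion_size, n_trials = 40, 7, 2000
noise_sd = 0.3          # assumed perturbation level; larger values mimic weaker surrogates

hits = 0
for _ in range(n_trials):
    surrogate = rng.uniform(size=n_atlases)                           # e.g. rescaled NCC or MI
    oracle = np.sqrt(surrogate) + rng.normal(0, noise_sd, n_atlases)  # monotone map + noise
    top_surr = set(np.argsort(surrogate)[-fusion_size:])
    top_orac = set(np.argsort(oracle)[-fusion_size:])
    hits += len(top_surr & top_orac)
print("mean overlap of surrogate and oracle top-k picks:", hits / (n_trials * fusion_size))
```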
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
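The limited-capacity sampling account can be illustrated with a small simulation in which the observer averages only a few items per display; the capacity of four items and the size distributions are assumptions used only to show why error should grow with item heterogeneity and, for heterogeneous displays, with set size.

```python
# Toy simulation of a limited-capacity sampling model of mean-size judgments.
import numpy as np

rng = np.random.default_rng(0)

def judgment_error(set_size, heterogeneity, capacity=4, trials=5000):
    sizes = rng.normal(1.0, heterogeneity, size=(trials, set_size))
    sampled = sizes[:, :capacity]                 # observer averages only a small subsample
    return np.mean(np.abs(sampled.mean(axis=1) - sizes.mean(axis=1)))

for het in (0.05, 0.30):                          # homogeneous vs. heterogeneous displays
    print(het, [round(judgment_error(n, het), 3) for n in (4, 8, 16)])
```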
Long-Term Abstract Learning of Attentional Set
ERIC Educational Resources Information Center
Leber, Andrew B.; Kawahara, Jun-Ichiro; Gabari, Yuji
2009-01-01
How does past experience influence visual search strategy (i.e., attentional set)? Recent reports have shown that, when given the option to use 1 of 2 attentional sets, observers persist with the set previously required in a training phase. Here, 2 related questions are addressed. First, does the training effect result only from perseveration with…
Geropsychology Training in a VA Nursing Home Setting
ERIC Educational Resources Information Center
Karel, Michele J.; Moye, Jennifer
2005-01-01
There is a growing need for professional psychology training in nursing home settings, and nursing homes provide a rich environment for teaching geropsychology competencies. We describe the nursing home training component of our Department of Veterans Affairs (VA) Predoctoral Internship and Geropsychology Postdoctoral Fellowship programs. Our…
Acute testosterone and cortisol responses to high power resistance exercise.
Fry, A C; Lohnes, C A
2010-01-01
This study examined the acute hormonal responses to a single high power resistance exercise training session. Four weight trained men (X +/- SD; age [yrs] = 24.5 +/- 2.9; hgt [m] = 1.82 +/- 0.05; BM [kg] = 96.9 +/- 10.6; 1 RM barbell squat [kg] = 129.3 +/- 17.4) participated as subjects in two randomly ordered sessions. During the lifting session, serum samples were collected pre- and 5 min post-exercise, and later analyzed for testosterone (Tes), cortisol (Cort), their ratio (Tes/Cort), and lactate (HLa). The lifting protocol was 10 x 5 speed squats at 70% of system mass (1 RM +/- BW) with 2 min inter-set rest intervals. Mean power and velocity were determined for each repetition using an external dynamometer. On the control day, the procedures and times (1600-1900 hrs) were identical except the subjects did not lift. Tes and Cort were analyzed via EIA. Mean +/- SD power and velocity were 1377.1 +/- 9.6 W and 0.79 +/- 0.01 m.s-1 respectively for all repetitions, and did not decrease over the 10 sets (p < 0.05). Although not significant, post-exercise Tes exhibited a very large effect size (nmol x L-1 pre = 12.5 +/- 2.9, post = 20.0 +/- 3.9; Cohen's D = 1.27). No changes were observed for either Cort or the Tes/Cort ratio. HLa significantly increased post-exercise (mmol x L-1; pre = 1.00 +/- 0.09, post = 4.85 +/- 1.10). The exercise protocol resulted in no significant changes in Tes, Cort or the Tes/Cort ratio, although the Cohen's D value indicates a very large effect size for the Tes response. The acute increase in Tes is in agreement with previous reports that high power activities can elicit a Tes response. High power resistance exercise protocols such as the one used in the present study produce acute increases in Tes. These results indicate that high power resistance exercise can contribute to an anabolic hormonal response with this type of training, and may partially explain the muscle hypertrophy observed in athletes who routinely employ high power resistance exercise.
Perceptual Learning in Children With Infantile Nystagmus: Effects on Reading Performance.
Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen
2016-08-01
Perceptual learning improves visual acuity and reduces crowding in children with infantile nystagmus (IN). Here, we compare reading performance of 6- to 11-year-old children with IN with normal controls, and evaluate whether perceptual learning improves their reading. Children with IN were divided in two training groups: a crowded training group (n = 18; albinism: n = 8; idiopathic IN: n = 10) and an uncrowded training group (n = 17; albinism: n = 9; idiopathic IN: n = 8). Also 11 children with normal vision participated. Outcome measures were: reading acuity (the smallest readable font size), maximum reading speed, critical print size (font size below which reading is suboptimal), and acuity reserve (difference between reading acuity and critical print size). We used multiple regression analyses to test if these reading parameters were related to the children's uncrowded distance acuity and/or crowding scores. Reading acuity and critical print size were 0.65 ± 0.04 and 0.69 ± 0.08 log units larger for children with IN than for children with normal vision. Maximum reading speed and acuity reserve did not differ between these groups. After training, reading acuity improved by 0.12 ± 0.02 logMAR and critical print size improved by 0.11 ± 0.04 logMAR in both IN training groups. The changes in reading acuity, critical print size, and acuity reserve of children with IN were tightly related to changes in their uncrowded distance acuity and the changes in magnitude and extent of crowding. Our findings are the first to show that visual acuity is not the only factor that restricts reading in children with IN, but that crowding also limits their reading performance. By targeting both of these spatial bottlenecks in children with IN, our perceptual learning paradigms significantly improved their reading acuity and critical print size. This shows that perceptual learning can effectively transfer to reading.
Moulding, Richard; Nedeljkovic, Maja; Kyrios, Michael; Osborne, Debra; Mogan, Christopher
2017-01-01
The aim of the study was to test whether a 12-week publicly rebated group programme, based upon Steketee and Frost's Cognitive Behavioural Therapy-based hoarding treatment, would be efficacious in a community-based setting. Over a 3-year period, 77 participants with clinically significant hoarding were recruited into 12 group programmes. All completed treatment; however, as this was a community-based naturalistic study, only 41 completed the post-treatment assessment. Treatment included psychoeducation about hoarding, skills training for organization and decision making, direct in-session exposure to sorting and discarding, and cognitive and behavioural techniques to support out-of-session sorting and discarding, and nonacquiring. Self-report measures used to assess treatment effect were the Savings Inventory-Revised (SI-R), Savings Cognition Inventory, and the Depression, Anxiety and Stress Scales. Pre-post analyses indicated that after 12 weeks of treatment, hoarding symptoms as measured on the SI-R had reduced significantly, with large effect sizes reported in total and across all subscales. Moderate effect sizes were also reported for hoarding-related beliefs (emotional attachment and responsibility) and depressive symptoms. Of the 41 participants who completed post-treatment questionnaires, 14 (34%) were conservatively calculated to have clinically significant change, which is considerable given the brevity of the programme judged against the typical length of the disorder. The main limitation of the study was the moderate assessment completion rate, given its naturalistic setting. This study demonstrated that a 12-week group treatment for hoarding disorders was effective in reducing hoarding and depressive symptoms in an Australian clinical cohort and provides evidence for use of this treatment approach in a community setting. A 12-week group programme delivered in a community setting was effective for helping with hoarding symptoms, with a large effect size. Hoarding beliefs (emotional attachment and responsibility) and depression were reduced, with moderate effect sizes. A third of all participants who completed post-treatment questionnaires experienced clinically significant change. This suggests that hoarding CBT treatment can be effectively translated into real-world settings and into a brief 12-session format, albeit with a moderate assessment completion rate. Copyright © 2016 John Wiley & Sons, Ltd.
Balsamo, Sandor; Tibana, Ramires Alsamir; Nascimento, Dahan da Cunha; de Farias, Gleyverton Landim; Petruccelli, Zeno; de Santana, Frederico dos Santos; Martins, Otávio Vanni; de Aguiar, Fernando; Pereira, Guilherme Borges; de Souza, Jéssica Cardoso; Prestes, Jonato
2012-01-01
The super-set is a widely used resistance training method consisting of exercises for agonist and antagonist muscles with limited or no rest interval between them – for example, bench press followed by bent-over rows. In this sense, the aim of the present study was to compare the effects of different super-set exercise sequences on the total training volume. A secondary aim was to evaluate the ratings of perceived exertion and fatigue index in response to different exercise order. On separate testing days, twelve resistance-trained men, aged 23.0 ± 4.3 years, height 174.8 ± 6.75 cm, body mass 77.8 ± 13.27 kg, body fat 12.0% ± 4.7%, were submitted to a super-set method by using two different exercise orders: quadriceps (leg extension) + hamstrings (leg curl) (QH) or hamstrings (leg curl) + quadriceps (leg extension) (HQ). Sessions consisted of three sets with a ten-repetition maximum load with 90 seconds rest between sets. Results revealed that the total training volume was higher for the HQ exercise order (P = 0.02) with lower perceived exertion than the inverse order (P = 0.04). These results suggest that HQ exercise order involving lower limbs may benefit practitioners interested in reaching a higher total training volume with lower ratings of perceived exertion compared with the leg extension plus leg curl order. PMID:22371654
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
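A serial reference version of exhaustive block matching is sketched below to make the second paper's task concrete; the paper itself describes a parallel algorithm for a simple parallel architecture, which is not reproduced here, and the block and search-window sizes are arbitrary choices.

```python
# Illustrative exhaustive block-matching motion estimation (serial reference version).
import numpy as np

def block_match(prev, curr, block=8, search=7):
    """Return per-block (dy, dx) displacements minimizing the sum of absolute differences."""
    prev = prev.astype(np.int64)          # avoid unsigned wrap-around when differencing
    curr = curr.astype(np.int64)
    H, W = curr.shape
    motion = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = curr[by:by + block, bx:bx + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        err = np.abs(prev[y:y + block, x:x + block] - target).sum()
                        if err < best_err:
                            best, best_err = (dy, dx), err
            motion[by // block, bx // block] = best
    return motion
```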
Phase transitions in restricted Boltzmann machines with generic priors
NASA Astrophysics Data System (ADS)
Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele
2017-10-01
We study generalized restricted Boltzmann machines with generic priors for units and weights, interpolating between Boolean and Gaussian variables. We present a complete analysis of the replica symmetric phase diagram of these systems, which can be regarded as generalized Hopfield models. We underline the role of the retrieval phase for both inference and learning processes and we show that retrieval is robust for a large class of weight and unit priors, beyond the standard Hopfield scenario. Furthermore, we show how the paramagnetic phase boundary is directly related to the optimal size of the training set necessary for good generalization in a teacher-student scenario of unsupervised learning.
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
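The multi-scale decompose-classify-fuse structure can be sketched as follows. Note that ordinary least-squares residuals per patch are used here as a simple stand-in for the paper's nuclear-norm matrix regression, and the patch scales and voting rule are assumptions.

```python
# Simplified multi-scale patch classification by per-class representation residuals
# (least-squares stand-in for nuclear-norm matrix regression; illustrative only).
import numpy as np

def class_residual(patch_vec, class_dict):
    """Residual of representing a patch with the training patches of one class."""
    coef, *_ = np.linalg.lstsq(class_dict, patch_vec, rcond=None)
    return np.linalg.norm(patch_vec - class_dict @ coef)

def classify(img, train_imgs, train_labels, scales=(16, 32)):
    """train_imgs: array (n, H, W); train_labels: array (n,); img: array (H, W)."""
    classes = np.unique(train_labels)
    votes = np.zeros(len(classes))
    for s in scales:
        for y in range(0, img.shape[0] - s + 1, s):
            for x in range(0, img.shape[1] - s + 1, s):
                p = img[y:y + s, x:x + s].ravel()
                res = []
                for c in classes:
                    D = np.stack([t[y:y + s, x:x + s].ravel()
                                  for t in train_imgs[train_labels == c]], axis=1)
                    res.append(class_residual(p, D))
                votes[np.argmin(res)] += 1.0   # patch votes for the smallest-residual class
    return classes[np.argmax(votes)]
```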
Zhang, Fan; Zhang, Xinhong
2011-01-01
Most classification, quality evaluation or grading of flue-cured tobacco leaves is performed manually, relying on the judgmental experience of experts and inevitably limited by personal, physical and environmental factors. The classification and the quality evaluation are therefore subjective and experientially based. In this paper, an automatic classification method for tobacco leaves based on digital image processing and fuzzy sets theory is presented. A grading system based on image processing techniques was developed for automatically inspecting and grading flue-cured tobacco leaves. This system uses machine vision for the extraction and analysis of color, size, shape and surface texture. Fuzzy comprehensive evaluation provides a high level of confidence in decision making based on fuzzy logic. A neural network is used to estimate and forecast the membership functions of the features of tobacco leaves in the fuzzy sets. The experimental results of the two-level fuzzy comprehensive evaluation (FCE) show that the accuracy rate of classification is about 94% for the trained tobacco leaves, and the accuracy rate for non-trained tobacco leaves is about 72%. We believe that fuzzy comprehensive evaluation is a viable way for the automatic classification and quality evaluation of tobacco leaves. PMID:22163744
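A minimal numerical example of a two-level fuzzy comprehensive evaluation is given below; the feature memberships and weights are hand-set for illustration, whereas the paper estimates the membership functions with a neural network.

```python
# Minimal two-level fuzzy comprehensive evaluation (illustrative values, not the paper's).
import numpy as np

grades = ["high", "middle", "low"]
# Level 1: membership of each feature (color, size, shape, texture) in each quality grade.
R = np.array([[0.7, 0.2, 0.1],     # color
              [0.5, 0.4, 0.1],     # size
              [0.6, 0.3, 0.1],     # shape
              [0.4, 0.4, 0.2]])    # surface texture
w = np.array([0.35, 0.25, 0.2, 0.2])   # assumed feature weights (sum to 1)

# Level 2: weighted aggregation of feature memberships gives the grade memberships.
B = w @ R
print(dict(zip(grades, np.round(B, 3))), "->", grades[int(np.argmax(B))])
```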
Performance Measures for Adaptive Decisioning Systems
1991-09-11
set to hypothesis space mapping best approximates the known map. Two assumptions, a sufficiently representative training set and the ability of the ... successful prediction of LINEXT performance. The LINEXT algorithm above performs the decision space mapping on the training-set elements exactly. For a
ERIC Educational Resources Information Center
Padachi, Kesseven; Bhiwajee, Soolakshna Lukea
2016-01-01
Purpose: Training is an important component of successful business concerns. However, although there is growing acceptance amongst scholars that small- and medium-sized enterprises (SMEs) are engines that drive economies across nations through their contribution in terms of job creation and poverty reduction, extant research portrays that these…
NASA Astrophysics Data System (ADS)
Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.
2016-02-01
Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed, taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. This capability will allow ecologists to identify trends in the distribution of difficult to sample organisms in their data.
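The two-stage zero-shot idea can be sketched independently of any particular CNN: stage one regresses image features onto attributes, and stage two assigns the class whose known attribute signature is nearest, which can include classes absent from the training images. The ridge regressor and Euclidean matching below are illustrative choices, not the SPCS/IFCB pipeline.

```python
# Minimal two-stage zero-shot classification sketch (feature extraction abstracted away).
import numpy as np
from sklearn.linear_model import Ridge

def train_attribute_model(X_train, A_train):
    """X_train: image feature vectors; A_train: per-image attribute vectors."""
    return Ridge(alpha=1.0).fit(X_train, A_train)

def zero_shot_predict(model, X_test, class_signatures):
    """class_signatures: dict class_name -> attribute vector (may include unseen classes)."""
    A_pred = model.predict(X_test)
    names = list(class_signatures)
    S = np.stack([class_signatures[n] for n in names])
    d = ((A_pred[:, None, :] - S[None, :, :]) ** 2).sum(-1)   # distance to each signature
    return [names[i] for i in d.argmin(axis=1)]               # nearest-signature class
```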
Coordinating a national rangeland monitoring training program: Success and lessons learned
USDA-ARS?s Scientific Manuscript database
One of the best ways to ensure quality of information gathered in a rangeland monitoring program is through a strong and uniform set of trainings. Curriculum development and delivery of monitoring trainings poses unique challenges that are not seen in academic settings. Participants come from a rang...
Constant size descriptors for accurate machine learning models of molecular properties
NASA Astrophysics Data System (ADS)
Collins, Christopher R.; Gordon, Geoffrey J.; von Lilienfeld, O. Anatole; Yaron, David J.
2018-06-01
Two different classes of molecular representations for use in machine learning of thermodynamic and electronic properties are studied. The representations are evaluated by monitoring the performance of linear and kernel ridge regression models on well-studied data sets of small organic molecules. One class of representations studied here counts the occurrence of bonding patterns in the molecule. These require only the connectivity of atoms in the molecule as may be obtained from a line diagram or a SMILES string. The second class utilizes the three-dimensional structure of the molecule. These include the Coulomb matrix and Bag of Bonds, which list the inter-atomic distances present in the molecule, and Encoded Bonds, which encode such lists into a feature vector whose length is independent of molecular size. Encoded Bonds' features introduced here have the advantage of leading to models that may be trained on smaller molecules and then used successfully on larger molecules. A wide range of feature sets are constructed by selecting, at each rank, either a graph or geometry-based feature. Here, rank refers to the number of atoms involved in the feature, e.g., atom counts are rank 1, while Encoded Bonds are rank 2. For atomization energies in the QM7 data set, the best graph-based feature set gives a mean absolute error of 3.4 kcal/mol. Inclusion of 3D geometry substantially enhances the performance, with Encoded Bonds giving 2.4 kcal/mol, when used alone, and 1.19 kcal/mol, when combined with graph features.
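A constant-length, distance-based descriptor can be illustrated with a simple per-element-pair distance histogram feeding kernel ridge regression. This is a stand-in to show why the feature length is independent of molecule size, not the paper's exact Encoded Bonds definition; the element pairs, bin count, range, and kernel settings are assumptions.

```python
# Illustrative fixed-length distance-histogram descriptor with kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def pair_histogram(numbers, coords, pairs=((1, 6), (6, 6), (6, 8), (1, 8)),
                   bins=20, r_max=5.0):
    """One inter-atomic distance histogram per chosen element pair (atomic numbers)."""
    numbers, coords = np.asarray(numbers), np.asarray(coords)
    feats = []
    for za, zb in pairs:
        d = []
        for i in range(len(numbers)):
            for j in range(i + 1, len(numbers)):
                if {numbers[i], numbers[j]} == {za, zb}:
                    d.append(np.linalg.norm(coords[i] - coords[j]))
        hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
        feats.append(hist)
    return np.concatenate(feats).astype(float)   # length is independent of molecule size

# Hypothetical usage, assuming lists of atomic numbers, coordinates, and target energies:
# X = np.stack([pair_histogram(z, xyz) for z, xyz in molecules])
# model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1e-3).fit(X, energies)
```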
Hopman, J; Hakizimana, B; Meintjes, W A J; Nillessen, M; de Both, E; Voss, A; Mehtar, S
2016-01-01
Hospital-associated infections (HAIs) are more frequently encountered in low- than in high-resource settings. There is a need to identify and implement feasible and sustainable approaches to strengthen HAI prevention in low-resource settings. To evaluate the biological contamination of routinely cleaned mattresses in both high- and low-resource settings. In this two-stage observational study, routine manual bed cleaning was evaluated at two university hospitals using adenosine triphosphate (ATP). Standardized training of cleaning personnel was achieved in both high- and low-resource settings. Qualitative analysis of the cleaning process was performed to identify predictors of cleaning outcome in low-resource settings. Mattresses in low-resource settings were highly contaminated prior to cleaning. Cleaning significantly reduced biological contamination of mattresses in low-resource settings (P < 0.0001). After training, the contamination observed after cleaning in both the high- and low-resource settings seemed comparable. Cleaning with appropriate type of cleaning materials reduced the contamination of mattresses adequately. Predictors for mattresses that remained contaminated in a low-resource setting included: type of product used, type of ward, training, and the level of contamination prior to cleaning. In low-resource settings mattresses were highly contaminated as noted by ATP levels. Routine manual cleaning by trained staff can be as effective in a low-resource setting as in a high-resource setting. We recommend a multi-modal cleaning strategy that consists of training of domestic services staff, availability of adequate time to clean beds between patients, and application of the correct type of cleaning products. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Teaching between-class generalization of toy play behavior to handicapped children.
Haring, T G
1985-01-01
In this study, young children with severe and moderate handicaps were taught to generalize play responses. A multiple baseline across responses design, replicated with four children, was used to assess the effects of generalization training within four sets of toys on generalization to untrained toys from four other sets. The responses taught were unique for each set of toys. Across the four participants, training to generalize within-toy sets resulted in complete between-class generalization in 11 sets, partial generalization in 3 sets, and no generalization in 2 sets. No generalization occurred to another class of toys that differed from the previous sets in that they produced a reaction to the play movement (e.g., pianos). Implications for conducting research using strategies based on class interrelationships in training contexts are discussed. PMID:4019349
Ma, J; Meng, X D; Luo, H M; Zhou, H C; Qu, S L; Liu, X T; Dai, Z
2016-06-01
The aim was to understand the current management status of education/training and the training needs among new employees working at provincial CDCs in China during 2012-2014, so as to provide a basis for setting up related programs at the CDC level. Based on data gathered through questionnaire surveys run by CDCs from 32 provincial and 5 specifically-designated cities, Microsoft Excel was used to analyze the current status of the management of education and training for new employees. There were 156 management staff members working on education and training programs in 36 CDCs, with 70% of them having received an intermediate or higher level of education. Large regional differences were seen in the availability of training hardware. In 2014 there were 1 214 teaching staff, with 66 percent working in public health or related professional areas. A total of 5 084 new employees took part in pre/post training programs from 2012 to 2014, with funding of 750 thousand RMB Yuan. 99.5% of the new employees expressed a need for further training, while 74% expected a 2-5 day training program to be implemented, and 79% regarded practice as the most appropriate training method. Institutional programs for education and training at the CDCs need to be clarified, with management teams organized. It is important to provide more financial support for hardware, software and human resources related to training programs set up for new staff members at all levels of CDCs.
Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu
2016-01-01
The advancement of high technologies and the arrival of the information age have caused changes to the modern warfare. The military forces of many countries have replaced partially real training drills with training simulation systems to achieve combat readiness. However, considerable types of training simulation systems are used in military settings. In addition, differences in system set up time, functions, the environment, and the competency of system operators, as well as incomplete information have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance–performance analysis was adopted to examine the influence of saving costs and training safety of training simulation systems. The findings of this study are expected to facilitate applying military training simulation systems, avoiding wasting of resources (e.g., low utility and idle time), and providing data for subsequent applications and analysis. To verify the method proposed in this study, the numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system. PMID:27598390
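The AHP component of the evaluation can be illustrated with the standard principal-eigenvector weight calculation; the example comparison matrix and criteria below are hypothetical, and the soft-set and fuzzy-linguistic stages of the proposed method are not shown.

```python
# Illustrative AHP criterion weights from a pairwise comparison matrix.
import numpy as np

# Hypothetical pairwise comparisons of three criteria (set-up time, functionality, safety).
A = np.array([[1.0,  3.0, 0.5],
              [1/3., 1.0, 0.25],
              [2.0,  4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # principal eigenvector -> criterion weights

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index of the judgments
print("weights:", np.round(w, 3), "CI:", round(CI, 3))
```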
Hozé, C; Fritz, S; Phocas, F; Boichard, D; Ducrocq, V; Croiseau, P
2014-01-01
Single-breed genomic selection (GS) based on medium single nucleotide polymorphism (SNP) density (~50,000; 50K) is now routinely implemented in several large cattle breeds. However, building large enough reference populations remains a challenge for many medium or small breeds. The high-density BovineHD BeadChip (HD chip; Illumina Inc., San Diego, CA) containing 777,609 SNP developed in 2010 is characterized by short-distance linkage disequilibrium expected to be maintained across breeds. Therefore, combining reference populations can be envisioned. A population of 1,869 influential ancestors from 3 dairy breeds (Holstein, Montbéliarde, and Normande) was genotyped with the HD chip. Using this sample, 50K genotypes were imputed within breed to high-density genotypes, leading to a large HD reference population. This population was used to develop a multi-breed genomic evaluation. The goal of this paper was to investigate the gain of multi-breed genomic evaluation for a small breed. The advantage of using a large breed (Normande in the present study) to mimic a small breed is the large potential validation population to compare alternative genomic selection approaches more reliably. In the Normande breed, 3 training sets were defined with 1,597, 404, and 198 bulls, and a unique validation set included the 394 youngest bulls. For each training set, estimated breeding values (EBV) were computed using pedigree-based BLUP, single-breed BayesC, or multi-breed BayesC for which the reference population was formed by any of the Normande training data sets and 4,989 Holstein and 1,788 Montbéliarde bulls. Phenotypes were standardized by within-breed genetic standard deviation, the proportion of polygenic variance was set to 30%, and the estimated number of SNP with a nonzero effect was about 7,000. The 2 genomic selection (GS) approaches were performed using either the 50K or HD genotypes. The correlations between EBV and observed daughter yield deviations (DYD) were computed for 6 traits and using the different prediction approaches. Compared with pedigree-based BLUP, the average gain in accuracy with GS in small populations was 0.057 for the single-breed and 0.086 for multi-breed approach. This gain was up to 0.193 and 0.209, respectively, with the large reference population. Improvement of EBV prediction due to the multi-breed evaluation was higher for animals not closely related to the reference population. In the case of a breed with a small reference population size, the increase in correlation due to multi-breed GS was 0.141 for bulls without their sire in reference population compared with 0.016 for bulls with their sire in reference population. These results demonstrate that multi-breed GS can contribute to increase genomic evaluation accuracy in small breeds. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Irshad; Gnedin, Nickolay Y.
Baryonic effects are amongst the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future generations of cosmological surveys such as LSST and Euclid. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we carried the investigation further and addressed two critical aspects. First, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly with an RMS of ~0.0011. Second, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct employment of this method in the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can be parameterized as the coefficients of these principal components and should be marginalized over the cosmological parameter space.
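The following is a minimal sketch of the train/test principal-component procedure described above, not the authors' pipeline; the "spectra" are random placeholders standing in for the simulated baryonic modifications, and keeping four components mirrors the text.

```python
import numpy as np

# Hypothetical setup: each row stands in for one simulation's baryonic
# modification of the matter power spectrum sampled on a fixed grid of scales.
rng = np.random.default_rng(0)
train = rng.normal(size=(12, 200))   # placeholder training simulations
test = rng.normal(size=(4, 200))     # placeholder held-out simulations

# Principal components of the training set (mean-subtracted SVD).
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:4]                  # keep the four largest eigenmodes

# Fit each test spectrum with the four training-set components and measure
# the root-mean-square residual of the reconstruction.
coeffs = (test - mean) @ components.T
reconstruction = mean + coeffs @ components
rms = np.sqrt(np.mean((test - reconstruction) ** 2))
print(f"RMS of 4-component fit: {rms:.4f}")
```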
Plagianakos, V P; Magoulas, G D; Vrahatis, M N
2006-03-01
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
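A minimal sketch of the data-parallel idea, assuming Python's multiprocessing in place of the parallel virtual machine used by the authors: the training set is partitioned across workers, each worker evaluates the error and gradient on its partition, and the partial results are summed for a synchronous update of a toy linear model rather than the authors' neural networks.

```python
import numpy as np
from multiprocessing import Pool

def partial_error_and_grad(args):
    # Each worker computes the squared error and gradient on its data partition.
    w, X, y = args
    residual = X @ w - y
    return 0.5 * np.sum(residual ** 2), X.T @ residual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_w = rng.normal(size=20)
    X = rng.normal(size=(10_000, 20))
    y = X @ true_w + rng.normal(scale=0.1, size=10_000)

    w = np.zeros(20)
    chunks = list(zip(np.array_split(X, 4), np.array_split(y, 4)))  # 4 "processors"

    with Pool(4) as pool:
        for _ in range(100):                     # full-batch gradient descent
            results = pool.map(partial_error_and_grad,
                               [(w, Xc, yc) for Xc, yc in chunks])
            error = sum(e for e, _ in results)   # sum of partial errors
            grad = sum(g for _, g in results)    # sum of partial gradients
            w -= 1e-4 * grad                     # synchronous update
    print("final training error:", error)
```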
de Bruijn, Cornelis Marinus; Houterman, Willem; Ploeg, Margreet; Ducro, Bart; Boshuizen, Berit; Goethals, Klaartje; Verdegaal, Elisabeth-Lidwien; Delesalle, Catherine
2017-02-14
Most Friesian horses reach their anaerobic threshold during a standardized exercise test (SET) which requires lower intensity exercise than daily routine training. The aim was to study the strengths and weaknesses of an alternative SET protocol. Two different SETs (SETA and SETB) were applied during a 2 month training period of 9 young Friesian dressage horses. SETB alternated short episodes of canter with trot and walk, lacking the long episodes of cantering applied in SETA. The following parameters were monitored: blood lactic acid (BLA) after cantering, average heart rate (HR) in trot and maximum HR in canter. HR and BLA of SETA and SETB were analyzed using a paired two-sided T-test and Spearman correlation coefficient (p* < 0.05). BLA after cantering was significantly higher in SETA compared to SETB, and maximum HR in canter was significantly higher in SETA compared to SETB. The majority of horses showed a significant training response based upon longitudinal follow-up of BLA. Horses with the lowest fitness at the start displayed the largest training response. BLA was significantly lower in week 8 compared to week 0, in both SETA and SETB. A significantly decreased BLA level after cantering was noticeable in week 6 in SETA, whereas in SETB only as of week 8. In SETA a very strong correlation between BLA and average HR at trot was found throughout the entire training period, but not for canter. Young Friesian horses do reach their anaerobic threshold during a SET which requires lower intensity than daily routine training. Therefore close monitoring throughout training is warranted. Longitudinal follow-up of BLA, and not of HR, is suitable to assess training response. In the current study, horses that started with the lowest fitness level showed the largest training response. During training, monitoring HR in trot rather than in canter is advised. SETB is best suited as a template for daily training in the aerobic window.
Yu, Jingkai; Finley, Russell L
2009-01-01
High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
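An illustrative sketch of the multiple-training-set idea, not the authors' scoring model: one classifier is trained per independent positive reference set and the predicted probabilities are averaged into a single confidence score. The features and labels below are synthetic stand-ins for interaction evidence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
features = rng.normal(size=(5000, 6))            # hypothetical interaction features
# Three independent "positive" reference sets, each giving toy labels.
labels_by_set = [(features[:, i] + rng.normal(scale=1.0, size=5000) > 0.5).astype(int)
                 for i in range(3)]

scores = np.zeros(len(features))
for labels in labels_by_set:
    model = LogisticRegression().fit(features, labels)   # one model per training set
    scores += model.predict_proba(features)[:, 1]
scores /= len(labels_by_set)                      # averaged confidence per interaction
print("top-scoring interaction index:", int(np.argmax(scores)))
```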
Skill training preferences and technology use in persons with neck and low back pain.
Verbrugghe, Jonas; Haesen, Mieke; Spierings, Ruth; Willems, Kim; Claes, Guido; Olivieri, Enzo; Coninx, Karin; Timmermans, Annick
2017-11-01
Neck pain (NP) and low back pain (LBP) are highly prevalent. Exercise therapy helps, but effect sizes and therapy compliance remain low. Client-centred therapy and technology use may play a role to improve therapy outcomes. To offer technology supported rehabilitation matching patient's goals, training preferences for rehabilitation and technology familiarity need to be known. This study aims to (1) inventory training preferences and motives, (2) evaluate whether these change during rehabilitation, and (3) evaluate familiarity with using technologies, in persons with NP/LBP. Semi-structured interviews were conducted with regard to training preferences and usage of mainstream technological devices. Persons with NP (n = 40) preferred to train on "lifting", "prolonged sitting" and "driving a car". Persons with LBP (n = 40) preferred to train on "household activities", "lifting" and "prolonged walking". Motives were predominantly "ability to work" and "ability to do free time occupations". Preferences shifted in ranking but remained the same during rehabilitation. Participants were familiar with the surveyed technologies. Persons with NP or LBP prefer to train on exercises supporting the improvement of everyday life skills. They use technologies in their professional and personal life, which may lower the threshold for the adoption of rehabilitation technologies. Implications for rehabilitation Persons with neck pain (NP) and persons with low back pain (LBP) prefer to train on specific activities that limit their functional ability during daily tasks. The underlying motives linked to preferred training activities are predominantly "being able to work" and "being able to perform free time occupations". Persons with NP and persons with LBP are accustomed to the use of mainstream technologies and the integration of these technologies in rehabilitation settings seems feasible. In order to enable technology supported rehabilitation that is client-centred, technologies need to offer an extensive number of exercises that support (components of) patient training preferences.
ERIC Educational Resources Information Center
Mills, John; Bowman, Kaye; Crean, David; Ranshaw, Danielle
2012-01-01
This literature review examines the available research on skill sets. It provides background for a larger research project "Workforce skills development and engagement in training through skill sets," the report of which will be released early next year. This paper outlines the origin of skill sets and explains the difference between…
Zhu, Yongjun; Yan, Erjia; Wang, Fei
2017-07-03
Understanding semantic relatedness and similarity between biomedical terms has a great impact on a variety of applications such as biomedical information retrieval, information extraction, and recommender systems. The objective of this study is to examine word2vec's ability in deriving semantic relatedness and similarity between biomedical terms from large publication data. Specifically, we focus on the effects of recency, size, and section of biomedical publication data on the performance of word2vec. We download abstracts of 18,777,129 articles from PubMed and 766,326 full-text articles from PubMed Central (PMC). The datasets are preprocessed and grouped into subsets by recency, size, and section. Word2vec models are trained on these subsets. Cosine similarities between biomedical terms obtained from the word2vec models are compared against reference standards. Performance of models trained on different subsets is compared to examine recency, size, and section effects. Models trained on recent datasets did not boost the performance. Models trained on larger datasets identified more pairs of biomedical terms than models trained on smaller datasets in the relatedness task (from 368 at the 10% level to 494 at the 100% level) and the similarity task (from 374 at the 10% level to 491 at the 100% level). The model trained on abstracts produced results that have higher correlations with the reference standards than the one trained on article bodies (i.e., 0.65 vs. 0.62 in the similarity task and 0.66 vs. 0.59 in the relatedness task). However, the latter identified more pairs of biomedical terms than the former (i.e., 344 vs. 498 in the similarity task and 339 vs. 503 in the relatedness task). Increasing the size of a dataset does not always enhance performance. Increasing the size of datasets can result in the identification of more relations between biomedical terms even though it does not guarantee better precision. As summaries of research articles, compared with article bodies, abstracts excel in accuracy but lose in coverage of identifiable relations.
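A minimal sketch of the kind of experiment described, assuming gensim's Word2Vec and a toy stand-in corpus rather than the PubMed/PMC data used in the study.

```python
from gensim.models import Word2Vec

# Toy corpus standing in for tokenized abstracts; repeated so the small
# vocabulary gets enough training examples.
corpus = [
    ["myocardial", "infarction", "is", "treated", "with", "aspirin"],
    ["aspirin", "reduces", "risk", "of", "heart", "attack"],
    ["diabetes", "mellitus", "is", "managed", "with", "insulin"],
] * 100

model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=10, seed=7)

# Cosine similarity between two biomedical terms, of the kind compared
# against reference standards in the study.
print(model.wv.similarity("aspirin", "insulin"))
```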
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. These parameters cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
Effect of exercise training on walking mobility in multiple sclerosis: a meta-analysis.
Snook, Erin M; Motl, Robert W
2009-02-01
The study used meta-analytic procedures to examine the overall effect of exercise training interventions on walking mobility among individuals with multiple sclerosis. A search was conducted for published exercise training studies from 1960 to November 2007 using MEDLINE, PsychINFO, CINAHL, and Current Contents Plus. Studies were selected if they measured walking mobility, using instruments identified as acceptable walking mobility constructs and outcome measures for individuals with neurologic disorders, before and after an intervention that included exercise training. Forty-two published articles were located and reviewed, and 22 provided enough data to compute effect sizes expressed as Cohen's d. Sixty-six effect sizes were retrieved from the 22 publications with 600 multiple sclerosis participants and yielded a weighted mean effect size of g = 0.19 (95% confidence interval, 0.09-0.28). There were larger effects associated with supervised exercise training ( g = 0.32), exercise programs that were less than 3 months in duration (g = 0.28), and mixed samples of relapsing-remitting and progressive multiple sclerosis (g = 0.52). The cumulative evidence supports that exercise training is associated with a small improvement in walking mobility among individuals with multiple sclerosis.
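For reference, the standard fixed-effect, inverse-variance weighted mean effect size used in meta-analyses of this kind can be computed as below; the per-study effect sizes and variances are made up, not the 66 effects analyzed above.

```python
import numpy as np

g = np.array([0.10, 0.35, 0.22, 0.05, 0.41])       # hypothetical per-study effect sizes
var = np.array([0.04, 0.09, 0.05, 0.03, 0.12])     # hypothetical sampling variances

w = 1.0 / var                                      # inverse-variance weights
g_bar = np.sum(w * g) / np.sum(w)                  # weighted mean effect size
se = np.sqrt(1.0 / np.sum(w))                      # standard error of the mean effect
ci = (g_bar - 1.96 * se, g_bar + 1.96 * se)        # 95% confidence interval
print(f"g = {g_bar:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```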
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
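An illustrative sketch of the mode-partitioning idea, not the patented system: the unpartitioned training data are clustered into operating modes, one sub-model is fitted per mode, and a current observation is routed to the sub-model for its detected mode. The sensor signals, clustering, and linear sub-models here are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
signals = rng.normal(size=(2000, 4))               # hypothetical observed sensor signals
target = signals @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=2000)

# Partition the unpartitioned training set into operating modes.
modes = KMeans(n_clusters=3, n_init=10, random_state=0).fit(signals)
# Train one process sub-model per operating mode.
submodels = {m: LinearRegression().fit(signals[modes.labels_ == m],
                                       target[modes.labels_ == m])
             for m in range(3)}

current = rng.normal(size=(1, 4))                  # current set of observed signal values
mode = int(modes.predict(current)[0])              # determine the operating mode
estimate = submodels[mode].predict(current)        # estimated signal values for that mode
print(f"mode {mode}: estimate {estimate[0]:.3f}")
```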
Schoenfeld, Brad J; Contreras, Bret; Vigotsky, Andrew D; Peterson, Mark
2016-12-01
The purpose of the present study was to evaluate muscular adaptations between heavy- and moderate-load resistance training (RT) with all other variables controlled between conditions. Nineteen resistance-trained men were randomly assigned to either a strength-type RT routine (HEAVY) that trained in a loading range of 2-4 repetitions per set (n = 10) or a hypertrophy-type RT routine (MODERATE) that trained in a loading range of 8-12 repetitions per set (n = 9). Training was carried out 3 days a week for 8 weeks. Both groups performed 3 sets of 7 exercises for the major muscle groups of the upper and lower body. Subjects were tested pre- and post-study for: 1 repetition maximum (RM) strength in the bench press and squat, upper body muscle endurance, and muscle thickness of the elbow flexors, elbow extensors, and lateral thigh. Results showed statistically greater increases in 1RM squat strength favoring HEAVY compared to MODERATE. Alternatively, statistically greater increases in lateral thigh muscle thickness were noted for MODERATE versus HEAVY. These findings indicate that heavy load training is superior for maximal strength goals while moderate load training is more suited to hypertrophy-related goals when an equal number of sets are performed between conditions.
Programming "loose training" as a strategy to facilitate language generalization.
Campbell, C R; Stremel-Campbell, K
1982-01-01
This study investigated the generalization of spontaneous complex language behavior across a nontraining setting and the durability of generalization as a result of programming and "loose training" strategy. A within-subject, across-behaviors multiple-baseline design was used to examine the performance of two moderately retarded students in the use of is/are across three syntactic structures (i.e., "wh" questions, "yes/no" reversal questions, and statements). The language training procedure used in this study represented a functional example of programming "loose training." The procedure involved conducting concurrent language training within the context of an academic training task, and establishing a functional reduction in stimulus control by permitting the student to initiate a language response based on a wide array of naturally occurring stimulus events. Concurrent probes were conducted in the free play setting to assess the immediate generalization and the durability of the language behaviors. The results demonstrated that "loose training" was effective in establishing a specific set of language responses with the participants of this investigation. Further, both students demonstrated spontaneous use of the language behavior in the free play generalization setting and a trend was clearly evident for generalization to continue across time. Thus, the methods used appear to be successful for training the use of is/are in three syntactic structures. PMID:7118759
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure where a model for the world is constructed from a training set of examples. It is important that the model should capture relevant features of the training set, and at the same time make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
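A small illustration of this trade-off between fitting the training set and predicting unseen examples in polynomial regression; the true function, noise level, and degrees are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
x_train = np.linspace(-1, 1, 15)
y_train = np.sin(np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(np.pi * x_test)                    # noiseless truth for evaluation

for degree in (1, 3, 9, 12):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on the training set only
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```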
HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.
Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo
2016-03-01
Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
Agmon, Maayan; Belza, Basia; Nguyen, Huong Q; Logsdon, Rebecca G; Kelly, Valerie E
2014-01-01
Injury due to falls is a major problem among older adults. Decrements in dual-task postural control performance (simultaneously performing two tasks, at least one of which requires postural control) have been associated with an increased risk of falling. Evidence-based interventions that can be used in clinical or community settings to improve dual-task postural control may help to reduce this risk. The aims of this systematic review are: 1) to identify clinical or community-based interventions that improved dual-task postural control among older adults; and 2) to identify the key elements of those interventions. Studies were obtained from a search conducted through October 2013 of the following electronic databases: PubMed, CINAHL, PsycINFO, and Web of Science. Randomized and nonrandomized controlled studies examining the effects of interventions aimed at improving dual-task postural control among community-dwelling older adults were selected. All studies were evaluated based on methodological quality. Intervention characteristics including study purpose, study design, and sample size were identified, and effects of dual-task interventions on various postural control and cognitive outcomes were noted. Twenty-two studies fulfilled the selection criteria and were summarized in this review to identify characteristics of successful interventions. The ability to synthesize data was limited by the heterogeneity in participant characteristics, study designs, and outcome measures. Dual-task postural control can be modified by specific training. There was little evidence that single-task training transferred to dual-task postural control performance. Further investigation of dual-task training using standardized outcome measurements is needed.
CMU DeepLens: deep learning for automatic image-based galaxy-galaxy strong lens finding
NASA Astrophysics Data System (ADS)
Lanusse, François; Ma, Quanbin; Li, Nan; Collett, Thomas E.; Li, Chun-Liang; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Póczos, Barnabás
2018-01-01
Galaxy-scale strong gravitational lensing can not only provide a valuable probe of the dark matter distribution of massive galaxies, but also provide valuable cosmological constraints, either by studying the population of strong lenses or by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale strongly lensed systems, fast and reliable automated lens finding methods will be essential in the era of large surveys such as Large Synoptic Survey Telescope, Euclid and Wide-Field Infrared Survey Telescope. To tackle this challenge, we introduce CMU DeepLens, a new fully automated galaxy-galaxy lens finding method based on deep learning. This supervised machine learning approach does not require any tuning after the training step which only requires realistic image simulations of strongly lensed systems. We train and validate our model on a set of 20 000 LSST-like mock observations including a range of lensed systems of various sizes and signal-to-noise ratios (S/N). We find on our simulated data set that for a rejection rate of non-lenses of 99 per cent, a completeness of 90 per cent can be achieved for lenses with Einstein radii larger than 1.4 arcsec and S/N larger than 20 on individual g-band LSST exposures. Finally, we emphasize the importance of realistically complex simulations for training such machine learning methods by demonstrating that the performance of models of significantly different complexities cannot be distinguished on simpler simulations. We make our code publicly available at https://github.com/McWilliamsCenter/CMUDeepLens.
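For orientation, a deliberately small binary CNN classifier for lens/non-lens cutouts is sketched below in Keras; it is only a schematic stand-in for the much deeper residual architecture of CMU DeepLens, which is available at the repository cited above, and the input shape and data are placeholders.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Small binary CNN: outputs P(lens) for a single-band postage-stamp cutout.
model = keras.Sequential([
    layers.Input(shape=(45, 45, 1)),              # hypothetical cutout size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for simulated lensed/non-lensed images and labels.
x = np.random.rand(256, 45, 45, 1).astype("float32")
y = np.random.randint(0, 2, size=256)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0))            # predicted lens probability
```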
Gurd, Brendon J; Patel, Jugal; Edgett, Brittany A; Scribbans, Trisha D; Quadrilatero, Joe; Fischer, Steven L
2018-05-28
Whole body sprint-interval training (WB-SIT) represents a mode of exercise training that is time-efficient and does not require access to an exercise facility. The current study examined the feasibility of implementing a WB-SIT intervention in a workplace setting. A total of 747 employees from a large office building were invited to participate, with 31 individuals enrolled in the study. Anthropometrics, aerobic fitness, core and upper body strength, and lower body mobility were assessed before and after a 12-week exercise intervention consisting of 2-4 training sessions per week. Each training session required participants to complete eight 20-second intervals (separated by 10 seconds of rest) of whole body exercise. The proportion of participation was 4.2%, while the response rate was 35% (11/31 participants completed post-training testing). In responders, compliance with prescribed training was 83±17%, and significant (p < 0.05) improvements were observed for aerobic fitness, push-up performance and lower body mobility. These results demonstrate the efficacy of WB-SIT for improving fitness and mobility in an office setting, but highlight the difficulties in achieving high rates of participation and response in this setting.
NASA Astrophysics Data System (ADS)
Paredes, David; Saha, Ashirbani; Mazurowski, Maciej A.
2017-03-01
Deep learning and convolutional neural networks (CNNs) in particular are increasingly popular tools for segmentation and classification of medical images. CNNs were shown to be successful for segmentation of brain tumors into multiple regions or labels. However, in an environment which fosters data-sharing and collection of multi-institutional datasets, a question arises: does training with data from another institution with potentially different imaging equipment, contrast protocol, and patient population impact the segmentation performance of the CNN? Our study presents preliminary data towards answering this question. Specifically, we used MRI data of glioblastoma (GBM) patients for two institutions present in The Cancer Imaging Archive. We trained and tested the CNN multiple times such that half of the time it was tested on data from the same institution that was used for training and half of the time it was tested on data from another institution, keeping the training and testing set sizes constant. We observed a decrease in performance as measured by the Dice coefficient when the CNN was trained with data from a different institution as compared to training with data from the same institution. The changes in performance for the entire tumor and for four different labels within the tumor were: 0.72 to 0.65 (p=0.06), 0.61 to 0.58 (p=0.49), 0.54 to 0.51 (p=0.82), 0.31 to 0.24 (p<0.03), and 0.43 to 0.31 (p<0.003), respectively. In summary, we found that while data across institutions can be used for development of CNNs, this might be associated with a decrease in performance.
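The Dice coefficient used to quantify the performance drop can be computed as in this generic sketch; the masks are toy examples, not the study's segmentations.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    # Dice = 2 * |intersection| / (|pred| + |truth|), with 1.0 for two empty masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.zeros((128, 128), dtype=int); pred[30:80, 30:80] = 1     # toy predicted mask
truth = np.zeros((128, 128), dtype=int); truth[40:90, 35:85] = 1   # toy reference mask
print(f"Dice = {dice(pred, truth):.2f}")
```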
Brydges, Ryan; Carnahan, Heather; Rose, Don; Dubrowski, Adam
2010-08-01
In this paper, we tested the over-arching hypothesis that progressive self-guided learning offers equivalent learning benefit vs. proficiency-based training while limiting the need to set proficiency standards. We have shown that self-guided learning is enhanced when students learn on simulators that progressively increase in fidelity during practice. Proficiency-based training, a current gold-standard training approach, requires achievement of a criterion score before students advance to the next learning level. Baccalaureate nursing students (n = 15/group) practised intravenous catheterization using simulators that differed in fidelity (i.e. students' perceived realism). Data were collected in 2008. Proficiency-based students advanced from low- to mid- to high-fidelity after achieving a proficiency criterion at each level. Progressive students self-guided their progression from low- to mid- to high-fidelity. Yoked control students followed an experimenter-defined progressive practice schedule. Open-ended students moved freely between the simulators. One week after practice, blinded experts evaluated students' skill transfer on a standardized patient simulation. Group differences were examined using analyses of variance. Proficiency-based students scored highest on the high-fidelity post-test (effect size = 1.22). An interaction effect showed that the Progressive and Open-ended groups maintained their performance from post-test to transfer test, whereas the Proficiency-based and Yoked control groups experienced a significant decrease (P < 0.05). Surprisingly, most Open-ended students (73%) chose the progressive practice schedule. Progressive training and proficiency-based training resulted in equivalent transfer test performance, suggesting that progressive students effectively self-guided when to transition between simulators. Students' preference for the progressive practice schedule indicates that educators should consider this sequence for simulation-based training.
Zourdos, Michael C; Jo, Edward; Khamoui, Andy V; Lee, Sang-Rok; Park, Bong-Sup; Ormsbee, Michael J; Panton, Lynn B; Contreras, Robert J; Kim, Jeong-Su
2016-03-01
The primary aim of this study was to compare 2 daily undulating periodization (DUP) models on one-repetition maximum (1RM) strength in the squat, bench press, deadlift, total volume (TV) lifted, and temporal hormone response. Eighteen male, college-aged (21.1 ± 1.9 years) powerlifters participated in this study and were assigned to one of 2 groups: (a) traditional DUP training with a weekly training order: hypertrophy-specific, strength-specific, and power-specific training (HSP, n = 9) or (b) modified DUP training with a weekly training order: hypertrophy-specific, power-specific, and strength-specific training (HPS, n = 9). Both groups trained 3 nonconsecutive days per week for 6 weeks and performed the squat, bench press, and deadlift exercises. During hypertrophy and power sessions, subjects performed a fixed number of sets and repetitions but performed repetitions until failure at a given percentage during strength sessions to compare TV. Testosterone and cortisol were measured at pretesting and posttesting and before each strength-specific day. HPS produced greater TV in the squat and bench press (p ≤ 0.05) than HSP, but not in the deadlift (p > 0.05). For the squat and deadlift, there was no difference between groups for 1RM (p > 0.05); however, HPS exhibited greater increases in 1RM bench press than HSP (p ≤ 0.05). Effect sizes (ES) showed meaningful differences (ES > 0.50) in favor of HPS for squat and bench press 1RM. Testosterone decreased (p ≤ 0.05) at weeks 5 and 6 and cortisol declined at weeks 3 and 4. However, neither hormone was different at posttesting compared with pretesting (p > 0.05). Our findings suggest that an HPS configuration of DUP has enhanced performance benefits compared with HSP.
Henricks, Walter H; Karcher, Donald S; Harrison, James H; Sinard, John H; Riben, Michael W; Boyer, Philip J; Plath, Sue; Thompson, Arlene; Pantanowitz, Liron
2016-01-01
Context: Recognition of the importance of informatics to the practice of pathology has surged. Training residents in pathology informatics has been a daunting task for most residency programs in the United States because faculty often lacks experience and training resources. Nevertheless, developing resident competence in informatics is essential for the future of pathology as a specialty. Objective: The objective of the study is to develop and deliver a pathology informatics curriculum and instructional framework that guides pathology residency programs in training residents in critical pathology informatics knowledge and skills and meets the Accreditation Council for Graduate Medical Education Informatics Milestones. Design: The College of American Pathologists, Association of Pathology Chairs, and Association for Pathology Informatics formed a partnership and expert work group to identify critical pathology informatics training outcomes and to create a highly adaptable curriculum and instructional approach, supported by a multiyear change management strategy. Results: Pathology Informatics Essentials for Residents (PIER) is a rigorous approach for educating all pathology residents in important pathology informatics knowledge and skills. PIER includes an instructional resource guide and toolkit for incorporating informatics training into residency programs that vary in needs, size, settings, and resources. PIER is available at http://www.apcprods.org/PIER (accessed April 6, 2016). Conclusions: PIER is an important contribution to informatics training in pathology residency programs. PIER introduces pathology trainees to broadly useful informatics concepts and tools that are relevant to practice. PIER provides residency program directors with a means to implement a standardized informatics training curriculum, to adapt the approach to local program needs, and to evaluate resident performance and progress over time. PMID:27563486
Aczel, Balazs; Bago, Bence; Szollosi, Aba; Foldes, Andrei; Lukacs, Bence
2015-01-01
The aim of this study was to initiate the exploration of debiasing methods applicable in real-life settings for achieving lasting improvement in decision making competence regarding multiple decision biases. Here, we tested the potential of the analogical encoding method for decision debiasing. The advantage of this method is that it can foster the transfer from learning abstract principles to improving behavioral performance. For the purpose of the study, we devised an analogical debiasing technique for 10 biases (covariation detection, insensitivity to sample size, base rate neglect, regression to the mean, outcome bias, sunk cost fallacy, framing effect, anchoring bias, overconfidence bias, planning fallacy) and assessed the susceptibility of the participants (N = 154) to these biases before and 4 weeks after the training. We also compared the effect of the analogical training to the effect of ‘awareness training’ and a ‘no-training’ control group. Results suggested improved performance of the analogical training group only on tasks where the violations of statistical principles are measured. The interpretation of these findings requires further investigation, yet it is possible that analogical training may be the most effective in the case of learning abstract concepts, such as statistical principles, which are otherwise difficult to master. The study encourages systematic research on debiasing training and the development of intervention assessment methods to measure the endurance of behavior change in decision debiasing. PMID:26300816
Dosanjh, Manjit; Magrin, Giulio
2013-07-01
PARTNER (Particle Training Network for European Radiotherapy) is a project funded by the European Commission's Marie Curie-ITN funding scheme through the ENLIGHT Platform for 5.6 million Euro. PARTNER has brought together academic institutes, research centres and leading European companies, focusing in particular on a specialized radiotherapy (RT) called hadron therapy (HT), interchangeably referred to as particle therapy (PT). The ultimate goal of HT is to deliver more effective treatment to cancer patients leading to major improvement in the health of citizens. In Europe, several hundred million Euro have been invested, since the beginning of this century, in PT. In this decade, the use of HT is rapidly growing across Europe, and there is an urgent need for qualified researchers from a range of disciplines to work on its translational research. In response to this need, the European community of HT, and in particular 10 leading academic institutes, research centres, companies and small and medium-sized enterprises, joined together to form the PARTNER consortium. All partners have international reputations in the diverse but complementary fields associated with PT: clinical, radiobiological and technological. Thus the network incorporates a unique set of competencies, expertise, infrastructures and training possibilities. This paper describes the status and needs of PT research in Europe, the importance of and challenges associated with the creation of a training network, the objectives, the initial results, and the expected long-term benefits of the PARTNER initiative.
Predicting coronary artery disease using different artificial neural network models.
Colak, M Cengiz; Colak, Cemil; Kocatürk, Hasan; Sağiroğlu, Seref; Barutçu, Irfan
2008-08-01
Eight different learning algorithms for creating artificial neural network (ANN) models, and the resulting ANN models for the prediction of coronary artery disease (CAD), are introduced. This work was carried out as a retrospective case-control study. Overall, 124 consecutive patients who had been diagnosed with CAD by coronary angiography (at least 1 coronary stenosis > 50% in major epicardial arteries) were enrolled in the study. Angiographically, the 113 people (group 2) with normal coronary arteries were taken as control subjects. A multi-layered perceptron ANN architecture was applied. The ANN models trained with the different learning algorithms were developed on 237 records, divided into training (n=171) and testing (n=66) data sets. The performance of prediction was evaluated by sensitivity, specificity and accuracy values based on standard definitions. The results demonstrated that ANN models trained with eight different learning algorithms are promising because of high (greater than 71%) sensitivity, specificity and accuracy values in the prediction of CAD. Accuracy, sensitivity and specificity values varied between 83.63%-100%, 86.46%-100% and 74.67%-100% for training, respectively. For testing, the values were more than 71% for sensitivity, 76% for specificity and 81% for accuracy. It may be proposed that the use of learning algorithms other than backpropagation and larger sample sizes can improve the performance of prediction. The proposed ANN models trained with these learning algorithms could be a promising approach for predicting CAD without the need for invasive diagnostic methods and could help in prognostic clinical decisions.
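The sensitivity, specificity, and accuracy values referred to above follow the standard confusion-matrix definitions, sketched here with toy labels rather than the study's records.

```python
import numpy as np

def sens_spec_acc(y_true: np.ndarray, y_pred: np.ndarray):
    # Standard definitions from the 2x2 confusion matrix.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])   # toy diagnoses
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 1])   # toy model predictions
print("sensitivity %.2f, specificity %.2f, accuracy %.2f" % sens_spec_acc(y_true, y_pred))
```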
Education and Training for Clinical Neuropsychologists in Integrated Care Settings.
Roper, Brad L; Block, Cady K; Osborn, Katie; Ready, Rebecca E
2018-05-01
The increasing importance of integrated care necessitates that education and training experiences prepare clinical neuropsychologists for competent practice in integrated care settings, which includes (a) general competence related to an integrated/interdisciplinary approach and (b) competence specific to the setting. Formal neuropsychology training prepares neuropsychologists with a wide range of knowledge and skills in assessment, intervention, teaching/supervision, and research that are relevant to such settings. However, less attention has been paid to the knowledge and skills that directly address functioning within integrated teams, such as the ability to develop, maintain, and expand collaboration across disciplines, bidirectional clinical-research translation and implementation in integrated team settings, and how such collaboration contributes to clinical and research activities. Foundational knowledge and skills relevant to interdisciplinary systems have been articulated as part of competencies for entry into clinical neuropsychology, but their emphasis in education and training programs is unclear. Recommendations and resources are provided regarding how competencies relevant to integrated care can be provided across the continuum of education and training (i.e., doctoral, internship, postdoctoral, and post-licensure).
Toth, Michael J; Miller, Mark S; VanBuren, Peter; Bedrin, Nicholas G; LeWinter, Martin M; Ades, Philip A; Palmer, Bradley M
2012-01-01
Reduced skeletal muscle function in heart failure (HF) patients may be partially explained by altered myofilament protein content and function. Resistance training increases muscle function, although whether these improvements are achieved by correction of myofilament deficits is not known. To address this question, we examined 10 HF patients and 14 controls prior to and following an 18 week high-intensity resistance training programme. Evaluations of whole muscle size and strength, single muscle fibre size, ultrastructure and tension and myosin–actin cross-bridge mechanics and kinetics were performed. Training improved whole muscle isometric torque in both groups, although there were no alterations in whole muscle size or single fibre cross-sectional area or isometric tension. Unexpectedly, training reduced the myofibril fractional area of muscle fibres in both groups. This structural change manifested functionally as a reduction in the number of strongly bound myosin–actin cross-bridges during Ca2+ activation. When post-training single fibre tension data were corrected for the loss of myofibril fractional area, we observed an increase in tension with resistance training. Additionally, training corrected alterations in cross-bridge kinetics (e.g. myosin attachment time) in HF patients back to levels observed in untrained controls. Collectively, our results indicate that improvements in myofilament function in sedentary elderly with and without HF may contribute to increased whole muscle function with resistance training. More broadly, these data highlight novel cellular and molecular adaptations in muscle structure and function that contribute to the resistance-trained phenotype. PMID:22199163
NASA Astrophysics Data System (ADS)
Ravichandran, Kavya; Braman, Nathaniel; Janowczyk, Andrew; Madabhushi, Anant
2018-02-01
Neoadjuvant chemotherapy (NAC) is routinely used to treat breast tumors before surgery to reduce tumor size and improve outcome. However, no current clinical or imaging metrics can effectively predict before treatment which NAC recipients will achieve pathological complete response (pCR), the absence of residual invasive disease in the breast or lymph nodes following surgical resection. In this work, we developed and applied a convolutional neural network (CNN) to predict pCR from pre-treatment dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) scans on a per-voxel basis. In this study, DCE-MRI data for a total of 166 breast cancer patients from the ISPY1 Clinical Trial were split into a training set of 133 patients and a testing set of 33 patients. A CNN consisting of 6 convolutional blocks was trained over 30 epochs. The pre-contrast and post-contrast DCE-MRI phases were considered in isolation and conjunction. A CNN utilizing a combination of both pre- and post-contrast images best distinguished responders, with an AUC of 0.77; 82% of the patients in the testing set were correctly classified based on their treatment response. Within the testing set, the CNN was able to produce probability heatmaps that visualized tumor regions that most strongly predicted therapeutic response. Multivariate analysis with prognostic clinical variables (age, largest diameter, hormone receptor and HER2 status) revealed that the network was an independent predictor of response (p=0.05), and that the inclusion of HER2 status could further improve capability to predict response (AUC = 0.85, accuracy = 85%).
Rajput, Ashish B; Turbin, Dmitry A; Cheang, Maggie Cu; Voduc, David K; Leung, Sam; Gelmon, Karen A; Gilks, C Blake; Huntsman, David G
2008-01-01
We have previously demonstrated in a pilot study of 348 invasive breast cancers that mast cell (MC) infiltrates within primary breast cancers are associated with a good prognosis. Our aim was to verify this finding in a larger cohort of invasive breast cancer patients and examine the relationship between the presence of MCs and other clinical and pathological features. Clinically annotated tissue microarrays (TMAs) containing 4,444 cases were constructed and stained with c-Kit (CD-117) using standard immunoperoxidase techniques to identify and quantify MCs. For statistical analysis, we applied a split-sample validation technique. Breast cancer specific survival was analyzed by Kaplan-Meier [KM] method and log rank test was used to compare survival curves. Survival analysis by KM method showed that the presence of stromal MCs was a favourable prognostic factor in the training set (P = 0.001), and the validation set group (P = 0.006). X-tile plot generated to define the optimal number of MCs showed that the presence of any number of stromal MCs predicted good prognosis. Multivariate analysis showed that the MC effect in the training set (Hazard ratio [HR] = 0.804, 95% Confidence interval [CI], 0.653-0.991, P = 0.041) and validation set analysis (HR = 0.846, 95% CI, 0.683-1.049, P = 0.128) was independent of age, tumor grade, tumor size, lymph node, ER and Her2 status. This study concludes that stromal MC infiltration in invasive breast cancer is an independent good prognostic marker and reiterates the critical role of local inflammatory responses in breast cancer progression.
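A generic sketch of the survival comparison described above, assuming the lifelines library; the follow-up times and event indicators are simulated, not the TMA cohort.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)
t_mc = rng.exponential(scale=12.0, size=200)        # simulated group with stromal MCs
t_no = rng.exponential(scale=8.0, size=200)         # simulated group without MCs
e_mc = rng.random(200) < 0.7                        # True = death observed, False = censored
e_no = rng.random(200) < 0.7

kmf = KaplanMeierFitter()
kmf.fit(t_mc, event_observed=e_mc, label="MC present")   # Kaplan-Meier estimate
print("median survival, MC present:", kmf.median_survival_time_)

# Log-rank test comparing the two survival curves.
result = logrank_test(t_mc, t_no, event_observed_A=e_mc, event_observed_B=e_no)
print(f"log-rank p = {result.p_value:.4f}")
```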
Dilation of the oropharynx via selective stimulation of the hypoglossal nerve
NASA Astrophysics Data System (ADS)
Huang, Jingtao; Sahin, Mesut; Durand, Dominique M.
2005-12-01
The functional effects of selective hypoglossal nerve (HG) stimulation with a multi-contact peripheral nerve electrode were assessed using images of the upper airways and the tongue in anesthetized beagles. A biphasic pulse train of 50 Hz frequency and 2 s duration was applied through each one of the tripolar contact sets of the nerve electrode while the pharyngeal images were acquired into a computer. The stimulation current was limited to 20% above the activation threshold for maximum selectivity. The images showed that various contact sets could generate several different activation patterns of the tongue muscles resulting in medial and/or lateral dilation and closing of the airways at the tongue root. Some of these patterns translated into an increase in the oropharyngeal size while others did not have any effect. The pharyngeal sizes were not statistically different during stimulation either between the two different positions of the head (30° and 60°), or when the lateral contacts were compared with the medial ones. The contacts that had the least effect generated an average of 53 ± 15% pharyngeal dilation relative to the best contacts, indicating that the results are marginally sensitive to the contact position around the HG nerve trunk. These results suggest that selective HG nerve stimulation can be a useful technique to produce multiple tongue activation patterns that can dilate the pharynx. This may in turn increase the size of the patient population who can benefit from HG nerve stimulation as a treatment method for obstructive sleep apnea.
Immediate Judgments of Learning are Insensitive to Implicit Interference Effects at Retrieval
Eakin, Deborah K.; Hertzog, Christopher
2013-01-01
We conducted three experiments to determine whether metamemory predictions at encoding, immediate judgments of learning (IJOLs) are sensitive to implicit interference effects that will occur at retrieval. Implicit interference was manipulated by varying the association set size of the cue (Exps. 1 & 2) or the target (Exp. 3). The typical finding is that memory is worse for large-set-size cues and targets, but only when the target is studied alone and later prompted with a related cue (extralist). When the pairs are studied together (intralist), recall is the same regardless of set size; set-size effects are eliminated. Metamemory predictions at retrieval, such as delayed JOLs (DJOLs) and feeling of knowing (FOK) judgments accurately reflect implicit interference effects (e.g., Eakin & Hertzog, 2006). In Experiment 1, we contrasted cue-set-size effects on IJOLs, DJOLs, and FOKs. After wrangling with an interesting methodological conundrum related to set size effects (Exp. 2), we found that whereas DJOLs and FOKs accurately predicted set size effects on retrieval, a comparison between IJOLs and no-cue IJOLs demonstrated that immediate judgments did not vary with set size. In Experiment 3, we confirmed this finding by manipulating target set size. Again, IJOLs did not vary with set size whereas DJOLs and FOKs did. The findings provide further evidence for the inferential view regarding the source of metamemory predictions, as well as indicate that inferences are based on different sources depending on when in the memory process predictions are made. PMID:21915761
ERIC Educational Resources Information Center
Iyioke, Ifeoma Chika
2013-01-01
This dissertation describes a design for training, in accordance with probability judgment heuristics principles, for the Angoff standard setting method. The new training with instruction, practice, and feedback tailored to the probability judgment heuristics principles was called the Heuristic training and the prevailing Angoff method training…
Replacing Maladaptive Speech with Verbal Labeling Responses: An Analysis of Generalized Responding.
ERIC Educational Resources Information Center
Foxx, R. M.; And Others
1988-01-01
Three mentally handicapped students (aged 13, 36, and 40) with maladaptive speech received training to answer questions with verbal labels. The results of their cues-pause-point training showed that the students replaced their maladaptive speech with correct labels (answers) to questions in the training setting and three generalization settings.…
A Model for Teaching Rational Behavior Therapy in a Public School Setting.
ERIC Educational Resources Information Center
Patton, Patricia L.
A training model for the use of rational behavior therapy (RBT) with emotionally disturbed adolescents in a school setting is presented, including a structured, didactic format consisting of five basic RBT training techniques. The training sessions, lasting 10 weeks each, are described. Also presented is the organization for the actual classroom…
Guo, Jing; Chen, Shangxiang; Li, Shun; Sun, Xiaowei; Li, Wei; Zhou, Zhiwei; Chen, Yingbo; Xu, Dazhi
2018-01-12
Several studies have highlighted the prognostic value of individual tumor markers and their various combinations for gastric cancer (GC). Our study was designed to establish a novel model incorporating carcino-embryonic antigen (CEA), carbohydrate antigen 19-9 (CA19-9), and carbohydrate antigen 72-4 (CA72-4). A total of 1,566 GC patients (Primary cohort) between Jan 2000 and July 2013 were analyzed. The Primary cohort was randomly divided into a Training set (n=783) and a Validation set (n=783). A three-tumor-marker classifier was developed in the Training set and validated in the Validation set by multivariate regression and risk-score analysis. We identified a three-tumor-marker classifier (including CEA, CA19-9 and CA72-4) for the cancer-specific survival (CSS) of GC (p<0.001). Consistent results were obtained in both the Training set and the Validation set. Multivariate analysis showed that the classifier was an independent predictor of GC (all p values <0.001 in the Training set, Validation set and Primary cohort). Furthermore, when the leave-one-out approach was performed, the classifier showed superior predictive value to the individual markers or any pair of them (highest area under the curve (AUC): 0.618 for the Training set and 0.625 for the Validation set), which ascertained its predictive value. Our three-tumor-marker classifier is closely associated with the CSS of GC and may serve as a novel model for future decisions concerning treatments.
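As a rough illustration of the risk-score workflow sketched in this abstract (multivariate survival regression on a Training set, validation by AUC), the Python sketch below uses the lifelines and scikit-learn libraries on synthetic data; the column names (cea, ca199, ca724, time, event), the 3-year horizon, and the 50/50 split are illustrative assumptions, not the authors' code or data.

```python
# Sketch: three-marker risk score for cancer-specific survival (illustrative assumptions).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1566
df = pd.DataFrame({
    "cea": rng.lognormal(1.0, 0.8, n),      # hypothetical marker values
    "ca199": rng.lognormal(2.0, 1.0, n),
    "ca724": rng.lognormal(0.5, 0.9, n),
    "time": rng.exponential(60, n),          # follow-up in months (synthetic)
    "event": rng.integers(0, 2, n),          # 1 = cancer-specific death
})

# Random 50/50 split into Training and Validation sets, as in the study design.
train = df.sample(frac=0.5, random_state=1)
valid = df.drop(train.index)

# Multivariate Cox regression on the Training set.
cph = CoxPHFitter()
cph.fit(train, duration_col="time", event_col="event")

# Linear predictor (log partial hazard) as the combined three-marker risk score.
risk_valid = cph.predict_log_partial_hazard(valid)

# Discrimination for 3-year cancer-specific death among patients with known status.
known = (valid["time"] >= 36) | (valid["event"] == 1)
label = ((valid["time"] < 36) & (valid["event"] == 1)).astype(int)
print("AUC:", roc_auc_score(label[known], risk_valid[known]))
```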
Cronin, John; Storey, Adam; Zourdos, Michael C.
2016-01-01
Ratings of perceived exertion are a valid method of estimating the intensity of a resistance training exercise or session. Scores are given after completion of an exercise or training session for the purposes of athlete monitoring. However, a newly developed scale based on how many repetitions are remaining at the completion of a set may be a more precise tool. This approach adjusts loads automatically to match athlete capabilities on a set-to-set basis and may more accurately gauge intensity at near-limit loads. This article outlines how to incorporate this novel scale into a training plan. PMID:27531969
ERIC Educational Resources Information Center
Allesch, Jurgen; Preiss-Allesch, Dagmar
This report describes a study that identified major databases in operation in the 12 European Community countries that provide small- and medium-sized enterprises with information on opportunities for obtaining training and continuing education. Thirty-five databases were identified through information obtained from telephone interviews or…
Cascade Back-Propagation Learning in Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2003-01-01
The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks; one for training and one for validation. Both networks would share the same synaptic weights.
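The three-way data split described here maps directly onto a standard software training loop. The sketch below is a minimal software analogue, assuming a scikit-learn MLP on synthetic data as a stand-in for the analog CBP hardware; it only illustrates how the cross-validation set is used to stop training (avoiding over-learning) and how the test set is used for final evaluation.

```python
# Minimal sketch of the training / cross-validation / test split (a software
# stand-in, not the CBP algorithm itself).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), solver="adam", random_state=0)
best_val, best_epoch, patience = -np.inf, 0, 10
for epoch in range(200):
    net.partial_fit(X_train, y_train, classes=np.unique(y))  # one pass over the training set
    val_acc = net.score(X_val, y_val)
    if val_acc > best_val:
        best_val, best_epoch = val_acc, epoch
    elif epoch - best_epoch >= patience:      # validation accuracy stopped improving
        break

print("stopped at epoch", epoch, "test accuracy:", net.score(X_test, y_test))
```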
Kupas, Katrin; Ultsch, Alfred; Klebe, Gerhard
2008-05-15
A new method to discover similar substructures in protein binding pockets, independently of sequence and folding patterns or secondary structure elements, is introduced. The solvent-accessible surface of a binding pocket, automatically detected as a depression on the protein surface, is divided into a set of surface patches. Each surface patch is characterized by its shape as well as by its physicochemical characteristics. Wavelets defined on surfaces are used for the description of the shape, as they have the great advantage of allowing a comparison at different resolutions. The number of coefficients to describe the wavelets can be chosen with respect to the size of the considered data set. The physicochemical characteristics of the patches are described by the assignment of the exposed amino acid residues to one or more of five different properties determinant for molecular recognition. A self-organizing neural network is used to project the high-dimensional feature vectors onto a two-dimensional layer of neurons, called a map. To find similarities between the binding pockets, in both geometrical and physicochemical features, a clustering of the projected feature vector is performed using an automatic distance- and density-based clustering algorithm. The method was validated with a small training data set of 109 binding cavities originating from a set of enzymes covering 12 different EC numbers. A second test data set of 1378 binding cavities, extracted from enzymes of 13 different EC numbers, was then used to prove the discriminating power of the algorithm and to demonstrate its applicability to large scale analyses. In all cases, members of the data set with the same EC number were placed into coherent regions on the map, with small distances between them. Different EC numbers are separated by large distances between the feature vectors. A third data set comprising three subfamilies of endopeptidases is used to demonstrate the ability of the algorithm to detect similar substructures between functionally related active sites. The algorithm can also be used to predict the function of novel proteins not considered in training data set. 2007 Wiley-Liss, Inc.
Pfile, Kate R.; Hart, Joseph M.; Herman, Daniel C.; Hertel, Jay; Kerrigan, D. Casey; Ingersoll, Christopher D.
2013-01-01
Context: Anterior cruciate ligament (ACL) injuries are common in female athletes and are related to poor neuromuscular control. Comprehensive neuromuscular training has been shown to improve biomechanics; however, we do not know which component of neuromuscular training is most responsible for the changes. Objective: To assess the efficacy of either a 4-week core stability program or plyometric program in altering lower extremity and trunk biomechanics during a drop vertical jump (DVJ). Design: Cohort study. Setting: High school athletic fields and motion analysis laboratory. Patients or Other Participants: Twenty-three high school female athletes (age = 14.8 ± 0.8 years, height = 1.7 ± 0.07 m, mass = 57.7 ± 8.5 kg). Intervention(s): Independent variables were group (core stability, plyometric, control) and time (pretest, posttest). Participants performed 5 DVJs at pretest and posttest. Intervention participants engaged in a 4-week core stability or plyometric program. Main Outcome Measure(s): Dependent variables were 3-dimensional hip, knee, and trunk kinetics and kinematics during the landing phase of a DVJ. We calculated the group means and associated 95% confidence intervals for the first 25% of landing. Cohen d effect sizes with 95% confidence intervals were calculated for all differences. Results: We found within-group differences for lower extremity biomechanics for both intervention groups (P ≤ .05). The plyometric group decreased the knee-flexion and knee internal-rotation angles and the knee-flexion and knee-abduction moments. The core stability group decreased the knee-flexion and knee internal-rotation angles and the hip-flexion and hip internal-rotation moments. The control group decreased the knee external-rotation moment. All kinetic changes had a strong effect size (Cohen d > 0.80). Conclusions: Both programs resulted in biomechanical changes, suggesting that both types of exercises are warranted for ACL injury prevention and should be implemented by trained professionals. PMID:23768121
Verhelst, Helena; Vander Linden, Catharine; Vingerhoets, Guy; Caeyenberghs, Karen
2017-02-01
Computerized cognitive training programs have previously been shown to be effective in improving cognitive abilities in patients suffering from traumatic brain injury (TBI). These studies often focused on a single cognitive function or required expensive hardware, making them difficult to use in a home-based environment. This pilot feasibility study aimed to evaluate the feasibility of a newly developed, home-based, computerized cognitive training program for adolescents who suffered from TBI. Additionally, the feasibility of the study design, procedures, and measurements was examined. Case series, longitudinal, pilot, feasibility intervention study with one baseline and two follow-up assessments. Nine feasibility outcome measures and criteria for success were defined, including accessibility, training motivation/user experience, technical smoothness, training compliance, participation willingness, participation rates, loss to follow-up, assessment timescale, and assessment procedures. Five adolescent patients (four boys, mean age = 16 years 7 months, standard deviation = 9 months) with moderate to severe TBI in the chronic stage were recruited and received 8 weeks of cognitive training with BrainGames. Effect sizes (Cohen's d) were calculated to determine possible training-related effects. The new cognitive training intervention, BrainGames, and the study design and procedures proved to be feasible; all nine feasibility outcome criteria were met during this pilot feasibility study. Estimates of effect sizes showed small to very large effects on cognitive measures and questionnaires, which were retained after 6 months. Our pilot study shows that a longitudinal intervention study comprising our novel, computerized cognitive training program and two follow-up assessments is feasible in adolescents suffering from TBI in the chronic stage. Future studies with larger sample sizes will evaluate training-related effects on cognitive functions and underlying brain structures.
Segmentation of thalamus from MR images via task-driven dictionary learning
NASA Astrophysics Data System (ADS)
Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D.; Prince, Jerry L.
2016-03-01
Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.
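A minimal sketch of one plausible reading of the non-uniform sampling scheme, assuming a synthetic mask and feature volume: background voxels are drawn with probability decaying with distance from the thalamus boundary, so the classifier sees a balanced training set concentrated where discrimination is hardest. This is an illustrative interpretation, not the authors' implementation.

```python
# Boundary-weighted sampling for an imbalanced voxel classifier (illustrative).
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(0)
mask = np.zeros((64, 64, 64), dtype=bool)
mask[24:40, 24:40, 24:40] = True                 # toy "thalamus" mask
features = rng.normal(size=mask.shape + (11,))   # 11 features per voxel, as in the paper

# Distance (in voxels) from each background voxel to the thalamus surface.
dist_to_thalamus = distance_transform_edt(~mask)

# Keep all thalamus voxels, and an equal number of background voxels sampled with
# probability decaying with distance from the boundary.
fg_idx = np.flatnonzero(mask)
bg_idx = np.flatnonzero(~mask)
weights = np.exp(-dist_to_thalamus.ravel()[bg_idx] / 3.0)
weights /= weights.sum()
bg_sample = rng.choice(bg_idx, size=fg_idx.size, replace=False, p=weights)

train_idx = np.concatenate([fg_idx, bg_sample])
X = features.reshape(-1, 11)[train_idx]
y = mask.ravel()[train_idx].astype(int)
print(X.shape, y.mean())   # balanced training set concentrated near the boundary
```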
Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.
Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos
2008-03-01
Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (ie, T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.
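The two-stage structure described here (an SVM voxel classifier followed by false-positive reduction using distance from the training set) can be sketched as follows; the feature matrices are synthetic stand-ins for the multiparametric MRI features, and the labels and thresholds are illustrative assumptions rather than the published pipeline.

```python
# SVM voxel classification plus distance-based false-positive rejection (sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 4))          # e.g. T1, T2, PD, FLAIR intensities per voxel
y_train = (X_train[:, 3] > 1.0).astype(int)   # toy "expert-defined WML" labels

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# New scan: classify every voxel's feature vector.
X_new = rng.normal(size=(20000, 4))
p_lesion = clf.predict_proba(X_new)[:, 1]

# Post-processing: distance of each candidate to its nearest training example.
nn = NearestNeighbors(n_neighbors=1).fit(X_train)
dist, _ = nn.kneighbors(X_new)
far_from_training = dist[:, 0] > np.percentile(dist, 99)

wml = (p_lesion > 0.5) & ~far_from_training   # reject outliers as likely false positives
print("voxels labelled WML:", int(wml.sum()))
```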
Zhang, Zhen; Franklin, Amy; Walji, Muhammad; Zhang, Jiajie; Gong, Yang
2014-01-01
EHR usability has been identified as a major barrier to care quality optimization. One major challenge of improving EHR usability is the lack of systematic training in usability or cognitive ergonomics for EHR designers/developers in the vendor community and EHR analysts making significant configurations in healthcare organizations. A practical solution is to provide usability inspection tools that can be easily operationalized by EHR analysts. This project is aimed at developing a set of usability tools with demonstrated validity and reliability. We present a preliminary study of a metric for cognitive transparency and an exploratory experiment testing its validity in predicting the effectiveness of action-effect mapping. Despite the pilot nature of both, we found high sensitivity and specificity of the metric and higher response accuracy within a shorter time for users to determine action-effect mappings in transparent user interface controls. We plan to expand the sample size in our empirical study. PMID:25954439
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n³) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
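For readers unfamiliar with SMO, the sketch below implements the textbook "simplified SMO" update for a standard binary SVM with a linear kernel. It shows the pairwise alpha updates that Platt's algorithm is built on; it is not the SVM+MTL generalization described in the brief.

```python
# A compact "simplified SMO" solver for a standard binary SVM with a linear
# kernel, included to illustrate the pairwise alpha updates underlying Platt's
# algorithm. This is the textbook simplified variant, not the SVM+MTL
# generalization; labels y must be in {-1, +1}.
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=10, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                      # linear kernel matrix
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = int(rng.integers(n - 1))
                j = j + 1 if j >= i else j           # pick a random j != i
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X              # primal weights for the linear kernel
    return w, b

# Usage sketch: w, b = simplified_smo(X, y); predictions = np.sign(X @ w + b)
```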
Laser bandwidth interlock capable of single pulse detection and rejection
Armstrong, James P; Telford, Steven James; Lanning, Rodney Kay; Bayramian, Andrew James
2012-10-09
A pulse of laser light is switched out of a pulse train and spatially dispersed into its constituent wavelengths. The pulse is collimated to a suitable size and then diffracted by high-groove-density multilayer dielectric gratings. This imparts a different angle to each individual wavelength so that, when brought to the far field with a lens, the colors have spread out in a linear arrangement. The distance between wavelengths (resolution) can be tailored for the specific laser and application by altering the number of times the beam strikes the diffraction gratings, the groove density of the gratings and the focal length of the lens. End portions of the linear arrangement are each directed to a respective detector, which converts the signal to a 1 if the level meets a set-point, and a 0 if the level does not. If both detectors produce a 1, then the pulse train is allowed to propagate into an optical system.
Dufour, Marie-Michèle; Lanovaz, Marc J
2017-11-01
The purpose of our study was to compare the effects of serial and concurrent training on the generalization of receptive identification in children with autism spectrum disorders (ASD). We taught one to three pairs of stimulus sets to nine children with ASD between the ages of three and six. One stimulus set within each pair was taught using concurrent training and the other using serial training. We alternated the training sessions within a multielement design and staggered the introduction of subsequent pairs for each participant as in a multiple baseline design. Overall, six participants generalized at least one stimulus set more rapidly with concurrent training whereas two participants showed generalization more rapidly with serial training. Our results differ from other comparison studies on the topic and indicate that practitioners should consider assessing the effects of both procedures prior to teaching receptive identification to children with ASD.
"Functional" Inspiratory and Core Muscle Training Enhances Running Performance and Economy.
Tong, Tomas K; McConnell, Alison K; Lin, Hua; Nie, Jinlei; Zhang, Haifeng; Wang, Jiayuan
2016-10-01
Tong, TK, McConnell, AK, Lin, H, Nie, J, Zhang, H, and Wang, J. "Functional" inspiratory and core muscle training enhances running performance and economy. J Strength Cond Res 30(10): 2942-2951, 2016-We compared the effects of two 6-week high-intensity interval training interventions. Under the control condition (CON), only interval training was undertaken, whereas under the intervention condition (ICT), interval training sessions were followed immediately by core training, which was combined with simultaneous inspiratory muscle training (IMT)-"functional" IMT. Sixteen recreational runners were allocated to either ICT or CON groups. Before the intervention phase, both groups undertook a 4-week program of "foundation" IMT to control for the known ergogenic effect of IMT (30 inspiratory efforts at 50% maximal static inspiratory pressure [P0] per set, 2 sets per day, 6 days per week). The subsequent 6-week interval running training phase consisted of 3-4 sessions per week. In addition, the ICT group undertook 4 inspiratory-loaded core exercises (10 repetitions per set, 2 sets per day, inspiratory load set at 50% post-IMT P0) immediately after each interval training session. The CON group received neither core training nor functional IMT. After the intervention phase, global inspiratory and core muscle functions increased in both groups (p ≤ 0.05), as evidenced by P0 and a sport-specific endurance plank test (SEPT) performance, respectively. Compared with CON, the ICT group showed larger improvements in SEPT, running economy at the speed of the onset of blood lactate accumulation, and 1-hour running performance (3.04% vs. 1.57%, p ≤ 0.05). The changes in these variables were interindividually correlated (r ≥ 0.57, n = 16, p ≤ 0.05). Such findings suggest that the addition of inspiratory-loaded core conditioning into a high-intensity interval training program augments the influence of the interval program on endurance running performance and that this may be underpinned by an improvement in running economy.
A Review and Annotated Bibliography of Armor Gunnery Training Device Effectiveness Literature
1993-11-01
This report reviews the armor gunnery training device effectiveness literature, organized by (a) four types of training devices (standalone, tank-appended, subcaliber, and laser) and four areas of training effectiveness (skill acquisition, skill retention, performance prediction, and transfer of training), and (b) research limitations such as sample size.
Training transfer: a systematic review of the impact of inner setting factors.
Jackson, Carrie B; Brabson, Laurel A; Quetsch, Lauren B; Herschell, Amy D
2018-06-19
Consistent with Baldwin and Ford's model (Pers Psychol 41(1):63-105, 1988), training transfer is defined as the generalization of learning from a training setting to everyday practice in the workplace. The purpose of this review was to examine the influence of work-environment factors, one component of the model hypothesized to influence training transfer within behavioral health. An electronic literature search guided by the Consolidated Framework for Implementation Research's inner setting domain was conducted on the Medline OVID, EMBASE, and PsycINFO databases. Of 9184 unique articles, 169 full-text versions of articles were screened for eligibility, yielding 26 articles meeting inclusion criteria. Results from the 26 studies revealed that overall, having more positive networks and communication, culture, implementation climate, and readiness for implementation can facilitate training transfer. Although few studies have examined the impact of inner setting factors on training transfer, these results suggest organizational context is important to consider with training efforts. These findings have important implications for individuals in the broader health professions educational field.
Eigendorf, Julian; May, Marcus; Friedrich, Jan; Engeli, Stefan; Maassen, Norbert; Gros, Gerolf; Meissner, Joachim D
2018-01-01
We present here a longitudinal study determining the effects of two 3-week periods of high intensity high volume interval training (HIHVT) (90 intervals of 6 s cycling at 250% maximum power, Pmax, followed by 24 s of recovery) on a cycle ergometer. HIHVT was evaluated by comparing performance tests before and after the entire training (baseline, BSL, and endpoint, END) and between the two training sets (intermediate, INT). The mRNA expression levels of myosin heavy chain (MHC) isoforms and markers of energy metabolism were analyzed in M. vastus lateralis biopsies by quantitative real-time PCR. In incremental tests peak power (Ppeak) was increased, whereas VO2peak was unaltered. Prolonged time-to-exhaustion was found in endurance tests with 65 and 80% Pmax at INT and END. No changes in blood levels of lipid metabolites were detected. Training-induced decreases of hematocrit indicate hypervolemia. A shift from slow MHCI/β to fast MHCIIa mRNA expression occurred after the first and second training set. The mRNA expression of peroxisome proliferator-activated receptor gamma coactivator 1α (PGC-1α), a master regulator of oxidative energy metabolism, decreased after the second training set. In agreement, a significant decrease was also found for citrate synthase mRNA after the second training set, indicating reduced oxidative capacity. However, mRNA expression levels of the glycolytic marker enzyme glyceraldehyde-3-phosphate dehydrogenase did not change after the first and second training set. HIHVT induced a nearly complete slow-to-fast fiber type transformation on the mRNA level, which, however, cannot account for the improvements of performance parameters. The latter might be explained by the well-known effects of hypervolemia on exercise performance.
Training practices and ergogenic aids used by male bodybuilders.
Hackett, Daniel A; Johnson, Nathan A; Chow, Chin-Moi
2013-06-01
Bodybuilding involves performing a series of poses on stage where the competitor is judged on aesthetic muscular appearance. The purpose of this study was to describe training practices and ergogenic aids used by competitive bodybuilders and to determine whether training practices comply with current recommendations for muscular hypertrophy. A web-based survey was completed by 127 competitive male bodybuilders. The results showed that during the off-season phase of training (OFF), the majority of respondents performed 3-6 sets per exercise (95.3%), 7-12 repetition maximum (RM) per set (77.0%), and 61- to 120-seconds recovery between sets and exercises (68.6%). However, training practices changed 6 weeks before competition (PRE), where there was an increased number of respondents who reported undertaking 3-4 sets per exercise at the expense of 5-6 sets per exercise (p < 0.001), an increase in the number reporting 10-15RM per set from 7-9RM per set (p < 0.001), and an increase in the number reporting 30-60 seconds vs. 61-180 seconds recovery between sets and exercises (p < 0.001). Anabolic steroid use was high among respondents competing in amateur competitions (56 of 73 respondents), whereas dietary supplementation was used by all respondents. The findings of this study demonstrate that competitive bodybuilders comply with current resistance exercise recommendations for muscular hypertrophy; however, these practices changed before competition, during which there is a reduction in resistance training volume and intensity. This alteration, in addition to an increase in aerobic exercise volume, is purportedly used to increase muscle definition. However, these practices may increase the risk of muscle mass loss in natural compared with amateur bodybuilders who reportedly use drugs known to preserve muscle mass.
Peer-to-peer mentoring for individuals with early inflammatory arthritis: feasibility pilot
Sandhu, Sharron; Veinot, Paula; Embuldeniya, Gayathri; Brooks, Sydney; Sale, Joanna; Huang, Sicong; Zhao, Alex; Richards, Dawn; Bell, Mary J
2013-01-01
Objectives To examine the feasibility and potential benefits of early peer support to improve the health and quality of life of individuals with early inflammatory arthritis (EIA). Design Feasibility study using the 2008 Medical Research Council framework as a theoretical basis. A literature review, environmental scan, and interviews with patients, families and healthcare providers guided the development of peer mentor training sessions and a peer-to-peer mentoring programme. Peer mentors were trained and paired with a mentee to receive (face-to-face or telephone) support over 12 weeks. Setting Two academic teaching hospitals in Toronto, Ontario, Canada. Participants Nine pairs consisting of one peer mentor and one mentee were matched based on factors such as age and work status. Primary outcome measure Mentee outcomes of disease modifying antirheumatic drugs (DMARDs)/biological treatment use, self-efficacy, self-management, health-related quality of life, anxiety, coping efficacy, social support and disease activity were measured using validated tools. Descriptive statistics and effect sizes were calculated to determine clinically important (>0.3) changes. Peer mentor self-efficacy was assessed using a self-efficacy scale. Interviews conducted with participants examined acceptability and feasibility of procedures and outcome measures, as well as perspectives on the value of peer support for individuals with EIA. Themes were identified through constant comparison. Results Mentees experienced improvements in the overall arthritis impact on life, coping efficacy and social support (effect size >0.3). Mentees also perceived emotional, informational, appraisal and instrumental support. Mentors also reported benefits and learnt from mentees’ fortitude and self-management skills. The training was well received by mentors. Their self-efficacy increased significantly after training completion. Participants’ experience of peer support was informed by the unique relationship with their peer. All participants were unequivocal about the need for peer support for individuals with EIA. Conclusions The intervention was well received. Training, peer support programme and outcome measures were demonstrated to be feasible with modifications. Early peer support may augment current rheumatological care. Trial registration number NCT01054963, NCT01054131. PMID:23457326
Cicero, Mark; Bilbily, Alexander; Colak, Errol; Dowdell, Tim; Gray, Bruce; Perampaladas, Kuhan; Barfett, Joseph
2017-05-01
Convolutional neural networks (CNNs) are a subtype of artificial neural network that have shown strong performance in computer vision tasks including image classification. To date, there has been limited application of CNNs to chest radiographs, the most frequently performed medical imaging study. We hypothesize CNNs can learn to classify frontal chest radiographs according to common findings from a sufficiently large data set. Our institution's research ethics board approved a single-center retrospective review of 35,038 adult posterior-anterior chest radiographs and final reports performed between 2005 and 2015 (56% men, average age of 56, patient type: 24% inpatient, 39% outpatient, 37% emergency department) with a waiver for informed consent. The GoogLeNet CNN was trained using 3 graphics processing units to automatically classify radiographs as normal (n = 11,702) or into 1 or more of cardiomegaly (n = 9240), consolidation (n = 6788), pleural effusion (n = 7786), pulmonary edema (n = 1286), or pneumothorax (n = 1299). The network's performance was evaluated using receiver operating curve analysis on a test set of 2443 radiographs with the criterion standard being board-certified radiologist interpretation. Using 256 × 256-pixel images as input, the network achieved an overall sensitivity and specificity of 91% with an area under the curve of 0.964 for classifying a study as normal (n = 1203). For the abnormal categories, the sensitivity, specificity, and area under the curve, respectively, were 91%, 91%, and 0.962 for pleural effusion (n = 782), 82%, 82%, and 0.868 for pulmonary edema (n = 356), 74%, 75%, and 0.850 for consolidation (n = 214), 81%, 80%, and 0.875 for cardiomegaly (n = 482), and 78%, 78%, and 0.861 for pneumothorax (n = 167). Current deep CNN architectures can be trained with modest-sized medical data sets to achieve clinically useful performance at detecting and excluding common pathology on chest radiographs.
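A minimal sketch of the multi-label setup described here, assuming a small custom CNN in place of GoogLeNet and random tensors in place of the 256 x 256 radiographs and their labels (five findings plus a normal class); it is not the study's architecture or data.

```python
# Multi-label chest-radiograph classification sketch (synthetic stand-ins).
import torch
import torch.nn as nn

FINDINGS = ["cardiomegaly", "consolidation", "pleural_effusion",
            "pulmonary_edema", "pneumothorax", "normal"]

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, len(FINDINGS)),           # one logit per (non-exclusive) label
)

criterion = nn.BCEWithLogitsLoss()          # multi-label: findings can co-occur
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 1, 256, 256)        # batch of 256x256 grayscale radiographs
labels = torch.randint(0, 2, (8, len(FINDINGS))).float()

for step in range(10):                      # toy training loop
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    probs = torch.sigmoid(model(images))    # per-label probabilities for ROC analysis
print(probs.shape)
```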
Mobile learning for HIV/AIDS healthcare worker training in resource-limited settings
2010-01-01
Background We present an innovative approach to healthcare worker (HCW) training using mobile phones as a personal learning environment. Twenty physicians used individual Smartphones (Nokia N95 and iPhone), each equipped with a portable solar charger. Doctors worked in urban and peri-urban HIV/AIDS clinics in Peru, where almost 70% of the nation's HIV patients in need are on treatment. A set of 3D learning scenarios simulating interactive clinical cases was developed and adapted to the Smartphones for a continuing medical education program lasting 3 months. A mobile educational platform supporting learning events tracked participant learning progress. A discussion forum accessible via mobile connected participants to a group of HIV specialists available to back up the medical information. Learning outcomes were verified through mobile quizzes using multiple choice questions at the end of each module. Methods In December 2009, a mid-term evaluation was conducted, targeting both technical feasibility and user satisfaction. It also highlighted user perception of the program and the technical challenges encountered using mobile devices for lifelong learning. Results With a response rate of 90% (18/20 questionnaires returned), the overall satisfaction of using mobile tools was generally greater for the iPhone. Access to Skype and Facebook, screen/keyboard size, and image quality were cited as more troublesome for the Nokia N95 compared to the iPhone. Conclusions Training, supervision and clinical mentoring of health workers are the cornerstone of the scaling-up process of HIV/AIDS care in resource-limited settings (RLSs). Educational modules on mobile phones can give HCWs the flexibility to access learning content anywhere. However, a lack of software interoperability and the high cost of purchasing Smartphones could limit the widespread use of such mLearning programs in RLSs. PMID:20825677
Importance of eccentric actions in performance adaptations to resistance training
NASA Technical Reports Server (NTRS)
Dudley, Gary A.; Miller, Bruce J.; Buchanan, Paul; Tesch, Per A.
1991-01-01
The importance of eccentric (ecc) muscle actions in resistance training for the maintenance of muscle strength and mass in hypogravity was investigated in experiments in which human subjects, divided into three groups, were asked to perform four to five sets of 6 to 12 repetitions (rep) per set of three leg press and leg extension exercises, 2 days each week for 19 weeks. One group, labeled 'con', performed each rep with only concentric (con) actions, while the con/ecc group performed each rep with both con and ecc actions; the third group, con/con, performed twice as many sets with only con actions. Control subjects did not train. It was found that resistance training with both con and ecc actions induced greater increases in muscle strength than did training with only con actions.
Teaching adolescents with severe disabilities to use the public telephone.
Test, D W; Spooner, F; Keul, P K; Grossi, T
1990-04-01
Two adolescents with severe disabilities participated in a study designed to train them to use a public telephone to call home. Participants were trained to complete a 17-step task analysis using a training package which consisted of total task presentation in conjunction with a four-level prompting procedure (i.e., independent, verbal, verbal + gesture, verbal + guidance). All instruction took place in a public setting (e.g., a shopping mall) with generalization probes taken in two alternative settings (e.g., a movie theater and a convenience store). A multiple probe across individuals design demonstrated that the training package was successful in teaching participants to use the telephone to call home. In addition, newly acquired skills generalized to the two untrained settings. Implications for community-based training are discussed.
Petzoldt, Tibor
2016-10-01
Crashes at railway level crossings are a key problem for railway operations. It has been suggested that a potential explanation for such crashes might lie in a so-called size speed bias, which describes the phenomenon that observers underestimate the speed of larger objects, such as aircraft or trains. While there is some evidence that this size speed bias indeed exists, it is somewhat at odds with another well researched phenomenon, the size arrival effect. When asked to judge the time it takes an approaching object to arrive at a predefined position (time to arrival, TTA), observers tend to provide lower estimates for larger objects. In that case, road users' crossing decisions when confronted with larger vehicles should be rather conservative, which has been confirmed in multiple studies on gap acceptance. The aim of the experiment reported in this paper was to clarify the relationship between size speed bias and size arrival effect. Employing a relative judgment task, both speed and TTA estimates were assessed for virtual depictions of a train and a truck, using a car as a reference to compare against. The results confirmed the size speed bias for the speed judgments, with both train and truck being perceived as travelling slower than the car. A comparable bias was also present in the TTA estimates for the truck. In contrast, no size arrival effect could be found for the train or the truck, neither in the speed nor the TTA judgments. This finding is inconsistent with the fact that crossing behaviour when confronted with larger vehicles appears to be consistently more conservative. This discrepancy might be interpreted as an indication that factors other than perceived speed or TTA play an important role for the differences in gap acceptance between different types of vehicles. Copyright © 2016 Elsevier Ltd. All rights reserved.
Weng, Ziqing; Wolc, Anna; Shen, Xia; Fernando, Rohan L; Dekkers, Jack C M; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Garrick, Dorian J
2016-03-19
Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the number of animals is in the training set, the higher is the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line. Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait. The relationship between optimal number of training generations and heritability was investigated. On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely-related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were used in the training set. The optimal number of training generations was 4 or more for high heritability traits but less than that for low heritability traits. For less heritable traits, limiting the training datasets to individuals closely related to the validation population resulted in the best predictions. The effect of adding distant ancestral generations in the training set on prediction accuracy differed between traits and the optimal number of necessary training generations is associated with the heritability of traits.
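The accuracy measure used in this study can be written as cor(GEBV, adjusted phenotype) / sqrt(h2). A short numerical sketch on simulated values, assuming a heritability of 0.3 and synthetic breeding values, is shown below; it only illustrates the calculation, not the study's data.

```python
# Prediction accuracy as correlation with adjusted phenotypes, scaled by sqrt(h2).
import numpy as np

rng = np.random.default_rng(0)
h2 = 0.3                                    # assumed trait heritability
n = 2000                                    # validation-generation animals

true_bv = rng.normal(0, np.sqrt(h2), n)     # simulated true breeding values
gebv = true_bv + rng.normal(0, 0.3, n)      # predicted GEBV with some error
y_adj = true_bv + rng.normal(0, np.sqrt(1 - h2), n)   # phenotype adjusted for fixed effects

accuracy = np.corrcoef(gebv, y_adj)[0, 1] / np.sqrt(h2)
print(f"prediction accuracy: {accuracy:.2f}")
```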
Company Training. A Key Strategy for Success. Workforce Brief #2.
ERIC Educational Resources Information Center
Bergman, Terri
General research and anecdotal reports have confirmed that both technical and basic skills training offer many benefits to companies of all sizes. Company training can improve employee performance, firm productivity, product quality, and company profitability. Training supports "high-performance" work practices such as the following: total quality…
Lack of training threatening drilling talent supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Von Flatern, R.
When oil prices crashed in the mid-1980s, the industry tightened budgets. Among the austerity measures taken to survive the consequences of low product prices was an end to expensive, long-term investment in the training of drilling engineers. In the absence of traditional sources of trained drilling talent, forward-looking contractors are creating their own training programs. The paper describes the activities of some companies who are setting up their own training programs, and an alliance being set up by Chevron and Amoco for training. The paper also discusses training drilling managers, third-party trainers, and the consequences for an industry that does not renew its inventory of people.
Training in interprofessional collaboration: pedagogic innovation in family medicine units.
Paré, Line; Maziade, Jean; Pelletier, Francine; Houle, Nathalie; Iloko-Fundi, Maximilien
2012-04-01
A number of agencies that accredit university health sciences programs recently added standards for the acquisition of knowledge and skills with respect to interprofessional collaboration. Within primary care settings there are no practical training programs that allow students from different disciplines to develop competencies in this area. The training program was developed within family medicine units affiliated with Université Laval in Quebec for family medicine residents and trainees from various disciplines to develop competencies in patient-centred, interprofessional collaborative practice in primary care. Based on adult learning theories, the program was divided into 3 phases--preparing family medicine unit professionals, training preceptors, and training the residents and trainees. The program's pedagogic strategies allowed participants to learn with, from, and about one another while preparing them to engage in contemporary primary care practices. A combination of quantitative and qualitative methods was used to evaluate the implementation process and the immediate results of the training program. The training program had a positive effect on both the clinical settings and the students. Preparation of clinical settings is an important issue that must be considered when planning practical interprofessional training.
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
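A minimal sketch of representative-set selection by greedy submodular maximization, using a facility-location objective over a synthetic cosine-similarity matrix; it illustrates the general approach (the greedy algorithm enjoys a (1 - 1/e) approximation guarantee for monotone submodular objectives) and is not the authors' implementation or their mixture objective.

```python
# Greedy facility-location selection of a representative subset (illustrative).
import numpy as np

def facility_location_greedy(sim, k):
    """Pick k indices approximately maximizing sum_i max_{j in S} sim[i, j]."""
    n = sim.shape[0]
    selected, covered = [], np.zeros(n)
    for _ in range(k):
        # marginal gain of adding each remaining candidate
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[:, best])
    return selected

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 32))             # stand-in for sequence features
norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = np.clip(norm @ norm.T, 0.0, None)      # nonnegative cosine similarity

subset = facility_location_greedy(similarity, k=20)
print("representative set:", subset[:5], "...")
```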
Goldberg, D P; Gask, L; Zakroyeva, A; Proselkova, E; Ryzhkova, N; Williams, P
2012-12-01
Background The Arkhangelsk Oblast is an area the size of France with a sparsely distributed population. The existing primary care staff have had very little training in the management of mental health disorders, despite the frequency of these disorders in the population. They requested special teaching on depression, suicide, somatisation and alcohol problems. Methods An educational intervention was developed in partnership with mental health and primary care staff in Russia, to develop mental health skills using established, evidence-based methods. After a preliminary demonstration of teaching methods to be employed, a 5-day full-time teaching course was offered to trainers of general practitioners and feldshers. Results The findings are presented by providing details of improvements that occurred over a 3-month period in four areas, namely depression in primary care, somatic presentations of distress, dealing with suicidal patients, and alcohol problems. We present preliminary data on how the training has generalised since our visits to Archangelsk. Conclusions Teachers who are used to teaching by didactic lectures can be taught the value of short introductory talks that invite discussion, and mental health skills can be taught using role play. The content of such training should be driven by perceived local needs, and developed in conjunction with local leaders and teachers within primary care services. Further research will be needed to establish the impact on clinical outcomes.
Diwan, Faizan; Makana, Grace; McKenzie, David; Paruzzolo, Silvia
2014-01-01
Business training programs are a common form of support to small businesses, but organizations providing this training often struggle to get business owners to attend. We evaluate the role of invitation choice structure in determining agreement to participate and actual attendance. A field experiment randomly assigned female small business owners in Kenya (N = 1172) to one of three invitation types: a standard opt-in invitation; an active choice invitation where business owners had to explicitly say yes or no to the invitation; and an enhanced active choice invitation which highlighted the costs of saying no. We find no statistically significant effect of these alternative choice structures on willingness to participate in training, attending at least one day, and completing the course. The 95 percent confidence interval for the active treatment effect on attendance is [-1.9%, +9.5%], while for the enhanced active choice treatment it is [-4.1%, +7.7%]. The effect sizes consistent with our data are smaller than impacts measured in health and retirement savings studies in the United States. We examine several potential explanations for the lack of effect in a developing country setting. We find evidence consistent with two potential reasons being limited decision-making power amongst some women, and lower levels of cognition making the enhanced active choice wording less effective.
Deep learning-based depth estimation from a synthetic endoscopy image training set
NASA Astrophysics Data System (ADS)
Mahmood, Faisal; Durr, Nicholas J.
2018-03-01
Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.
Xie, Hong-Bo; Huang, Hu; Wu, Jianhua; Liu, Lei
2015-02-01
We present a multiclass fuzzy relevance vector machine (FRVM) learning mechanism and evaluate its performance in classifying multiple hand motions using surface electromyographic (sEMG) signals. The relevance vector machine (RVM) is a sparse Bayesian kernel method which avoids some limitations of the support vector machine (SVM). However, RVM still suffers from the difficulty of possible unclassifiable regions in multiclass problems. We propose two fuzzy membership function-based FRVM algorithms to solve such problems, based on experiments conducted on seven healthy subjects and two amputees with six hand motions. Two feature sets, namely AR model coefficients and root mean square value (AR-RMS), and wavelet transform (WT) features, are extracted from the recorded sEMG signals. Fuzzy support vector machine (FSVM) analysis was also conducted for wide comparison in terms of accuracy, sparsity, training and testing time, as well as the effect of training sample sizes. FRVM yielded comparable classification accuracy with dramatically fewer support vectors in comparison with FSVM. Furthermore, the processing delay of FRVM was much less than that of FSVM, whilst FSVM trained much faster than FRVM. The results indicate that an FRVM classifier trained using sufficient samples can achieve comparable generalization capability to FSVM with significant sparsity in multi-channel sEMG classification, which is more suitable for sEMG-based real-time control applications.
Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie
2017-03-01
Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all the cerebrovascular patterns, including arteries and capillaries, some filter-based methods are used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in mouse cerebrovascular images acquired by light-sheet microscopy. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of size 32x32 pixels from each acquired brain vessel image as the training data set to feed into the CNN for classification. This network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system were used for training the model. The experimental results demonstrated that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.
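A minimal sketch of the patch-based classifier described here: three convolutional layers and one fully connected layer taking 32x32 patches and outputting the probability that the center pixel is vessel. The image, mask, and sampling grid are synthetic stand-ins for the LSFM data, not the authors' pipeline.

```python
# Patch-based CNN pixel classifier for vessel segmentation (synthetic stand-ins).
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.fc = nn.Linear(64 * 4 * 4, 1)   # single logit: vessel vs background

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def extract_patch(image, y, x, size=32):
    half = size // 2
    return image[y - half:y + half, x - half:x + half]

image = torch.rand(512, 512)                    # stand-in for one LSFM slice
labels = (image > 0.95).float()                 # stand-in for the manual vessel mask

coords = [(y, x) for y in range(16, 496, 8) for x in range(16, 496, 8)]
patches = torch.stack([extract_patch(image, y, x).unsqueeze(0) for y, x in coords])
targets = torch.stack([labels[y, x] for y, x in coords]).unsqueeze(1)

net, criterion = PatchNet(), nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5):                           # toy training loop
    optimizer.zero_grad()
    loss = criterion(net(patches), targets)
    loss.backward()
    optimizer.step()
print("vessel probability of first patch:", torch.sigmoid(net(patches[:1])).item())
```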
Murphy, Alistair P; Duffield, Rob; Kellett, Aaron; Reid, Machar
2016-01-01
High-performance tennis environments aim to prepare athletes for competitive demands through simulated-match scenarios and drills. With a dearth of direct comparisons between training and tournament demands, the current investigation compared the perceptual and technical characteristics of training drills, simulated match play, and tournament matches. Data were collected from 18 high-performance junior tennis players (gender: 10 male, 8 female; age 16 ± 1.1 y) during 6 ± 2 drill-based training sessions, 5 ± 2 simulated match-play sessions, and 5 ± 3 tournament matches from each participant. Tournament matches were further distinguished by win or loss and against seeded or nonseeded opponents. Notational analysis of stroke and error rates, winners, and serves, along with rating of perceived physical exertion (RPE) and mental exertion was measured postsession. Repeated-measures analyses of variance and effect-size analysis revealed that training sessions were significantly shorter in duration than tournament matches (P < .05, d = 1.18). RPEs during training and simulated match-play sessions were lower than in tournaments (P > .05; d = 1.26, d = 1.05, respectively). Mental exertion in training was lower than in both simulated match play and tournaments (P > .05; d = 1.10, d = 0.86, respectively). Stroke rates during tournaments exceeded those observed in training (P < .05, d = 3.41) and simulated-match-play (P < .05, d = 1.22) sessions. Furthermore, the serve was used more during tournaments than simulated match play (P < .05, d = 4.28), while errors and winners were similar independent of setting (P > .05, d < 0.80). Training in the form of drills or simulated match play appeared to inadequately replicate tournament demands in this cohort of players. Coaches should be mindful of match demands to best prescribe sessions of relevant duration, as well as internal (RPE) and technical (stroke rate) load, to aid tournament preparation.