Sample records for optimal training methods

  1. Finding the optimal shape of the leading-and-trailing car of a high-speed train using design-by-morphing

    NASA Astrophysics Data System (ADS)

    Oh, Sahuck; Jiang, Chung-Hsiang; Jiang, Chiyu; Marcus, Philip S.

    2017-10-01

    We present a new, general design method, called design-by-morphing, for an object whose performance is determined by its shape due to hydrodynamic, aerodynamic, structural, or thermal requirements. To illustrate the method, we design a new leading-and-trailing car of a train by morphing existing, baseline leading-and-trailing cars to minimize the drag. In design-by-morphing, the morphing is done by representing the shapes with polygonal meshes and spectrally with a truncated series of spherical harmonics. The optimal design is found by computing the optimal weights of each of the baseline shapes so that the morphed shape has minimum drag. As a result of the optimization, we found that with only two baseline trains that mimic current low-drag high-speed trains, the drag of the optimal train is reduced by 8.04% with respect to the baseline train with the smaller drag. When we repeat the optimization after adding a third baseline train that under-performs compared to the other two baseline trains, the drag of the new optimal train is reduced by 13.46%. This finding shows that bad examples of design are as useful as good examples in determining an optimal design. We show that design-by-morphing can be extended to many engineering problems in which the performance of an object depends on its shape.
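
    The core morphing step can be sketched as a convex combination of baseline shapes. In the sketch below (not the authors' code; the coefficient vectors are hypothetical stand-ins for truncated spherical-harmonic expansions), each baseline shape is a vector of harmonic coefficients and the optimizer would search over the weights:

```python
import numpy as np

def morph(weights, baseline_coeffs):
    """Blend baseline shapes, each represented by a vector of
    spherical-harmonic coefficients, via a convex combination."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return w @ np.asarray(baseline_coeffs, dtype=float)

# Two hypothetical baseline shapes and equal weights
baselines = [[1.0, 0.0, 2.0], [3.0, 4.0, 0.0]]
print(morph([1, 1], baselines))  # midpoint of the two coefficient vectors
```

    A drag-minimizing design would then be found by wrapping `morph` in an optimizer over the weights, with each candidate shape scored by a CFD evaluation.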

  3. Optimizing Preseason Training Loads in Australian Football.

    PubMed

    Carey, David L; Crow, Justin; Ong, Kok-Leong; Blanch, Peter; Morris, Meg E; Dascombe, Ben J; Crossley, Kay M

    2018-02-01

    To investigate whether preseason training plans for Australian football can be computer generated using current training-load guidelines to optimize injury-risk reduction and performance improvement. A constrained optimization problem was defined for daily total and sprint distance, using the preseason schedule of an elite Australian football team as a template. Maximizing total training volume and maximizing Banister-model-projected performance were both considered optimization objectives. Cumulative workload and acute:chronic workload-ratio constraints were placed on training programs to reflect current guidelines on relative and absolute training loads for injury-risk reduction. Optimization software was then used to generate preseason training plans. The optimization framework was able to generate training plans that satisfied relative and absolute workload constraints. Increasing the off-season chronic training loads enabled the optimization algorithm to prescribe higher amounts of "safe" training and attain higher projected performance levels. Simulations showed that using a Banister-model objective led to plans that included a taper in training load prior to competition to minimize fatigue and maximize projected performance. In contrast, when the objective was to maximize total training volume, more frequent training was prescribed to accumulate as much load as possible. Feasible training plans that maximize projected performance and satisfy injury-risk constraints can be automatically generated by an optimization problem for Australian football. The optimization methods allow for individualized training-plan design and the ability to adapt to changing training objectives and different training-load metrics.
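
    The acute:chronic workload-ratio constraint can be illustrated with a short sketch (not the paper's code; the 7- and 28-day windows and the 0.8–1.3 "sweet spot" band are commonly cited conventions, used here only as assumptions):

```python
import numpy as np

def acwr(loads, acute=7, chronic=28):
    """Acute:chronic workload ratio: mean load over the last `acute` days
    divided by the mean load over the last `chronic` days."""
    loads = np.asarray(loads, dtype=float)
    return loads[-acute:].mean() / loads[-chronic:].mean()

def within_guideline(loads, low=0.8, high=1.3):
    """Common guideline: keep the ratio inside a 'sweet spot' band."""
    return low <= acwr(loads) <= high

daily = [4.0] * 21 + [5.0] * 7   # 28 days of daily training load
print(round(acwr(daily), 3))      # 5 / 4.25 ≈ 1.176
print(within_guideline(daily))    # True
```

    An optimizer would then treat `within_guideline` as a constraint on every day of the generated plan while maximizing the training objective.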

  4. Multi-Objective Aerodynamic Optimization of the Streamlined Shape of High-Speed Trains Based on the Kriging Model.

    PubMed

    Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei

    2017-01-01

    Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
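
    A minimal stand-in for the Kriging surrogate is a Gaussian-kernel interpolator fitted to sampled design points; the sketch below (hypothetical drag samples, fixed length scale, no cross-validation) shows the fit/predict cycle such a surrogate provides:

```python
import numpy as np

def fit_surrogate(X, y, length=1.0, noise=1e-8):
    """Fit a Gaussian-kernel interpolator (a simple stand-in for Kriging)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * length**2))
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return X, alpha, length

def predict(model, Xq):
    """Evaluate the surrogate at query points without running CFD."""
    X, alpha, length = model
    Xq = np.atleast_2d(np.asarray(Xq, dtype=float))
    Kq = np.exp(-((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * length**2))
    return Kq @ alpha

# One design variable, drag sampled at three points (hypothetical values)
model = fit_surrogate([[0.0], [1.0], [2.0]], np.array([3.0, 1.0, 2.0]))
print(predict(model, [[1.0]]))  # ≈ 1.0, recovering the training sample
```

    The multi-objective optimizer then queries the cheap surrogate instead of the CFD solver, which is where the computational savings come from.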

  5. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
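
    The regression-approximation idea can be sketched with a toy analyzer and a least-squares surrogate; `expensive_analysis` below is a hypothetical stand-in for the Flight Optimization System, not its actual behavior:

```python
import numpy as np

def expensive_analysis(x):
    """Hypothetical stand-in for the costly flight-analysis code."""
    return 2.0 * x**2 + 3.0 * x + 1.0

# Generate input-output training pairs, as done with the analysis software
xs = np.linspace(0.0, 4.0, 9)
ys = expensive_analysis(xs)

# Train a quadratic regression approximator via least squares
coeffs = np.polyfit(xs, ys, deg=2)
approx = np.poly1d(coeffs)

print(round(approx(2.5), 6))  # 21.0, matching 2*2.5**2 + 3*2.5 + 1
```

    The optimizer then calls `approx` in place of `expensive_analysis`, trading an up-front training cost for much cheaper function evaluations.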

  6. Optimizing Word Learning via Links to Perceptual and Motoric Experience

    ERIC Educational Resources Information Center

    Hald, Lea A.; de Nooijer, Jacqueline; van Gog, Tamara; Bekkering, Harold

    2016-01-01

    The aim of this review is to consider how current vocabulary training methods could be optimized by considering recent scientific insights in how the brain represents conceptual knowledge. We outline the findings from several methods of vocabulary training. In each case, we consider how taking an embodied cognition perspective could impact word…

  7. Staff Study on Cost and Training Effectiveness of Proposed Training Systems. TAEG Report 1.

    ERIC Educational Resources Information Center

    Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.

    A study began the development and initial testing of a method for predicting cost and training effectiveness of proposed training programs. A prototype Training Effectiveness and Cost Effectiveness Prediction (TECEP) model was developed and tested. The model was a method for optimization of training media allocation on the basis of fixed training…

  8. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Plank, James; Disney, Adam

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
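
    A minimal evolutionary-optimization training loop might look as follows; the "network" here is a single linear neuron and all settings are illustrative, not those of the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, X, y):
    """Negative squared error of a single linear neuron (a toy stand-in
    for evaluating a spiking/neuromorphic network on a task)."""
    return -np.sum((X @ w - y) ** 2)

def evolve(X, y, pop=30, gens=200, sigma=0.1):
    """Simple loop: mutate the current best and keep any improvement."""
    best = rng.normal(size=X.shape[1])
    for _ in range(gens):
        candidates = best + sigma * rng.normal(size=(pop, X.shape[1]))
        scores = [fitness(c, X, y) for c in candidates]
        champ = candidates[int(np.argmax(scores))]
        if fitness(champ, X, y) > fitness(best, X, y):
            best = champ
    return best

X = rng.normal(size=(50, 2))
true_w = np.array([1.5, -2.0])
w = evolve(X, X @ true_w)
print(np.round(w, 1))
```

    Note that only fitness evaluations are needed, no gradients, which is what makes EO convenient for architectures where backpropagation is not defined.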

  9. Adaptive optimal training of animal behavior

    NASA Astrophysics Data System (ADS)

    Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan

    Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.

  10. Optimizing area under the ROC curve using semi-supervised learning

    PubMed Central

    Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M.

    2014-01-01

    Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results. PMID:25395692
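
    The AUC that these methods maximize equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, so it can be computed directly from pairwise comparisons (Mann-Whitney U form; the scores below are illustrative):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs ranked
    correctly, counting ties as one half."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    return ((sp > sn).sum() + 0.5 * (sp == sn).sum()) / (sp.size * sn.size)

print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))  # 5 of 6 pairs correct -> 0.8333...
```

    The pairwise ranking constraints in the SSLROC formulation are constraints on exactly these pair comparisons, extended to unlabeled samples.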

  12. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.

  13. Hybrid simulated annealing and its application to optimization of hidden Markov models for visual speech recognition.

    PubMed

    Lee, Jong-Seok; Park, Cheol Hoon

    2010-08-01

    We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
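
    The hybrid idea, SA proposals refined by a greedy local operator, can be sketched on a small multimodal function (a hypothetical stand-in for the HMM training objective; all parameters are illustrative):

```python
import math
import random

random.seed(1)

def objective(x):
    """Multimodal toy objective with several local minima."""
    return x**2 + 10 * math.sin(3 * x)

def local_descent(x, step=0.01, iters=50):
    """Greedy local operator: accept a neighbor only if it improves."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if objective(cand) < objective(x):
            x = cand
    return x

def hybrid_sa(x=5.0, temp=10.0, cooling=0.95, iters=500):
    for _ in range(iters):
        cand = x + random.gauss(0, 1)
        cand = local_descent(cand)           # hybrid step: refine each proposal
        delta = objective(cand) - objective(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand                         # Metropolis acceptance rule
        temp *= cooling
    return x

x = hybrid_sa()
print(round(objective(x), 2))
```

    The local operator is what distinguishes hybrid SA from plain SA: each random proposal is pulled into the bottom of its basin before the acceptance test, improving convergence speed and solution quality.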

  14. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    ERIC Educational Resources Information Center

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  15. Algorithm design, user interface, and optimization procedure for a fuzzy logic ramp metering algorithm : a training manual for freeway operations engineers

    DOT National Transportation Integrated Search

    2000-02-01

    This training manual describes the fuzzy logic ramp metering algorithm in detail, as implemented system-wide in the greater Seattle area. The method of defining the inputs to the controller and optimizing the performance of the algorithm is explained...

  16. Designing optimal stimuli to control neuronal spike timing

    PubMed Central

    Packer, Adam M.; Yuste, Rafael; Paninski, Liam

    2011-01-01

    Recent advances in experimental stimulation methods have raised the following important computational question: how can we choose a stimulus that will drive a neuron to output a target spike train with optimal precision, given physiological constraints? Here we adopt an approach based on models that describe how a stimulating agent (such as an injected electrical current or a laser light interacting with caged neurotransmitters or photosensitive ion channels) affects the spiking activity of neurons. Based on these models, we solve the reverse problem of finding the best time-dependent modulation of the input, subject to hardware limitations as well as physiologically inspired safety measures, that causes the neuron to emit a spike train that with highest probability will be close to a target spike train. We adopt fast convex constrained optimization methods to solve this problem. Our methods can potentially be implemented in real time and may also be generalized to the case of many cells, suitable for neural prosthesis applications. With the use of biologically sensible parameters and constraints, our method finds stimulation patterns that generate very precise spike trains in simulated experiments. We also tested the intracellular current injection method on pyramidal cells in mouse cortical slices, quantifying the dependence of spiking reliability and timing precision on constraints imposed on the applied currents. PMID:21511704

  17. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images.

    PubMed

    Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui

    2017-08-24

    In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning, but factors such as limited resolution, poor correction, and terrain influence reduce the accuracy of extraction. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical images. A strategy for setting the SSAE network structure is given, together with an approach for choosing the number and proportion of training samples to train the SSAE more effectively. The optical data and DSM were combined as input to the optimized SSAE, and after training on the optimized samples, the resulting network can extract buildings with high accuracy and good robustness.

  18. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  19. Efficient design of gain-flattened multi-pump Raman fiber amplifiers using least squares support vector regression

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao

    2018-02-01

    An efficient method to design broadband gain-flattened Raman fiber amplifiers with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equations. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be obtained directly and accurately for any combination of pump wavelengths and powers input to the well-trained model. During the online stage, we incorporate the LS-SVR model into a particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of pump-parameter optimization for Raman fiber amplifier design.

  20. Selection of appropriate training and validation set chemicals for modelling dermal permeability by U-optimal design.

    PubMed

    Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E

    2013-01-01

    Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way, and solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].

  1. Fuzzy controller training using particle swarm optimization for nonlinear system control.

    PubMed

    Karakuzu, Cihan

    2008-04-01

    This paper proposes and describes an effective utilization of particle swarm optimization (PSO) to train a Takagi-Sugeno (TS)-type fuzzy controller. The performance of the proposed fuzzy training method is evaluated, using the obtained simulation results, on two examples of highly nonlinear systems: a continuous stirred tank reactor (CSTR) and a Van der Pol (VDP) oscillator. An advantage of the proposed learning technique is that no partial derivatives with respect to the parameters are needed for learning. This fuzzy learning technique is suitable for real-time implementation, especially if the system model is unknown and supervised training cannot be run. In this study, all parameters of the controller are optimized with PSO in order to demonstrate that a fuzzy controller trained by PSO exhibits good control performance.
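
    A minimal gbest-style PSO loop, of the kind used to tune controller parameters, might look like this; the shifted-sphere objective is a toy stand-in for the control-performance criterion, and all hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def pso(objective, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (global-best topology)."""
    pos = rng.uniform(-5, 5, size=(n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy stand-in for the controller-parameter objective: a shifted sphere
best = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
print(np.round(best, 2))
```

    As the abstract notes, the update uses only objective values, never gradients, which is why PSO suits controller tuning when no differentiable model is available.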

  2. Optimization study on multiple train formation scheme of urban rail transit

    NASA Astrophysics Data System (ADS)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the uneven distribution of passenger flow, but research on this aspect is still limited. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with minimum passenger cost and operation cost as the objectives, and operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed-marshalling operation model, the proposed scheme reduces the two costs by 9.24% and 4.43%, respectively. This result not only confirms the validity of the model but also illustrates the advantages of the multiple-train-formation scheme.

  3. Selection of Hidden Layer Neurons and Best Training Method for FFNN in Application of Long Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj

    2012-05-01

    For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning. A new technique for long-term load forecasting (LTLF) using an optimized feed-forward artificial neural network (FFNN) architecture is presented in this paper, which selects the optimal number of neurons in the hidden layer as well as the best training method for the case study. The prediction performance of the proposed technique is evaluated using the mean absolute percentage error (MAPE) between Thailand's private electricity consumption and the forecasted data. The results obtained are compared with those of classical auto-regressive (AR) and moving average (MA) methods. In general, the proposed method is observed to be more accurate in prediction.
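
    The MAPE criterion used to score the forecasts can be computed as follows (the numbers are illustrative, not the Thailand consumption data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Errors of 10%, 5%, and 0% average to 5%
print(mape([100.0, 200.0, 400.0], [110.0, 190.0, 400.0]))  # 5.0
```

    Lower MAPE means the candidate hidden-layer size and training method produced forecasts closer to the observed consumption.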

  4. Optimal design approach for heating irregular-shaped objects in three-dimensional radiant furnaces using a hybrid genetic algorithm-artificial neural network method

    NASA Astrophysics Data System (ADS)

    Darvishvand, Leila; Kamkari, Babak; Kowsary, Farshad

    2018-03-01

    In this article, a new hybrid method based on the combination of the genetic algorithm (GA) and artificial neural network (ANN) is developed to optimize the design of three-dimensional (3-D) radiant furnaces. A 3-D irregular shape design body (DB) heated inside a 3-D radiant furnace is considered as a case study. The uniform thermal conditions on the DB surfaces are obtained by minimizing an objective function. An ANN is developed to predict the objective function value which is trained through the data produced by applying the Monte Carlo method. The trained ANN is used in conjunction with the GA to find the optimal design variables. The results show that the computational time using the GA-ANN approach is significantly less than that of the conventional method. It is concluded that the integration of the ANN with GA is an efficient technique for optimization of the radiant furnaces.

  5. Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains

    PubMed Central

    Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping

    2017-01-01

    This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally-replaced spare parts, a demand estimation method based on the maintenance strategies of China’s high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer an effective and efficient decision support for inventory managers. PMID:28472097

  6. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be combined with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. All employed machine learning methods achieved very good results, especially in the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also in the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data generalize better than models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach for making high-quality predictions on various data sets and in different compound optimization scenarios.

  7. Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox

    NASA Astrophysics Data System (ADS)

    Li, R. N.; Liu, X.; Liu, S. J.

    2013-12-01

    In order to ensure high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main part of the drive train is analyzed. The rotating speed ratios of three planetary gear trains are selected as the critical research parameters. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on the torque balance and the energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established based on the drive-train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization calculation is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for exactly matching the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter, and synchronous generator; ensure that the drive train works at high efficiency; and give a reference for the selection of the torque converter and hydraulic mechanical transmission.

  8. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise in the original features. Moreover, conventional optimization methods often converge slowly when training CRFs and degrade significantly for tasks with large numbers of samples and features. In this paper, we propose robust CRFs (RCRFs) that select relevant features during training. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient and the historical gradients, and the Lipschitz constant is leveraged to set the proper step size. We show that OGM can train the RCRF model very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This rate is theoretically superior to the O(1/k) convergence rate of previous first-order optimization methods. Extensive experiments on three practical image segmentation tasks demonstrate the efficacy of OGM in training the proposed RCRFs.
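
    The flavor of an optimal (accelerated) first-order method — a gradient step taken from a point extrapolated using historical gradients, with the step size set by the Lipschitz constant — can be shown on a smooth toy objective. This is Nesterov's classical scheme on a least-squares problem, not the paper's RCRF objective; the l1 term and the CRF likelihood are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
# Smooth convex toy objective f(w) = 0.5 ||A w - b||^2, standing in for a
# smoothed regularized CRF training objective.
A = rng.normal(size=(50, 20))
b = rng.normal(size=50)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient

grad = lambda w: A.T @ (A @ w - b)
f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])

# Plain gradient descent: O(1/k) worst-case rate.
w = np.zeros(20)
for _ in range(100):
    w = w - grad(w) / L
gd_gap = f(w) - f_star

# Accelerated ("optimal") gradient method: the gradient step is taken from a
# point extrapolated with momentum built from the history; O(1/k^2) rate.
w, y_ex, t = np.zeros(20), np.zeros(20), 1.0
for _ in range(100):
    w_next = y_ex - grad(y_ex) / L
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y_ex = w_next + (t - 1) / t_next * (w_next - w)   # momentum from history
    w, t = w_next, t_next
ogm_gap = f(w) - f_star

print(gd_gap, ogm_gap)   # the accelerated method closes the gap much faster
```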

  9. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimization (PSO) and opposition-based particle swarm optimization (OPSO) to optimize the parameters of support vector machines (SVMs). However, the use of random values in the velocity calculation decreases the performance of these techniques: during the velocity computation, random values are normally used for the acceleration coefficients, which introduces randomness into the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate the proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). Performance is evaluated using two human face databases, YALE and CASIA, and the UBIRIS iris dataset. The method first performs feature extraction and then recognition on the extracted features: the extracted features are used for SVM training and testing, during which the SVM parameters are optimized with the AAPSO technique, whose acceleration coefficients are computed from the particle fitness values. The SVM parameters optimized by AAPSO perform efficiently for both face and iris recognition. A comparative analysis between the proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
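
    One plausible reading of the adaptive-acceleration idea is sketched below: the acceleration coefficients are derived from particle fitness ranks rather than being fixed constants. The toy fitness function stands in for the SVM cross-validation error, and the stochastic exploration terms of standard PSO are retained in this sketch; AAPSO's exact coefficient formula is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy fitness standing in for SVM cross-validation error over its parameters.
f = lambda X: np.sum((X - 0.5) ** 2, axis=1)

n, d, w = 20, 3, 0.7
X = rng.uniform(0, 1, size=(n, d))
V = np.zeros((n, d))
pbest, pbest_f = X.copy(), f(X)
g = pbest[np.argmin(pbest_f)]

for _ in range(200):
    # Fitness-adaptive acceleration: better-ranked particles get a smaller
    # coefficient (exploit), worse-ranked a larger one (explore).
    rank = np.argsort(np.argsort(pbest_f))            # 0 = best particle
    c = (0.5 + 1.0 * rank / (n - 1))[:, None]
    V = (w * V
         + c * rng.uniform(size=(n, d)) * (pbest - X)
         + c * rng.uniform(size=(n, d)) * (g - X))
    X = np.clip(X + V, 0, 1)
    fx = f(X)
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], fx[improved]
    g = pbest[np.argmin(pbest_f)]

print(g)   # converges toward the optimum at (0.5, 0.5, 0.5)
```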

  10. Planning a sports training program using Adaptive Particle Swarm Optimization with emphasis on physiological constraints.

    PubMed

    Kumyaito, Nattapon; Yupapin, Preecha; Tamee, Kreangsak

    2018-01-08

    An effective training plan is an important factor in sports training for enhancing athletic performance. A poorly considered training plan may result in injury and overtraining. Good training plans normally require expert input, whose cost may be too great for many athletes, particularly amateurs. The objective of this research was to create a practical cycling training plan that substantially improves athletic performance while satisfying essential physiological constraints. Adaptive Particle Swarm Optimization with an ɛ-constraint method was used to formulate such a plan and simulate the likely performance outcomes. The physiological constraints considered in this study were monotony, the chronic training load ramp rate, and the daily training impulse. A comparison of our simulation results against a training plan from British Cycling, used as our standard, showed that our training plan outperformed the benchmark in terms of both athletic performance and satisfying all physiological constraints.

  11. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized in the literature as an effective approach for example-based single-image super-resolution (SR). In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the reconstruction error metric. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method yields high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
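
    The joint optimization in the training stage can be sketched as an EM-style alternation on synthetic piecewise-linear data: refit one linear mapping per cluster (M-step), then reassign each sample to the mapping with the smallest reconstruction error (E-step). The data, number of clusters, and initialization below are assumptions for illustration, not the paper's patch dictionaries.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data with two linear regimes, standing in for LR->HR patch pairs.
x = rng.uniform(-1, 1, size=(300, 1))
y = np.where(x < 0, 2 * x + 1, -x + 1) + 0.01 * rng.normal(size=(300, 1))
X1 = np.hstack([x, np.ones((300, 1))])      # affine features

K = 2
centers = np.array([[-0.5], [0.5]])         # crude init: cluster on inputs
assign = np.argmin(np.abs(x - centers.T), axis=1)

for _ in range(10):
    # M-step: refit one linear mapping per cluster.
    Ws = []
    for k in range(K):
        Wk, *_ = np.linalg.lstsq(X1[assign == k], y[assign == k], rcond=None)
        Ws.append(Wk)
    # E-step: reassign each sample to the mapping that reconstructs it best.
    errs = np.stack([((X1 @ W - y) ** 2).ravel() for W in Ws], axis=1)
    assign = np.argmin(errs, axis=1)

mse = float(errs.min(axis=1).mean())
print(mse)   # near the injected noise floor
```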

  12. Preferred Learning Styles of Professional Undergraduate and Graduate Athletic Training Students

    ERIC Educational Resources Information Center

    Thon, Sarah; Hansen, Pamela

    2015-01-01

    Context: Recognizing the preferred learning style of professional undergraduate and graduate athletic training students will equip educators to more effectively improve their teaching methods and optimize student learning. Objective: To determine the preferred learning style of professional undergraduate and graduate athletic training students…

  13. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, M.; Hu, N. Q.; Qin, G. J.

    2011-07-01

    To support knowledge-based damage assessment of helicopter power train structures, a method was proposed that directly extracts optimal generalized decision rules for fault diagnosis from incomplete historical test records, based on granular computing (GrC). Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation and used to construct the resolution function matrix. The optimal generalized decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. The application of the method is presented through a fault diagnosis example for a power train, and its validity for knowledge acquisition is demonstrated.

  14. Methodical and technological aspects of creation of interactive computer learning systems

    NASA Astrophysics Data System (ADS)

    Vishtak, N. M.; Frolov, D. A.

    2017-01-01

    The article presents a methodology for the development of an interactive computer training system for power plant personnel. The methods used in this work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of interactive computer training systems for personnel preparation in educational and training centers is demonstrated. The development stages of such systems are identified, and the factors for their efficient use are analysed. An algorithm for the work performed at each development stage is offered that optimizes the time, financial, and labor expenditure of creating an interactive computer training system.

  15. Optimizing support vector machine learning for semi-arid vegetation mapping by using clustering analysis

    NASA Astrophysics Data System (ADS)

    Su, Lihong

    In remote sensing communities, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM theory, the SVM classification decision function is fully determined by the support vectors, which form a subset of the training set. In this regard, one way to optimize SVM learning is to efficiently reduce the training set. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple-angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% of its size using the proposed approach. Maximum likelihood classification (MLC) is also applied on the reduced training sets, and the results show that MLC likewise maintains its classification accuracy. This implies that the most informative data instances are retained by this approach.
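
    A minimal version of the reduction idea, assuming centroid-linkage agglomerative clustering and with a 1-NN classifier standing in for the SVM; the data, cluster counts, and classifier are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two well-separated classes standing in for multi-angle remote sensing samples.
X0 = rng.normal([0.0, 0.0], 0.8, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 0.8, size=(100, 2))

def reduce_class(X, k):
    """Agglomerative (centroid-linkage) clustering down to k centroids."""
    clusters, sizes = list(X), [1] * len(X)
    while len(clusters) > k:
        C = np.array(clusters)
        D = np.sum((C[:, None] - C[None]) ** 2, axis=-1)
        np.fill_diagonal(D, np.inf)
        i, j = np.unravel_index(np.argmin(D), D.shape)   # closest pair
        s = sizes[i] + sizes[j]
        merged = (sizes[i] * C[i] + sizes[j] * C[j]) / s
        for idx in sorted((i, j), reverse=True):
            del clusters[idx]; del sizes[idx]
        clusters.append(merged); sizes.append(s)
    return np.array(clusters)

# Reduce 200 training samples to 30 cluster centroids (15 per class).
Xtr = np.vstack([reduce_class(X0, 15), reduce_class(X1, 15)])
ytr = np.array([0] * 15 + [1] * 15)

def nn_predict(Xte):
    """1-NN stands in for the SVM in this sketch."""
    d = np.sum((Xte[:, None] - Xtr[None]) ** 2, axis=-1)
    return ytr[np.argmin(d, axis=1)]

Xte = np.vstack([rng.normal([0.0, 0.0], 0.8, size=(50, 2)),
                 rng.normal([3.0, 3.0], 0.8, size=(50, 2))])
yte = np.array([0] * 50 + [1] * 50)
acc = float(np.mean(nn_predict(Xte) == yte))
print(acc)   # accuracy stays high despite the 85% reduction of the training set
```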

  16. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of ICA-based brain-computer interface (BCI) systems. In this study, we propose a new algorithmic framework that addresses this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were used to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate artifact data segments within a single trial and investigated which types of artifacts influence the performance of ICA-based motor imagery BCIs (MIBCIs). Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimization strategy effectively improves the stability, practicality, and classification performance of ICA-based MIBCIs. The study reveals that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.

  17. Trip optimization system and method for a train

    DOEpatents

    Kumar, Ajith Kuttannair; Shaffer, Glenn Robert; Houpt, Paul Kenneth; Movsichoff, Bernardo Adrian; Chan, David So Keung

    2017-08-15

    A system for operating a train having one or more locomotive consists, with each locomotive consist comprising one or more locomotives. The system includes a locator element to determine a location of the train, a track characterization element to provide information about a track, a sensor for measuring an operating condition of the locomotive consist, a processor operable to receive information from the locator element, the track characterization element, and the sensor, and an algorithm embodied within the processor having access to the information to create a trip plan that optimizes performance of the locomotive consist in accordance with one or more operational criteria for the train.

  18. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

    We present a novel method to optimize the discrimination ability and noise robustness of composite correlation filters. The method is based on iterative preprocessing of the training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio of authentic faces and conferring immunity to intra-class variance and noise interference. By adding the training images directly, one obtains a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve the complicated mathematical analysis and computation often required in the design of correlation algorithms. Simulation tests confirm the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method for counting true positive and false positive rates in which the difference between the PCE and a threshold is used.

  19. A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine

    PubMed Central

    Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini

    2013-01-01

    A new machine learning method, referred to as F-score_ELM, was proposed to classify lying and truth-telling using electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses of these subjects. A recently developed classifier, the extreme learning machine (ELM), was then combined with F-score, a simple but effective feature selection method, to jointly optimize the number of hidden nodes of the ELM and the feature subset via a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models, including training and testing time, sensitivity and specificity on the training and testing sets, and network size. The experimental results showed that the number of hidden nodes can be effectively optimized by the proposed method. Moreover, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
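
    Both ingredients are easy to sketch: the F-score of a feature compares between-class mean separation to within-class variance, and an ELM is a random hidden layer whose output weights are solved in closed form by least squares. The data below are synthetic (31 features as in the paper, but with a hypothetical 5 informative ones); the grid search over hidden-node counts is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in for the EEG features: 31 features, of which only the
# first 5 carry class information; labels +1 (guilty) / -1 (innocent).
n = 200
y = np.where(rng.uniform(size=n) < 0.5, 1.0, -1.0)
X = rng.normal(size=(n, 31))
X[:, :5] += 1.2 * y[:, None]            # informative features

def f_score(X, y):
    """Per-feature F-score: between-class separation over within-class spread."""
    pos, neg = X[y > 0], X[y < 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    return num / (pos.var(0, ddof=1) + neg.var(0, ddof=1))

keep = np.argsort(f_score(X, y))[::-1][:5]     # select the top-scoring features
Xs = X[:, keep]

# Extreme learning machine: random hidden layer, output weights in closed form.
L = 40
W, b = rng.normal(size=(Xs.shape[1], L)), rng.normal(size=L)
H = np.tanh(Xs @ W + b)
beta = np.linalg.pinv(H) @ y          # least-squares output weights, no iteration

acc = float(np.mean(np.sign(H @ beta) == y))
print(sorted(int(i) for i in keep), acc)   # the informative features score highest
```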

  20. Modeling level change in Lake Urmia using hybrid artificial intelligence approaches

    NASA Astrophysics Data System (ADS)

    Esbati, M.; Ahmadieh Khanesar, M.; Shahzadi, Ali

    2017-06-01

    The investigation of water level fluctuations in lakes, given the national and regional importance of these water bodies, has received special attention in recent years. Predicting the water level balance of Lake Urmia is necessary because of the several-meter fluctuations of the last decade, and such prediction helps prevent possible future losses. For this purpose, this paper studies the performance of an adaptive neuro-fuzzy inference system (ANFIS) for predicting the lake's water level balance. Particle swarm optimization (PSO) and a hybrid backpropagation-recursive least squares algorithm are used to train the ANFIS. Moreover, a hybrid training algorithm based on particle swarm optimization and recursive least squares (PSO-RLS) is introduced for training the ANFIS structure. For a fairer comparison, hybrid particle swarm optimization with gradient descent is also applied. The models were trained, tested, and validated on lake level data between 1991 and 2014, and their performance is compared. Numerical results show that the proposed methods predict the water level balance well, with reasonable error. It is also clear that if the current trend continues, Lake Urmia will experience a further drop in water level in the upcoming years.

  1. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize combinations of reconstruction error, sparsity priors, and discriminative terms, so the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse-code learning models in the training and testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods employ only the l0 or l1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses a sparsity term and a Laplacian term to characterize the intrinsic data structure; the lower level is subordinate to the upper level. Our model therefore achieves overall optimality for recognition in that the learnt dictionary is directly tailored to recognition, and the sparse-code learning models in the training and testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem: it first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.

  2. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background: Identifying functional elements, such as transcription factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to the limited availability of training samples. Results: We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods that allow the model complexity to be adjusted for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data, and computational time. Our OMiMa system is, to our knowledge, the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion: Our optimized mixture of Markov models represents an alternative to existing methods for modeling dependent structures within a biological motif. The model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
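
    A plain first-order Markov motif model — the base case that a mixture model like OMiMa generalizes — can be sketched directly. The pseudocounts, the CG-rich toy motif, and the log-odds scoring against a background model below are illustrative assumptions, not OMiMa itself.

```python
import numpy as np

rng = np.random.default_rng(6)
ALPH = "ACGT"
IDX = {c: i for i, c in enumerate(ALPH)}

def train_markov(seqs, pseudo=1.0):
    """First-order Markov model: log start and transition probabilities."""
    start = np.full(4, pseudo)
    trans = np.full((4, 4), pseudo)
    for s in seqs:
        start[IDX[s[0]]] += 1
        for a, b in zip(s, s[1:]):
            trans[IDX[a], IDX[b]] += 1
    return np.log(start / start.sum()), np.log(trans / trans.sum(1, keepdims=True))

def log_lik(model, s):
    start, trans = model
    return start[IDX[s[0]]] + sum(trans[IDX[a], IDX[b]] for a, b in zip(s, s[1:]))

# Hypothetical training data: motif instances favour CG dinucleotides;
# background sequences are uniform random.
motifs = ["".join(rng.choice(["CG", "C", "G"], p=[0.6, 0.2, 0.2])
                  for _ in range(5)) for _ in range(200)]
background = ["".join(rng.choice(list(ALPH), size=8)) for _ in range(200)]

m_motif, m_bg = train_markov(motifs), train_markov(background)
score = lambda s: log_lik(m_motif, s) - log_lik(m_bg, s)   # log-odds motif score
print(score("CGCGCGCG"), score("ATTATAAT"))
```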

  3. Analysis of Artificial Neural Network Backpropagation Using Conjugate Gradient Fletcher Reeves In The Predicting Process

    NASA Astrophysics Data System (ADS)

    Wanto, Anjar; Zarlis, Muhammad; Sawaluddin; Hartama, Dedy

    2017-12-01

    Backpropagation is an effective artificial neural network algorithm for prediction tasks, one of which is predicting the rate of the Consumer Price Index (CPI) for the foodstuff sector. The Fletcher-Reeves conjugate gradient is a suitable optimization method to pair with backpropagation, because it can reduce the number of iterations without degrading the quality of the training and testing results. The CPI data to be predicted come from the Central Statistics Agency (BPS) Pematangsiantar. The results of this study are expected to help the government make policies that improve economic growth. In this study, the data are processed by training and testing an artificial neural network with backpropagation, using a learning rate of 0.01 and a minimum target error of 0.001-0.09. The training network is built with binary and bipolar sigmoid activation functions. After the backpropagation results are obtained, they are optimized using the Fletcher-Reeves conjugate gradient method by conducting the same training and testing on 5 predefined network architectures. The results show that the method increases both speed and accuracy.
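
    The Fletcher-Reeves update itself is compact: each search direction is the new negative gradient plus the previous direction scaled by the ratio of successive squared gradient norms. Below it is applied to a toy quadratic loss with an exact line search (available only in the quadratic case), not to an actual backpropagation network.

```python
import numpy as np

rng = np.random.default_rng(7)
# Convex quadratic "training loss" 0.5 w'Qw - b'w standing in for the
# network's error surface.
M = rng.normal(size=(10, 10))
Q = M @ M.T + np.eye(10)
b = rng.normal(size=10)
grad = lambda w: Q @ w - b

w = np.zeros(10)
g = grad(w)
d = -g
for _ in range(10):
    if g @ g < 1e-24:                    # already converged
        break
    alpha = (g @ g) / (d @ Q @ d)        # exact line search on a quadratic
    w = w + alpha * d
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new

print(np.linalg.norm(grad(w)))           # ~0 after at most dim(w) steps
```

    On a quadratic this reduces to linear conjugate gradients, which is why it terminates in at most dim(w) iterations; in network training the same direction update is used with an inexact line search.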

  4. Relabeling exchange method (REM) for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms learning error when `optimal' label assignment schemes are used. We investigated two efficient random search algorithms to solve the relabeling problem, simulated annealing and the genetic algorithm, but found them to be computationally expensive. We therefore introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is general and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  5. Aerodynamic design on high-speed trains

    NASA Astrophysics Data System (ADS)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-04-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects their economy, environmental impact, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are laid out, including numerical simulation, reduced-scale tests, and full-scale tests. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes concept design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly applied to key parts such as the current-collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  6. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.

  7. Method of multi-mode vibration control for the carbody of high-speed electric multiple unit trains

    NASA Astrophysics Data System (ADS)

    Gong, Dao; Zhou, Jinsong; Sun, Wenjing; Sun, Yu; Xia, Zhanghui

    2017-11-01

    A method of multi-mode vibration control for the carbody of high-speed electric multiple unit (EMU) trains, using the onboard and suspended equipment as dynamic vibration absorbers (DVAs), is proposed. The effect of multi-mode vibration on the ride quality of a high-speed EMU train was studied, and the target modes of vibration control were determined. An equivalent mass identification method was used to determine the equivalent mass for the target modes at the device installation positions. To optimize the vibration acceleration response of the carbody, the natural frequencies and damping ratios of the lateral and vertical vibration were designed based on the theory of dynamic vibration absorption. In order to realize the optimized design values of the natural frequencies for the lateral and vertical vibrations simultaneously, a new type of vibration absorber was designed in which a Belleville spring and conventional rubber parts are connected in parallel; this design utilizes the negative stiffness of the Belleville spring. Results show that, compared to rigid equipment connections, the proposed method effectively reduces the multi-mode vibration of a carbody in a high-speed EMU train, thereby achieving the control objectives. The ride quality in terms of the lateral and vertical vibration of the carbody is considerably improved. Moreover, the optimal value of the damping ratio is effective in dissipating the vibration energy, which reduces the vibration of both the carbody and the equipment.
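
    For a single target mode, the classical dynamic-vibration-absorber tuning (Den Hartog's formulas) fixes the absorber frequency and damping ratio from the mass ratio. The masses and modal frequency below are invented round numbers, not values from the paper, and the paper's Belleville-spring realization is not modeled.

```python
import math

# Illustrative numbers (not from the paper): one target carbody bending mode.
M_eq = 4000.0      # identified equivalent modal mass at the mounting point, kg
f_mode = 9.5       # target mode frequency, Hz
m_dva = 600.0      # mass of the equipment acting as the absorber, kg

mu = m_dva / M_eq                                         # mass ratio
f_opt = f_mode / (1.0 + mu)                               # optimal DVA frequency, Hz
zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio

# Elastic-mount stiffness and damping that realize this tuning.
k = m_dva * (2.0 * math.pi * f_opt) ** 2                  # N/m
c = 2.0 * zeta_opt * m_dva * (2.0 * math.pi * f_opt)      # N*s/m
print(f_opt, zeta_opt, k, c)
```

    The heavier the equipment relative to the identified modal mass, the lower the tuned frequency and the higher the optimal damping, which is why the equivalent mass identification step in the paper comes first.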

  8. Front-End Analysis Methods for the Noncommissioned Officer Education System

    DTIC Science & Technology

    2013-02-01

    The Noncommissioned Officer Education System plays a crucial role in Soldier development by providing both institutional training and structured self-... created challenges with maintaining currency of institutional training. Questions have arisen regarding the optimal placement of tasks as their ... relevance changes, especially considering the resources required to update institutional training. An analysis was conducted to identify the ...

  9. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration methods require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is impractical in many real image processing applications, so recovery must often proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods must make additional assumptions to construct restrictions. Because the PSF and noise energy vary, blurred images can differ widely, and it is difficult to balance proper assumptions against high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for a support vector machine is essential to the training result. As a novel meta-heuristic, the fruit fly optimization algorithm (FOA) can handle such optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, training samples map a neighborhood in the degraded image to the corresponding central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR, whose two parameters are optimized through FOA; the FOA fitness function is the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show that the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it restores faster and performs better. Both objective and subjective restoration performance are studied in the comparison experiments.
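    As a rough, generic sketch of how the FOA searches for a parameter (not the paper's LSSVR training, which is not reproduced here), the classic formulation perturbs a swarm location, judges each fly by the objective evaluated at the reciprocal-distance "smell" coordinate, and moves the swarm to the best fly; the objective `err` below is an assumed stand-in for the LSSVR validation error:

```python
import numpy as np

def foa_minimize(objective, n_flies=20, n_iter=100, seed=0):
    """Minimal fruit fly optimization sketch (2-D search plane).

    Flies search randomly around the swarm location; the "smell
    concentration" coordinate is the reciprocal of each fly's distance
    to the origin, following the classic FOA formulation.
    """
    rng = np.random.default_rng(seed)
    swarm = rng.uniform(0.0, 1.0, size=2)        # swarm location (X_axis, Y_axis)
    best_val = np.inf
    best_pos = swarm.copy()
    for _ in range(n_iter):
        # random osphresis search around the swarm location
        cand = swarm + rng.uniform(-0.5, 0.5, size=(n_flies, 2))
        dist = np.linalg.norm(cand, axis=1) + 1e-12
        s = 1.0 / dist                            # smell concentration judgment
        vals = np.array([objective(si) for si in s])
        i = int(np.argmin(vals))
        if vals[i] < best_val:                    # vision: move swarm to best fly
            best_val = vals[i]
            best_pos = cand[i]
            swarm = cand[i]
    return 1.0 / (np.linalg.norm(best_pos) + 1e-12), best_val

# assumed stand-in for the LSSVR cross-validation error of a parameter s
err = lambda s: (s - 0.3) ** 2
s_best, e_best = foa_minimize(err)
```

With only one objective evaluation per fly and no gradients, the same loop can tune a regularization/kernel-width pair by treating each parameter as one such search.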

  10. Training-based descreening.

    PubMed

    Siddiqui, Hasib; Bouman, Charles A

    2007-03-01

    Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used to print images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm combines two nonlinear image-processing techniques: resolution synthesis-based denoising (RSD) and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate of the descreened image. The modified SUSAN filter uses the output of RSD to perform edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method requires no knowledge of the screening method that produced the printed original, such as the screen frequency or dither matrix coefficients. The proposed scheme not only suppresses Moiré artifacts but can also be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
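    The smoothing stage can be illustrated with a minimal edge-preserving filter in the spirit of SUSAN (intensity-similarity weights suppress neighbors across an edge); this is a generic sketch, not the paper's modified filter, and the radius and similarity threshold are assumed values:

```python
import numpy as np

def susan_like_smooth(img, radius=2, t=0.1):
    """Edge-preserving smoothing in the spirit of the SUSAN filter.

    Each pixel is replaced by a weighted mean of its neighbourhood,
    where neighbours of similar intensity (the "univalue segment") get
    high weight and dissimilar ones are suppressed, so fine noise is
    averaged out while true edges survive.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            wgt = np.exp(-((patch - img[i, j]) / t) ** 2)  # similarity weights
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# step edge corrupted by mild noise, standing in for residual screen noise
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((8, 8)), np.ones((8, 8))]) + 0.05 * rng.standard_normal((8, 16))
sm = susan_like_smooth(img)
```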

  11. Periodized Nutrition for Athletes.

    PubMed

    Jeukendrup, Asker E

    2017-03-01

    It is becoming increasingly clear that adaptations initiated by exercise can be amplified or reduced by nutrition. Various methods have been discussed to optimize training adaptations, and some have been studied extensively. To date, most methods have focused on skeletal muscle, but training effects also include adaptations in other tissues (e.g., brain, vasculature), improvements in the absorptive capacity of the intestine, increases in tolerance to dehydration, and other effects that have received less attention in the literature. The purpose of this review is to define the concept of periodized nutrition (also referred to as nutritional training) and to summarize the wide variety of methods available to athletes. The reader is referred to several other recent review articles that discuss aspects of periodized nutrition in much more detail, primarily with a focus on adaptations in the muscle. The aim here is not to discuss the literature in great detail but to define the concept clearly and to give a complete overview of the available methods, with an emphasis on adaptations outside the muscle. Whilst there is good evidence for some methods, other proposed methods are mere theories that remain to be tested. 'Periodized nutrition' refers to the strategic combined use of exercise training and nutrition, or nutrition alone, with the overall aim of obtaining adaptations that support exercise performance. The term nutritional training is sometimes used to describe the same methods, and the terms can be used interchangeably. This review gives an overview of some of the most common methods of periodized nutrition, including 'training low' and 'training high' (training with low and high carbohydrate availability, respectively). 'Training low' in particular has received considerable attention, and several variations of 'train low' have been proposed.
'Training-low' studies have generally shown beneficial effects in terms of signaling and transcription, but to date, few studies have been able to show any effects on performance. In addition to 'train low' and 'train high', methods have been developed to 'train the gut', train hypohydrated (to reduce the negative effects of dehydration), and train with various supplements that may increase the training adaptations longer term. Which of these methods should be used depends on the specific goals of the individual and there is no method (or diet) that will address all needs of an individual in all situations. Therefore, appropriate practical application lies in the optimal combination of different nutritional training methods. Some of these methods have already found their way into training practices of athletes, even though evidence for their efficacy is sometimes scarce at best. Many pragmatic questions remain unanswered and another goal of this review is to identify some of the remaining questions that may have great practical relevance and should be the focus of future research.

  12. Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen

    2017-08-29

    In recent years, sparse representation-based classification (SRC) has become one of the most successful methods, showing impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC degrades significantly. To address this problem, we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which discriminative features of the data in the two domains are learned simultaneously with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed around the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that, in the projected low-dimensional space, the between-class sparse reconstruction residuals of data from both domains are maximized and the within-class sparse reconstruction residuals are minimized. The resulting representations thus fit SRC well and simultaneously have better discriminant ability. In addition, our method can easily be extended to multiple domains and can be kernelized to handle nonlinearly structured data. The optimal solution for the proposed method is obtained efficiently by alternating optimization. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
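    The SRC decision rule that OCPD-SRC builds on can be sketched as follows; for brevity, a per-class least-squares code stands in for the global l1-sparse code, so this illustrates only the minimum-residual rule, not the paper's coupled projection learning:

```python
import numpy as np

def src_like_classify(D_all, y, x, classes):
    """Classify x by minimum class-wise reconstruction residual.

    Simplified stand-in for SRC: each class's sub-dictionary reconstructs
    x by least squares, and the class with the smallest residual wins --
    the decision rule OCPD-SRC is designed around.
    """
    res = []
    for c in classes:
        D = D_all[:, y == c]                       # class-c sub-dictionary
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        res.append(np.linalg.norm(x - D @ coef))
    return classes[int(np.argmin(res))]

rng = np.random.default_rng(0)
d0, d1 = rng.standard_normal((2, 20))              # one direction per class
D_all = np.column_stack([np.outer(d0, rng.standard_normal(5)),
                         np.outer(d1, rng.standard_normal(5))])
D_all += 0.01 * rng.standard_normal(D_all.shape)   # slight atom noise
y = np.array([0] * 5 + [1] * 5)
pred = src_like_classify(D_all, y, 2.0 * d1 + 0.02 * rng.standard_normal(20), [0, 1])
```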

  13. Managing simulation-based training: A framework for optimizing learning, cost, and time

    NASA Astrophysics Data System (ADS)

    Richmond, Noah Joseph

    This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation-based training (SBT) and reality-based training (RBT) as resources. Simulation is shown to be an effective means for implementing activity substitution as a way to reduce risk. The risk profiles of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints on cost and time, and also subject to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are provided next. Finally, a simulation strategy including an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.
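    The core trade-off (maximize learning from a mix of RBT and SBT hours under cost and time budgets, with SBT hours discounted by a TER) can be sketched with a tiny brute-force search; every number below (TER value, hourly costs, budgets) is an assumed illustration, not data from the study:

```python
# Hypothetical SBT/RBT allocation: maximize learning subject to budgets.
ter = 0.6                           # transfer effectiveness ratio (assumed)
cost = {"rbt": 10.0, "sbt": 1.0}    # cost per hour (assumed units)
budget_cost, budget_time = 200.0, 60.0

best = (0.0, 0, 0)
for rbt in range(0, 61):            # hours of reality-based training
    for sbt in range(0, 61):        # hours of simulation-based training
        if cost["rbt"] * rbt + cost["sbt"] * sbt > budget_cost:
            continue                # over the cost budget
        if rbt + sbt > budget_time:
            continue                # over the time budget
        learning = rbt + ter * sbt  # SBT hours discounted by the TER
        if learning > best[0]:
            best = (learning, rbt, sbt)
learning, rbt_h, sbt_h = best
```

Even this toy version shows the framework's point: because simulator hours are cheap but partially effective, the optimum mixes a few expensive RBT hours with many TER-discounted SBT hours.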

  14. Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-03-01

    Over the last couple of decades, the development of computerized image segmentation has shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors measuring how reliable the detected landmarks are. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated on the problem of lung field segmentation from chest radiographs.

  15. Evaluating Methods of Updating Training Data in Long-Term Genomewide Selection

    PubMed Central

    Neyhart, Jeffrey L.; Tiede, Tyler; Lorenz, Aaron J.; Smith, Kevin P.

    2017-01-01

    Genomewide selection is hailed for its ability to facilitate greater genetic gains per unit time. Over breeding cycles, the requisite linkage disequilibrium (LD) between quantitative trait loci and markers is expected to change as a result of recombination, selection, and drift, leading to a decay in prediction accuracy. Previous research has identified the need to update the training population using data that may capture new LD generated over breeding cycles; however, optimal methods of updating have not been explored. In a barley (Hordeum vulgare L.) breeding simulation experiment, we examined prediction accuracy and response to selection when updating the training population each cycle with the best predicted lines, the worst predicted lines, both the best and worst predicted lines, random lines, criterion-selected lines, or no lines. In the short term, we found that updating with the best predicted lines or the best and worst predicted lines resulted in high prediction accuracy and genetic gain, but in the long term, all methods (besides not updating) performed similarly. We also examined the impact of including all data in the training population or only the most recent data. Though patterns among update methods were similar, using a smaller but more recent training population provided a slight advantage in prediction accuracy and genetic gain. In an actual breeding program, a breeder might desire to gather phenotypic data on lines predicted to be the best, perhaps to evaluate possible cultivars. Therefore, our results suggest that an optimal method of updating the training population is also very practical. PMID:28315831
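    The update loop itself can be sketched in a few lines; note this toy omits recombination, selection-driven LD change, and drift (the phenomena the study actually simulates), uses ridge regression as a stand-in for GBLUP, and all sizes and variances are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n_mark = 50
true_u = rng.standard_normal(n_mark) * 0.3               # "true" marker effects

def ridge_effects(X, y, lam=1.0):
    """Ridge-regression marker effects, a simple stand-in for GBLUP."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def new_lines(n):
    X = rng.integers(0, 3, size=(n, n_mark)).astype(float)  # 0/1/2 genotypes
    return X, X @ true_u + rng.standard_normal(n)           # phenotypes

X_tr, y_tr = new_lines(100)                  # initial training population
acc = []
for cycle in range(5):
    u_hat = ridge_effects(X_tr, y_tr)
    X_c, y_c = new_lines(200)                # this cycle's selection candidates
    pred = X_c @ u_hat
    acc.append(float(np.corrcoef(pred, y_c)[0, 1]))
    top = np.argsort(pred)[-20:]             # phenotype the best-predicted lines...
    X_tr = np.vstack([X_tr, X_c[top]])       # ...and fold them into the training set
    y_tr = np.concatenate([y_tr, y_c[top]])
```

Swapping `top` for the worst-predicted, random, or criterion-selected indices reproduces the alternative update rules the study compares.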

  16. Extracting physicochemical features to predict protein secondary structure.

    PubMed

    Huang, Yin-Fu; Chen, Shu-Ying

    2013-01-01

    We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features: conformation parameters, net charge, hydrophobicity, and side-chain mass. First, the SVM with the optimal window size and the optimal kernel-function parameters is found. Then, we train the SVM using the PSSM profiles generated by PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, a filter refines the predictions of the trained SVM. For all performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all measures are higher than those of the SVMpsi and SVMfreq methods. This validates that considering these physicochemical features when predicting protein secondary structure yields better performance.
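    The windowed PSSM encoding that such predictors feed to an SVM can be sketched as follows (a generic sliding-window featurization; the window size of 7 and the zero padding at chain ends are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

def window_features(pssm, w=7):
    """Sliding-window features for per-residue prediction.

    Each residue is described by the w rows of the PSSM centred on it
    (zero-padded at the chain ends), flattened into one vector -- the
    standard encoding fed to an SVM in PSSM-based secondary-structure
    predictors.
    """
    n, a = pssm.shape                 # residues x 20 amino-acid scores
    half = w // 2
    padded = np.vstack([np.zeros((half, a)), pssm, np.zeros((half, a))])
    return np.array([padded[i:i + w].ravel() for i in range(n)])

# toy PSSM for a 30-residue chain
pssm = np.random.default_rng(0).standard_normal((30, 20))
X = window_features(pssm, w=7)
```

The paper's physicochemical features would simply be appended to each window vector before training.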

  17. Extracting Physicochemical Features to Predict Protein Secondary Structure

    PubMed Central

    Chen, Shu-Ying

    2013-01-01

    We propose a protein secondary structure prediction method based on position-specific scoring matrix (PSSM) profiles and four physicochemical features: conformation parameters, net charge, hydrophobicity, and side-chain mass. First, the SVM with the optimal window size and the optimal kernel-function parameters is found. Then, we train the SVM using the PSSM profiles generated by PSI-BLAST and the physicochemical features extracted from the CB513 data set. Finally, a filter refines the predictions of the trained SVM. For all performance measures of our method, Q3 reaches 79.52, SOV94 reaches 86.10, and SOV99 reaches 74.60; all measures are higher than those of the SVMpsi and SVMfreq methods. This validates that considering these physicochemical features when predicting protein secondary structure yields better performance. PMID:23766688

  18. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    According to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by GD starting from initial weights obtained by improved-GA optimization, combining the global parallel search capability of the stochastic algorithm with the local convergence speed of the deterministic algorithm to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
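    The hybrid idea (a stochastic global search supplies initial weights, then a deterministic local method refines them) can be sketched on a one-parameter multimodal objective; this is a generic GA-plus-gradient-descent illustration with assumed hyperparameters, not the paper's WNN training:

```python
import numpy as np

def hybrid_minimize(f, grad, bounds, pop=30, gens=40, gd_steps=200, lr=0.01, seed=0):
    """GA for a global starting point, then gradient descent to refine.

    Mirrors the hybrid training scheme: a stochastic global search picks
    initial "weights", and a fast deterministic local method finishes.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=pop)
    for _ in range(gens):
        fit = f(P)
        parents = P[np.argsort(fit)[: pop // 2]]        # truncation selection
        children = parents + rng.normal(0, 0.1 * (hi - lo), size=parents.shape)
        P = np.concatenate([parents, np.clip(children, lo, hi)])
    x = P[np.argmin(f(P))]                              # GA result = initial point
    for _ in range(gd_steps):                           # gradient-descent refinement
        x -= lr * grad(x)
    return x

# multimodal objective: plain GD from a random start often hits a local minimum
f = lambda x: np.sin(5 * x) + 0.1 * x ** 2
g = lambda x: 5 * np.cos(5 * x) + 0.2 * x
x_star = hybrid_minimize(f, g, bounds=(-4.0, 4.0))
```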

  19. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute the optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for 1) the reinforcement learning-based training method to the optimal solution, 2) the training error, and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights, and it provides comprehensive feedback solutions online, though it is trained offline.

  20. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers

    PubMed Central

    2018-01-01

    Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publically available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with the small amount of training data as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which equates favorably with the state-of-the-art methods. PMID:29304512
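    The sample-selection idea can be sketched as follows: compute soft class memberships, score each sample's fuzziness, and query the most ambiguous samples. The membership model (distances to assumed class means) and all sizes are illustrative only:

```python
import numpy as np

def fuzziness(memberships):
    """Per-sample fuzziness of soft classifier outputs (De Luca-Termini style):
    maximal when a membership is 0.5, zero when it is 0 or 1."""
    m = np.clip(memberships, 1e-12, 1 - 1e-12)
    return -np.mean(m * np.log2(m) + (1 - m) * np.log2(1 - m), axis=1)

rng = np.random.default_rng(0)
# two Gaussian classes; soft memberships from distances to the class means
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(+1, 1, (200, 2))])
mu = np.array([[-1.0, -1.0], [1.0, 1.0]])
d = np.linalg.norm(X[:, None, :] - mu[None], axis=2)
memb = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)

fz = fuzziness(memb)
query = np.argsort(fz)[-40:]   # most ambiguous samples -> candidates to label
```

The queried points concentrate near the class boundary, which is exactly where extra labels reduce generalization error the most.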

  1. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution, the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods: from properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions, while the inverse method still computes the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation is transferred to a smoother problem: finding what pressure distribution would produce the required flow conditions; once this is done, the inverse method computes the exact solution for this problem. The use of a neural network is, in this context, closely tied to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aerodynamic and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.

  2. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two based on the backpropagation approach and a third based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity of solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. The algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model from neurons extracted from three other neural networks, each previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network is named the assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.

  3. Optimization of multiply acquired magnetic flux density B(z) using ICNE-Multiecho train in MREIT.

    PubMed

    Nam, Hyun Soo; Kwon, Oh In

    2010-05-07

    The aim of magnetic resonance electrical impedance tomography (MREIT) is to visualize the electrical properties, conductivity, or current density of an object by injecting current. Recently, the injected current nonlinear encoding (ICNE) method, which prolongs the current injection during data acquisition, has proven advantageous for measuring magnetic flux density data, B(z), in MREIT because of its improved signal-to-noise ratio (SNR). However, the ICNE method yields undesirable side artifacts, such as blurring, chemical shift, and phase artifacts, due to the long data acquisition under an inhomogeneous static field. In this paper, we apply the ICNE method to a gradient and spin echo (GRASE) multi-echo-train pulse sequence in order to provide multiple k-space lines during a single RF pulse period. We analyze the SNR of the multiple B(z) data measured using the proposed ICNE-Multiecho MR pulse sequence. By determining a weighting factor for the B(z) data in each echo, an optimized inversion formula for the magnetic flux density data is proposed for the ICNE-Multiecho MR sequence. Using the ICNE-Multiecho method, the quality of the measured magnetic flux density is increased considerably by injecting a long current through the echo train length and by optimizing the voxel-by-voxel noise level of the B(z) value. Agarose-gel phantom experiments demonstrate fewer artifacts and a better SNR with the ICNE-Multiecho method. In experiments on the brain of an anesthetized dog, we collected valuable echoes by taking the noise level of each echo into account and recovered B(z) data using optimized weighting factors for the multiply acquired magnetic flux density data.
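    The weighting step can be illustrated with generic inverse-variance combination of multi-echo estimates; this is a simplified sketch (the actual ICNE-Multiecho weights also account for the per-echo current-injection window), with assumed noise levels:

```python
import numpy as np

def combine_echoes(bz, noise_sd):
    """Inverse-variance weighting of multi-echo B_z estimates.

    Weighting each echo's map by 1/sigma_e^2 minimizes the variance of
    the combined estimate when the echo noises are independent.
    """
    w = 1.0 / np.asarray(noise_sd) ** 2
    w = w / w.sum()
    return np.tensordot(w, bz, axes=1)        # weighted sum over the echo axis

rng = np.random.default_rng(0)
truth = rng.standard_normal((16, 16))         # stand-in "true" B_z map
sds = [0.05, 0.10, 0.20]                      # later echoes assumed noisier
echoes = np.array([truth + rng.normal(0, s, truth.shape) for s in sds])
combined = combine_echoes(echoes, sds)
```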

  4. Method for generating a plasma wave to accelerate electrons

    DOEpatents

    Umstadter, D.; Esarey, E.; Kim, J.K.

    1997-06-10

    The invention provides a method and apparatus for generating large amplitude nonlinear plasma waves, driven by an optimized train of independently adjustable, intense laser pulses. In the method, optimal pulse widths, interpulse spacing, and intensity profiles of each pulse are determined for each pulse in a series of pulses. A resonant region of the plasma wave phase space is found where the plasma wave is driven most efficiently by the laser pulses. The accelerator system of the invention comprises several parts: the laser system, with its pulse-shaping subsystem; the electron gun system, also called beam source, which preferably comprises photo cathode electron source and RF-LINAC accelerator; electron photo-cathode triggering system; the electron diagnostics; and the feedback system between the electron diagnostics and the laser system. The system also includes plasma source including vacuum chamber, magnetic lens, and magnetic field means. The laser system produces a train of pulses that has been optimized to maximize the axial electric field amplitude of the plasma wave, and thus the electron acceleration, using the method of the invention. 21 figs.

  5. Method for generating a plasma wave to accelerate electrons

    DOEpatents

    Umstadter, Donald; Esarey, Eric; Kim, Joon K.

    1997-01-01

    The invention provides a method and apparatus for generating large amplitude nonlinear plasma waves, driven by an optimized train of independently adjustable, intense laser pulses. In the method, optimal pulse widths, interpulse spacing, and intensity profiles of each pulse are determined for each pulse in a series of pulses. A resonant region of the plasma wave phase space is found where the plasma wave is driven most efficiently by the laser pulses. The accelerator system of the invention comprises several parts: the laser system, with its pulse-shaping subsystem; the electron gun system, also called beam source, which preferably comprises photo cathode electron source and RF-LINAC accelerator; electron photo-cathode triggering system; the electron diagnostics; and the feedback system between the electron diagnostics and the laser system. The system also includes plasma source including vacuum chamber, magnetic lens, and magnetic field means. The laser system produces a train of pulses that has been optimized to maximize the axial electric field amplitude of the plasma wave, and thus the electron acceleration, using the method of the invention.

  6. A Comparison of Staff Training Methods for Effective Implementation of Discrete Trial Teaching for Learners with Developmental Disabilities

    ERIC Educational Resources Information Center

    Geiger, Kaneen Barbara

    2012-01-01

    Discrete trial teaching is an effective procedure for teaching a variety of skills to children with autism. However, it must be implemented with high integrity to produce optimal learning. Behavioral Skills Training (BST) is a staff training procedure that has been demonstrated to be effective. However, BST is time and labor intensive, and with…

  7. Signature Tracking for Optimized Nutrition and Training (STRONG)

    DTIC Science & Technology

    2014-08-01

    Opportunities for performance augmentation include continuous performance feedback for self-improvement and individualized training regimens in the...the application of advanced physical training methods used by elite athletes to improve fitness in Special Operators; such as functional weight

  8. Cost Optimization in E-Learning-Based Education Systems: Implementation and Learning Sequence

    ERIC Educational Resources Information Center

    Fazlollahtabar, Hamed; Yousefpoor, Narges

    2009-01-01

    Increasing the effectiveness of e-learning has become one of the most practically and theoretically important issues within both educational engineering and information system fields. The development of information technologies has contributed to growth in online training as an important education method. The online training environment enables…

  9. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network that greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the input data a set of features that can represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
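    The idea of presenting inputs along directions of maximal spread can be sketched with a centred SVD projection (all data sizes below are assumed):

```python
import numpy as np

def svd_project(X, k):
    """Project inputs onto the top-k right singular vectors of the centred data.

    The network is then fed coordinates along the directions of maximal
    standard deviation instead of raw inputs, shrinking the input layer
    and speeding up back-propagation training.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
# 100 samples in 10-D that really vary along only 2 directions
latent = rng.standard_normal((100, 2))
X = latent @ rng.standard_normal((2, 10)) + 0.01 * rng.standard_normal((100, 10))
Z, basis = svd_project(X, k=2)    # Z is the reduced input fed to the network
```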

  10. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
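    The integrated routine can be sketched for a small RBF model: at each step the linear output weights are solved exactly by least squares (SVD-based), while the nonlinear parameters (here, the centres) take a gradient step. This is a generic illustration with assumed data and hyperparameters, not the paper's LMN experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 80)
y = np.sin(x) + 0.05 * rng.standard_normal(80)   # noisy target

width = 1.0
def design(c):
    """RBF design matrix for centres c (fixed width)."""
    return np.exp(-((x[:, None] - c[None, :]) ** 2) / (2 * width ** 2))

def ls_weights(Phi):
    """Linear output weights by SVD-based least squares."""
    return np.linalg.lstsq(Phi, y, rcond=None)[0]

centers = np.array([-2.0, 0.0, 2.0])             # nonlinear parameters
mse0 = np.mean((design(centers) @ ls_weights(design(centers)) - y) ** 2)

lr = 0.05
for _ in range(200):
    Phi = design(centers)
    w = ls_weights(Phi)                          # linear step: exact LS solve
    r = Phi @ w - y                              # residual
    dPhi = Phi * (x[:, None] - centers[None, :]) / width ** 2
    grad = (r[:, None] * dPhi * w[None, :]).sum(axis=0)  # d(0.5||r||^2)/dc
    centers -= lr * grad                         # nonlinear step: gradient descent

mse = np.mean((design(centers) @ ls_weights(design(centers)) - y) ** 2)
```

Because the linear weights are re-solved exactly at every step, the gradient search only has to navigate the (much smaller) nonlinear parameter space, which is the source of the hybrid scheme's speed-up.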

  11. A Scientific Rationale to Improve Resistance Training Prescription in Exercise Oncology.

    PubMed

    Fairman, Ciaran M; Zourdos, Michael C; Helms, Eric R; Focht, Brian C

    2017-08-01

    To date, the prevailing evidence in the field of exercise oncology supports the safety and efficacy of resistance training to attenuate many oncology treatment-related adverse effects, such as risk for cardiovascular disease, increased fatigue, and diminished physical functioning and quality of life. Moreover, findings in the extant literature supporting the benefits of exercise for survivors of and patients with cancer have resulted in the release of exercise guidelines from several international agencies. However, despite research progression and international recognition, current exercise oncology-based exercise prescriptions remain relatively basic and underdeveloped, particularly in regard to resistance training. Recent publications have called for more precise manipulation of training variables such as volume, intensity, and frequency (i.e., periodization), given the large heterogeneity of the cancer population, to truly optimize clinically relevant patient-reported outcomes. Indeed, increased attention to integrating fundamental principles of exercise physiology into the exercise prescription process could optimize the safety and efficacy of resistance training during cancer care. The purpose of this article is to give an overview of the current state of resistance training prescription and to discuss novel methods that can contribute to improving approaches to exercise prescription. We hope this article may facilitate further evaluation of best practice regarding resistance training prescription, monitoring, and modification, to ultimately optimize the efficacy of integrating resistance training as a supportive care intervention for survivors of and patients with cancer.

  12. Simulation of Earth textures by conditional image quilting

    NASA Astrophysics Data System (ADS)

    Mahmud, K.; Mariethoz, G.; Caers, J.; Tahmasebi, P.; Baker, A.

    2014-04-01

Training image-based approaches for stochastic simulations have recently gained attention in surface and subsurface hydrology. This family of methods allows the creation of multiple realizations of a study domain, with a spatial continuity based on a training image (TI) that contains the variability, connectivity, and structural properties deemed realistic. A major drawback of these methods is their computational and/or memory cost, making certain applications challenging. Similar methods, also based on training images or exemplars, have been proposed in computer graphics. One such method, image quilting (IQ), is introduced in this paper and adapted for hydrogeological applications. The main difficulty is that IQ was originally not designed to produce conditional simulations and was restricted to 2-D images. In this paper, the original method developed in computer graphics has been modified to accommodate conditioning data and 3-D problems. This new conditional image quilting (CIQ) method is patch based, does not require constructing a pattern database, and can be used with both categorical and continuous training images. The main concept is to optimally cut the patches such that they overlap with minimum discontinuity. The optimal cut is determined using a dynamic programming algorithm. Conditioning is accomplished by prior selection of patches that are compatible with the conditioning data. The performance of CIQ is tested for a variety of hydrogeological test cases. The results, when compared with previous multiple-point statistics (MPS) methods, indicate an improvement in CPU time by a factor of at least 50.
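The optimal-cut step can be illustrated with the classic minimum-error boundary cut from image quilting: dynamic programming finds the cheapest seam through the patch-overlap error map, so that adjacent patches join with minimum discontinuity. A rough 2-D sketch (the toy error map is hypothetical):

```python
import numpy as np

def min_error_cut(E):
    """Vertical minimum-error boundary cut through an overlap-error map E
    (rows = pixels along the seam, cols = overlap width), as in image quilting."""
    rows, cols = E.shape
    cost = E.copy()
    # Forward pass: each cell accumulates the cheapest path that reaches it.
    for i in range(1, rows):
        for j in range(cols):
            lo, hi = max(j - 1, 0), min(j + 2, cols)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack from the cheapest bottom cell.
    path = [int(np.argmin(cost[-1]))]
    for i in range(rows - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, cols)
        path.append(lo + int(np.argmin(cost[i, lo:hi])))
    return path[::-1]

# Hypothetical overlap-error map: low error marks where the seam should run.
E = np.array([[5., 1., 5.],
              [5., 1., 5.],
              [5., 5., 1.]])
seam = min_error_cut(E)
```

The CIQ method extends this idea to 3-D patches and adds the conditioning step described above.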

  13. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

Multiple-point geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring the hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. It therefore preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected outcome of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.

  14. DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest.

    PubMed

    Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang

    2018-01-05

    DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html.
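For reference, the Matthews correlation coefficient reported above is computed directly from the four confusion-matrix counts; a minimal sketch (the counts below are illustrative, not DHSpred's):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative counts (hypothetical, not from the DHSpred evaluation).
score = mcc(tp=80, fp=10, tn=85, fn=15)
```

Unlike plain accuracy, MCC stays informative when the positive and negative classes are imbalanced, which is why both numbers are reported.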

  15. DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest

    PubMed Central

    Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang

    2018-01-01

    DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html PMID:29416743

  16. Template optimization and transfer in perceptual learning.

    PubMed

    Kurki, Ilmari; Hyvärinen, Aapo; Saarinen, Jussi

    2016-08-01

We studied how learning changes the processing of a low-level Gabor stimulus, using a classification-image method (psychophysical reverse correlation) and a task where observers discriminated between slight differences in the phase (relative alignment) of a target Gabor in visual noise. The method estimates the internal "template" that describes how the visual system weights the input information for decisions. One popular idea has been that learning makes the template more like an ideal Bayesian weighting; however, the evidence has been indirect. We used a new regression technique to directly estimate the template weight change and to test whether the direction of reweighting is significantly different from an optimal learning strategy. The subjects trained on the task for six daily sessions, and we tested the transfer of training to a target in an orthogonal orientation. Strong learning and partial transfer were observed. We tested whether task precision (difficulty) had an effect on template change and transfer: observers trained on either a high-precision task (small, 60° phase difference) or a low-precision task (180°). Task precision did not have an effect on the amount of template change or transfer, suggesting that task precision per se does not determine whether learning generalizes. Classification images show that training made observers use more task-relevant features and unlearn some irrelevant features. The transfer templates resembled partially optimized versions of the templates from the training sessions. The direction of template change resembles ideal learning significantly but not completely. The amount of template change was highly correlated with the amount of learning.

  17. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In the classification of cancer type research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good selection method for genes relevant for sample classification is needed to improve the predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose to combine tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features compared to other feature selection methods.
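The BPSO component can be sketched as follows: velocities are updated as in standard PSO, then squashed through a sigmoid to give per-bit selection probabilities. In this toy version a simple fitness stands in for the paper's KNN/SVM evaluators, and the swarm recovers a known "relevant feature" mask (the mask and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def bpso(fitness, n_bits, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal binary PSO: sigmoid of the velocity gives bit-set probabilities."""
    X = rng.integers(0, 2, size=(n_particles, n_bits))
    V = np.zeros((n_particles, n_bits))
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        better = f > pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return g

# Toy fitness standing in for classifier accuracy: reward matching a known
# "relevant feature" mask (hypothetical; the paper uses KNN/SVM evaluators).
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
best = bpso(lambda x: int(np.sum(x == target)), n_bits=8)
```

In the paper's hybrid, each TS generation hands its population to a loop like this for local refinement before evaluation.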

  18. Adaptations to a New Physical Training Program in the Combat Controller Training Pipeline

    DTIC Science & Technology

    2010-09-01

education regarding optimizing recovery through hydration and nutrition. We designed and implemented a short class that explained the benefits of pre...to poor nutrition and hydration practices. Finally, many of the training methods employed throughout the pipeline were outdated, non-periodized, and...contributing to overtraining. Creation of a nutrition and hydration class. Apart from being told to drink copious amounts of water, trainees had little

  19. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  20. An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.

    PubMed

    Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco

    2017-04-01

In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and show that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
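One common concave surrogate for the zero-norm is sum_i (1 - exp(-alpha*|w_i|)), which approaches the exact count of nonzero components as alpha grows; the paper's exact surrogate may differ, so the sketch below is illustrative:

```python
import numpy as np

def zero_norm_concave(w, alpha):
    """Smooth concave surrogate for the zero-norm: sum_i (1 - exp(-alpha*|w_i|)).
    A common choice in zero-norm minimization (the paper's exact form may differ)."""
    return np.sum(1.0 - np.exp(-alpha * np.abs(w)))

w = np.array([0.0, 0.5, -2.0, 0.0, 1.0])
true_zero_norm = np.count_nonzero(w)            # 3 nonzero components
approx = zero_norm_concave(w, alpha=20.0)       # close to 3 for large alpha
```

The surrogate is differentiable everywhere, which is what makes the smooth global optimization problem described above tractable.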

  1. Counseling for the Training of Leaders and Leadership Development: A Commentary

    ERIC Educational Resources Information Center

    Barreto, Alfonso

    2012-01-01

    Counseling is the instrument that empowers training and forges the development of leaders in their essential drive to inspire and guide others. As much a discipline and praxis as a professional practice, counseling increases consciousness and optimizes the management and synergy of human energy. This article addresses methods for sustaining…

  2. SKYNET: an efficient and robust neural network training tool for machine learning in astronomy

    NASA Astrophysics Data System (ADS)

    Graff, Philip; Feroz, Farhan; Hobson, Michael P.; Lasenby, Anthony

    2014-06-01

    We present the first public release of our generic neural network training algorithm, called SKYNET. This efficient and robust machine learning tool is able to train large and deep feed-forward neural networks, including autoencoders, for use in a wide range of supervised and unsupervised learning applications, such as regression, classification, density estimation, clustering and dimensionality reduction. SKYNET uses a `pre-training' method to obtain a set of network parameters that has empirically been shown to be close to a good solution, followed by further optimization using a regularized variant of Newton's method, where the level of regularization is determined and adjusted automatically; the latter uses second-order derivative information to improve convergence, but without the need to evaluate or store the full Hessian matrix, by using a fast approximate method to calculate Hessian-vector products. This combination of methods allows for the training of complicated networks that are difficult to optimize using standard backpropagation techniques. SKYNET employs convergence criteria that naturally prevent overfitting, and also includes a fast algorithm for estimating the accuracy of network outputs. The utility and flexibility of SKYNET are demonstrated by application to a number of toy problems, and to astronomical problems focusing on the recovery of structure from blurred and noisy images, the identification of gamma-ray bursters, and the compression and denoising of galaxy images. The SKYNET software, which is implemented in standard ANSI C and fully parallelized using MPI, is available at http://www.mrao.cam.ac.uk/software/skynet/.
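One standard way to obtain fast approximate Hessian-vector products without forming the Hessian is a central finite difference of the gradient, Hv ≈ (∇f(x+εv) − ∇f(x−εv)) / (2ε); whether SKYNET uses exactly this scheme is not stated here, so the sketch below is illustrative:

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-5):
    """Approximate H(x) @ v using only two gradient evaluations
    (central finite difference), avoiding storage of the full Hessian."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

# Quadratic test function f(x) = 0.5 x^T A x, whose Hessian is exactly A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
x = np.array([0.5, -1.0])
v = np.array([1.0, 1.0])
hv = hessian_vector_product(grad, x, v)   # should be close to A @ v
```

This is the key trick that lets second-order (Newton-like) optimizers scale to large networks: each iteration needs only Hessian-vector products, never the Hessian itself.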

  3. The influence of negative training set size on machine learning-based virtual screening.

    PubMed

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.

  4. The influence of negative training set size on machine learning-based virtual screening

    PubMed Central

    2014-01-01

Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867

  5. Fourier spatial frequency analysis for image classification: training the training set

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart

    2016-04-01

    The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the size of the training set increases. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.

  6. A novel heterogeneous training sample selection method on space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

The ground-target-detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous as a result of training samples contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with larger values are preferentially selected to realize dimension reduction. Finally, simulation results with the Mountain-Top data verify the effectiveness of the proposed method.
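A symmetric mean-Hausdorff distance between two point sets can be sketched as follows; this is one plausible reading of the similarity measure, and the paper's exact definition may differ:

```python
import numpy as np

def mean_hausdorff(A, B):
    """Symmetric mean-Hausdorff distance between point sets A and B
    (rows = points): average nearest-neighbor distance in each direction."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

# Two toy point sets separated by unit distance.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
d = mean_hausdorff(A, B)
```

Averaging the nearest-neighbor distances (rather than taking the maximum, as the classical Hausdorff distance does) makes the measure less sensitive to a single outlying sample.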

  7. Application of Particle Swarm Optimization Algorithm for Optimizing ANN Model in Recognizing Ripeness of Citrus

    NASA Astrophysics Data System (ADS)

    Diyana Rosli, Anis; Adenan, Nur Sabrina; Hashim, Hadzli; Ezan Abdullah, Noor; Sulaiman, Suhaimi; Baharudin, Rohaiza

    2018-03-01

This paper presents findings on the application of the Particle Swarm Optimization (PSO) algorithm to optimizing an Artificial Neural Network that can distinguish between the ripe and unripe stages of citrus suhuensis. The algorithm adjusts the network connection weights, adapting their values during training for the best results at the output. Initially, the skin of the citrus suhuensis fruit is measured using an optically non-destructive method via a spectrometer. The spectrometer transmits VIS (visible spectrum) photonic light radiation to the surface (skin) of the citrus sample. The light reflected from the sample's surface is received and measured by the same spectrometer as a reflectance percentage over the VIS range. These measured data are used to train and test the best optimized ANN model. Accuracy is assessed using receiver operating characteristic (ROC) performance. The results of this investigation show that the optimized model achieves an accuracy of 70.5%, with a sensitivity and specificity of 60.1% and 80.0%, respectively.

  8. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    PubMed

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

Model selection for the support vector machine (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming; it greatly impacts the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. Firstly, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM) to improve the training efficiency of the SVM model. An experiment classifying an AVIRIS image of the Indian Pines site, USA, was then performed to test the novel CSSVM against a traditional SVM classifier using the general grid-search cross-validation method (GSSVM) for comparison. Evaluation indexes, including SVM model training time, classification overall accuracy (OA), and the Kappa index, were analyzed quantitatively for both CSSVM and GSSVM. The OA of CSSVM on the test samples and on the whole image is 85.1% and 81.58%, respectively, with both differing from those of GSSVM by less than 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with both differing from those of GSSVM by less than 0.001; and the ratio of model training time of CSSVM to GSSVM is between 1/6 and 1/10. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.

  9. Optimizing the positional relationships between instruments used in laparoscopic simulation using a simple trigonometric method.

    PubMed

    Lorias Espinoza, Daniel; Ordorica Flores, Ricardo; Minor Martínez, Arturo; Gutiérrez Gnecchi, José Antonio

    2014-06-01

    Various methods for evaluating laparoscopic skill have been reported, but without detailed information on the configuration used they are difficult to reproduce. Here we present a method based on the trigonometric relationships between the instruments used in a laparoscopic training platform in order to provide a tool to aid in the reproducible assessment of surgical laparoscopic technique. The positions of the instruments were represented using triangles. Basic trigonometry was used to objectively establish the distances among the working ports RL, the placement of the optical port h', and the placement of the surgical target OT. The optimal configuration of a training platform depends on the selected working angles, the intracorporeal/extracorporeal lengths of the instrument, and the depth of the surgical target. We demonstrate that some distances, angles, and positions of the instruments are inappropriate for satisfactory laparoscopy. By applying basic trigonometric principles we can determine the ideal placement of the working ports and the optics in a simple, precise, and objective way. In addition, because the method is based on parameters known to be important in both the performance and quantitative quality of laparoscopy, the results are generalizable to different training platforms and types of laparoscopic surgery.
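The flavor of the trigonometric relationships can be sketched with the simplest case: two working ports placed symmetrically about a target at depth d, whose instruments meet at a chosen working angle θ, are separated by 2·d·tan(θ/2). This is an illustrative reduction; the paper's full model also accounts for the optical-port placement and the intracorporeal/extracorporeal instrument lengths:

```python
import math

def port_separation(depth, working_angle_deg):
    """Distance between two symmetric working ports whose instruments meet
    at a target of the given depth with the chosen working angle.
    A basic trigonometric sketch, not the paper's complete model."""
    half = math.radians(working_angle_deg) / 2.0
    return 2.0 * depth * math.tan(half)

# Example: a 60-degree working angle at a 10 cm deep target.
rl = port_separation(depth=10.0, working_angle_deg=60.0)
```

A 60° working angle at 10 cm depth gives a port separation of about 11.5 cm; too small a separation collapses the working angle, which is one of the configurations the paper identifies as inappropriate for satisfactory laparoscopy.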

  10. Teaching and assessing procedural skills using simulation: metrics and methodology.

    PubMed

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  11. Virtual reality and the traditional method for phlebotomy training among college of nursing students in Kuwait: implications for nursing education and practice.

    PubMed

    Vidal, Victoria L; Ohaeri, Beatrice M; John, Pamela; Helen, Delles

    2013-01-01

    This quasi-experimental study, with a control group and experimental group, compares the effectiveness of virtual reality simulators on developing phlebotomy skills of nursing students with the effectiveness of traditional methods of teaching. Performance of actual phlebotomy on a live client was assessed after training, using a standardized form. Findings showed that students who were exposed to the virtual reality simulator performed better in the following performance metrics: pain factor, hematoma formation, and number of reinsertions. This study confirms that the use of the virtual reality-based system to supplement the traditional method may be the optimal program for training.

  12. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
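A linear-time-flavored version of the idea can be sketched as a greedy loop: repeatedly add the conformation whose inclusion most improves the ensemble AUC, scoring each ligand by its best score over the selected conformations. The toy data and the max-score ensemble rule below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank statistic (higher score = active)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def greedy_ensemble(S, labels, k):
    """Greedy selection: add the conformation that most improves ensemble AUC.
    A ligand's ensemble score is its best (max) score over chosen conformations."""
    chosen = []
    for _ in range(k):
        best_j, best_auc = None, -1.0
        for j in range(S.shape[0]):
            if j in chosen:
                continue
            a = auc(S[chosen + [j]].max(axis=0), labels)
            if a > best_auc:
                best_j, best_auc = j, a
        chosen.append(best_j)
    return chosen, best_auc

# Hypothetical scores: rows = conformations, cols = ligands (3 active, 3 inactive).
S = np.array([[0.90, 0.80, 0.10, 0.20, 0.30, 0.05],
              [0.11, 0.21, 0.91, 0.31, 0.22, 0.12],
              [0.13, 0.23, 0.33, 0.93, 0.83, 0.73]])
labels = np.array([1, 1, 1, 0, 0, 0])
chosen, ens_auc = greedy_ensemble(S, labels, k=2)
```

Here the two complementary conformations are kept and the "bad" one (which ranks inactives highly) is rejected, which is exactly the segregation the objective function is meant to enforce.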

  13. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to Newton's method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. The parameters of the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
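A bare-bones version of the LM update used in such comparisons can be sketched as follows: the damped normal-equations step (JᵀJ + λI)δ = −Jᵀr interpolates between Gauss-Newton and gradient descent. The toy exponential model below is illustrative, not the fault gravity model:

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, iters=50, lam=1e-2):
    """Bare-bones Levenberg-Marquardt for nonlinear least squares:
    each step solves (J^T J + lam*I) delta = -J^T r, growing lam after a
    rejected step and shrinking it after an accepted one."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(iters):
        r, J = residual(p), jac(p)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        new_cost = np.sum(residual(p + delta) ** 2)
        if new_cost < cost:
            p, cost, lam = p + delta, new_cost, lam * 0.5
        else:
            lam *= 10.0
    return p

# Toy inversion: recover (a, b) of the model y = a * exp(b * x).
x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p = levenberg_marquardt(residual, jac, p0=[1.0, 0.0])
```

Unlike PSO, this local scheme needs a starting point and analytic (or numerical) Jacobians, which is the practical trade-off the comparison above explores.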

  14. A Methodology for Optimizing the Training and Utilization of Physical Therapy Personnel.

    ERIC Educational Resources Information Center

    Dumas, Neil S.; Muthard, John E.

    A method for analyzing the work in a department of physical therapy was devised and applied in a teaching hospital. Physical therapists, trained as observer-investigators, helped refine the coding system and were able to reliably record job behavior in the physical therapy department. The nature of the therapist's and aide's job was described and…

  15. Modification and optimization of the united-residue (UNRES) potential-energy function for canonical simulations. I. Temperature dependence of the effective energy function and tests of the optimization method with single training proteins

    PubMed Central

    Liwo, Adam; Khalili, Mey; Czaplewski, Cezary; Kalinowski, Sebastian; Ołdziej, Stanisław; Wachucik, Katarzyna; Scheraga, Harold A.

    2011-01-01

    We report the modification and parameterization of the united-residue (UNRES) force field for energy-based protein-structure prediction and protein-folding simulations. We tested the approach on three training proteins separately: 1E0L (β), 1GAB (α), and 1E0G (α + β). Heretofore, the UNRES force field had been designed and parameterized to locate native-like structures of proteins as global minima of their effective potential-energy surfaces, which largely neglected the conformational entropy because decoys composed of only lowest-energy conformations were used to optimize the force field. Recently, we developed a mesoscopic dynamics procedure for UNRES, and applied it with success to simulate protein folding pathways. However, the force field turned out to be largely biased towards α-helical structures in canonical simulations because the conformational entropy had been neglected in the parameterization. We applied the hierarchical optimization method developed in our earlier work to optimize the force field, in which the conformational space of a training protein is divided into levels each corresponding to a certain degree of native-likeness. The levels are ordered according to increasing native-likeness; level 0 corresponds to structures with no native-like elements and the highest level corresponds to the fully native-like structures. The aim of optimization is to achieve the order of the free energies of levels, decreasing as their native-likeness increases. The procedure is iterative, and decoys of the training protein(s) generated with the energy-function parameters of the preceding iteration are used to optimize the force field in the current iteration. We applied the multiplexing replica exchange molecular dynamics (MREMD) method, recently implemented in UNRES, to generate decoys; with this modification, conformational entropy is taken into account.
Moreover, we optimized the free-energy gaps between levels at temperatures corresponding to a predominance of folded or unfolded structures, as well as to structures at the putative folding-transition temperature, changing the sign of the gaps at the transition temperature. This enabled us to obtain force fields characterized by a single peak in the heat capacity at the transition temperature. Furthermore, we introduced temperature dependence to the UNRES force field; this is consistent with the fact that it is a free-energy and not a potential-energy function. PMID:17201450

  16. Computer-assisted resilience training to prepare healthcare workers for pandemic influenza: a randomized trial of the optimal dose of training

    PubMed Central

    2010-01-01

    Background Working in a hospital during an extraordinary infectious disease outbreak can cause significant stress and contribute to healthcare workers choosing to reduce patient contact. Psychological training of healthcare workers prior to an influenza pandemic may reduce stress-related absenteeism; however, established training methods that change behavior and attitudes are too resource-intensive for widespread use. This study tests the feasibility and effectiveness of a less expensive alternative - an interactive, computer-assisted training course designed to build resilience to the stresses of working during a pandemic. Methods A "dose-finding" study compared pre-post changes in three different durations of training. We measured variables that are likely to mediate stress-responses in a pandemic before and after training: confidence in support and training, pandemic-related self-efficacy, coping style and interpersonal problems. Results 158 hospital workers took the course and were randomly assigned to the short (7 sessions, median cumulative duration 111 minutes), medium (12 sessions, 158 minutes) or long (17 sessions, 223 minutes) version. Using an intention-to-treat analysis, the course was associated with significant improvements in confidence in support and training, pandemic self-efficacy and interpersonal problems. Participants who under-utilized coping via problem-solving or seeking support or over-utilized escape-avoidance experienced improved coping. Comparison of doses showed improved interpersonal problems in the medium and long course but not in the short course. There was a trend towards higher drop-out rates with longer duration of training. Conclusions Computer-assisted resilience training in healthcare workers appears to be of significant benefit and merits further study under pandemic conditions. Comparing three "doses" of the course suggested that the medium course was optimal. PMID:20307302

  17. Navy Recruit Training Optimization, Post 1980. Phase I: Current Assessment and Concept for the Future.

    DTIC Science & Technology

    1976-05-01

    subjective in nature, ...it provides a practical method for analyzing a mass of data, including data which can be utilized to predict probable future... nature and administered when the individual student is unable to maintain acceptable performance during the training cycle. Service-wide remedial...are directly related to the curriculum topics of recruit training. Others are of a broader nature related to general Navy problems which present a

  18. A Discriminative Sentence Compression Method as Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    In the study of automatic summarization, the main research topic was `important sentence extraction', but nowadays `sentence compression' is a hot research topic. Conventional sentence compression methods usually transform a given sentence into a parse tree or a dependency tree and modify it to obtain a shorter sentence. However, such methods are sometimes too rigid. In this paper, we regard sentence compression as a combinatorial optimization problem that extracts an optimal subsequence of words. Hori et al. also proposed a similar method, but they used only a small number of features and their weights were tuned by hand. We introduce a large number of features such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning. According to our experiments, our method obtained a better score than other methods with statistical significance.
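    The combinatorial view can be made concrete with a toy sketch: exhaustively score every word subsequence under invented unigram weights and bigram bonuses (the paper's features and discriminatively learned weights are far richer). Brute force is exponential in sentence length, so it only works for toy input; practical systems use dynamic programming or beam search instead.

```python
from itertools import combinations

def compress(words, unigram_w, bigram_w, max_len):
    """Return the highest-scoring word subsequence of length <= max_len.

    unigram_w: per-word keep weights; bigram_w: bonus applied when two
    words end up adjacent in the compressed output.
    """
    best, best_score = (), float("-inf")
    for k in range(1, max_len + 1):
        for idx in combinations(range(len(words)), k):
            s = sum(unigram_w.get(words[i], 0.0) for i in idx)
            s += sum(bigram_w.get((words[a], words[b]), 0.0)
                     for a, b in zip(idx, idx[1:]))
            if s > best_score:
                best, best_score = idx, s
    return [words[i] for i in best], best_score

# Invented weights: content words score high, function words are penalized.
kept, score = compress("the cat sat on the mat".split(),
                       {"cat": 2.0, "sat": 2.0, "mat": 1.5,
                        "the": -0.5, "on": 0.1},
                       {("cat", "sat"): 1.0},
                       max_len=3)
```

Under these toy weights the optimizer keeps "cat sat mat": the function words cost more than they contribute, while the ("cat", "sat") bigram bonus rewards keeping that pair adjacent.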

  19. Team Training (Training at Own Facility) versus Individual Surgeon's Training (Training at Trainer's Facility) When Implementing a New Surgical Technique: Example from the ONSTEP Inguinal Hernia Repair

    PubMed Central

    Laursen, Jannie

    2014-01-01

    Background. When implementing a new surgical technique, the best method for didactic learning has not been settled. There are basically two scenarios: the trainee goes to the teacher's clinic and learns the new technique hands-on, or the teacher goes to the trainee's clinic and performs the teaching there. Methods. An informal literature review was conducted to provide a basis for discussing pros and cons. We also wanted to discuss how many surgeons can be trained in a day and the importance of the demand for a new surgical procedure to ensure a high adoption rate and finally to apply these issues on a discussion of barriers for adoption of the new ONSTEP technique for inguinal hernia repair after initial training. Results and Conclusions. The optimal training method would include moving the teacher to the trainee's department to obtain team-training effects simultaneous with surgical technical training of the trainee surgeon. The training should also include a theoretical presentation and discussion along with the practical training. Importantly, the training visit should probably be followed by a scheduled visit to clear misunderstandings and fine-tune the technique after an initial self-learning period. PMID:25506078

  20. Optimal Physical Training During Military Basic Training Period.

    PubMed

    Santtila, Matti; Pihlainen, Kai; Viskari, Jarmo; Kyröläinen, Heikki

    2015-11-01

    The goal for military basic training (BT) is to create a foundation for physical fitness and military skills of soldiers. Thereafter, more advanced military training can safely take place. Large differences in the initial physical performance of conscripts or recruits have led military units to develop safer and more effective training programs. The purpose of this review article was to describe the limiting factors of optimal physical training during the BT period. This review revealed that the high volume of low-intensity physical activity combined with endurance-type military training (like combat training, prolonged physical activity, and field shooting) during BT interferes with optimal development of maximal oxygen uptake and muscle strength of the soldiers. Therefore, more progressive, periodized, and individualized training programs are needed. In conclusion, optimal training programs lead to higher training responses and lower risks for injuries and overloading.

  1. Advancing hypoxic training in team sports: from intermittent hypoxic training to repeated sprint training in hypoxia.

    PubMed

    Faiss, Raphaël; Girard, Olivier; Millet, Grégoire P

    2013-12-01

    Over the past two decades, intermittent hypoxic training (IHT), that is, a method where athletes live at or near sea level but train under hypoxic conditions, has gained unprecedented popularity. By adding the stress of hypoxia during 'aerobic' or 'anaerobic' interval training, it is believed that IHT would potentiate greater performance improvements compared to similar training at sea level. A thorough analysis of studies including IHT, however, leads to strikingly poor benefits for sea-level performance improvement, compared to the same training method performed in normoxia. Despite the positive molecular adaptations observed after various IHT modalities, the characteristics of optimal training stimulus in hypoxia are still unclear and their functional translation in terms of whole-body performance enhancement is minimal. To overcome some of the inherent limitations of IHT (lower training stimulus due to hypoxia), recent studies have successfully investigated a new training method based on the repetition of short (<30 s) 'all-out' sprints with incomplete recoveries in hypoxia, the so-called repeated sprint training in hypoxia (RSH). The aims of the present review are therefore threefold: first, to summarise the main mechanisms for interval training and repeated sprint training in normoxia. Second, to critically analyse the results of the studies involving high-intensity exercises performed in hypoxia for sea-level performance enhancement by differentiating IHT and RSH. Third, to discuss the potential mechanisms underpinning the effectiveness of those methods, and their inherent limitations, along with the new research avenues surrounding this topic.

  2. Methods of Establishing Occupational Skill Structure of Admissions in the System of Vocational Education

    ERIC Educational Resources Information Center

    Kosorukov, Oleg A.; Makarov, Alexander N.; Bagisbayev, Karmak B.

    2016-01-01

    The purpose of the study is to determine the business need for vocational training. This article gives a detailed analysis of the problem aimed at finding optimal occupational skill structure of training, which involves all kinds of positive effects in various areas of public life--from the economy up to the spiritual sphere of human life.…

  3. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    PubMed

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
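    The Pareto-optimality criterion underlying this approach is easy to state in code. The sketch below (illustrative only, not the paper's genetic-algorithm machinery) returns the non-dominated set of candidate networks given their (MRE, MCE) objective pairs, both to be minimized.

```python
def pareto_front(points):
    """Return indices of non-dominated points for minimization objectives.

    Each point is a tuple of objective values, e.g. (MRE, MCE). A point p
    is dominated if some other point is no worse in every objective and
    strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[d] <= p[d] for d in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Invented (MRE, MCE) pairs for four candidate auto-encoders: the last one
# is strictly worse than the first in both objectives, so it drops out.
front = pareto_front([(0.10, 0.05), (0.08, 0.09), (0.12, 0.04), (0.11, 0.06)])
```

The surviving points trade reconstruction error against classification error; no single candidate improves one objective without worsening the other.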

  4. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
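    The flavor of a Gauss-Newton update with an adaptive step can be conveyed with a small curve-fitting sketch. The model y = a·tanh(b·x) is an invented stand-in for a static-nonlinearity filter stage, and the Levenberg-style damping that falls back toward steepest descent when a step fails is a common simplification, not the patented algorithm.

```python
import math

def gauss_newton_fit(xs, ys, a=1.0, b=1.0, iters=50):
    """Damped Gauss-Newton fit of y = a*tanh(b*x).

    The residual Jacobian is formed analytically; a step is accepted only
    if it lowers the cost, otherwise the damping lam grows, pushing the
    update toward (short) steepest-descent steps.
    """
    def cost(a, b):
        return sum((y - a * math.tanh(b * x)) ** 2 for x, y in zip(xs, ys))

    lam = 1e-3  # damping: 0 -> pure Gauss-Newton, large -> steepest descent
    for _ in range(iters):
        # Accumulate the normal equations (J^T J + lam I) delta = J^T r.
        g11 = g12 = g22 = r1 = r2 = 0.0
        for x, y in zip(xs, ys):
            t = math.tanh(b * x)
            j1 = t                      # d(model)/da
            j2 = a * x * (1 - t * t)    # d(model)/db
            r = y - a * t
            g11 += j1 * j1; g12 += j1 * j2; g22 += j2 * j2
            r1 += j1 * r;   r2 += j2 * r
        g11 += lam; g22 += lam
        det = g11 * g22 - g12 * g12
        da = (g22 * r1 - g12 * r2) / det
        db = (g11 * r2 - g12 * r1) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b = a + da, b + db
            lam *= 0.5      # step worked: trust the Newton direction more
        else:
            lam *= 10.0     # step failed: damp harder, retry next iteration
    return a, b

# Noiseless synthetic data generated from a = 2, b = 0.5.
xs = [i / 2 for i in range(-6, 7)]
ys = [2.0 * math.tanh(0.5 * x) for x in xs]
a, b = gauss_newton_fit(xs, ys)
```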

  5. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA introduces some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger and a similar rule was obtained according to the pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  6. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya

    2003-01-01

    The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables, within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor.
The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.

  7. Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems

    PubMed Central

    Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.

    2016-01-01

    Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383

  8. Material discovery by combining stochastic surface walking global optimization with a neural network.

    PubMed

    Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan

    2017-09-01

    While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified that have thermodynamic stability similar to the common TiO2 rutile phase, and the kinetic stability of one of them is further confirmed by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.

  9. Development of Training Programs to Optimize Planetary Ambulation

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Cohen, H. S.; Miller, C. A.; Brady, R.; Warren, L. E.; Rutley, T. M.; Kozlovskaya, I. B.

    2007-01-01

    Astronauts experience disturbances in functional mobility following their return to Earth due to adaptive responses that occur during exposure to the microgravity conditions of space flight. Despite significant time spent performing in-flight exercise routines, these training programs have not been able to mitigate postflight alterations in postural and locomotor function. Therefore, the goal of our two inter-related projects (NSBRI-ground based and ISS flight study, "Mobility") is to develop and test gait training programs that will serve to optimize functional mobility during the adaptation period immediately following space flight, thereby improving the safety and efficiency of planetary ambulation. The gait training program entails manipulating the sensory conditions of treadmill exercise to systematically challenge the balance and gait control system. This enhances the overall adaptability of locomotor function enabling rapid reorganization of gait control to respond to ambulation in different gravitational environments. To develop the training program, we are conducting a series of ground-based studies evaluating the training efficacy associated with variation in visual flow, body loading, and support surface stability during treadmill walking. We will also determine the optimal method to present training stimuli within and across training sessions to maximize both the efficacy and efficiency of the training procedure. Results indicate that variations in both visual flow and body unloading during treadmill walking lead to modifications in locomotor control and can be used as effective training modalities. Additionally, the composition and timing of sensory challenges experienced during each training session have a significant impact on the ability to rapidly reorganize locomotor function when exposed to a novel sensory environment.
We have developed the capability of producing support surface variation during gait training by mounting a treadmill on a six-degree-of-freedom motion device. This hardware development will allow us to evaluate the efficacy of this type of training in conjunction with variation in visual flow and body unloading.

  10. Depression training in nursing homes: lessons learned from a pilot study.

    PubMed

    Smith, Marianne; Stolder, Mary Ellen; Jaggers, Benjamin; Liu, Megan Fang; Haedtke, Chris

    2013-02-01

    Late-life depression is common among nursing home residents, but often is not addressed by nurses. Using a self-directed CD-based depression training program, this pilot study used mixed methods to assess feasibility issues, determine nurse perceptions of training, and evaluate depression-related outcomes among residents in usual care and training conditions. Of 58 nurses enrolled, 24 completed the training and gave it high ratings. Outcomes for 50 residents include statistically significant reductions in depression severity over time (p < 0.001) among all groups. Depression training is an important vehicle to improve depression recognition and daily nursing care, but diverse factors must be addressed to assure optimal outcomes.

  11. Depression Training in Nursing Homes: Lessons Learned from a Pilot Study

    PubMed Central

    Smith, Marianne; Stolder, Mary Ellen; Jaggers, Benjamin; Liu, Megan; Haedke, Chris

    2014-01-01

    Late-life depression is common among nursing home residents, but often is not addressed by nurses. Using a self-directed, CD-based depression training program, this pilot study used mixed methods to assess feasibility issues, determine nurse perceptions of training, and evaluate depression-related outcomes among residents in usual care and training conditions. Of 58 nurses enrolled, 24 completed the training and gave it high ratings. Outcomes for 50 residents include statistically significant reductions in depression severity over time (p<0.001) among all groups. Depression training is an important vehicle to improve depression recognition and daily nursing care, but diverse factors must be addressed to assure optimal outcomes. PMID:23369120

  12. The Future of General Surgery: Evolving to Meet a Changing Practice.

    PubMed

    Webber, Eric M; Ronson, Ashley R; Gorman, Lisa J; Taber, Sarah A; Harris, Kenneth A

    2016-01-01

    Similar to other countries, the practice of General Surgery in Canada has undergone significant evolution over the past 30 years without major changes to the training model. There is growing concern that current General Surgery residency training does not provide the skills required to practice the breadth of General Surgery in all Canadian communities and practice settings. Led by a national Task Force on the Future of General Surgery, this project aimed to develop recommendations on the optimal configuration of General Surgery training in Canada. A series of 4 evidence-based sub-studies and a national survey were launched to inform these recommendations. Generalized findings from the multiple methods of the project speak to the complexity of the current practice of General Surgery: (1) General surgeons have very different practice patterns depending on the location of practice; (2) General Surgery training offers strong preparation for overall clinical competence; (3) Subspecialized training is a new reality for today's general surgeons; and (4) Generation of the report and recommendations for the future of General Surgery. A total of 4 key recommendations were developed to optimize General Surgery for the 21st century. This project demonstrated that a high variability of practice dependent on location contrasts with the principles of implementing the same objectives of training for all General Surgery graduates. The overall results of the project have prompted the Royal College to review the training requirements and consider a more "fit for purpose" training scheme, thus ensuring that General Surgery residency training programs would optimally prepare residents for a broad range of practice settings and locations across Canada. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Maximum Margin Clustering of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state-of-the-art of supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are limited to two-class classification and cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended for multi-class classification and its performance is evaluated. The results show that the proposed algorithm achieves acceptable results for hyperspectral data clustering.
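    A stripped-down sketch of the alternating idea follows (illustrative only, with least squares standing in for the SVM margin objective): fix the labels and fit a separating direction, then fix the direction and relabel, with a median threshold playing the role of the class-balance constraint that rules out the trivial all-one-label solution.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mmc_alternate(X, iters=10):
    """Toy alternating optimization for two-cluster margin-style clustering.

    Alternates (a) a least-squares fit of direction w to the current +/-1
    labels and (b) relabeling each point by its projection, thresholded at
    the median score to keep the clusters balanced. The problem is
    non-convex, so the result depends on the initial labeling.
    """
    n, d = len(X), len(X[0])
    # Initialize by splitting on the median of the first feature.
    thresh = sorted(x[0] for x in X)[n // 2]
    labels = [1 if x[0] >= thresh else -1 for x in X]
    w = [0.0] * d
    for _ in range(iters):
        # (a) Normal equations for w; a tiny ridge keeps the system solvable.
        A = [[sum(X[i][p] * X[i][q] for i in range(n)) + (1e-6 if p == q else 0.0)
              for q in range(d)] for p in range(d)]
        rhs = [sum(X[i][p] * labels[i] for i in range(n)) for p in range(d)]
        w = solve(A, rhs)
        # (b) Relabel by projection, thresholded at the median for balance.
        scores = [sum(wq * xq for wq, xq in zip(w, x)) for x in X]
        med = sorted(scores)[n // 2]
        new = [1 if s >= med else -1 for s in scores]
        if new == labels:
            break
        labels = new
    return labels, w

# Two invented, linearly separable 2-D clusters, listed in interleaved order.
X = [(-2.0, -2.0), (2.0, 2.0), (-2.5, -1.5), (2.5, 1.5), (-1.5, -2.5), (1.5, 2.5)]
labels, w = mmc_alternate(X)
```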

  14. Effect of a road safety training program on drivers' comparative optimism.

    PubMed

    Perrissol, Stéphane; Smeding, Annique; Laumond, Francis; Le Floch, Valérie

    2011-01-01

    Reducing comparative optimism regarding risk perceptions in traffic accidents has been proven to be particularly difficult (Delhomme, 2000). This is unfortunate because comparative optimism is assumed to impede preventive action. The present study tested whether a road safety training course could reduce drivers' comparative optimism in high control situations. Results show that the training course efficiently reduced comparative optimism in high control, but not in low control situations. Mechanisms underlying this finding and implications for the design of road safety training courses are discussed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Assessing and minimizing contamination in time of flight based validation data

    NASA Astrophysics Data System (ADS)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
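    The interval-size versus contamination trade-off can be illustrated with a toy model: assume neutron and gamma travel times are normally distributed, then scan symmetric acceptance intervals around the neutron peak and keep the widest one whose estimated contamination stays under a cap. The distributions, mixture prior, and function names below are illustrative assumptions, not the paper's statistical models:

    ```python
    import math

    def norm_cdf(x, mu, sigma):
        """Standard normal CDF evaluated for N(mu, sigma)."""
        return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

    def interval_stats(a, b, mu_n, mu_g, sigma, p_gamma=0.5):
        """Fraction of neutrons captured by [a, b] and the gamma contamination rate."""
        n_cap = norm_cdf(b, mu_n, sigma) - norm_cdf(a, mu_n, sigma)
        g_cap = norm_cdf(b, mu_g, sigma) - norm_cdf(a, mu_g, sigma)
        accepted = (1 - p_gamma) * n_cap + p_gamma * g_cap
        return n_cap, (p_gamma * g_cap / accepted if accepted else 0.0)

    def best_interval(mu_n, mu_g, sigma, max_contam=0.01, steps=200):
        """Scan symmetric intervals around mu_n; keep the widest under the cap."""
        best = None
        for k in range(1, steps + 1):
            half = 5 * sigma * k / steps
            n_cap, c = interval_stats(mu_n - half, mu_n + half, mu_n, mu_g, sigma)
            if c <= max_contam:
                best = (mu_n - half, mu_n + half, n_cap, c)
        return best
    ```

    With well-separated travel-time peaks the widest interval captures nearly all neutrons at negligible contamination; as the peaks approach each other, the cap forces the interval to shrink.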

  16. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  17. Study on loading path optimization of internal high pressure forming process

    NASA Astrophysics Data System (ADS)

    Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng

    2017-09-01

    In the internal high pressure forming process, there is no closed-form formula relating the process parameters to the forming results. This article uses numerical simulation to obtain several sets of input parameters and their corresponding outputs, trains a BP neural network to capture the mapping between them, and combines the individual evaluation parameters by weighted summation into a single formula for evaluating forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality-evaluation formula as the fitness function, and the optimization is carried out over the admissible range of each parameter. The results show that the parameters obtained by the combined BP neural network and particle swarm optimization algorithms meet practical requirements. The method can thus solve the optimization of process parameters in the internal high pressure forming process.
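    As a rough illustration of the optimization stage, a minimal particle swarm optimizer is sketched below. In the paper the fitness function is the quality formula evaluated through the trained BP network; here a simple quadratic stands in for it, and all parameter values are our assumptions:

    ```python
    import random

    def pso(objective, bounds, n_particles=20, iters=100, w=0.6, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer over box-constrained parameters."""
        random.seed(1)
        dim = len(bounds)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                     # personal bests
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                        bounds[d][0]), bounds[d][1])
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val
    ```

    Swapping the stand-in quadratic for a trained surrogate model is exactly the pattern the abstract describes: the expensive simulation is queried only to build the surrogate, and the swarm then searches the surrogate cheaply.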

  18. Advancing hypoxic training in team sports: from intermittent hypoxic training to repeated sprint training in hypoxia

    PubMed Central

    Faiss, Raphaël; Girard, Olivier; Millet, Grégoire P

    2013-01-01

    Over the past two decades, intermittent hypoxic training (IHT), that is, a method where athletes live at or near sea level but train under hypoxic conditions, has gained unprecedented popularity. By adding the stress of hypoxia during ‘aerobic’ or ‘anaerobic’ interval training, it is believed that IHT would potentiate greater performance improvements compared to similar training at sea level. A thorough analysis of studies including IHT, however, leads to strikingly poor benefits for sea-level performance improvement, compared to the same training method performed in normoxia. Despite the positive molecular adaptations observed after various IHT modalities, the characteristics of optimal training stimulus in hypoxia are still unclear and their functional translation in terms of whole-body performance enhancement is minimal. To overcome some of the inherent limitations of IHT (lower training stimulus due to hypoxia), recent studies have successfully investigated a new training method based on the repetition of short (<30 s) ‘all-out’ sprints with incomplete recoveries in hypoxia, the so-called repeated sprint training in hypoxia (RSH). The aims of the present review are therefore threefold: first, to summarise the main mechanisms for interval training and repeated sprint training in normoxia. Second, to critically analyse the results of the studies involving high-intensity exercises performed in hypoxia for sea-level performance enhancement by differentiating IHT and RSH. Third, to discuss the potential mechanisms underpinning the effectiveness of those methods, and their inherent limitations, along with the new research avenues surrounding this topic. PMID:24282207

  19. Neural-net Processed Electronic Holography for Rotating Machines

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2003-01-01

    This report presents the results of an R&D effort to apply neural-net processed electronic holography to NDE of rotors. Electronic holography was used to generate characteristic patterns or mode shapes of vibrating rotors and rotor components. Artificial neural networks were trained to identify damage-induced changes in the characteristic patterns. The development and optimization of a neural-net training method were the most significant contributions of this work, and the training method and its optimization are discussed in detail. A second positive result was the assembly and testing of a fiber-optic holocamera. A major disappointment was the inadequacy of the high-speed-holography hardware selected for this effort, but the use of scaled holograms to match the low effective resolution of an image intensifier was one interesting attempt to compensate. This report also discusses in some detail the physics and environmental requirements for rotor electronic holography. The major conclusions were that neural-net and electronic-holography inspections of stationary components in the laboratory and the field are quite practical and worthy of continuing development, but that electronic holography of moving rotors is still an expensive high-risk endeavor.

  20. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review of recent developments in correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for applying the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost of an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. With only three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied as an optimization step in many composite filters based on linear combinations of training images.

  1. Minimum energy control for a two-compartment neuron to extracellular electric fields

    NASA Astrophysics Data System (ADS)

    Yi, Guo-Sheng; Wang, Jiang; Li, Hui-Yan; Wei, Xi-Le; Deng, Bin

    2016-11-01

    The energy optimization of extracellular electric field (EF) stimulus for a neuron is considered in this paper. We employ the optimal control theory to design a low energy EF input for a reduced two-compartment model. It works by driving the neuron to closely track a prescriptive spike train. A cost function is introduced to balance the contradictory objectives, i.e., tracking errors and EF stimulus energy. By using the calculus of variations, we transform the minimization of cost function to a six-dimensional two-point boundary value problem (BVP). Through solving the obtained BVP in the cases of three fundamental bifurcations, it is shown that the control method is able to provide an optimal EF stimulus of reduced energy for the neuron to effectively track a prescriptive spike train. Further, the feasibility of the adopted method is interpreted from the point of view of the biophysical basis of spike initiation. These investigations are conducive to designing stimulating dose for extracellular neural stimulation, which are also helpful to interpret the effects of extracellular field on neural activity.

  2. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (the Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
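    The key computational trick, obtaining a Hessian-vector product H·v without ever forming H, can be imitated numerically with a finite difference of gradients; RBackprop achieves the same product analytically through the network. The sketch below uses central differences on a toy function and is not the RBackprop recurrence itself:

    ```python
    def grad(f, x, h=1e-5):
        """Central-difference gradient of f at x."""
        g = []
        for i in range(len(x)):
            xp, xm = x[:], x[:]
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2 * h))
        return g

    def hessian_vector(f, x, v, h=1e-4):
        """H v ~= (grad f(x + h v) - grad f(x - h v)) / (2 h): the product of the
        Hessian with an arbitrary vector, without building the full Hessian."""
        xp = [xi + h * vi for xi, vi in zip(x, v)]
        xm = [xi - h * vi for xi, vi in zip(x, v)]
        gp, gm = grad(f, xp), grad(f, xm)
        return [(a - b) / (2 * h) for a, b in zip(gp, gm)]
    ```

    For large networks this O(dim) trick is what makes curvature information affordable, since the full Hessian would be quadratic in the number of weights.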

  3. Empowering Education: A New Model for In-service Training of Nursing Staff

    PubMed Central

    CHAGHARI, MAHMUD; SAFFARI, MOHSEN; EBADI, ABBAS; AMERYOUN, AHMAD

    2017-01-01

    Introduction: In-service training of nurses plays an indispensable role in improving the quality of inpatient care. The need to enhance the effectiveness of in-service training of nurses is an inevitable requirement. This study attempted to design a new optimal model for in-service training of nurses. Methods: This qualitative study was conducted in two stages during 2015-2016. In the first stage, Grounded Theory was adopted to explore the training process of 35 participating nurses. The sampling was initially purposeful and then theoretical, based on emerging concepts. Data were collected through interviews, observation and field notes. The data were analyzed using the Corbin and Strauss method and coded in MAXQDA-10. In the second stage, the findings were developed through Walker and Avant's strategy for theory construction so as to design an optimal model for in-service training of nursing staff. Results: In the first stage, there were five major themes: unsuccessful mandatory education, empowering education, organizational challenges of education, poor educational management, and educational-occupational resiliency. Empowering education was the core variable derived from the research, based on which a grounded theory was proposed. The new empowering education model was composed of self-directed learning and practical learning. There are several strategies to achieve empowering education, including the fostering of searching skills, clinical performance monitoring, motivational factors, participation in design and implementation, and a problem-solving approach. Conclusion: Empowering education is a new model for in-service training of nurses, which matches the training programs with the andragogical needs and learning desires of the staff. Owing to its practical nature, empowering education can facilitate occupational tasks and the achievement of greater mastery of professional skills among nurses. PMID:28180130

  4. Real-time energy-saving metro train rescheduling with primary delay identification

    PubMed Central

    Li, Keping; Schonfeld, Paul

    2018-01-01

    This paper aims to reschedule online metro trains in delay scenarios. A graph representation and a mixed integer programming model are proposed to formulate the optimization problem. The solution approach is a two-stage optimization method. In the first stage, based on a proposed train state graph and system analysis, the primary and flow-on delays are specifically analyzed and identified with a critical path algorithm. For the second stage a hybrid genetic algorithm is designed to optimize the schedule, with the delay identification results as input. Then, based on the infrastructure data of Beijing Subway Line 4 of China, case studies are presented to demonstrate the effectiveness and efficiency of the solution approach. The results show that the algorithm can quickly and accurately identify primary delays among different types of delays. The economic cost of energy consumption and total delay is considerably reduced (by more than 10% in each case). The computation time of the Hybrid-GA is low enough for rescheduling online. Sensitivity analyses further demonstrate that the proposed approach can be used as a decision-making support tool for operators. PMID:29474471
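    The first-stage idea, identifying the primary delay chain as the critical (longest) path through a graph of train events, can be sketched with a standard DAG longest-path routine. The event graph, node names, and durations below are invented for illustration and are not the paper's train state graph:

    ```python
    from collections import defaultdict

    def critical_path(edges, source, sink):
        """Longest (critical) path in a DAG given as {(u, v): duration}."""
        adj, indeg, nodes = defaultdict(list), defaultdict(int), set()
        for (u, v), d in edges.items():
            adj[u].append((v, d))
            indeg[v] += 1
            nodes |= {u, v}
        # Kahn topological order
        order, stack = [], [n for n in nodes if indeg[n] == 0]
        while stack:
            u = stack.pop()
            order.append(u)
            for v, _ in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
        # longest-path relaxation in topological order
        dist = {n: float('-inf') for n in nodes}
        prev = {}
        dist[source] = 0.0
        for u in order:
            for v, d in adj[u]:
                if dist[u] + d > dist[v]:
                    dist[v], prev[v] = dist[u] + d, u
        path, n = [sink], sink
        while n != source:
            n = prev[n]
            path.append(n)
        return list(reversed(path)), dist[sink]
    ```

    Events on the returned path are the ones whose delays propagate (primary delays); events off it only absorb slack, which is the distinction the identification stage feeds into the rescheduling stage.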

  5. Comparison and optimization of machine learning methods for automated classification of circulating tumor cells.

    PubMed

    Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J

    2016-10-01

    Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need of a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
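    For reference, the area under the receiver operating characteristic curve used to compare the algorithms can be computed directly from classifier scores via the Mann-Whitney statistic. A minimal stdlib version (not the authors' pipeline) is:

    ```python
    def auc(scores, labels):
        """Area under the ROC curve as the fraction of positive/negative pairs
        ranked correctly (Mann-Whitney U); ties get half credit."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    An AUC of 1.0 means every true cell outscores every artifact; 0.5 is chance, which is why the metric is a natural yardstick for comparing classifiers across datasets.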

  6. SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy

    PubMed Central

    Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui

    2014-01-01

    Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063

  7. Design of the multicenter standardized supervised exercise training intervention for the claudication: exercise vs endoluminal revascularization (CLEVER) study.

    PubMed

    Bronas, Ulf G; Hirsch, Alan T; Murphy, Timothy; Badenhop, Dalynn; Collins, Tracie C; Ehrman, Jonathan K; Ershow, Abby G; Lewis, Beth; Treat-Jacobson, Diane J; Walsh, M Eileen; Oldenburg, Niki; Regensteiner, Judith G

    2009-11-01

    The CLaudication: Exercise Vs Endoluminal Revascularization (CLEVER) study is the first randomized, controlled, clinical, multicenter trial that is evaluating a supervised exercise program compared with revascularization procedures to treat claudication. In this report, the methods and dissemination techniques of the supervised exercise training intervention are described. A total of 217 participants are being recruited and randomized to one of three arms: (1) optimal medical care; (2) aortoiliac revascularization with stent; or (3) supervised exercise training. Of the enrolled patients, 84 will receive supervised exercise therapy. Supervised exercise will be administered according to a protocol designed by a central CLEVER exercise training committee based on validated methods previously used in single center randomized control trials. The protocol will be implemented at each site by an exercise committee member using training methods developed and standardized by the exercise training committee. The exercise training committee reviews progress and compliance with the protocol of each participant weekly. In conclusion, a multicenter approach to disseminate the supervised exercise training technique and to evaluate its efficacy, safety and cost-effectiveness for patients with claudication due to peripheral arterial disease (PAD) is being evaluated for the first time in CLEVER. The CLEVER study will further establish the role of supervised exercise training in the treatment of claudication resulting from PAD and provide standardized methods for use of supervised exercise training in future PAD clinical trials as well as in clinical practice.

  8. Identifying presence of correlated errors in GRACE monthly harmonic coefficients using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.

    2017-04-01

    A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions, and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be implemented only on coefficient series with correlated errors to avoid losing useful geophysical information. In previous studies, two main methods of implementing the de-correlation filter have been utilized. In the first one, the de-correlation filter is implemented starting from a specific minimum order until the maximum order of the monthly solution examined. In the second one, the de-correlation filter is implemented only on specific coefficient series, the selection of which is based on statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used for the training of the machine learning algorithms. The trained machine learning algorithms are later used to identify correlated errors and provide the probability of a coefficient series to be correlated. Regarding SVMs algorithms, an extensive study is performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training. 
Results show excellent performance of all algorithms, with a classification accuracy of 97%-100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of the trained algorithms to classify independent coefficients. This accuracy is also confirmed by external validation of the trained algorithms using the GLDAS NOAH hydrology model. The proposed method meets the requirement of identifying and de-correlating only coefficients with correlated errors. Also, there is no need to apply statistical testing or other techniques that require prior de-correlation of the harmonic coefficients.
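    One plausible numeric feature of the kind such a classifier might use is the lag-1 autocorrelation of a coefficient series: an alternating "sawtooth" signature typical of correlated errors drives it strongly negative, while a smooth series scores positive. The feature choice and thresholds here are our illustration, not the paper's actual feature set:

    ```python
    def lag1_autocorr(series):
        """Lag-1 sample autocorrelation; strongly negative values flag an
        alternating pattern, positive values a smooth trend."""
        n = len(series)
        mean = sum(series) / n
        var = sum((s - mean) ** 2 for s in series)
        cov = sum((series[i] - mean) * (series[i + 1] - mean)
                  for i in range(n - 1))
        return cov / var if var else 0.0
    ```

    In a supervised setting, several such features would be stacked into a vector per coefficient series and fed to the SVM or neural network for training.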

  9. Effects of Training Attendance on Muscle Strength of Young Men after 11 Weeks of Resistance Training

    PubMed Central

    Gentil, Paulo; Bottaro, Martim

    2013-01-01

    Purpose Training attendance is an important variable for attaining optimal results after a resistance training (RT) program; however, the association of attendance with gains in muscle strength is not well defined. Therefore, the purpose of the present study was to verify whether attendance affects muscle strength gains in healthy young males. Methods Ninety-two young males with no previous RT experience volunteered to participate in the study. RT was performed 2 days a week for 11 weeks. One repetition maximum (1RM) in the bench press and knee extensor peak torque (PT) were measured before and after the training period. After the training period, a two-step cluster analysis was used to classify the participants in accordance with training attendance, resulting in three groups, defined as high (92 to 100%), intermediate (80 to 91%) and low (60 to 79%) training attendance. Results According to the results, there were no significant correlations between strength gains and training attendance; however, when attendance groups were compared, the low training attendance group showed lower increases in 1RM bench press (8.8%) than the other two groups (17.6% and 18.0% for high and intermediate attendance, respectively). Conclusions Although there is not a direct correlation between training attendance and muscle strength gains, it is suggested that a minimum attendance of 80% is necessary to ensure optimal gains in upper body strength. PMID:23802051

  10. Coronary artery segmentation in X-ray angiograms using gabor filters and differential evolution.

    PubMed

    Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Cordova-Fraga, Teodoro; Aviña-Cervantes, Juan Gabriel

    2018-08-01

    Segmentation of coronary arteries in X-ray angiograms represents an essential task for computer-aided diagnosis, since it can help cardiologists in diagnosing and monitoring vascular abnormalities. Because the main disadvantages of X-ray angiograms are nonuniform illumination and weak contrast between blood vessels and the image background, different vessel enhancement methods have been introduced. In this paper, a novel method for blood vessel enhancement, based on Gabor filters tuned using the optimization strategy of differential evolution (DE), is proposed. Because the Gabor filters are governed by three different parameters, the optimal selection of those parameters is highly desirable in order to maximize the vessel detection rate while reducing the computational cost of the training stage. To obtain the optimal set of parameters for the Gabor filters, the area (Az) under the receiver operating characteristic curve is used as the objective function. In the experimental results, the proposed method achieves Az = 0.9388 on a training set of 40 images, and on a test set of 40 images it obtains the highest performance, Az = 0.9538, compared with six state-of-the-art vessel detection methods. Finally, the proposed method achieves an accuracy of 0.9423 for vessel segmentation using the test set. In addition, the experimental results have also shown that the proposed method can be highly suitable for clinical decision support in terms of computational time and vessel segmentation performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
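    A bare-bones version of the DE optimizer (the classic DE/rand/1/bin variant) is sketched below. The paper maximizes Az over three Gabor filter parameters; here a toy sphere function stands in for the (negated) objective, and population size, F, and CR are our assumed defaults:

    ```python
    import random

    def differential_evolution(f, bounds, np_=15, F=0.8, CR=0.9, gens=150):
        """DE/rand/1/bin: mutate with scaled difference vectors, apply binomial
        crossover, and keep the trial vector only if it is no worse."""
        random.seed(2)
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
        fit = [f(x) for x in pop]
        for _ in range(gens):
            for i in range(np_):
                a, b, c = random.sample([j for j in range(np_) if j != i], 3)
                jrand = random.randrange(dim)   # guarantees one mutated component
                trial = []
                for d in range(dim):
                    if random.random() < CR or d == jrand:
                        v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                        v = min(max(v, bounds[d][0]), bounds[d][1])  # clip to box
                    else:
                        v = pop[i][d]
                    trial.append(v)
                ft = f(trial)
                if ft <= fit[i]:                # greedy one-to-one selection
                    pop[i], fit[i] = trial, ft
        best = min(range(np_), key=lambda i: fit[i])
        return pop[best], fit[best]
    ```

    To tune filters as in the abstract, `f` would evaluate the enhancement pipeline with the candidate parameters and return the negative Az on the training images.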

  11. Distributed Wireless Power Transfer With Energy Feedback

    NASA Astrophysics Data System (ADS)

    Lee, Seunghyun; Zhang, Rui

    2017-04-01

    Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.
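    The sequential protocol can be sketched as a coordinate-ascent loop in which each ET in turn tries a discrete phase grid and keeps the phase the receiver's energy feedback reports as best, with no channel state information at the transmitters. The channel values, grid size, and round count below are illustrative assumptions, not the paper's protocol parameters:

    ```python
    import cmath
    import math

    def received_power(phases, channels):
        """|sum_k h_k * exp(j*phi_k)|^2 at the energy receiver."""
        return abs(sum(h * cmath.exp(1j * p)
                       for h, p in zip(channels, phases))) ** 2

    def sequential_phase_training(channels, n_levels=16, rounds=3):
        """Each ET sweeps a discrete phase grid while the others hold still,
        keeping the phase that maximizes the fed-back received power."""
        phases = [0.0] * len(channels)
        grid = [2 * math.pi * k / n_levels for k in range(n_levels)]
        for _ in range(rounds):
            for i in range(len(channels)):
                phases[i] = max(grid, key=lambda p: received_power(
                    phases[:i] + [p] + phases[i + 1:], channels))
        return phases
    ```

    Because each per-ET sweep can only increase the received power, the loop converges toward the coherent (beamforming) optimum, up to the phase quantization of the grid.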

  12. Localization and identification of structural nonlinearities using cascaded optimization and neural networks

    NASA Astrophysics Data System (ADS)

    Koyuncu, A.; Cigeroglu, E.; Özgüven, H. N.

    2017-10-01

    In this study, a new approach is proposed for the identification of structural nonlinearities by employing cascaded optimization and neural networks. A linear finite element model of the system and frequency response functions measured at arbitrary locations of the system are used in this approach. Using the finite element model, a training data set is created, which appropriately spans the possible nonlinear configuration space of the system. A classification neural network trained on these data sets then localizes and determines the types of all nonlinearities associated with the nonlinear degrees of freedom in the system. A new training data set spanning the parametric space associated with the determined nonlinearities is created to facilitate parametric identification. Utilizing this data set, initially, a feed-forward regression neural network is trained, which parametrically identifies the classified nonlinearities. Then, the results obtained are further improved by carrying out an optimization which uses the network-identified values as starting points. Unlike identification methods available in the literature, the proposed approach does not require data collection from the degrees of freedom where nonlinear elements are attached, and furthermore, it is sufficiently accurate even in the presence of measurement noise. The application of the proposed approach is demonstrated on an example system with nonlinear elements and on a real-life experimental setup with a local nonlinearity.

  13. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

    The use of speech-based data in the classification of Parkinson's disease (PD) has been shown to provide an effective, non-invasive mode of classification in recent years. Thus, there has been increased interest in speech pattern analysis methods applicable to parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm is proposed and examined that combines a multi-edit nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied iteratively to select optimal training speech samples, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using a recently deposited public dataset and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the largest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm exhibited higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
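    The instance-selection step can be illustrated with a simple editing loop in the spirit of MENN (closer to Wilson's classic editing than to the exact multi-edit partitioning): repeatedly drop samples whose nearest neighbors vote against their label, keeping only points in regions of high separability. The 1-D features, k, and data below are our illustrative assumptions:

    ```python
    def knn_label(train, x, k=3):
        """Majority label among the k nearest training points (1-D Euclidean)."""
        nearest = sorted(train, key=lambda s: abs(s[0] - x))[:k]
        return 1 if sum(lab for _, lab in nearest) > 0 else -1

    def multi_edit(samples, k=3, max_passes=10):
        """Repeatedly discard samples misclassified by their neighbors
        (leave-one-out), until the retained set is self-consistent."""
        kept = list(samples)
        for _ in range(max_passes):
            survivors = []
            for i, (x, y) in enumerate(kept):
                others = kept[:i] + kept[i + 1:]
                if knn_label(others, x, k) == y:
                    survivors.append((x, y))
            if len(survivors) == len(kept):
                break
            kept = survivors
        return kept
    ```

    Mislabeled or noisy recordings sitting inside the opposite class are pruned, so the downstream ensemble (RF or DNNE) trains on cleaner, more separable data.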

  14. Mathematical models of human paralyzed muscle after long-term training.

    PubMed

    Law, L A Frey; Shields, R K

    2007-01-01

    Spinal cord injury (SCI) results in major musculoskeletal adaptations, including muscle atrophy, faster contractile properties, increased fatigability, and bone loss. The use of functional electrical stimulation (FES) provides a method to prevent paralyzed muscle adaptations in order to sustain force-generating capacity. Mathematical muscle models may be able to predict optimal activation strategies during FES; however, muscle properties further adapt with long-term training. The purpose of this study was to compare the accuracy of three muscle models, one linear and two nonlinear, for predicting paralyzed soleus muscle force after exposure to long-term FES training. Further, we contrasted the findings between the trained and untrained limbs. The three models' parameters were best fit to a single force train in the trained soleus muscle (N=4). Nine additional force trains (test trains) were predicted for each subject using the developed models. Model errors between predicted and experimental force trains were determined, including specific muscle force properties. The mean overall error was greatest for the linear model (15.8%) and least for the nonlinear Hill-Huxley-type model (7.8%). No significant error differences were observed between the trained versus untrained limbs, although model parameter values were significantly altered with training. This study confirmed that nonlinear models most accurately predict both trained and untrained paralyzed muscle force properties. Moreover, the optimized model parameter values were responsive to the relative physiological state of the paralyzed muscle (trained versus untrained). These findings are relevant for the design and control of neuro-prosthetic devices for those with SCI.

  15. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
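    The core idea of a composite filter, a single template built from several training templates, can be sketched in one dimension (images are analogous). The weights and templates below are illustrative; the paper's contribution, choosing the template combination by multi-objective combinatorial search, is elided here.

```python
def correlate(signal, template):
    """Sliding dot-product correlation (1-D for brevity)."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def composite_filter(templates, weights):
    """Composite filter as a weighted sum of training templates."""
    m = len(templates[0])
    return [sum(w * t[j] for w, t in zip(weights, templates)) for j in range(m)]

# Two distorted versions of a triangular target, equally weighted.
t1 = [0.0, 1.0, 2.0, 1.0, 0.0]
t2 = [0.0, 1.0, 1.8, 1.0, 0.0]
h = composite_filter([t1, t2], [0.5, 0.5])

scene = [0.0] * 5 + t2 + [0.0] * 5       # target embedded at offset 5
corr = correlate(scene, h)
peak = corr.index(max(corr))             # correlation peak locates the target
```

The correlation peak lands at the target's true offset even though the scene contains a distorted version of the training shape, which is the distortion tolerance composite filters are built for.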

  16. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  17. Factors That Influence the Rating of Perceived Exertion After Endurance Training.

    PubMed

    Roos, Lilian; Taube, Wolfgang; Tuch, Carolin; Frei, Klaus Michael; Wyss, Thomas

    2018-03-15

    Session rating of perceived exertion (sRPE) is an often-used measure to assess athletes' training load. However, little is known about which factors could optimize the quality of its data collection. The aim of the present study was to investigate the effects of (i) the survey method and (ii) the time point at which sRPE was assessed on the correlation between subjective (sRPE) and objective (heart rate training impulse; TRIMP) assessments of training load. In the first part, 45 well-trained subjects (30 men, 15 women) performed 20 running sessions with a heart rate monitor and reported sRPE 30 minutes after training cessation. For reporting, the subjects were grouped into three survey-method groups (paper-pencil, online questionnaire, and mobile device). In the second part of the study, another 40 athletes (28 men, 12 women) performed 4x5 running sessions with the four time points for reporting sRPE randomly assigned (directly after training cessation, 30 minutes post-exercise, in the evening of the same day, and the next morning directly after waking up). The assessment of sRPE is influenced by time point, survey method, TRIMP, sex, and training type. It is recommended to assess sRPE values via a mobile device or online tool, as the survey method "paper" displayed lower correlations between sRPE and TRIMP. Subjective training load measures are highly individual. When compared at the same relative intensity, lower sRPE values were reported by women, for training types representing slow runs, and for time points with a greater interval between training cessation and sRPE assessment. The assessment method for sRPE should be kept constant for each athlete, and comparisons between athletes or sexes are not recommended.
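    The objective load measure used here, TRIMP, can be computed from heart-rate data. A minimal sketch of one common formulation (Banister's exponential TRIMP) follows; the abstract does not specify which TRIMP variant the authors used, and all numbers in the example are illustrative.

```python
import math

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max, sex="male"):
    """Training impulse per Banister: session duration weighted by the
    fractional heart-rate reserve and a sex-specific exponential factor."""
    hrr = (hr_avg - hr_rest) / (hr_max - hr_rest)   # fractional HR reserve
    a, b = (0.64, 1.92) if sex == "male" else (0.86, 1.67)
    return duration_min * hrr * a * math.exp(b * hrr)

# A 60-minute run at an average HR of 150 bpm (rest 60 bpm, max 190 bpm):
load = banister_trimp(60, 150, 60, 190)
```

The exponential weighting means harder sessions contribute disproportionately more load than their duration alone would suggest.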

  18. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    NASA Astrophysics Data System (ADS)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-controlled FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by including data from multiple grasping tasks in the training of the neuron-to-EMG decoder. Our approach would make it possible for persons with SCI to grasp objects with their own hands, using near-normal motor intent.
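    The decoder-training step, gradient descent comparing EMG predictions to target EMG patterns, can be sketched as fitting a linear map from neural activity to muscle activity. Everything below (a linear decoder, the toy data, the learning rate) is an illustrative stand-in; the paper does not specify the decoder's exact form.

```python
def train_decoder(neural, target_emg, lr=0.05, epochs=500):
    """Fit a linear map W (muscles x units) so that W applied to each neural
    sample approximates the corresponding optimal EMG pattern, using plain
    stochastic gradient descent on squared prediction error."""
    n_muscles, n_units = len(target_emg[0]), len(neural[0])
    W = [[0.0] * n_units for _ in range(n_muscles)]
    for _ in range(epochs):
        for x, y in zip(neural, target_emg):
            pred = [sum(w * xi for w, xi in zip(row, x)) for row in W]
            for m in range(n_muscles):
                err = pred[m] - y[m]
                for u in range(n_units):
                    W[m][u] -= lr * err * x[u]   # gradient step on (pred - y)^2
    return W

# Toy example: two "cortical units" driving one "muscle" with true weights (2, 3).
neural = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
target = [[2.0], [3.0], [5.0]]
W = train_decoder(neural, target)
```

Because the targets are inferred (optimal) EMG patterns rather than recorded muscle activity, the same update rule works without any peripheral signal, which is the point of the approach.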

  19. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables increases.

  20. A conceptual model for worksite intelligent physical exercise training - IPET - intervention for decreasing life style health risk indicators among employees: a randomized controlled trial

    PubMed Central

    2014-01-01

    Background Health promotion at the work site in terms of physical activity has proven positive effects, but optimization of relevant exercise training protocols and implementation for high adherence are still scanty. Methods/Design The aim of this paper is to present a study protocol with a conceptual model for planning the optimal individually tailored physical exercise training for each worker based on an individual health check, existing guidelines, and state-of-the-art sports science training recommendations in the broad categories of cardiorespiratory fitness, muscle strength in specific body parts, and functional training including balance training. The hypotheses of this research are that individually tailored worksite-based intelligent physical exercise training, IPET, among workers with inactive job categories will: 1) improve cardiorespiratory fitness and/or individual health risk indicators, 2) improve muscle strength and decrease musculoskeletal disorders, 3) succeed in regular adherence to worksite and leisure physical activity training, and 4) reduce sickness absence and productivity losses (presenteeism) in office workers. The present RCT study enrolled almost 400 employees with sedentary jobs in the private as well as public sectors. The training interventions last 2 years, with measures at baseline as well as one- and two-year follow-ups. Discussion If proven effective, the intelligent physical exercise training schedule as well as the information for its practical implementation can provide meaningful, scientifically based information for public health policy. Trial Registration ClinicalTrials.gov, number: NCT01366950. PMID:24964869

  1. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization techniques.

    PubMed

    Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan

    2013-06-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
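    The weight-optimization step in this record uses particle swarm optimization. A minimal PSO sketch follows, minimizing a simple quadratic as a stand-in for the forecasting-error surface; the swarm size, inertia, and acceleration coefficients are common textbook values, not the paper's.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-1.0, 1.0)):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia with
    attraction toward both bests."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: squared error, minimized at the zero weight vector.
random.seed(1)
best_w, best_err = pso_minimize(lambda w: sum(x * x for x in w), dim=2)
```

In the paper's setting the objective would instead be the forecasting error of a fuzzy-trend relationship group under a candidate weighting vector.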

  2. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. Nor is there any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon algorithm complexity. PMID:24883382
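    The simplest way an ensemble can outperform any single classifier is plurality voting, sketched below. This is a generic illustration of ensembling, not GMC's specific combination scheme, which the abstract does not describe.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine base classifiers by simple plurality voting: each classifier
    predicts a label for x, and the most common prediction wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy "classifiers" that disagree only on the decision threshold.
clfs = [lambda x: int(x > 0.4),
        lambda x: int(x > 0.5),
        lambda x: int(x > 0.6)]
label = majority_vote(clfs, 0.55)   # two of three vote 1
```

Voting reduces variance: an input misclassified by one threshold is still classified correctly as long as the other two agree.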

  3. A study on optimization of hybrid drive train using Advanced Vehicle Simulator (ADVISOR)

    NASA Astrophysics Data System (ADS)

    Same, Adam; Stipe, Alex; Grossman, David; Park, Jae Wan

    This study investigates the advantages and disadvantages of three hybrid drive train configurations: series, parallel, and "through-the-ground" parallel. Power flow simulations are conducted with the MATLAB/Simulink-based software ADVISOR. These simulations are then applied to the UC Davis SAE Formula Hybrid vehicle. ADVISOR performs simulation calculations for vehicle position using a combined backward/forward method. These simulations are used to study how efficiency and agility are affected by the motor, fuel converter, and hybrid configuration. Three different vehicle models are developed to optimize the drive train of a vehicle for three stages of the SAE Formula Hybrid competition: autocross, endurance, and acceleration. Input cycles are created based on rough estimates of track geometry. The output from these ADVISOR simulations is a series of plots of velocity profile and energy storage State of Charge that provide a good estimate of how the Formula Hybrid vehicle will perform on the given course. The most noticeable discrepancy between the input cycle and the actual velocity profile of the vehicle occurs during deceleration. A weighted ranking system is developed to organize the simulation results and to determine the best drive train configuration for the Formula Hybrid vehicle. Results show that the through-the-ground parallel configuration with front-mounted motors achieves an optimal balance of efficiency, simplicity, and cost. ADVISOR is proven to be a useful tool for vehicle power train design for the SAE Formula Hybrid competition. This vehicle model based on ADVISOR simulation is applicable to various studies concerning performance and efficiency of hybrid drive trains.

  4. Transfer Learning for Class Imbalance Problems with Inadequate Data.

    PubMed

    Al-Stouhi, Samir; Reddy, Chandan K

    2016-07-01

    A fundamental problem in data mining is to effectively build robust classifiers in the presence of skewed data distributions. Class imbalance classifiers are trained specifically for skewed distribution datasets. Existing methods assume an ample supply of training examples as a fundamental prerequisite for constructing an effective classifier. However, when sufficient data are not readily available, the development of a representative classification algorithm becomes even more difficult due to the unequal distribution between classes. We provide a unified framework that can take advantage of auxiliary data using a transfer learning mechanism while simultaneously building a robust classifier to tackle this imbalance issue in the presence of few training samples in a particular target domain of interest. Transfer learning methods use auxiliary data to augment learning when training examples are not sufficient, and in this paper we develop a method that is optimized to simultaneously augment the training data and induce balance into skewed datasets. We propose a novel boosting-based instance-transfer classifier with a label-dependent update mechanism that simultaneously compensates for class imbalance and incorporates samples from an auxiliary domain to improve classification. We provide theoretical and empirical validation of our method and apply it to healthcare and text classification applications.
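    A label-dependent instance-transfer update can be sketched as follows: misclassified target-domain samples are up-weighted (the classic AdaBoost direction), while misclassified auxiliary-domain samples are down-weighted so unhelpful source instances fade out, in the spirit of TrAdaBoost. The constants and data below are illustrative, not the paper's actual update rule.

```python
import math

def update_weights(weights, labels, preds, is_target, error):
    """One label/domain-dependent boosting step over sample weights."""
    error = min(max(error, 1e-10), 0.499)            # keep the log well-defined
    alpha = 0.5 * math.log((1 - error) / error)      # target up-weight factor
    beta = 0.5                                       # auxiliary decay factor
    new_w = []
    for w, y, p, tgt in zip(weights, labels, preds, is_target):
        miss = (y != p)
        if tgt:
            w *= math.exp(alpha if miss else -alpha)  # target: AdaBoost-style
        elif miss:
            w *= beta                                 # auxiliary: decay if wrong
        new_w.append(w)
    total = sum(new_w)
    return [w / total for w in new_w]                # renormalize

# Two target samples (one misclassified) and two auxiliary samples (one misclassified).
w = update_weights([0.25] * 4,
                   labels=[1, 0, 1, 0],
                   preds=[1, 1, 1, 1],
                   is_target=[True, True, False, False],
                   error=0.4)
```

After one step, the misclassified target sample carries more weight than the correct one, while the misclassified auxiliary sample carries less, which is how the mechanism simultaneously fights imbalance and filters the auxiliary domain.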

  5. A primitive study of voxel feature generation by multiple stacked denoising autoencoders for detecting cerebral aneurysms on MRA

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni

    2016-03-01

    The purpose of this study is to evaluate the feasibility of a novel feature-generation method, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows the use of SdA-based features without the burden of hyperparameter tuning. The proposed method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature-generation method was applied to extract the optimal features for candidate classification and required only a range to be set for each SdA hyperparameter. The optimal feature set was selected from a large pool of SdA-based features produced by multiple SdAs, each trained with a different hyperparameter set; the selection was performed with the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation used 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features given only the ranges of some SdA hyperparameters. The CADe process using both previous voxel features and SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results showed that the proposed method was effective in the application for detecting cerebral aneurysms on MRA.

  6. Optimizing substance detection by integration of canine-human team with machine technology

    NASA Astrophysics Data System (ADS)

    Prestrude, Al M.; Ternes, J. W.

    1994-02-01

    There are several promising methods and technologies for substance detection. The oldest of these methods is the trained detector or 'sniffer' dog. We summarize what is known about the capabilities of dogs in substance detection and recommend comparative testing of the canine-human team with current technology to identify the optimum combination of methods to maximize the detection of explosives and contraband.

  7. Competency in health care management: a training model in epidemiologic methods for assessing and improving the quality of clinical practice through evidence-based decision making.

    PubMed

    Hudak, R P; Jacoby, I; Meyer, G S; Potter, A L; Hooper, T I; Krakauer, H

    1997-01-01

    This article describes a training model that focuses on health care management by applying epidemiologic methods to assess and improve the quality of clinical practice. The model's uniqueness is its focus on integrating clinical evidence-based decision making with fundamental principles of resource management to achieve attainable, cost-effective, high-quality health outcomes. The target students are current and prospective clinical and administrative executives who must optimize decision making at the clinical and managerial levels of health care organizations.

  8. User/Tutor Optimal Learning Path in E-Learning Using Comprehensive Neuro-Fuzzy Approach

    ERIC Educational Resources Information Center

    Fazlollahtabar, Hamed; Mahdavi, Iraj

    2009-01-01

    Internet evolution has affected all industrial, commercial, and especially learning activities in the new context of e-learning. Due to cost, time, or flexibility e-learning has been adopted by participators as an alternative training method. By development of computer-based devices and new methods of teaching, e-learning has emerged. The…

  9. Nutrition and training adaptations in aquatic sports.

    PubMed

    Mujika, Iñigo; Stellingwerff, Trent; Tipton, Kevin

    2014-08-01

    The adaptive response to training is determined by the combination of the intensity, volume, and frequency of the training. Various periodized approaches to training are used by aquatic sports athletes to achieve performance peaks. Nutritional support to optimize training adaptations should take periodization into consideration; that is, nutrition should also be periodized to optimally support training and facilitate adaptations. Moreover, other aspects of training (e.g., overload training, tapering and detraining) should be considered when making nutrition recommendations for aquatic athletes. There is evidence, albeit not in aquatic sports, that restricting carbohydrate availability may enhance some training adaptations. More research needs to be performed, particularly in aquatic sports, to determine the optimal strategy for periodizing carbohydrate intake to optimize adaptations. Protein nutrition is an important consideration for optimal training adaptations. Factors other than the total amount of daily protein intake should be considered. For instance, the type of protein, timing and pattern of protein intake and the amount of protein ingested at any one time influence the metabolic response to protein ingestion. Body mass and composition are important for aquatic sport athletes in relation to power-to-mass ratio and for aesthetic reasons. Protein may be particularly important for athletes desiring to maintain muscle while losing body mass. Nutritional supplements, such as beta-alanine and sodium bicarbonate, may have particular usefulness for aquatic athletes' training adaptation.

  10. 4D Cone-beam CT reconstruction using a motion model based on principal component analysis

    PubMed Central

    Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.

    2011-01-01

    Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel by voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
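    The motion-model basis in this record is built by PCA over training displacement vector fields (DVFs). The leading component of such a basis can be sketched with power iteration on the sample covariance; the toy two-component "DVFs" below are illustrative stand-ins for the real high-dimensional fields.

```python
import random

def first_principal_component(vectors, iters=200):
    """Leading eigenvector of the sample covariance via power iteration;
    a minimal stand-in for the PCA step that builds the motion basis."""
    n, dim = len(vectors), len(vectors[0])
    mean = [sum(v[d] for v in vectors) / n for d in range(dim)]
    centered = [[v[d] - mean[d] for d in range(dim)] for v in vectors]
    random.seed(0)
    w = [random.random() for _ in range(dim)]
    for _ in range(iters):
        # Apply the covariance implicitly: C w = X^T (X w) / n
        proj = [sum(c[d] * w[d] for d in range(dim)) for c in centered]
        w = [sum(proj[i] * centered[i][d] for i in range(n)) / n
             for d in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return w

# Toy "DVFs": displacement mostly along the first axis (e.g. cranio-caudal motion).
dvfs = [[3.0, 0.1], [-3.0, -0.1], [2.5, 0.0], [-2.5, 0.05]]
pc1 = first_principal_component(dvfs)
```

In the full algorithm, a handful of such eigenvectors, weighted by a parameterized function of the breathing trace, parameterize every voxel's trajectory.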

  11. Dissemination of psychosocial treatments for anxiety: the importance of taking a broad perspective.

    PubMed

    Taylor, Steven; Abramowitz, Jonathan S

    2013-12-01

    Dissemination methods are used to increase the likelihood that a given treatment or form of clinical practice is implemented by clinicians in the community. Therapist training in treatment methods is an important component of dissemination. Successful dissemination also requires that roadblocks to treatment implementation are identified and circumvented, such as misconceptions that clinicians might hold about a given treatment. The present article offers a commentary on the papers included in the special issue on treatment dissemination for anxiety disorders. Most papers focus on issues concerning the training and education of clinicians with regard to exposure therapy. Training and education is an important step but should be part of a broad, multifaceted approach. Several other important methods of treatment dissemination, including methods developed and implemented with success by the pharmaceutical industry, might also be used to disseminate psychosocial therapies. Optimal dissemination likely requires a broad perspective in which multiple dissemination methods are considered for implementation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Computer-assisted generation of individual training concepts for advanced education in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Werner, Teresa; Weckenmann, Albert

    2010-05-01

    Due to increasing requirements on the accuracy and reproducibility of measurement results together with a rapid development of novel technologies for the execution of measurements, there is a high demand for adequately qualified metrologists. Accordingly, a variety of training offers are provided by machine manufacturers, universities and other institutions. Yet, for an interested learner it is very difficult to define an optimal training schedule for his/her individual demands. Therefore, a computer-based assistance tool is developed to support a demand-responsive scheduling of training. Based on the difference between the actual and intended competence profile and under consideration of amending requirements, an optimally customized qualification concept is derived. For this, available training offers are categorized according to different dimensions: regarding contents of the course, but also intended target groups, focus of the imparted competences, implemented methods of learning and teaching, expected constraints for learning and necessary preknowledge. After completing a course, the achieved competences and the transferability of gathered knowledge are evaluated. Based on the results, recommendations for amending measures of learning are provided. Thus, a customized qualification for manufacturing metrology is facilitated, adapted to the specific needs and constraints of each individual learner.

  13. Prediction of Aerodynamic Coefficient using Genetic Algorithm Optimized Neural Network for Sparse Data

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles on the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data of stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag, and pitching moment are expressed as functions of angle of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic algorithms (GA) are used to optimize a previously built artificial neural network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. The reliability of the ANN using the GA includes prediction of aerodynamic coefficients to an accuracy of 110%. In our problem, we would like to obtain an optimized neural network architecture and a minimum data set. This has been accomplished within 500 training cycles of the neural network. After removing training pairs (outliers), the GA produced much better results. The neural network constructed is a feed-forward neural network with a back-propagation learning mechanism. The main goal has been to free the network design process from constraints of human biases and to discover better forms of neural network architectures. The automation of the network architecture search by genetic algorithms seems to have been the best way to achieve this goal.
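    The architecture search described above can be sketched with a tiny real-valued genetic algorithm: tournament selection, blend crossover, and Gaussian mutation over a single design variable. The objective below (a quadratic "validation error" minimized at 32 hidden units) and all GA constants are illustrative stand-ins for the paper's actual search.

```python
import random

def ga_minimize(fitness, bounds, pop_size=30, gens=40, mut=0.05):
    """Minimal genetic algorithm over one real-valued gene: binary-tournament
    selection, blend (midpoint) crossover, and Gaussian mutation."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = random.choice(pop), random.choice(pop)
            return a if fitness(a) < fitness(b) else b      # tournament
        nxt = []
        for _ in range(pop_size):
            child = 0.5 * (pick() + pick())                 # blend crossover
            child += random.gauss(0.0, mut * (hi - lo))     # mutation
            nxt.append(min(max(child, lo), hi))             # clamp to bounds
        pop = nxt
    return min(pop, key=fitness)

# Toy objective: pretend validation error is minimized at 32 hidden units.
random.seed(2)
best = ga_minimize(lambda h: (h - 32.0) ** 2, bounds=(1.0, 128.0))
```

In practice each fitness evaluation would be a (costly) network training run, which is why bounding the number of training cycles, as the abstract notes, matters.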

  14. High-speed trains subject to abrupt braking

    NASA Astrophysics Data System (ADS)

    Tran, Minh Thi; Ang, Kok Keng; Luong, Van Hai; Dai, Jian

    2016-12-01

    The dynamic response of a high-speed train subject to braking is investigated using the moving element method. Possible sliding of wheels over the rails is accounted for. The train is modelled as a 15-DOF system comprising a car body, two bogies and four wheels interconnected by spring-damping units. The rail is modelled as an Euler-Bernoulli beam resting on a two-parameter elastic damped foundation. The interaction between the moving train and the track-foundation is accounted for through the normal and tangential wheel-rail contact forces. The effects of braking torque, wheel-rail contact condition, initial train speed and severity of railhead roughness on the dynamic response of the high-speed train are investigated. For a given initial train speed and track irregularity, the study revealed that there is an optimal braking torque that results in the smallest braking distance with no occurrence of wheel sliding, representing a good compromise between braking performance and safety.
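
    A heavily simplified illustration of the braking trade-off described above (not the paper's 15-DOF moving-element model): a single wheel decelerates a lumped mass, and the wheel starts to slide once the required braking force exceeds the adhesion limit, so the largest non-sliding torque gives the shortest stop. All parameter values are invented.

```python
# Toy adhesion-limited braking: sweep torque, reject values that would make
# the wheel slide (force > mu*m*g), and pick the shortest stopping distance.
def braking_distance(torque, m=1.0e4, r=0.45, v0=80.0, mu=0.15, g=9.81):
    force = torque / r
    if force > mu * m * g:            # adhesion exceeded -> wheel slides
        return None                    # treated as inadmissible here
    a = force / m
    return v0 ** 2 / (2.0 * a)         # kinematics: v0^2 = 2 a d

torques = [2000.0 * k for k in range(1, 40)]
admissible = [(t, braking_distance(t)) for t in torques
              if braking_distance(t) is not None]
best_torque, best_dist = min(admissible, key=lambda td: td[1])
```

    In the paper the admissibility boundary is far richer (tangential contact law, railhead roughness), but the shape of the optimum is the same: push torque up to, not past, the sliding threshold.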

  15. Incorporating perceptual decision-making training into high-intensity interval training for Australian football umpires.

    PubMed

    Kittel, Aden; Elsworthy, Nathan; Spittle, Michael

    2018-05-30

    Existing methods for developing decision-making skill for Australian football umpires separate the physical and perceptual aspects of their performance. This study aimed to determine the efficacy of incorporating video-based decision-making training during high-intensity interval training sessions, specific for Australian football umpires. 20 amateur Australian football umpires volunteered to participate in a randomised control trial. Participants completed an 8-week training intervention in a conditioning only (CON; n=7), combined video-based training and conditioning (COM; n=7), or separated conditioning and video-based training (SEP; n=6) group. Preliminary and post-testing involved a Yo-Yo Intermittent Recovery Test (Yo-YoIR1), and 10x300m run test with an Australian football specific video-based decision-making task. Overall, changes in decision-making accuracy following the intervention were unclear between groups. SEP was possibly beneficial compared to COM in Yo-YoIR1 performance, whereas CON was likely beneficial compared to COM in 10x300m sprint performance. There was no additional benefit to completing video-based training, whether combined with, or separate to physical training, suggesting that this was not an optimal training method. For video-based training to be an effective decision-making tool, detailed feedback should be incorporated into training. It is recommended that longer conditioning and video-based training interventions be implemented to determine training effectiveness.

  16. Neural decoding with kernel-based metric learning.

    PubMed

    Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C

    2014-06-01

    In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus: exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach in which the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
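
    Centered alignment, the kernel-based dependence measure named above, can be computed directly from two kernel matrices. This sketch uses plain nested lists and the standard double-centering definition; the 3x3 identity kernel in the test is purely illustrative.

```python
# Centered (kernel) alignment between two symmetric kernel matrices:
# A(K, L) = <Kc, Lc>_F / sqrt(<Kc, Kc>_F <Lc, Lc>_F), with Kc = H K H.
def _center(K):
    n = len(K)
    row = [sum(r) / n for r in K]      # row means (= column means: symmetric)
    tot = sum(row) / n                 # grand mean
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def _frob(A, B):
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

def centered_alignment(K, L):
    Kc, Lc = _center(K), _center(L)
    return _frob(Kc, Lc) / (_frob(Kc, Kc) * _frob(Lc, Lc)) ** 0.5
```

    In the paper's usage, one kernel would come from the parametrized multineuron metric and the other from the decoding labels; the metric parameters are then tuned to maximize this alignment.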

  17. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly accurate, but not completely perfect, simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
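
    A classical stand-in for the annealing step can be sketched as follows. The data, weak-classifier outputs, and annealing schedule are all synthetic assumptions; only the structure mirrors the setup described above: binary weights select which weak classifiers enter the strong one, the energy is the training error, and simulated annealing (here replacing the quantum annealer) searches the weight configurations.

```python
import math, random

rng = random.Random(1)
n_weak, n_samples = 6, 40
labels = [rng.choice([-1, 1]) for _ in range(n_samples)]
# synthetic weak classifier i agrees with the label with probability
# 0.5 + 0.08*i, so later classifiers are (by construction) more informative
weak = [[y if rng.random() < 0.5 + 0.08 * i else -y for y in labels]
        for i in range(n_weak)]

def error(w):                          # training error of the weighted vote
    miss = 0
    for s in range(n_samples):
        vote = sum(w[i] * weak[i][s] for i in range(n_weak))
        if vote * labels[s] <= 0:
            miss += 1
    return miss / n_samples

def anneal(steps=2000, T0=1.0):
    w = [rng.choice([0, 1]) for _ in range(n_weak)]
    e = error(w)
    best_w, best_e = list(w), e
    for t in range(steps):
        T = T0 * (1.0 - t / steps) + 1e-3   # linear cooling schedule
        i = rng.randrange(n_weak)
        w[i] ^= 1                            # propose: flip one binary weight
        e2 = error(w)
        if e2 <= e or rng.random() < math.exp((e - e2) / T):
            e = e2                           # accept (always downhill; uphill
            if e < best_e:                   # with Boltzmann probability)
                best_w, best_e = list(w), e
        else:
            w[i] ^= 1                        # reject: undo flip
    return best_w, best_e

best_w, best_e = anneal()
```

    The paper's actual mapping writes this error as a quadratic form in the spins, i.e. an Ising Hamiltonian, which is what the quantum annealer minimizes.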

  18. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly accurate, but not completely perfect, simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  19. A new fuzzy-disturbance observer-enhanced sliding controller for vibration control of a train-car suspension with magneto-rheological dampers

    NASA Astrophysics Data System (ADS)

    Nguyen, Sy Dzung; Choi, Seung-Bok; Nguyen, Quoc Hung

    2018-05-01

    Semi-active train-car suspensions are always impacted negatively by uncertainty and disturbance (UAD). In order to deal with this, we propose a novel optimal fuzzy disturbance observer-enhanced sliding mode controller (FDO-SMC) for magneto-rheological damper (MRD)-based semi-active train-car suspensions subjected to UAD whose variability rate may be high but bounded. The two main parts of the FDO-SMC are an adaptive sliding mode controller (ad-SMC) and an optimal fuzzy disturbance observer (op-FDO). As the first step, the initial structures of the sliding mode controller (SMC) and disturbance observer (DO) are built. Adaptive update laws for the SMC and DO are then set up synchronously via Lyapunov stability analysis. Subsequently, an optimal fuzzy system (op-FS) is designed to fully implement a parameter constraint mechanism so as to guarantee the system stability converging to the desired state even if the UAD variability rate increases in a given range. As a result, both the ad-SMC and op-FDO are formulated. It is shown from the comparative work with existing controllers that the proposed method provides the best vibration control capability with relatively low consumed power.
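
    The sliding-mode core of such a controller (without the fuzzy disturbance observer or the adaptive update laws, which are the paper's actual contribution) might look like the following 1-DOF sketch; all plant parameters and the square-wave disturbance are invented for illustration.

```python
# Minimal sliding mode control of a 1-DOF suspension mass: drive the
# sliding variable s = c*x + v to zero with u = -k*sat(s), despite a
# bounded unknown disturbance d.
def sat(x, width=0.05):                # boundary layer to soften chattering
    return max(-1.0, min(1.0, x / width))

def simulate(k=400.0, c=5.0, m=50.0, dt=1e-3, steps=4000):
    x, v = 0.05, 0.0                    # initial displacement [m] and velocity
    for i in range(steps):
        s = c * x + v                   # sliding variable
        d = 30.0 * (1 if (i // 500) % 2 == 0 else -1)  # square-wave disturbance
        u = -k * sat(s)                 # switching control, bounded gain
        a = (u + d) / m
        x += v * dt                     # explicit Euler integration
        v += a * dt
    return abs(x)

final_disp = simulate()
```

    On the sliding surface the error decays like x' = -c*x regardless of the disturbance, which is the robustness property the FDO-SMC then sharpens by estimating the disturbance explicitly.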

  20. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    NASA Astrophysics Data System (ADS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, in which a weighted representation regularization term is constructed. The regularization term associates the label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods.

  1. Leadership and Teamwork in Trauma and Resuscitation

    PubMed Central

    Ford, Kelsey; Menchine, Michael; Burner, Elizabeth; Arora, Sanjay; Inaba, Kenji; Demetriades, Demetrios; Yersin, Bertrand

    2016-01-01

    Introduction Leadership skills are described by the American College of Surgeons’ Advanced Trauma Life Support (ATLS) course as necessary to provide care for patients during resuscitations. However, leadership is a complex concept, and the tools used to assess the quality of leadership are poorly described, inadequately validated, and infrequently used. Despite its importance, dedicated leadership education is rarely part of physician training programs. The goals of this investigation were the following: 1. Describe how leadership and leadership style affect patient care; 2. Describe how effective leadership is measured; and 3. Describe how to train future physician leaders. Methods We searched the PubMed database using the keywords “leadership” and then either “trauma” or “resuscitation” as title search terms, and an expert in emergency medicine and trauma then identified prospective observational and randomized controlled studies measuring leadership and teamwork quality. Study results were categorized as follows: 1) how leadership affects patient care; 2) which tools are available to measure leadership; and 3) methods to train physicians to become better leaders. Results We included 16 relevant studies in this review. Overall, these studies showed that strong leadership improves processes of care in trauma resuscitation including speed and completion of the primary and secondary surveys. The optimal style and structure of leadership are influenced by patient characteristics and team composition. Directive leadership is most effective when Injury Severity Score (ISS) is high or teams are inexperienced, while empowering leadership is most effective when ISS is low or teams more experienced. Many scales were employed to measure leadership. The Leader Behavior Description Questionnaire (LBDQ) was the only scale used in more than one study. Seven studies described methods for training leaders. 
Leadership training programs included didactic teaching followed by simulations. Although programs differed in length, intensity, and training level of participants, all programs demonstrated improved team performance. Conclusion Despite the relative paucity of literature on leadership in resuscitations, this review found that leadership improves processes of care in trauma and can be enhanced through dedicated training. Future research is needed to validate leadership assessment scales, develop optimal training mechanisms, and demonstrate leadership’s effect on patient-level outcomes. PMID:27625718

  2. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning

    PubMed Central

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent

    2018-01-01

    Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
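
    The conventional GTL baseline that pGTL refines, plain label propagation on a fixed graph, can be sketched on a toy 4-node graph (the adjacency weights and labels below are hypothetical, not study data):

```python
# Label propagation: iterate F <- alpha*S*F + (1-alpha)*Y on a tiny graph
# with two labeled nodes (0: class A, 1: class B) and two unlabeled ones.
W = [[0, 1, 1, 0],       # node 0 (labeled A) - adjacency weights
     [1, 0, 0, 1],       # node 1 (labeled B)
     [1, 0, 0, 1],       # node 2 (unlabeled, tied to node 0)
     [0, 1, 1, 0]]       # node 3 (unlabeled, tied to node 1)
n = 4
S = [[W[i][j] / sum(W[i]) for j in range(n)] for i in range(n)]  # row-normalize
Y = [[1, 0], [0, 1], [0, 0], [0, 0]]   # one-hot labels, zeros for unlabeled
F = [row[:] for row in Y]
alpha = 0.5                             # propagation vs. clamping trade-off
for _ in range(100):                    # contraction: converges geometrically
    F = [[alpha * sum(S[i][k] * F[k][j] for k in range(n))
          + (1 - alpha) * Y[i][j] for j in range(2)] for i in range(n)]
pred = [0 if f[0] >= f[1] else 1 for f in F]
```

    pGTL's point is that this graph, when built only from feature similarities, may propagate labels badly; it therefore re-estimates the graph jointly with the labels instead of fixing it up front.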

  3. Advanced methods in NDE using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal of building new algorithms, or leveraging existing ones, that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits Non-Destructive Evaluation perfectly. The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Afterwards, sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). Algorithms are still trained on an ordinary PC; however, the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can then be done by the machine. 
By integrating the test data of many components along the value chain further optimization including lifetime and durability prediction based on big data becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.

  4. A prediction model of drug-induced ototoxicity developed by an optimal support vector machine (SVM) method.

    PubMed

    Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong

    2014-08-01

    Drug-induced ototoxicity, as a toxic side effect, is an important issue that needs to be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for a large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. In this investigation, we thus established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also built on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, offering prediction accuracies of 85.33% and 83.05% for two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
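
    The DCT-domain cosine similarity used in the local evaluation step can be illustrated as follows; the 1-D DCT-II and the 4-sample signal are illustrative stand-ins for the local image blocks the paper actually compares.

```python
import math

# Compare two signals by the cosine similarity of their (unnormalized)
# 1-D DCT-II coefficient vectors.
def dct2(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

x = [1.0, 2.0, 3.0, 4.0]
sim_self = cosine(dct2(x), dct2(x))                     # identical signals
sim_scaled = cosine(dct2(x), dct2([2 * v for v in x]))  # DCT is linear, so
                                                        # cosine ignores scale
```

    Because the DCT is linear, uniformly rescaling a signal (e.g. a global illumination change on an image block) leaves the cosine similarity unchanged, which is one reason to measure distance in the DCT domain.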

  6. Designing train-speed trajectory with energy efficiency and service quality

    NASA Astrophysics Data System (ADS)

    Jia, Jiannan; Yang, Kai; Yang, Lixing; Gao, Yuan; Li, Shukai

    2018-05-01

    With the development of automatic train operations, optimal trajectory design is significant to the performance of train operations in railway transportation systems. Considering energy efficiency and service quality, this article formulates a bi-objective train-speed trajectory optimization model to minimize simultaneously the energy consumption and travel time in an inter-station section. This article is distinct from previous studies in that more sophisticated train driving strategies characterized by the acceleration/deceleration gear, the cruising speed, and the speed-shift site are specifically considered. For obtaining an optimal train-speed trajectory which has equal satisfactory degree on both objectives, a fuzzy linear programming approach is applied to reformulate the objectives. In addition, a genetic algorithm is developed to solve the proposed train-speed trajectory optimization problem. Finally, a series of numerical experiments based on a real-world instance of Beijing-Tianjin Intercity Railway are implemented to illustrate the practicability of the proposed model as well as the effectiveness of the solution methodology.
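
    The max-min satisfaction idea behind the fuzzy reformulation can be sketched with a crude cruising-speed sweep. Everything numeric here is an assumption: the v^2 energy proxy, the section length, and the candidate-speed grid; the paper's model additionally optimizes acceleration/deceleration gears and speed-shift sites with a genetic algorithm.

```python
# For a flat 20 km inter-station run, score each cruising speed on energy
# (proxy ~ v^2) and travel time, normalize both to [0, 1] satisfactions,
# and pick the speed whose WORST satisfaction is best (max-min).
L = 20000.0                                      # section length [m]
speeds = [float(v) for v in range(40, 101, 5)]   # candidate cruising speeds [m/s]
energy = [v ** 2 for v in speeds]                # crude resistance-work proxy
time_ = [L / v for v in speeds]                  # travel time [s]

def satisfaction(val, lo, hi):                   # 1 at the best value, 0 at worst
    return (hi - val) / (hi - lo)

e_lo, e_hi = min(energy), max(energy)
t_lo, t_hi = min(time_), max(time_)
scores = [min(satisfaction(e, e_lo, e_hi), satisfaction(t, t_lo, t_hi))
          for e, t in zip(energy, time_)]
best_v = speeds[scores.index(max(scores))]
```

    The max-min score is exactly the "equal satisfactory degree on both objectives" criterion: the chosen speed is the one where neither objective's satisfaction lags far behind the other's.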

  7. Two neural network algorithms for designing optimal terminal controllers with open final time

    NASA Technical Reports Server (NTRS)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.

  8. What Works for You? Using Teacher Feedback to Inform Adaptations of Pivotal Response Training for Classroom Use

    PubMed Central

    Stahmer, Aubyn C.; Suhrheinrich, Jessica; Reed, Sarah; Schreibman, Laura

    2012-01-01

    Several evidence-based practices (EBPs) have been identified as efficacious for the education of students with autism spectrum disorders (ASD). However, effectiveness research has rarely been conducted in schools and teachers express skepticism about the clinical utility of EBPs for the classroom. Innovative methods are needed to optimally adapt EBPs for community use. This study utilizes qualitative methods to identify perceived benefits and barriers of classroom implementation of a specific EBP for ASD, Pivotal Response Training (PRT). Teachers' perspectives on the components of PRT, use of PRT as a classroom intervention strategy, and barriers to the use of PRT were identified through guided discussion. Teachers found PRT valuable; however, they also found some components challenging. Specific teacher recommendations for adaptation and resource development are discussed. This process of obtaining qualitative feedback from frontline practitioners provides a generalizable model for researchers to collaborate with teachers to optimally promote EBPs for classroom use. PMID:23209896

  9. What works for you? Using teacher feedback to inform adaptations of pivotal response training for classroom use.

    PubMed

    Stahmer, Aubyn C; Suhrheinrich, Jessica; Reed, Sarah; Schreibman, Laura

    2012-01-01

    Several evidence-based practices (EBPs) have been identified as efficacious for the education of students with autism spectrum disorders (ASD). However, effectiveness research has rarely been conducted in schools and teachers express skepticism about the clinical utility of EBPs for the classroom. Innovative methods are needed to optimally adapt EBPs for community use. This study utilizes qualitative methods to identify perceived benefits and barriers of classroom implementation of a specific EBP for ASD, Pivotal Response Training (PRT). Teachers' perspectives on the components of PRT, use of PRT as a classroom intervention strategy, and barriers to the use of PRT were identified through guided discussion. Teachers found PRT valuable; however, they also found some components challenging. Specific teacher recommendations for adaptation and resource development are discussed. This process of obtaining qualitative feedback from frontline practitioners provides a generalizable model for researchers to collaborate with teachers to optimally promote EBPs for classroom use.

  10. Active learning based segmentation of Crohns disease from abdominal MRI.

    PubMed

    Mahapatra, Dwarikanath; Vos, Franciscus M; Buhmann, Joachim M

    2016-05-01

    This paper proposes a novel active learning (AL) framework, and combines it with semi-supervised learning (SSL), for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data covering different disease severities. Obtaining such data is time-consuming and requires considerable expertise. SSL methods use a few labeled samples, and leverage the information from many unlabeled samples, to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
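
    Of the combined query strategy described above, the classification-uncertainty term alone can be sketched as entropy-based uncertainty sampling; the class probabilities below are hypothetical classifier outputs, and the paper's full strategy additionally weighs in context information and feature similarity.

```python
import math

# Uncertainty sampling: query the unlabeled sample whose predicted class
# distribution has maximum entropy (i.e., where the classifier is least sure).
def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

# hypothetical P(disease), P(healthy) for four unlabeled samples
probs = [[0.95, 0.05], [0.60, 0.40], [0.51, 0.49], [0.10, 0.90]]
query_index = max(range(len(probs)), key=lambda i: entropy(probs[i]))
```

    Here the near-50/50 sample is queried, since labeling it is expected to change the decision boundary the most per unit of annotation effort.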

  11. Neural Network Prediction of New Aircraft Design Coefficients

    NASA Technical Reports Server (NTRS)

    Norgaard, Magnus; Jorgensen, Charles C.; Ross, James C.

    1997-01-01

    This paper discusses a neural network tool for more effective aircraft design evaluations during wind tunnel tests. Using a hybrid neural network optimization method, we have produced fast and reliable predictions of aerodynamic coefficients and found optimal flap settings and flap schedules. For validation, the tool was tested on a 55% scale model of the USAF/NASA Subsonic High Alpha Research Concept aircraft (SHARC). Four different networks were trained to predict the coefficients of lift, drag, pitching moment, and the lift-to-drag ratio (C(sub L), C(sub D), C(sub M), and L/D) from angle of attack and flap settings. The latter network was then used to determine an overall optimal flap setting and to find optimal flap schedules.

  12. Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter.

    PubMed

    Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo

    2015-09-01

    This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
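
    The LM update itself, independent of the paper's FATT Jacobian machinery, can be shown on a one-parameter toy model where the Jacobian is available in closed form (the model y = exp(a*x), the data, and the damping schedule are all illustrative assumptions):

```python
import math

# Fit a in y = exp(a*x) by Levenberg-Marquardt:
# step = (J^T J + lambda)^(-1) J^T r, with lambda grown on rejected steps
# and shrunk on accepted ones.
xs = [0.0, 0.5, 1.0, 1.5]
a_true = 0.8
ys = [math.exp(a_true * x) for x in xs]          # noiseless synthetic data

def lm_fit(a=0.0, lam=1e-2, iters=50):
    for _ in range(iters):
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]   # residuals
        J = [x * math.exp(a * x) for x in xs]               # d(model)/da
        g = sum(Ji * ri for Ji, ri in zip(J, r))            # J^T r
        H = sum(Ji * Ji for Ji in J) + lam                  # J^T J + lambda
        a_new = a + g / H
        if sum((y - math.exp(a_new * x)) ** 2 for x, y in zip(xs, ys)) \
           < sum(ri ** 2 for ri in r):
            a, lam = a_new, lam * 0.5          # accept: trust the model more
        else:
            lam *= 2.0                         # reject: damp toward gradient
    return a

a_hat = lm_fit()
```

    For an RNN the Jacobian is not available in closed form, which is exactly the gap the proposed forward accumulation through time (FATT) algorithm fills before handing J to this same LM step.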

  13. Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)

    2016-01-01

    The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.

  14. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification of a Landsat scene of Tucson, Arizona, than maximum-likelihood classification. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback of the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both classification accuracy and training speed, and one node per class was found to be optimal. Performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
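The 3x3-window trick amounts to replacing each pixel's feature vector with its flattened neighborhood before classification. A sketch of that preprocessing step (the helper below is illustrative, not code from the study):

```python
import numpy as np

def local_windows(img, size=3):
    # Replace each interior pixel by its flattened size x size neighborhood,
    # so the classifier sees local context (implicit texture).
    r = size // 2
    H, W = img.shape
    return np.array([img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                     for i in range(r, H - r)
                     for j in range(r, W - r)])
```

For a multispectral image the same extraction would run per band and the band windows would be concatenated into one feature vector per pixel.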

  15. Continuous-time adaptive critics.

    PubMed

    Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony

    2007-05-01

    A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error build up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume.

  16. Tensor Train Neighborhood Preserving Embedding

    NASA Astrophysics Data System (ADS)

    Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin

    2018-05-01

In this paper, we propose Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace, and present novel approaches to solving the optimization problem in TTNPE. For this embedding, we evaluate the trade-off among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off among classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.

  17. Re-designing a mechanism for higher speed: A case history from textile machinery

    NASA Astrophysics Data System (ADS)

    Douglas, S. S.; Rooney, G. T.

A central issue in the generation of general mechanism design software, the formulation of suitable objective functions, is discussed. There is a consistent drive towards higher speeds in the development of industrial sewing machines; this led to experimental analyses of dynamic performance and to a search for improved design methods. The experimental work highlighted the importance of smoothness of motion at high speed, of component inertias, and of frame structural stiffness. Smoothness is associated with transmission properties and harmonic analysis. These considerations are added to the other design requirements of synchronization, mechanism size, and function. Some of the mechanism trains in overedge sewing machines are shown; all of these trains are designed by digital optimization. The design software combines analysis of the sewing machine mechanisms, formulation of objectives in numerical terms, and suitable mathematical optimization techniques.

  18. Provider training to screen and initiate evidence-based pediatric obesity treatment in routine practice settings: A randomized pilot trial

    PubMed Central

    Kolko, Rachel P.; Kass, Andrea E.; Hayes, Jacqueline F.; Levine, Michele D.; Garbutt, Jane M.; Proctor, Enola K.; Wilfley, Denise E.

    2016-01-01

Introduction This randomized pilot trial evaluated two training modalities for first-line, evidence-based pediatric obesity services (screening and goal-setting) among nursing students. Method Participants (N=63) were randomized to Live Interactive Training (Live) or Web-facilitated Self-study Training (Web). Pre-training, post-training, and one-month follow-up assessments evaluated training feasibility, acceptability, and impact (knowledge, and skill via simulation). Moderator (previous experience) and predictor (content engagement) analyses were conducted. Results Nearly all (98%) participants completed assessments. Both trainings were acceptable, with higher ratings for Live and for participants with previous experience (p's<.05). Knowledge and skill improved from pre-training to post-training and follow-up in both conditions (p's<.001). Live demonstrated greater content engagement (p's<.01). Conclusions The training package was feasible, acceptable, and efficacious among nursing students. Given that Live had higher acceptability and engagement, and that online training offers greater scalability, integrating interactive Live components within Web-based training may optimize outcomes and enhance practitioners' delivery of pediatric obesity services. PMID:26873293

  19. Optimization of Training Sets for Neural-Net Processing of Characteristic Patterns from Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2001-01-01

    Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.

  20. The effectiveness of different methods of toilet training for bowel and bladder control.

    PubMed Central

    Klassen, Terry P; Kiddoo, Darcie; Lang, Mia E; Friesen, Carol; Russell, Kelly; Spooner, Carol; Vandermeer, Ben

    2006-01-01

OBJECTIVES The objectives of this report are to determine the following: (1) the effectiveness of the toilet training methods, (2) which factors modify the effectiveness of toilet training, (3) whether the toilet training methods are risk factors for adverse outcomes, and (4) the optimal toilet training method for achieving bowel and bladder control among patients with special needs. DATA SOURCES MEDLINE, Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid OLDMEDLINE, Cochrane Central Register of Controlled Trials, EMBASE, CINAHL, PsycINFO, ERIC, EBM Reviews, HealthSTAR, AMED, Web of Science, Biological Abstracts, Sociological Abstracts, OCLC ProceedingsFirst, OCLC PapersFirst, Dissertation Abstracts, Index to Theses, National Research Register's Projects Database, and trials registers. REVIEW METHODS Two reviewers assessed the studies for inclusion. Studies were included if they met the following criteria: STUDY DESIGN RCT, CCT, prospective or retrospective cohort, case-control, cross-sectional or case-series; POPULATION infants, toddlers, or children with or without co-morbidities or neuromuscular, cognitive, or behavioral disabilities; INTERVENTION at least one toilet training method; and OUTCOME bladder and/or bowel control, successes, failures, adverse outcomes. Methodological quality was assessed independently by two reviewers. Data were extracted by one reviewer and a second checked for accuracy and completeness. Due to substantial heterogeneity, meta-analysis was not possible. RESULTS Twenty-six observational studies and eight controlled trials were included. Approximately half of the studies examined healthy children while the remaining studies assessed toilet training of mentally or physically handicapped children. For healthy children, the Azrin and Foxx method performed better than the Spock method, while child-oriented combined with negative term avoidance proved better than without. 
For mentally handicapped children, individual training was superior to group methods; relaxation techniques proved more efficacious than standard methods; operant conditioning was better than conventional treatment, and the Azrin and Foxx and a behavior modification method fared better than no training. The child-oriented approach was not assessed among mentally handicapped children. For children with Hirschsprung's disease or anal atresia, a multi-disciplinary behavior treatment was more efficacious than no treatment. CONCLUSIONS Both the Azrin and Foxx method and the child-oriented approach resulted in quick, successful toilet training, but there was limited information about the sustainability of the training. The two methods were not directly compared, thus it is difficult to draw definitive conclusions regarding the superiority of one method over the other. In general, both programs may be used to teach toilet training to healthy children. The Azrin and Foxx method and operant conditioning methods were consistently effective for toilet training mentally handicapped children. Programs that were adapted to physically handicapped children also resulted in successful toilet training. A lack of data precluded conclusions regarding the development of adverse outcomes. PMID:17764212

  1. AUC-Maximized Deep Convolutional Neural Fields for Protein Sequence Labeling.

    PubMed

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2016-09-01

Deep Convolutional Neural Networks (DCNNs) have shown excellent performance in a variety of machine learning tasks. This paper presents Deep Convolutional Neural Fields (DeepCNF), an integration of DCNN with Conditional Random Field (CRF), for sequence labeling with an imbalanced label distribution. The widely-used training methods, such as maximum-likelihood and maximum labelwise accuracy, do not work well on imbalanced data. To handle this, we present a new training algorithm called maximum-AUC for DeepCNF. That is, we train DeepCNF by directly maximizing the empirical Area Under the ROC Curve (AUC), which is an unbiased measurement for imbalanced data. To fulfill this, we formulate AUC in a pairwise ranking framework, approximate it by a polynomial function and then apply a gradient-based procedure to optimize it. Our experimental results confirm that maximum-AUC greatly outperforms the other two training methods on 8-state secondary structure prediction and disorder prediction since their label distributions are highly imbalanced, and also has similar performance to the other two training methods on solvent accessibility prediction, which has three equally-distributed labels. Furthermore, our experimental results show that our AUC-trained DeepCNF models greatly outperform existing popular predictors for these three tasks. The data and software related to this paper are available at https://github.com/realbigws/DeepCNF_AUC.
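The pairwise-ranking formulation can be made concrete with a short sketch. One hedge: the paper approximates the 0/1 ranking indicator with a polynomial, while the toy below uses a sigmoid surrogate and a plain linear scorer, so it illustrates the idea rather than DeepCNF itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_auc_surrogate(X_pos, X_neg, steps=200, lr=0.5):
    # Gradient ascent on a smooth surrogate of the empirical AUC:
    # the mean over all positive-negative pairs of sigmoid(score_pos - score_neg).
    w = np.zeros(X_pos.shape[1])
    for _ in range(steps):
        m = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]   # pairwise margins
        g = sigmoid(m) * (1.0 - sigmoid(m))               # d sigmoid / d margin
        # d margin_ij / dw = x_pos_i - x_neg_j, accumulated over all pairs
        grad = (g.sum(1) @ X_pos - g.sum(0) @ X_neg) / m.size
        w += lr * grad
    return w

def empirical_auc(w, X_pos, X_neg):
    # Fraction of positive-negative pairs ranked correctly by the scores.
    m = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]
    return float((m > 0).mean())
```

Because the surrogate averages over pairs rather than labels, every positive-negative pair contributes equally regardless of how imbalanced the class counts are, which is the point of training on AUC directly.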

  2. AUC-Maximized Deep Convolutional Neural Fields for Protein Sequence Labeling

    PubMed Central

    Wang, Sheng; Sun, Siqi

    2017-01-01

Deep Convolutional Neural Networks (DCNNs) have shown excellent performance in a variety of machine learning tasks. This paper presents Deep Convolutional Neural Fields (DeepCNF), an integration of DCNN with Conditional Random Field (CRF), for sequence labeling with an imbalanced label distribution. The widely-used training methods, such as maximum-likelihood and maximum labelwise accuracy, do not work well on imbalanced data. To handle this, we present a new training algorithm called maximum-AUC for DeepCNF. That is, we train DeepCNF by directly maximizing the empirical Area Under the ROC Curve (AUC), which is an unbiased measurement for imbalanced data. To fulfill this, we formulate AUC in a pairwise ranking framework, approximate it by a polynomial function and then apply a gradient-based procedure to optimize it. Our experimental results confirm that maximum-AUC greatly outperforms the other two training methods on 8-state secondary structure prediction and disorder prediction since their label distributions are highly imbalanced, and also has similar performance to the other two training methods on solvent accessibility prediction, which has three equally-distributed labels. Furthermore, our experimental results show that our AUC-trained DeepCNF models greatly outperform existing popular predictors for these three tasks. The data and software related to this paper are available at https://github.com/realbigws/DeepCNF_AUC. PMID:28884168

  3. A neural network construction method for surrogate modeling of physics-based analysis

    NASA Astrophysics Data System (ADS)

    Sung, Woong Je

In this thesis, existing methodologies for the development of neural networks are surveyed, with careful attention to their approaches to network sizing and structuring. This literature review covers the constructive, pruning, and evolutionary methods, and questions the basic assumption intrinsic to the conventional neural network learning paradigm, which is primarily devoted to optimizing connection weights (or synaptic strengths) for a pre-determined connection structure. The main research hypothesis governing this thesis is that, without breaking the prevailing dichotomy between weights and connectivity during the learning phase, efficient design of a task-specific neural network is hard to achieve: as long as connectivity and weights are searched by separate means, structural optimization of the network requires either repetitive re-training procedures or computationally expensive topological meta-search cycles. The main contribution of this thesis is the design and testing of a novel learning mechanism that efficiently learns not only weight parameters but also connection structure from a given training data set, and the positioning of this mechanism within surrogate modeling practice. A simple and straightforward extension to the conventional error Back-Propagation (BP) algorithm is formulated to enable simultaneous learning of both connectivity and weights of the Generalized Multilayer Perceptron (GMLP) in supervised learning tasks. A particular objective is to achieve a task-specific network with reasonable generalization performance in minimal training time. The dichotomy between architectural design and weight optimization is reconciled by a mechanism that establishes a new connection for a neuron pair whose potential error gradient is higher than that of an existing connection. 
Interpreting the absence of a connection as a zero-weight connection, the potential contribution of any present or absent connection to training-error reduction can readily be evaluated using the BP algorithm. Instead of being broken, connections that contribute less remain frozen at the weight values optimized to that point, and are excluded from further weight optimization until reselected. In this way, selective weight optimization is executed only over a dynamically maintained pool of high-gradient connections. By tracking the rapidly changing weights and concentrating optimization resources on them, the learning process is accelerated without either a significant increase in computational cost or a need for re-training, and the result is a more task-adapted connection structure. Combined with another important criterion for dividing a neuron, which adds a new computational unit to the network, a highly fitted network can be grown from a minimal random structure. This learning strategy belongs to a broader class of variable-connectivity learning schemes, and the devised algorithm has been named Optimal Brain Growth (OBG). The OBG algorithm has been tested on two canonical problems: a regression analysis using the Complicated Interaction Regression Function and classification of the Two-Spiral Problem. A comparative study with conventional Multilayer Perceptrons (MLPs) with single and double hidden layers shows that OBG is less sensitive to random initial conditions and generalizes better with only a minimal increase in computational time. This partially demonstrates that a variable-connectivity learning scheme has great potential to enhance computational efficiency and reduce the effort of selecting a proper network architecture. 
To investigate the applicability of the OBG to more practical surrogate modeling tasks, the geometry-to-pressure mapping of a particular class of airfoils in the transonic flow regime has been sought using both the conventional MLP networks with pre-defined architecture and the OBG-developed networks started from the same initial MLP networks. Considering wide variety in airfoil geometry and diversity of flow conditions distributed over a range of flow Mach numbers and angles of attack, the new method shows a great potential to capture fundamentally nonlinear flow phenomena especially related to the occurrence of shock waves on airfoil surfaces in transonic flow regime. (Abstract shortened by UMI.).
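One reading of the selective-update mechanism, in which absent connections are treated as zero weights, all potential connections are ranked by gradient magnitude, and only the current high-gradient pool is updated, can be sketched on a toy linear layer (illustrative only; the thesis works with the GMLP and full backpropagation):

```python
import numpy as np

def obg_style_step(W, x, y, lr=0.1, pool_size=3):
    # Every entry of W is a potential connection; an absent connection is
    # just a zero weight. Compute the loss gradient for all entries, then
    # update only the pool_size highest-|gradient| ones; the rest stay frozen.
    e = W @ x - y                          # residual of a toy linear layer
    grad = np.outer(e, x)                  # dL/dW for L = 0.5 * ||W x - y||^2
    pool = np.argsort(np.abs(grad).ravel())[-pool_size:]
    mask = np.zeros(W.size, dtype=bool)
    mask[pool] = True
    return W - lr * grad * mask.reshape(W.shape)
```

Starting from an all-zero (fully "absent") weight matrix, repeated steps grow nonzero entries only where the gradient says a connection is worth having, which is the spirit of the variable-connectivity scheme described above.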

  4. Initial Effects of Heavy Vehicle Trafficking on Vegetated Soils

    DTIC Science & Technology

    2012-08-01

ERDC/CRREL TR-12-6, Optimal Allocation of Land for Training and Non-training Uses (OPAL): Initial Effects of Heavy Vehicle Trafficking on Vegetated Soils, August 2012. Testing included the outdoor loam test section. The work was conducted by Nicole Buck and Sally Shoop under the Optimal Allocation of Land for Training and Non-Training Uses (OPAL) Program.

  5. Multi-spectral brain tissue segmentation using automatically trained k-Nearest-Neighbor classification.

    PubMed

    Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J

    2007-08-01

    Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
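The similarity index used for such segmentation comparisons is commonly the Dice overlap; a minimal computation, assuming binary tissue masks:

```python
import numpy as np

def similarity_index(seg_a, seg_b):
    # Dice overlap: 2*|A intersect B| / (|A| + |B|), ranging from 0 to 1.
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

In a three-tissue study like this one, the index would be computed separately for the CSF, GM, and WM masks, giving the three per-tissue values reported.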

  6. The Basic Organizing/Optimizing Training Scheduler (BOOTS): User's Guide. Technical Report 151.

    ERIC Educational Resources Information Center

    Church, Richard L.; Keeler, F. Laurence

    This report provides the step-by-step instructions required for using the Navy's Basic Organizing/Optimizing Training Scheduler (BOOTS) system. BOOTS is a computerized tool designed to aid in the creation of master training schedules for each Navy recruit training command. The system is defined in terms of three major functions: (1) data file…

  7. Optimization of metformin HCl 500 mg sustained release matrix tablets using Artificial Neural Network (ANN) based on Multilayer Perceptrons (MLP) model.

    PubMed

    Mandal, Uttam; Gowda, Veeran; Ghosh, Animesh; Bose, Anirbandeep; Bhaumik, Uttam; Chatterjee, Bappaditya; Pal, Tapan Kumar

    2008-02-01

The aim of the present study was to apply the simultaneous optimization method incorporating Artificial Neural Network (ANN) using Multi-layer Perceptron (MLP) model to the development of a metformin HCl 500 mg sustained release matrix tablets with an optimized in vitro release profile. The amounts of HPMC K15M and PVP K30 at three levels (-1, 0, +1) for each were selected as causal factors. In vitro dissolution time profiles at four different sampling times (1 h, 2 h, 4 h and 8 h) were chosen as output variables. 13 kinds of metformin matrix tablets were prepared according to a 2(3) factorial design (central composite) with five extra center points, and their dissolution tests were performed. Commercially available STATISTICA Neural Network software (Stat Soft, Inc., Tulsa, OK, U.S.A.) was used throughout the study. The training process of MLP was completed until a satisfactory value of root mean square (RMS) error for the test data was obtained using the feed forward back propagation method. The root mean square value for the trained network was 0.000097, which indicated that the optimal MLP model was reached. The optimal tablet formulation based on some predetermined release criteria predicted by MLP was 336 mg of HPMC K15M and 130 mg of PVP K30. Calculated difference (f(1) 2.19) and similarity (f(2) 89.79) factors indicated that there was no difference between predicted and experimentally observed drug release profiles for the optimal formulation. This work illustrates the potential for an artificial neural network with MLP, to assist in development of sustained release dosage forms.
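The difference factor f1 and similarity factor f2 quoted above are the standard dissolution-profile comparison metrics, computed from the reference and test release values at each sampling time:

```python
import numpy as np

def f1_difference(ref, test):
    # f1 = 100 * sum(|R_t - T_t|) / sum(R_t); 0 means identical profiles.
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 100.0 * np.abs(ref - test).sum() / ref.sum()

def f2_similarity(ref, test):
    # f2 = 50 * log10(100 / sqrt(1 + mean squared difference)); 100 means identical.
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))
```

Profiles are conventionally considered similar when f1 is below 15 and f2 is above 50, criteria that the reported values (f1 = 2.19, f2 = 89.79) easily satisfy.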

  8. The effectiveness of different methods of toilet training for bowel and bladder control.

    PubMed

    Klassen, Terry P; Kiddoo, Darcie; Lang, Mia E; Friesen, Carol; Russell, Kelly; Spooner, Carol; Vandermeer, Ben

    2006-12-01

The objectives of this report are to determine the following: (1) the effectiveness of the toilet training methods, (2) which factors modify the effectiveness of toilet training, (3) whether the toilet training methods are risk factors for adverse outcomes, and (4) the optimal toilet training method for achieving bowel and bladder control among patients with special needs. MEDLINE, Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid OLDMEDLINE, Cochrane Central Register of Controlled Trials, EMBASE, CINAHL, PsycINFO, ERIC, EBM Reviews, HealthSTAR, AMED, Web of Science, Biological Abstracts, Sociological Abstracts, OCLC ProceedingsFirst, OCLC PapersFirst, Dissertation Abstracts, Index to Theses, National Research Register's Projects Database, and trials registers. Two reviewers assessed the studies for inclusion. Studies were included if they met the following criteria: RCT, CCT, prospective or retrospective cohort, case-control, cross-sectional or case-series; infants, toddlers, or children with or without co-morbidities or neuromuscular, cognitive, or behavioral disabilities; at least one toilet training method; and bladder and/or bowel control, successes, failures, adverse outcomes. Methodological quality was assessed independently by two reviewers. Data were extracted by one reviewer and a second checked for accuracy and completeness. Due to substantial heterogeneity, meta-analysis was not possible. Twenty-six observational studies and eight controlled trials were included. Approximately half of the studies examined healthy children while the remaining studies assessed toilet training of mentally or physically handicapped children. For healthy children, the Azrin and Foxx method performed better than the Spock method, while child-oriented combined with negative term avoidance proved better than without. 
For mentally handicapped children, individual training was superior to group methods; relaxation techniques proved more efficacious than standard methods; operant conditioning was better than conventional treatment, and the Azrin and Foxx and a behavior modification method fared better than no training. The child-oriented approach was not assessed among mentally handicapped children. For children with Hirschsprung's disease or anal atresia, a multi-disciplinary behavior treatment was more efficacious than no treatment. Both the Azrin and Foxx method and the child-oriented approach resulted in quick, successful toilet training, but there was limited information about the sustainability of the training. The two methods were not directly compared, thus it is difficult to draw definitive conclusions regarding the superiority of one method over the other. In general, both programs may be used to teach toilet training to healthy children. The Azrin and Foxx method and operant conditioning methods were consistently effective for toilet training mentally handicapped children. Programs that were adapted to physically handicapped children also resulted in successful toilet training. A lack of data precluded conclusions regarding the development of adverse outcomes.

  9. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model as additional surrogate model choices. The surrogate model is key here: by replacing the simulation model, it reduces the huge computational burden of the iterations required by the simulation-optimization technique for solving GCSI problems, especially for aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study of the Kriging, SVR, and KELM models is reported, together with an analysis of how parameter optimization and the structure of the training sample dataset influence the approximation accuracy of the surrogate model. The KELM model was the most accurate surrogate model, and its performance improved significantly after parameter optimization. The approximation accuracy of the surrogate model did not always improve with increasing numbers of training samples; using an appropriate number of training samples was critical for improving the performance of the surrogate model while avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work can reasonably predict system responses under given operating conditions, and that replacing the simulation model with a KELM model considerably reduces the computational burden of the simulation-optimization process while maintaining high accuracy.
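A KELM surrogate is attractive partly because training reduces to a single linear solve. A minimal sketch with an RBF kernel (hypothetical parameter values, not the study's configuration):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of row vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELMSurrogate:
    # Kernel extreme learning machine in its common closed form:
    # output weights beta = (K + I/C)^-1 y, with K the training Gram
    # matrix and C a regularization constant.
    def __init__(self, gamma=1.0, C=1e3):
        self.gamma, self.C = gamma, C

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(K)) / self.C,
                                    np.asarray(y, float))
        return self

    def predict(self, Xq):
        return rbf_kernel(np.asarray(Xq, float), self.X, self.gamma) @ self.beta
```

In a simulation-optimization loop, fit would be called once on (input, simulated output) pairs and predict would stand in for the simulation model inside the optimizer's iterations.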

  10. Removing impulse bursts from images by training-based soft morphological filtering

    NASA Astrophysics Data System (ADS)

    Koivisto, Pertti T.; Astola, Jaakko T.; Lukin, Vladimir V.; Melnik, Vladimir P.; Tsymbal, Oleg V.

    2001-08-01

The characteristics of impulse bursts in radar images are analyzed and a model for this noise is proposed. The model also takes into consideration the multiplicative noise present in radar images. As a case study, soft morphological filters utilizing a training-based optimization scheme are used for the noise removal. Different approaches for the training are discussed. It is shown that the methods used can provide an effective removal of impulse bursts. At the same time the multiplicative noise in images is also suppressed, together with good edge and detail preservation. Numerical simulation results as well as examples of real radar images are presented.

  11. Vibrotactile sensory substitution for object manipulation: amplitude versus pulse train frequency modulation.

    PubMed

    Stepp, Cara E; Matsuoka, Yoky

    2012-01-01

    Incorporating sensory feedback with prosthetic devices is now possible, but the optimal methods of providing such feedback are still unknown. The relative utility of amplitude and pulse train frequency modulated stimulation paradigms for providing vibrotactile feedback for object manipulation was assessed in 10 participants. The two approaches were studied during virtual object manipulation using a robotic interface as a function of presentation order and a simultaneous cognitive load. Despite the potential pragmatic benefits associated with pulse train frequency modulated vibrotactile stimulation, comparison of the approach with amplitude modulation indicates that amplitude modulation vibrotactile stimulation provides superior feedback for object manipulation.

  12. Safety and improvement of movement function after stroke with atomoxetine: A pilot randomized trial

    PubMed Central

    Ward, Andrea; Carrico, Cheryl; Powell, Elizabeth; Westgate, Philip M.; Nichols, Laurie; Fleischer, Anne; Sawaki, Lumy

    2016-01-01

    Background: Intensive, task-oriented motor training has been associated with neuroplastic reorganization and improved upper extremity movement function after stroke. However, to optimize such training for people with moderate-to-severe movement impairment, pharmacological modulation of neuroplasticity may be needed as an adjuvant intervention. Objective: Evaluate safety, as well as improvement in movement function, associated with motor training paired with a drug to upregulate neuroplasticity after stroke. Methods: In this double-blind, randomized, placebo-controlled study, 12 subjects with chronic stroke received either atomoxetine or placebo paired with motor training. Safety was assessed using vital signs. Upper extremity movement function was assessed using Fugl-Meyer Assessment, Wolf Motor Function Test, and Action Research Arm Test at baseline, post-intervention, and 1-month follow-up. Results: No significant between-groups differences were found in mean heart rate (95% CI, –12.4–22.6; p = 0.23), mean systolic blood pressure (95% CI, –1.7–29.6; p = 0.21), or mean diastolic blood pressure (95% CI, –10.4–13.3; p = 0.08). A statistically significant between-groups difference on Fugl-Meyer at post-intervention favored the atomoxetine group (95% CI, 1.6–12.7; p = 0.016). Conclusion: Atomoxetine combined with motor training appears safe and may optimize motor training outcomes after stroke. PMID:27858723

  13. Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization.

    PubMed

    Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford

    2018-04-01

    Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
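
    The greedy algorithm that underlies most submodular selection tools can be illustrated compactly. The sketch below is not the authors' implementation; the toy similarity matrix and all names are illustrative. It greedily maximizes a facility-location objective, a classic monotone submodular function for representative-subset selection:

```python
import numpy as np

def facility_location_greedy(sim, k):
    """Greedily pick k representatives maximizing the facility-location
    objective f(S) = sum_i max_{j in S} sim[i, j], which is monotone
    submodular, so the greedy set is within 1 - 1/e of optimal."""
    n = sim.shape[0]
    selected = []
    best_cover = np.zeros(n)          # current max similarity to the set
    for _ in range(k):
        # Marginal gain of adding each candidate column j.
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf     # never re-pick a member
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

# Toy similarity matrix: two tight clusters, {0, 1, 2} and {3, 4}.
sim = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.1],
    [0.9, 1.0, 0.8, 0.1, 0.1],
    [0.8, 0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.9],
    [0.1, 0.1, 0.1, 0.9, 1.0],
])
reps = facility_location_greedy(sim, 2)   # one representative per cluster
```

    Because the objective is monotone submodular, the greedy solution carries the polynomial-time approximation guarantee the abstract alludes to.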

  14. Improved artificial neural networks in prediction of malignancy of lesions in contrast-enhanced MR-mammography.

    PubMed

    Vomweg, T W; Buscema, M; Kauczor, H U; Teifke, A; Intraligi, M; Terzi, S; Heussel, C P; Achenbach, T; Rieker, O; Mayer, D; Thelen, M

    2003-09-01

    The aim of this study was to evaluate the capability of improved artificial neural networks (ANN) and additional novel training methods in distinguishing between benign and malignant breast lesions in contrast-enhanced magnetic resonance-mammography (MRM). A total of 604 histologically proven cases of contrast-enhanced lesions of the female breast at MRI were analyzed. Morphological, dynamic and clinical parameters were collected and stored in a database. The data set was divided into several groups using random or experimental methods [Training & Testing (T&T) algorithm] to train and test different ANNs. An additional novel computer program for input variable selection was applied. Sensitivity and specificity were calculated and compared with a statistical method and an expert radiologist. After optimization of the distribution of cases among the training and testing sets by the T&T algorithm and the reduction of input variables by the Input Selection procedure a highly sophisticated ANN achieved a sensitivity of 93.6% and a specificity of 91.9% in predicting malignancy of lesions within an independent prediction sample set. The best statistical method reached a sensitivity of 90.5% and a specificity of 68.9%. An expert radiologist performed better than the statistical method but worse than the ANN (sensitivity 92.1%, specificity 85.6%). Features extracted out of dynamic contrast-enhanced MRM and additional clinical data can be successfully analyzed by advanced ANNs. The quality of the resulting network strongly depends on the training methods, which are improved by the use of novel training tools. The best results of an improved ANN outperform expert radiologists.

  15. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm with respect to optimality. PMID:27436998

  16. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    PubMed Central

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated with one another. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using the filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. Then, the trained dictionary is used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628

  17. Assessment of COTS IR image simulation tools for ATR development

    NASA Astrophysics Data System (ADS)

    Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal

    2005-05-01

    Following the tendency of increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence, e.g., ATR, for aiding them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase the mission efficiency. The EADS ATR approach can be adapted for imagery of visual, infrared and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers, they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground truth information, i.e., scenario conditions such as the class type and position of targets, is necessary for the optimal adaptation of the ATR method. In summer 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) of Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation with simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed. After that, the most promising tool was benchmarked according to several criteria, e.g., thermal emission model, sensor model, target model, and non-radiometric image features, resulting in a recommendation. The synthetic image data used for the investigation were generated with the recommended tool. Within the scope of this study, ATR performance on IR imagery using classifiers trained on real, synthetic and mixed image sets was evaluated. The performance of the adapted classifiers is assessed using recorded IR imagery with known ground truth, and recommendations are given for the use of COTS IR image simulation tools for ATR development.

  18. Using non-invasive brain stimulation to augment motor training-induced plasticity

    PubMed Central

    Bolognini, Nadia; Pascual-Leone, Alvaro; Fregni, Felipe

    2009-01-01

    Therapies for motor recovery after stroke or traumatic brain injury are still not satisfactory. To date, the best approach seems to be intensive physical therapy. However, the results are limited and functional gains are often minimal. The goal of motor training is to minimize functional disability and optimize functional motor recovery. This is thought to be achieved by modulation of plastic changes in the brain. Therefore, adjunct interventions that can augment the response of the motor system to behavioural training might be useful to enhance therapy-induced recovery in neurological populations. In this context, non-invasive brain stimulation appears to be an interesting option as an add-on intervention to standard physical therapies. Two non-invasive methods of inducing electrical currents into the brain have proved promising for inducing long-lasting plastic changes in motor systems: transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS). These techniques represent powerful methods for priming cortical excitability for a subsequent motor task, demand, or stimulation. Thus, their combined use can optimize the plastic changes induced by motor practice, leading to greater and longer-lasting clinical gains in rehabilitation. In this review we discuss how these techniques can enhance the effects of a behavioural intervention, and the clinical evidence to date. PMID:19292910

  19. Validity of the Talk Test for exercise prescription after myocardial revascularization.

    PubMed

    Zanettini, Renzo; Centeleghe, Paola; Franzelli, Cristina; Mori, Ileana; Benna, Stefania; Penati, Chiara; Sorlini, Nadia

    2013-04-01

    For exercise prescription, rating of perceived exertion is the subjective tool most frequently used in addition to methods based on percentages of peak exercise variables. The aim of this study was to validate a widely used subjective method, the Talk Test (TT), for optimizing training intensity in patients with recent myocardial revascularization. Fifty patients with recent myocardial revascularization (17 by coronary artery bypass grafting and 33 by percutaneous coronary intervention) were enrolled in a cardiac rehabilitation programme. Each patient underwent three repetitions of the TT during three different exercise sessions to evaluate the within-patient and between-operators reliability in assessing the workload (WL) at TT thresholds. These parameters were then compared with the data of a final cardiopulmonary exercise test, and the WL range between the individual aerobic threshold (AeT) and anaerobic threshold (AnT) was considered the optimal training zone. The within-patient and between-operators reliability in assessing TT thresholds was satisfactory. No significant differences were found between patients' and physiotherapists' evaluations of WL at different TT thresholds. WL at Last TT+ was between AeT and AnT in 88% of patients and slightly

  20. Effectiveness of Training Sessions on a Measure of Optimism and Pessimism Concepts among the Kindergarten Children in the District of Al-Shobak in Jordan

    ERIC Educational Resources Information Center

    Al-Mohtadi, Reham Mohammad; ALdarab'h, Intisar Turki; Gasaymeh, Al-Mothana Moustafa

    2015-01-01

    The current study aimed to examine the effects of training sessions on children's levels of optimism versus pessimism among kindergarten children in the district of Shobak in Jordan. The study sample consisted of 21 children aged 5 to 6 years. A training program was applied. The level of optimism and pessimism…

  1. Optimizing Web-Based Instruction: A Case Study Using Poultry Processing Unit Operations

    ERIC Educational Resources Information Center

    O' Bryan, Corliss A.; Crandall, Philip G.; Shores-Ellis, Katrina; Johnson, Donald M.; Ricke, Steven C.; Marcy, John

    2009-01-01

    Food companies and supporting industries need inexpensive, revisable training methods for large numbers of hourly employees due to continuing improvements in Hazard Analysis Critical Control Point (HACCP) programs, new processing equipment, and high employee turnover. HACCP-based food safety programs have demonstrated their value by reducing the…

  2. OPAL Land Condition Model

    DTIC Science & Technology

    2014-08-01

    ERDC/CERL SR-14-7, August 2014: Optimal Allocation of Land for Training and Non-training Uses, OPAL Land Condition Model. Daniel Koch, Scott Tweddale... programmer information supporting the Optimal Programming of Army Lands (OPAL) model, which was designed for use by trainers, Integrated Training

  3. Training a whole-book LSTM-based recognizer with an optimal training set

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2018-04-01

    Despite recent progress in OCR technologies, whole-book recognition is still a challenging task, particularly for old and historical books, where unknown typefaces or low paper and print quality add to the challenge. Therefore, pre-trained recognizers and generic methods do not usually perform up to required standards, and performance typically degrades for larger-scale recognition tasks, such as an entire book. Such reportedly low-error-rate methods turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short-Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform any identical network trained on randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively with an identical network trained on a set of 60 randomly selected pages of the book.
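
    The training-set construction step, clustering sub-words and keeping only one real sample per cluster center, can be sketched in a few lines. This is a toy illustration, not the authors' pipeline: the 2-D "feature vectors", the cluster count, and the deterministic initialization are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def representative_subset(X, init_idx, iters=50):
    """Plain k-means seeded at init_idx; returns the index of the real
    sample nearest each final centroid, i.e., the compact training set."""
    centroids = X[init_idx].astype(float).copy()
    for _ in range(iters):
        d2 = ((X[:, None] - centroids[None]) ** 2).sum(-1)   # (n, k)
        labels = d2.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    d2 = ((X[:, None] - centroids[None]) ** 2).sum(-1)
    return d2.argmin(axis=0)          # nearest real sample per centroid

# Toy 'sub-word features': 200 samples scattered around 4 prototypes.
prototypes = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
X = np.vstack([p + 0.3 * rng.standard_normal((50, 2)) for p in prototypes])
train_idx = representative_subset(X, init_idx=[0, 50, 100, 150])
train_set = X[train_idx]              # 4 representatives instead of 200
```

    The point mirrors the abstract: a handful of cluster centers can stand in for a much larger random sample of the data.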

  4. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method

    PubMed Central

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-01-01

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206

  5. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method.

    PubMed

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-06-08

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds.

  6. Development of a Standardized Cranial Phantom for Training and Optimization of Functional Stereotactic Operations.

    PubMed

    Krüger, Marie T; Coenen, Volker A; Egger, Karl; Shah, Mukesch; Reinacher, Peter C

    2018-06-13

    In recent years, simulations based on phantom models have become increasingly popular in the medical field. In the field of functional and stereotactic neurosurgery, a cranial phantom would be useful for training operative techniques, such as stereo-electroencephalography (SEEG), for establishing new methods, and for developing and modifying radiological techniques. In this study, we describe the construction of a cranial phantom, demonstrate its use in stereotactic and functional neurosurgery, and show its applicability with different radiological modalities. We prepared a plaster skull filled with agar. A complete operation for deep brain stimulation (DBS) was simulated using directional leads. Moreover, a complete SEEG operation including planning, implantation of the electrodes, and intraoperative and postoperative imaging was simulated. An optimally customized cranial phantom is filled with 10% agar. At 7°C, it can be stored for approximately 4 months. DBS and SEEG procedures could be realistically simulated. Lead artifacts can be studied in CT, X-ray, rotational fluoroscopy, and MRI. This cranial phantom is a simple and effective model for simulating functional and stereotactic neurosurgical operations. This might be useful for teaching and training of neurosurgeons, for establishing operations in a new center, and for optimization of radiological examinations. © 2018 S. Karger AG, Basel.

  7. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was proposed to predict pure component concentrations, because calibration methods, such as partial least squares, require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which does not hold for non-ideal mixtures. We propose a novel method that realizes prediction of pure component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes the spectral change into account as a virtual spectrum x_nonlin,i. Two case studies confirmed that the predictive accuracy of IOT-VIS was the highest among existing IOT methods.

  8. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

    Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
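
    The core idea, global-best PSO plus an isotropic Gaussian mutation that occasionally kicks particles out of local minima, can be sketched as follows. This is a simplified stand-in, not the authors' C++ implementation: the rotation-invariant formulation and the ReaxFF objective are replaced by a plain velocity update and the Rastrigin benchmark, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def rastrigin(z):
    # Multimodal benchmark; global minimum value 0 at the origin.
    return 10 * len(z) + float(np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z)))

def pso_gaussian(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 p_mut=0.1, sigma=0.3, lo=-5.12, hi=5.12):
    """Global-best PSO with an isotropic Gaussian mutation applied to a
    random fraction of particles each iteration."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        # Isotropic Gaussian mutation of a random subset of particles.
        mask = rng.random(n) < p_mut
        x[mask] += sigma * rng.standard_normal((int(mask.sum()), dim))
        x = np.clip(x, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best_x, best_f = pso_gaussian(rastrigin)
```

    On multimodal functions like Rastrigin, the mutation step plays the role the abstract describes: it preserves the swarm diversity that the plain velocity update tends to lose.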

  9. False alarm reduction by the And-ing of multiple multivariate Gaussian classifiers

    NASA Astrophysics Data System (ADS)

    Dobeck, Gerald J.; Cobb, J. Tory

    2003-09-01

    The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. This paper describes a method for training several multivariate Gaussian classifiers such that their And-ing dramatically reduces false alarms while maintaining a high probability of classification. This training approach is referred to as the Focused-Training method. This work extends our 2001-2002 work where the Focused-Training method was used with three other types of classifiers: the Attractor-based K-Nearest Neighbor Neural Network (a type of radial-basis, probabilistic neural network), the Optimal Discrimination Filter Classifier (based on linear discrimination theory), and the Quadratic Penalty Function Support Vector Machine (QPFSVM). Although our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to a wide range of pattern recognition and automatic target recognition (ATR) problems.
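
    The And-ing principle itself is easy to demonstrate. The sketch below is not the Focused-Training method; it is a generic illustration on synthetic data, with likelihood-ratio tests from fitted multivariate Gaussians standing in for the paper's classifiers, and all data and thresholds assumed for the example. Because the And rule only fires when every classifier fires, its false alarm rate can never exceed that of any single member.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_gaussian(X):
    """Fit a multivariate Gaussian and return a log-likelihood function."""
    mu = X.mean(axis=0)
    cov = np.cov(X.T)
    inv = np.linalg.inv(cov)
    logdet = np.linalg.slogdet(cov)[1]
    def loglik(z):
        d = z - mu
        return -0.5 * (d @ inv @ d + logdet)
    return loglik

# Synthetic 2-D features: targets near (2, 2), clutter near the origin.
targets = 0.5 * rng.standard_normal((200, 2)) + 2.0
clutter = rng.standard_normal((200, 2))

# Two Gaussian classifiers trained on different halves of the data
# (purely for illustration; the paper trains them on focused subsets).
clf = [(fit_gaussian(targets[i::2]), fit_gaussian(clutter[i::2]))
       for i in (0, 1)]

def detects(z, tgt_ll, clu_ll, thresh=0.0):
    return tgt_ll(z) - clu_ll(z) > thresh   # log-likelihood-ratio test

def and_rule(z):
    return all(detects(z, t, c) for t, c in clf)
```

    Evaluating `and_rule` over the clutter samples shows the false alarm rate drop relative to a single classifier, while detection on the target samples stays high, which is the trade-off the paper engineers deliberately.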

  10. Intelligent design optimization of a shape-memory-alloy-actuated reconfigurable wing

    NASA Astrophysics Data System (ADS)

    Lagoudas, Dimitris C.; Strelec, Justin K.; Yen, John; Khan, Mohammad A.

    2000-06-01

    The unique thermal and mechanical properties offered by shape memory alloys (SMAs) present exciting possibilities in the field of aerospace engineering. When properly trained, SMA wires act as linear actuators by contracting when heated and returning to their original shape when cooled. It has been shown experimentally that the overall shape of an airfoil can be altered by activating several attached SMA wire actuators. This shape-change can effectively increase the efficiency of a wing in flight at several different flow regimes. To determine the necessary placement of these wire actuators within the wing, an optimization method that incorporates a fully-coupled structural, thermal, and aerodynamic analysis has been utilized. Due to the complexity of the fully-coupled analysis, intelligent optimization methods such as genetic algorithms have been used to efficiently converge to an optimal solution. The genetic algorithm used in this case is a hybrid version with global search and optimization capabilities augmented by the simplex method as a local search technique. For the reconfigurable wing, each chromosome represents a realizable airfoil configuration and its genes are the SMA actuators, described by their location and maximum transformation strain. The genetic algorithm has been used to optimize this design problem to maximize the lift-to-drag ratio for a reconfigured airfoil shape.
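
    A real-coded genetic algorithm of the kind described can be sketched compactly. This is only an illustrative skeleton, not the authors' optimizer: the fully-coupled structural-thermal-aerodynamic analysis is replaced by a toy objective, the simplex local search is omitted, and all GA settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_optimize(f, dim=2, pop=40, gens=100, lo=-2.0, hi=2.0,
                     p_mut=0.2, elite=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (minimization)."""
    x = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fit = np.array([f(ind) for ind in x])

        def tournament():
            i, j = rng.integers(pop, size=2)
            return x[i] if fit[i] < fit[j] else x[j]

        order = np.argsort(fit)
        new = [x[i].copy() for i in order[:elite]]      # elitism
        while len(new) < pop:
            alpha = rng.random(dim)                     # blend crossover
            child = alpha * tournament() + (1 - alpha) * tournament()
            if rng.random() < p_mut:                    # Gaussian mutation
                child = child + 0.1 * rng.standard_normal(dim)
            new.append(np.clip(child, lo, hi))
        x = np.array(new)
    fit = np.array([f(ind) for ind in x])
    return x[fit.argmin()], float(fit.min())

# Toy stand-in for the negated lift-to-drag objective; minimum at (1, -1).
obj = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 1.0) ** 2
best, val = genetic_optimize(obj)
```

    In the paper's hybrid scheme, a Nelder-Mead simplex search would refine promising individuals between generations; here the global GA loop alone suffices to show the structure.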

  11. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under an increasing-monotonicity condition. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
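
    The penalty-function idea, one of the approaches the abstract compares, can be illustrated with a toy fit. This sketch is an assumption-laden illustration, not the paper's code: a tiny tanh network is fit to noisy data with an added penalty on any decrease of the output over a grid, and a crude random search stands in for gradient-based training.

```python
import numpy as np

rng = np.random.default_rng(7)

def net(params, x):
    """Tiny 1-5-1 tanh network evaluated on a vector of inputs."""
    W1, b1, W2, b2 = params
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

def loss(params, x, y, lam=10.0):
    """MSE plus a penalty on any decrease of the output over a fine grid,
    encoding the increasing-monotonicity prior (penalty-function method)."""
    grid = np.linspace(x.min(), x.max(), 100)
    out = net(params, grid)
    violation = np.maximum(0.0, out[:-1] - out[1:]).sum()
    return np.mean((net(params, x) - y) ** 2) + lam * violation

# Noisy samples of an increasing curve (a stand-in for boiling point data).
x = np.linspace(0.0, 1.0, 20)
y = x ** 2 + 0.05 * rng.standard_normal(20)

params = [0.5 * rng.standard_normal(5), np.zeros(5),
          0.5 * rng.standard_normal(5), np.zeros(1)]
best = loss(params, x, y)
for _ in range(3000):        # crude random-search training
    trial = [p + 0.05 * rng.standard_normal(p.shape) for p in params]
    t = loss(trial, x, y)
    if t < best:
        params, best = trial, t
```

    Because the penalty term dominates whenever the fit decreases anywhere on the grid, the trained network ends up (approximately) monotone even though the noisy samples are not.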

  12. Clinical simulation training improves the clinical performance of Chinese medical students

    PubMed Central

    Zhang, Ming-ya; Cheng, Xin; Xu, An-ding; Luo, Liang-ping; Yang, Xuesong

    2015-01-01

    Background Modern medical education promotes medical students’ clinical operating capacity rather than the mastery of theoretical knowledge. To accomplish this objective, clinical skill training using various simulations was introduced into medical education to cultivate creativity and develop the practical ability of students. However, quantitative analysis of the efficiency of clinical skill training with simulations is lacking. Methods In the present study, we compared the mean scores of medical students (Jinan University) who graduated in 2013 and 2014 on 16 stations between traditional training (control) and simulative training groups. In addition, in a clinical skill competition, the objective structured clinical examination (OSCE) scores of participating medical students trained using traditional and simulative training were compared. The data were statistically analyzed and qualitatively described. Results The results revealed that simulative training could significantly enhance the graduate score of medical students compared with the control. The OSCE scores of participating medical students in the clinical skill competition, trained using simulations, were dramatically higher than those of students trained through traditional methods, and we also observed that the OSCE marks were significantly increased for the same participant after simulative training for the clinical skill competition. Conclusions Taken together, these data indicate that clinical skill training with a variety of simulations could substantially promote the clinical performance of medical students and optimize the resources used for medical education, although a precise analysis of each specialization is needed in the future. PMID:26478142

  13. Comparative Analysis of Neural Network Training Methods in Real-time Radiotherapy.

    PubMed

    Nouri, S; Hosseini Pooya, S M; Soltani Nabipour, J

    2017-03-01

    The motions of the body and tumor in regions such as the chest during radiotherapy treatments are a major concern for protecting normal tissues against high doses. By using the real-time radiotherapy technique, it is possible to increase the accuracy of the dose delivered to the tumor region by tracing markers on the patient's body. This study evaluates the accuracy of several artificial intelligence methods, including a neural network alone and in combination with a genetic algorithm or particle swarm optimization (PSO), for estimating tumor positions in real-time radiotherapy. One hundred recorded signals from three external markers, spanning 10 breathing cycles of a patient treated with a CyberKnife for a lung tumor, were used as input data. The neural network method and its combinations with the genetic or PSO algorithms were then applied to determine the tumor locations using the MATLAB software program. Accuracies of 0.8%, 12%, and 14% were obtained with the neural network, genetic, and particle swarm optimization algorithms, respectively. The internal target volume (ITV) should be determined based on the applied neural network algorithm in the training steps.

  14. Visualizing deep neural network by alternately image blurring and deblurring.

    PubMed

    Wang, Feng; Liu, Haijun; Cheng, Jian

    2018-01-01

    Visualization from trained deep neural networks has drawn massive public attention in recent years. One visualization approach is to optimize images that maximize the activation of specific neurons. However, directly maximizing the activation leads to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two mutually inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous methods in the visualizations. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand the neural networks utilizing the knowledge obtained by the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Optimization of training backpropagation algorithm using nguyen widrow for angina ludwig diagnosis

    NASA Astrophysics Data System (ADS)

    Aisyah, Siti; Harahap, Mawaddah; Mahmud Husein Siregar, Amir; Turnip, Mardi

    2018-04-01

    Tooth and mouth disease is common, with a prevalence of more than 40% in milk teeth (children aged less than 7 years) and about 85% on permanent teeth (adults aged 17 years and over). Angina Ludwig is one type of mouth disease that occurs due to infection of the tooth root and trauma of the mouth. In this study, the back propagation algorithm was applied to diagnose Angina Ludwig disease, using the Nguyen Widrow method to optimize training time. From the experimental results, it is known that on average BPNN with Nguyen Widrow is much faster, at about 0.0624 seconds versus 0.1019 seconds without Nguyen Widrow. In contrast, for pattern recognition, back propagation without Nguyen Widrow was found to be much better, with 90% accuracy (only 70% with Nguyen Widrow).
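
    The Nguyen Widrow initialization referred to in the abstract can be sketched as follows; the layer sizes and uniform sampling ranges below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Nguyen-Widrow initialization for one hidden layer: scale random
    weight rows so each hidden neuron's weight vector has norm
    beta = 0.7 * n_hidden ** (1 / n_inputs), spreading the neurons'
    active regions roughly evenly over the input space."""
    rng = np.random.default_rng(rng)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)
    b = rng.uniform(-beta, beta, size=n_hidden)  # biases spread in [-beta, beta]
    return w, b

w, b = nguyen_widrow_init(n_inputs=4, n_hidden=10, rng=0)
```

    Because each neuron starts in a distinct, non-saturated region of the input space, training typically needs fewer epochs, which is the speed-up the abstract reports.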

  16. Modeling landslide susceptibility in data-scarce environments using optimized data mining and statistical methods

    NASA Astrophysics Data System (ADS)

    Lee, Jung-Hyun; Sameen, Maher Ibrahim; Pradhan, Biswajeet; Park, Hyuck-Jin

    2018-02-01

    This study evaluated the generalizability of five models to select a suitable approach for landslide susceptibility modeling in data-scarce environments. In total, 418 landslide inventories and 18 landslide conditioning factors were analyzed. Multicollinearity and factor optimization were investigated before data modeling, and two experiments were then conducted. In each experiment, five susceptibility maps were produced based on support vector machine (SVM), random forest (RF), weight-of-evidence (WoE), ridge regression (Rid_R), and robust regression (RR) models. The highest accuracy (AUC = 0.85) was achieved with the SVM model when either the full or the limited landslide inventory was used. Furthermore, the RF and WoE models were severely affected when fewer landslide samples were used for training. The other models were only slightly affected when the training samples were limited.

  17. System and Method for Modeling the Flow Performance Features of an Object

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles (Inventor); Ross, James (Inventor)

    1997-01-01

    The method and apparatus includes a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, and other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated 'real time' as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting inputs. The invention can also be applied in other similar static flow modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and other such disciplines, for example, the static testing of cars, sails, foils, propellers, keels, rudders, turbines, fins, and the like, in a wind tunnel, water trough, or other flowing medium.

  18. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
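
    One concrete instance of the linear combination construction can be sketched as follows: a simple Gaussian spike-time kernel per neuron, combined across neurons with fixed weights. The per-neuron kernel and the weights here are our illustrative choices; the paper's framework admits any single-neuron spike train kernel and treats the combination parameters as quantities to optimize.

```python
import numpy as np

def single_neuron_kernel(s, t, tau=0.1):
    """A simple single-neuron spike train kernel: the sum of Gaussian
    interactions between all spike-time pairs (used purely for
    illustration; any single-neuron kernel could be substituted)."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s), np.asarray(t))
    return float(np.exp(-d**2 / (2 * tau**2)).sum())

def linear_combination_kernel(x, y, weights, tau=0.1):
    """Multineuron kernel: weighted sum of per-neuron kernels, one
    instance of the R-convolution linear combination construction."""
    return sum(w * single_neuron_kernel(s, t, tau)
               for w, s, t in zip(weights, x, y))

# Two 'recordings' from 2 neurons each (spike times in seconds).
x = [[0.10, 0.32, 0.55], [0.20, 0.40]]
y = [[0.12, 0.30], [0.21, 0.45, 0.70]]
k_xy = linear_combination_kernel(x, y, weights=[1.0, 0.5])
```

    With non-negative weights, the weighted sum of positive-definite per-neuron kernels is itself positive definite, which is what makes the combined kernel usable in Gaussian process regression as in the evaluation above.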

  19. On the Optimum Architecture of the Biologically Inspired Hierarchical Temporal Memory Model Applied to the Hand-Written Digit Recognition

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Bajla, Ivan

    2010-01-01

    In the paper we describe basic functions of the Hierarchical Temporal Memory (HTM) network based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of this paper is a systematic exploration of how to optimize important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using the permutation test, which employs a randomization distribution of the training and testing data. Based on a notion of the homogeneous usage of input image pixels, a methodology for HTM parameter optimization is proposed. In order to study the effects of two substantial parameters of the architecture, the patch size and the overlap, in more detail, we have restricted ourselves to single-level HTM networks. A novel method for constructing the training sequences by ordering series of static images is developed. A novel method for estimating the parameter maxDist based on the box counting method is proposed. The parameter sigma of the inference Gaussian is optimized on the basis of maximizing the belief distribution entropy. Both optimization algorithms can be equally applied to multi-level HTM networks as well. The influences of the parameters transitionMemory and requestedGroupCount on the HTM network performance have been explored. Altogether, we have investigated 2736 different HTM network configurations. The obtained classification accuracy results have been benchmarked against the published results of several conventional classifiers.

  20. The European Association of Preventive Cardiology Exercise Prescription in Everyday Practice and Rehabilitative Training (EXPERT) tool: A digital training and decision support system for optimized exercise prescription in cardiovascular disease. Concept, definitions and construction methodology.

    PubMed

    Hansen, Dominique; Dendale, Paul; Coninx, Karin; Vanhees, Luc; Piepoli, Massimo F; Niebauer, Josef; Cornelissen, Veronique; Pedretti, Roberto; Geurts, Eva; Ruiz, Gustavo R; Corrà, Ugo; Schmid, Jean-Paul; Greco, Eugenio; Davos, Constantinos H; Edelmann, Frank; Abreu, Ana; Rauch, Bernhard; Ambrosetti, Marco; Braga, Simona S; Barna, Olga; Beckers, Paul; Bussotti, Maurizio; Fagard, Robert; Faggiano, Pompilio; Garcia-Porrero, Esteban; Kouidi, Evangelia; Lamotte, Michel; Neunhäuserer, Daniel; Reibis, Rona; Spruit, Martijn A; Stettler, Christoph; Takken, Tim; Tonoli, Cajsa; Vigorito, Carlo; Völler, Heinz; Doherty, Patrick

    2017-07-01

    Background Exercise rehabilitation is highly recommended by current guidelines on prevention of cardiovascular disease, but its implementation is still poor. Many clinicians experience difficulties in prescribing exercise in the presence of different concomitant cardiovascular diseases and risk factors within the same patient. The aim was to develop a digital training and decision support system for exercise prescription in cardiovascular disease patients in clinical practice: the European Association of Preventive Cardiology Exercise Prescription in Everyday Practice and Rehabilitative Training (EXPERT) tool. Methods EXPERT working group members were requested to define (a) diagnostic criteria for specific cardiovascular diseases, cardiovascular disease risk factors, and other chronic non-cardiovascular conditions, (b) primary goals of exercise intervention, (c) disease-specific prescription of exercise training (intensity, frequency, volume, type, session and programme duration), and (d) exercise training safety advice. The impact of exercise tolerance, common cardiovascular medications and adverse events during exercise testing was further taken into account for optimized exercise prescription. Results Exercise training recommendations and safety advice were formulated for 10 cardiovascular diseases, five cardiovascular disease risk factors (type 1 and 2 diabetes, obesity, hypertension, hypercholesterolaemia), and three common chronic non-cardiovascular conditions (lung and renal failure and sarcopaenia), but also accounted for baseline exercise tolerance, common cardiovascular medications and the occurrence of adverse events during exercise testing. An algorithm, supported by an interactive tool, was constructed based on these data. This training and decision support system automatically provides an exercise prescription according to the variables provided. Conclusion This digital training and decision support system may contribute to overcoming barriers to exercise implementation in common cardiovascular diseases.

  1. Optimizing performance by improving core stability and core strength.

    PubMed

    Hibbs, Angela E; Thompson, Kevin G; French, Duncan; Wrigley, Allan; Spears, Iain

    2008-01-01

    Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.

  2. The Soldier-Athlete Initiative: Program Evaluation of the Effectiveness of Athletic Trainers Compared to Musculoskeletal Action Teams in Initial Entry Training, Fort Leonard Wood, June 2010 - December 2011

    DTIC Science & Technology

    2012-10-01

    skills and injury prevention methods applied by ATs in sports and exercise situations may also be applicable to recruits in IET. In late 2009, the...able to assure that physical training exercises are carried out in a manner to optimize mission readiness and minimize the incidence of injury. In...facility with a 21-piece Nautilus set and aerobic exercise equipment. Money spent on the equipment/facility was recouped within 10 months and there was

  3. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, a combination approach is presented using both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526

  4. Joint learning of labels and distance metric.

    PubMed

    Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng

    2010-06-01

    Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.

  5. Improvement of the System of Training of Specialists by University for Coal Mining Enterprises

    NASA Astrophysics Data System (ADS)

    Mikhalchenko, Vadim; Seredkina, Irina

    2017-11-01

    In the article, the application of the Quality Function Deployment technique to the process of training specialists with higher education at a university is considered. The method is based on the step-by-step conversion of customer requirements into specific organizational, content-related and functional transformations of the technological process of the university. A fully deployed quality function includes four stages of tracking customer requirements while creating a product: product planning, product design, process design, and production design. The Quality Function Deployment can be considered one of the methods for optimizing the technological processes of training specialists with higher education in the current economic conditions. Implemented at the initial stages of the life cycle of the technological process, it ensures not only the high quality of the "product" of graduate school, but also the fullest possible satisfaction of consumers' requests and expectations.

  6. Inline Measurement of Particle Concentrations in Multicomponent Suspensions using Ultrasonic Sensor and Least Squares Support Vector Machines.

    PubMed

    Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen

    2015-09-18

    This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of the particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of the nonlinear system and provides a feasible way to monitor industrial processes.
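
    The LS-SVM model referred to above has a closed-form training step: one linear system in the dual variables. The sketch below shows this standard formulation for regression with an RBF kernel; the synthetic data, kernel width and regularization constant are illustrative choices, not the paper's.

```python
import numpy as np

def lssvm_train(X, y, gamma=100.0, sigma=0.5):
    """Least squares SVM regression: solve the linear system
        [ 0   1^T         ] [b]   [0]
        [ 1   K + I/gamma ] [a] = [y]
    where K is the RBF kernel matrix (standard LS-SVM dual form)."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=0.5):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]
b, alpha = lssvm_train(X, y)
yhat = lssvm_predict(X, alpha, b, X)
```

    Replacing the SVM's inequality constraints with equality constraints is what reduces training to this single solve, making the model cheap to retune over candidate feature subsets as described in the abstract.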

  7. OPAL Netlogo Land Condition Model

    DTIC Science & Technology

    2014-08-15

    ERDC/CERL TR-14-12, Optimal Allocation of Land for Training and Non-training Uses (OPAL): OPAL Netlogo Land Condition Model...Fulton, Natalie Myers, Scott Tweddale, Dick Gebhart, Ryan Busby, Anne Dain-Owens, and Heidi Howard, August 2014. OPAL team measuring above and...online library at http://acwc.sdp.sirsi.net/client/default. Optimal Allocation of Land for Training and Non-training Uses (OPAL), ERDC/CERL TR-14-12

  8. SU-F-E-09: Respiratory Signal Prediction Based On Multi-Layer Perceptron Neural Network Using Adjustable Training Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, W; Jiang, M; Yin, F

    Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motions prior to delivery. The shift of a moving organ may change considerably due to large variations in respiration at different periods. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiration position alternately, and the training sample is updated with time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120∼150s respiration position using the 0∼120s training signals. At the same time, MLP 2 was trained using the 30∼150s training signals and then used to predict the 150∼180s signals. The respiration position was predicted in this way until the signal was finished. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). Besides, a 30% improvement of mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098, and the mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average).
Conclusion: The preliminary results demonstrate that the ASMLP respiratory prediction method is more accurate than the MLP method and can improve the respiration forecast accuracy.
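
    The sliding retraining window at the heart of ASMLP can be illustrated with a much simpler predictor. The sketch below refits a linear autoregressive model (standing in for the abstract's MLPs) on the most recent window of a synthetic breathing trace and forecasts a few samples ahead; the window length, model order and signal parameters are our own assumptions.

```python
import numpy as np

def sliding_ar_forecast(signal, win=200, order=10, horizon=5):
    """Adjustable-training-sample idea in miniature: refit a linear
    autoregressive predictor on only the most recent `win` samples,
    then predict `horizon` steps past the end of the signal."""
    X = np.array([signal[i:i + order]
                  for i in range(len(signal) - order - horizon + 1)])
    t = np.array([signal[i + order + horizon - 1]
                  for i in range(len(signal) - order - horizon + 1)])
    X, t = X[-win:], t[-win:]                     # keep only the newest window
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)  # refit on that window
    return float(signal[-order:] @ coef)          # forecast `horizon` ahead

# Synthetic 'respiratory' trace: a noisy 0.25 Hz sine sampled at 25 Hz.
rng = np.random.default_rng(2)
ts = np.arange(2000) / 25.0
breath = np.sin(2 * np.pi * 0.25 * ts) + 0.02 * rng.normal(size=ts.size)
pred = sliding_ar_forecast(breath, horizon=5)
true = np.sin(2 * np.pi * 0.25 * (ts[-1] + 5 / 25.0))
```

    Discarding old samples lets the predictor track slow drifts in breathing amplitude and period, which is the motivation the abstract gives for updating the training sample over time.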

  9. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrated the feasibility of this highly efficient calibration framework.
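
    The NRMSE objective used for calibration can be written down directly. The normalization by the observation range used below is one common convention and is assumed here, since the abstract does not spell it out; the example head values are hypothetical.

```python
import numpy as np

def nrmse(observed, simulated):
    """Normalized root mean square error between observed and simulated
    heads: RMSE divided by the range of the observations (one common
    normalization; dividing by the mean is another)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return rmse / (observed.max() - observed.min())

# Hypothetical heads (m) at four observation wells.
heads_obs = [10.0, 12.0, 11.0, 14.0]
heads_sim = [10.5, 11.5, 11.0, 13.5]
```

    In the surrogate loop, the optimizer proposes parameter sets, the BMARS surrogate predicts the heads cheaply, and this scalar is minimized; only occasionally is the full MODFLOW model run to refresh the surrogate's training data.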

  10. Efficient Computing Budget Allocation for Finding Simplest Good Designs

    PubMed Central

    Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung

    2012-01-01

    In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods to find m simplest good designs and to find the best m such designs, respectively; and show their asymptotic optimalities. Third, we compare the performance of the two methods with equal allocations over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404

  11. Towards exaggerated emphysema stereotypes

    NASA Astrophysics Data System (ADS)

    Chen, C.; Sørensen, L.; Lauze, F.; Igel, C.; Loog, M.; Feragen, A.; de Bruijne, M.; Nielsen, M.

    2012-03-01

    Classification is widely used in the context of medical image analysis and in order to illustrate the mechanism of a classifier, we introduce the notion of an exaggerated image stereotype based on training data and trained classifier. The stereotype of some image class of interest should emphasize/exaggerate the characteristic patterns in an image class and visualize the information the employed classifier relies on. This is useful for gaining insight into the classification and serves for comparison with the biological models of disease. In this work, we build exaggerated image stereotypes by optimizing an objective function which consists of a discriminative term based on the classification accuracy, and a generative term based on the class distributions. A gradient descent method based on iterated conditional modes (ICM) is employed for optimization. We use this idea with Fisher's linear discriminant rule and assume a multivariate normal distribution for samples within a class. The proposed framework is applied to computed tomography (CT) images of lung tissue with emphysema. The synthesized stereotypes illustrate the exaggerated patterns of lung tissue with emphysema, which is underpinned by three different quantitative evaluation methods.

  12. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques and can hardly achieve good time performance, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
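
    The kernel target alignment criterion that the method optimizes has a compact definition: the cosine, under the Frobenius inner product, between the kernel matrix and the ideal label matrix yyᵀ. The sketch below evaluates it for a few RBF widths on synthetic two-class data; the data and the candidate widths are illustrative, and the paper optimizes the width globally rather than by grid scoring as done here.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Kernel-target alignment: cosine similarity (Frobenius inner
    product) between the kernel matrix K and the ideal target matrix
    y y^T built from the +/-1 class labels."""
    y = np.asarray(y, dtype=float)
    Y = np.outer(y, y)
    return float((K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y)))

def rbf_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix for a set of row vectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Two well-separated classes; alignment should favor a moderate sigma.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
scores = {s: kernel_target_alignment(rbf_kernel(X, s), y) for s in (0.1, 1.0, 10.0)}
```

    A too-small width makes the kernel nearly the identity and a too-large one makes it nearly all-ones; both score poorly, so alignment gives a label-aware objective for choosing sigma without any classifier training.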

  13. PONS2train: tool for testing the MLP architecture and local training methods for runoff forecast

    NASA Astrophysics Data System (ADS)

    Maca, P.; Pavlasek, J.; Pech, P.

    2012-04-01

    The purpose of the presented poster is to introduce the PONS2train software developed for runoff prediction via multilayer perceptron (MLP). The software application enables the implementation of 12 different MLP transfer functions, the comparison of 9 local training algorithms, and finally the evaluation of MLP performance via 17 selected model evaluation metrics. The PONS2train software is written in the C++ programming language. Its implementation consists of 4 classes: the NEURAL_NET and NEURON classes implement the MLP, the CRITERIA class estimates model evaluation metrics and evaluates model performance on testing and validation datasets, and the DATA_PATTERN class prepares the validation, testing and calibration datasets. The software application uses the LAPACK, BLAS and ARMADILLO C++ linear algebra libraries. PONS2train implements the first-order local optimization algorithms: standard on-line and batch back-propagation with learning rate combined with momentum and its variants with a regularization term, Rprop, and standard batch back-propagation with variable momentum and learning rate. The second-order local training algorithms are represented by the Levenberg-Marquardt algorithm with and without regularization and four variants of scaled conjugate gradients. Other important PONS2train features are: multi-run, weight saturation control, early stopping of training, and MLP weights analysis. The weights initialization is done via two different methods: random sampling from a uniform distribution on an open interval, or the Nguyen Widrow method. The data patterns can be transformed via linear and nonlinear transformations. The runoff forecast case study focuses on the PONS2train implementation and shows different aspects of MLP training, MLP architecture estimation, neural network weights analysis and model uncertainty estimation.
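
    Of the first-order algorithms listed, Rprop has a particularly compact core: per-weight step sizes adapted from the sign of successive gradients, ignoring gradient magnitude. The sketch below is a simplified Rprop variant applied to a toy quadratic; PONS2train itself is written in C++, and the constants and the omission of sign-change backtracking here are our simplifications.

```python
import numpy as np

def rprop(grad_fn, w0, steps=200, d0=0.1, eta_plus=1.2, eta_minus=0.5,
          d_min=1e-6, d_max=50.0):
    """Simplified Rprop (resilient backpropagation): grow a per-weight
    step size while the gradient keeps its sign, shrink it when the
    sign flips, and always step by the sign of the gradient only."""
    w = np.asarray(w0, dtype=float).copy()
    d = np.full_like(w, d0)
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        same = np.sign(g) * np.sign(prev_g)
        d = np.where(same > 0, np.minimum(d * eta_plus, d_max), d)
        d = np.where(same < 0, np.maximum(d * eta_minus, d_min), d)
        w -= np.sign(g) * d
        prev_g = g
    return w

# Minimize f(w) = sum((w - 3)^2); its gradient is 2 * (w - 3).
w_opt = rprop(lambda w: 2 * (w - 3.0), w0=np.zeros(4))
```

    Because only the gradient sign is used, Rprop is insensitive to the scale of the error surface, which is one reason it is a popular baseline among the first-order trainers a tool like this compares.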

  14. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    PubMed Central

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767

  15. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.

    PubMed

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease.
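The ReliefF-style feature ranking used in the study above can be illustrated with a minimal sketch. This is a single-nearest-neighbor Relief variant on invented toy vectors, not the authors' 129-feature pipeline; the data, labels, and feature count are made up for illustration.

```python
# Minimal Relief-style feature scoring (sketch; the paper uses ReliefF
# with multiple neighbors over 129 texture/color/shape features).
# Toy data: 3-feature vectors with binary labels.

def relief_scores(X, y):
    n_feat = len(X[0])
    w = [0.0] * n_feat
    for i, xi in enumerate(X):
        def dist(a, b):
            return sum(abs(p - q) for p, q in zip(a, b))
        # nearest hit (same label) and nearest miss (other label)
        hits = [x for j, x in enumerate(X) if y[j] == y[i] and j != i]
        misses = [x for j, x in enumerate(X) if y[j] != y[i]]
        hit = min(hits, key=lambda x: dist(x, xi))
        miss = min(misses, key=lambda x: dist(x, xi))
        for f in range(n_feat):
            w[f] += abs(xi[f] - miss[f]) - abs(xi[f] - hit[f])
    return w

# Feature 0 separates the classes; features 1-2 are noise.
X = [[0.0, 0.5, 0.1], [0.1, 0.4, 0.9], [0.9, 0.6, 0.2], [1.0, 0.5, 0.8]]
y = [0, 0, 1, 1]
scores = relief_scores(X, y)
print(scores.index(max(scores)))  # 0: the discriminative feature ranks highest
```

In the paper, the top-ranked 45 of 129 features would then feed an SVM; here the score simply flags which toy feature separates the classes.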

  16. Optimal training for emergency needle thoracostomy placement by prehospital personnel: didactic teaching versus a cadaver-based training program.

    PubMed

    Grabo, Daniel; Inaba, Kenji; Hammer, Peter; Karamanos, Efstathios; Skiada, Dimitra; Martin, Matthew; Sullivan, Maura; Demetriades, Demetrios

    2014-09-01

    Tension pneumothorax can rapidly progress to cardiac arrest and death if not promptly recognized and appropriately treated. We sought to evaluate the effectiveness of traditional didactic slide-based lectures (SBLs) as compared with fresh tissue cadaver-based training (CBT) for placement of needle thoracostomy (NT). Forty randomly selected US Navy corpsmen were recruited to participate from incoming classes of the Navy Trauma Training Center at the LAC+USC Medical Center and were then randomized to one of two NT teaching methods. The following outcomes were compared between the two study arms: (1) time required to perform the procedure, (2) correct placement of the needle, and (3) magnitude of deviation from the correct position. During the study period, a total of 40 corpsmen were enrolled, 20 randomized to the SBL arm and 20 to the CBT arm. When outcomes were analyzed, the time required for NT placement did not differ between the two arms. Examination of the location of needle placement revealed marked differences between the two study groups. Only a minority of the SBL group (35%) placed the NT correctly in the second intercostal space. In comparison, the majority of corpsmen assigned to the CBT group demonstrated accurate placement in the second intercostal space (75%). With a CBT module, US Navy corpsmen were better trained to place NT accurately than their counterparts receiving traditional didactic SBL. Further studies are indicated to identify the optimal components of effective simulation training for NT and other emergent interventions.
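The headline comparison above (35% vs. 75% correct placement, n = 20 per arm) can be checked for statistical significance with a stock two-proportion test; a quick sketch using only the stated counts (the exact test the authors used is not specified, so a normal-approximation z-test is assumed here):

```python
import math

# Correct placements reported in the abstract: 7/20 (SBL) vs 15/20 (CBT).
x1, n1 = 7, 20    # SBL arm, 35%
x2, n2 = 15, 20   # CBT arm, 75%

p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (x2 / n2 - x1 / n1) / se              # two-proportion z statistic
# two-sided p-value via the normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), p_value < 0.05)
```

With these counts the difference is significant at the conventional 0.05 level, consistent with the abstract's conclusion that CBT trainees placed the needle more accurately.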

  17. Classification without labels: learning from mixed samples in high energy physics

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-01

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
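The central CWoLa claim, that the optimal mixed-sample classifier is also the optimal signal-versus-background classifier, rests on the mixture likelihood ratio being monotonic in the signal/background likelihood ratio. A toy numeric check on invented discrete densities (an illustration of the theorem, not the paper's quark/gluon study):

```python
# Toy check of the CWoLa observation: for two mixtures
# M1 = f1*S + (1-f1)*B and M2 = f2*S + (1-f2)*B with f1 > f2,
# the ratio M1/M2 is an increasing function of S/B, so a classifier
# optimal for M1 vs M2 is also optimal for S vs B.
S = [0.05, 0.15, 0.30, 0.50]   # invented signal density over 4 bins
B = [0.50, 0.30, 0.15, 0.05]   # invented background density
f1, f2 = 0.8, 0.3              # signal fractions of the two mixtures

def mix(f, s, b):
    return f * s + (1 - f) * b

sb = [s / b for s, b in zip(S, B)]
mm = [mix(f1, s, b) / mix(f2, s, b) for s, b in zip(S, B)]

# Sorting bins by S/B and by M1/M2 gives the same ordering.
order_sb = sorted(range(4), key=lambda i: sb[i])
order_mm = sorted(range(4), key=lambda i: mm[i])
print(order_sb == order_mm)  # True
```

Because only the ordering of likelihood ratios matters for an optimal classifier, matching orderings is exactly the property the CWoLa proof exploits.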

  18. Research on energy-saving optimal control of trains in a following operation under a fixed four-aspect autoblock system based on multi-dimension parallel GA

    NASA Astrophysics Data System (ADS)

    Lu, Qiheng; Feng, Xiaoyun

    2013-03-01

    After analyzing the working principle of the four-aspect fixed autoblock system, an energy-saving control model was created based on the dynamics equations of the trains in order to study the energy-saving optimal control strategy of trains in a following operation. In addition to the safety and punctuality constraints, the model's main objectives were energy consumption and schedule-time error. Based on this model, the static and dynamic speed restraints under a four-aspect fixed autoblock system were put forward. A multi-dimension parallel genetic algorithm (GA) with an external penalty function was adopted to solve this problem. By using real-number coding and a strategy of dividing ramps into three parts, the convergence of the GA was sped up and the length of the chromosomes was shortened. A Gaussian random disturbance vector with zero mean was superposed on the mutation operator. The simulation result showed that the method could reduce energy consumption effectively while maintaining safety and punctuality.
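As a rough sketch of the optimization machinery described (real-number coding, a zero-mean Gaussian disturbance on the mutation operator, and an external penalty for the timing constraint), here is a minimal real-coded GA on an invented toy objective; the train dynamics, block-section speed restraints, and multi-dimension parallelism of the paper are not modeled.

```python
import random

random.seed(0)

# Invented stand-in objective: "energy" as a function of two control
# parameters, plus an external penalty when a toy "trip time"
# constraint is violated.
def energy(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def trip_time(x):
    return 10.0 + x[0] - x[1]          # toy timing model

def fitness(x, t_target=11.0, penalty=100.0):
    return energy(x) + penalty * abs(trip_time(x) - t_target)

def evolve(pop_size=30, gens=60, sigma=0.3):
    pop = [[random.uniform(-2, 2), random.uniform(-2, 2)]
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new = [best[:]]                 # elitism keeps the incumbent
        while len(new) < pop_size:
            # tournament selection + zero-mean Gaussian mutation
            a, b = random.sample(pop, 2)
            parent = min(a, b, key=fitness)
            new.append([g + random.gauss(0.0, sigma) for g in parent])
        pop = new
        best = min(pop, key=fitness)
    return best

best = evolve()
print(round(fitness(best), 3))
```

Elitism makes the best fitness non-increasing across generations, so the penalized solution steadily approaches the constraint surface while lowering the toy energy term.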

  19. A Novel User Classification Method for Femtocell Network by Using Affinity Propagation Algorithm and Artificial Neural Network

    PubMed Central

    Ahmed, Afaz Uddin; Tariqul Islam, Mohammad; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    A user categorization technique based on an artificial neural network (ANN) and the affinity propagation (AP) algorithm is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN performs the user classification, and the AP algorithm optimizes the ANN training process by selecting the best possible training samples for a faster training cycle. Users are distinguished by the differences in received signal strength across a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. Integrating the AP algorithm with the ANN reduces the required training samples by 60% and the training time by up to 50%. This procedure makes the femtocell more effective for closed-access operation. PMID:25133214

  20. A novel user classification method for femtocell network by using affinity propagation algorithm and artificial neural network.

    PubMed

    Ahmed, Afaz Uddin; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    A user categorization technique based on an artificial neural network (ANN) and the affinity propagation (AP) algorithm is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN performs the user classification, and the AP algorithm optimizes the ANN training process by selecting the best possible training samples for a faster training cycle. Users are distinguished by the differences in received signal strength across a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. Integrating the AP algorithm with the ANN reduces the required training samples by 60% and the training time by up to 50%. This procedure makes the femtocell more effective for closed-access operation.
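The idea of pruning the ANN training set down to a few representative exemplars can be sketched as follows. Note that this uses a simple greedy farthest-point heuristic as a stand-in for affinity propagation (which in practice would come from a library such as scikit-learn), and the signal-strength vectors are invented:

```python
# Sketch: pick k representative "exemplar" samples from a pool of
# received-signal-strength vectors, so the classifier trains on fewer
# samples. Greedy farthest-point selection is a simplified stand-in
# for affinity propagation, which chooses exemplars by message passing.

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def select_exemplars(samples, k):
    exemplars = [samples[0]]
    while len(exemplars) < k:
        # add the sample farthest from all current exemplars
        nxt = max(samples,
                  key=lambda s: min(dist(s, e) for e in exemplars))
        exemplars.append(nxt)
    return exemplars

# Invented RSS vectors (dBm) from a 3-element femtocell antenna:
pool = [(-40, -55, -60), (-41, -54, -61), (-70, -45, -50),
        (-71, -46, -49), (-55, -80, -42), (-56, -79, -41)]
ex = select_exemplars(pool, 3)
print(len(ex), len(pool))  # 3 6: half the pool suffices for training
```

The pool here contains three tight clusters of two near-duplicate vectors each, so keeping one exemplar per cluster discards redundant samples, which is the mechanism behind the reported 60% reduction in training samples.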

  1. Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking

    PubMed Central

    Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun

    2017-01-01

    Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sample strategy to alleviate the redundancies in training samples caused by the cyclical shifts, eliminate the inconsistencies between training samples and detection samples, and introduce space anisotropic regularization to constrain the correlation filter for alleviating drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for obtaining robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers in object tracking benchmarks (OTBs). PMID:29231876
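The Gauss-Seidel scheme the authors adopt for robust and efficient online learning is, at its core, the classic coordinate-sweep linear solver. A minimal sketch on an invented diagonally dominant system (not the actual CSSAR filter equations):

```python
# Gauss-Seidel iteration for A x = b: sweep through the unknowns,
# updating each one in place using the latest values of the others.
# Converges for diagonally dominant A, as in this toy system.

def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [9.0, 20.0, 22.0]
x = gauss_seidel(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
print(residual < 1e-8)  # True
```

In the tracking setting, the unknowns are filter coefficients and the sweeps are interleaved with incoming frames, which is what makes the in-place update attractive for online learning.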

  2. Toward an Optimal Pedagogy for Teamwork.

    PubMed

    Earnest, Mark A; Williams, Jason; Aagaard, Eva M

    2017-10-01

    Teamwork and collaboration are increasingly listed as core competencies for undergraduate health professions education. Despite the clear mandate for teamwork training, the optimal method for providing that training is much less certain. In this Perspective, the authors propose a three-level classification of pedagogical approaches to teamwork training based on the presence of two key learning factors: interdependent work and explicit training in teamwork. In this classification framework, level 1 (minimal team learning) is where learners work in small groups but neither of the key learning factors is present. Level 2 (implicit team learning) engages learners in interdependent learning activities but does not include an explicit focus on teamwork. Level 3 (explicit team learning) creates environments where teams work interdependently toward common goals and are given explicit instruction and practice in teamwork. The authors provide examples that demonstrate each level. They then propose that the third level of team learning, explicit team learning, represents a best-practice approach to teaching teamwork, highlighting their experience with an explicit team learning course at the University of Colorado Anschutz Medical Campus. Finally, they discuss several challenges to implementing explicit team-learning-based curricula: the lack of a common teamwork model on which to anchor such a curriculum; the question of whether the knowledge, skills, and attitudes acquired during training would be transferable to the authentic clinical environment; and effectively evaluating the impact of explicit team learning.

  3. [Role of an educational-and-methodological complex in the optimization of teaching at the stage of additional professional education of physicians in the specialty "anesthesiology and reanimatology"].

    PubMed

    Buniatian, A A; Sizova, Zh M; Vyzhigina, M A; Shikh, E V

    2010-01-01

    An educational-and-methodological complex (EMC) in the specialty "Anesthesiology and Reanimatology", which promotes manageability, flexibility, and dynamism of the educational process, is of great importance in solving the problem of systematizing knowledge and ensuring its best learning by physicians at the stage of additional professional education (APE). The EMC is a set of educational-and-methodological materials required to organize and conduct an educational process for the advanced training of anesthesiologists and resuscitation specialists at the stage of APE. The EMC includes a syllabus for training in the area "Anesthesiology and Reanimatology" according to the appropriate training pattern (certification cycles, topical advanced training cycles); a work program for training in the specialty "Anesthesiology and Reanimatology"; work curricula for training in allied specialties (surgery, traumatology and orthopedics, obstetrics and gynecology, and pediatrics); work programs on basic disciplines (pharmacology, normal and pathological physiology, normal anatomy, chemistry and biology); work programs on the area "Public health care and health care service"; guidelines for the teacher; educational-and-methodological materials for the student; and quiz programs. The core of the EMC in the specialty "Anesthesiology and Reanimatology" is the work program. Thus, the educational-and-methodological and teaching materials included in the EMC should envisage the logically successive exposition of the teaching material and the use of currently available methods and educational facilities, which facilitates the optimization of training of anesthesiologists and resuscitation specialists at the stage of APE.

  4. Using Simple Environmental Variables to Estimate Biomass Disturbance

    DTIC Science & Technology

    2014-08-01

    ERDC/CERL TR-14-13, Optimal Allocation of Land for Training and Non-Training Uses (OPAL): Using Simple Environmental Variables to Estimate Biomass Disturbance. Natalie Myers, Daniel Koch... August 2014. Development of the Optimal Allocation of Land for Training and Non-Training Uses (OPAL) program was undertaken to meet this need. This phase of work

  5. Leadership and Teamwork in Trauma and Resuscitation.

    PubMed

    Ford, Kelsey; Menchine, Michael; Burner, Elizabeth; Arora, Sanjay; Inaba, Kenji; Demetriades, Demetrios; Yersin, Bertrand

    2016-09-01

    Leadership skills are described by the American College of Surgeons' Advanced Trauma Life Support (ATLS) course as necessary to provide care for patients during resuscitations. However, leadership is a complex concept, and the tools used to assess the quality of leadership are poorly described, inadequately validated, and infrequently used. Despite its importance, dedicated leadership education is rarely part of physician training programs. The goals of this investigation were the following: 1. Describe how leadership and leadership style affect patient care; 2. Describe how effective leadership is measured; and 3. Describe how to train future physician leaders. We searched the PubMed database using the keywords "leadership" and then either "trauma" or "resuscitation" as title search terms, and an expert in emergency medicine and trauma then identified prospective observational and randomized controlled studies measuring leadership and teamwork quality. Study results were categorized as follows: 1) how leadership affects patient care; 2) which tools are available to measure leadership; and 3) methods to train physicians to become better leaders. We included 16 relevant studies in this review. Overall, these studies showed that strong leadership improves processes of care in trauma resuscitation, including the speed and completion of the primary and secondary surveys. The optimal style and structure of leadership are influenced by patient characteristics and team composition. Directive leadership is most effective when the Injury Severity Score (ISS) is high or teams are inexperienced, while empowering leadership is most effective when the ISS is low or teams are more experienced. Many scales were employed to measure leadership. The Leader Behavior Description Questionnaire (LBDQ) was the only scale used in more than one study. Seven studies described methods for training leaders. Leadership training programs included didactic teaching followed by simulations. 
Although programs differed in length, intensity, and training level of participants, all programs demonstrated improved team performance. Despite the relative paucity of literature on leadership in resuscitations, this review found that leadership improves processes of care in trauma and can be enhanced through dedicated training. Future research is needed to validate leadership assessment scales, develop optimal training mechanisms, and demonstrate leadership's effect on patient-level outcomes.

  6. Prediction and Optimization of Key Performance Indicators in the Production of Stator Core Using a GA-NN Approach

    NASA Astrophysics Data System (ADS)

    Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.

    2017-12-01

    With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear complex problems. For example, the conventional production of stator cores relies on experienced engineers to make an initial plan for the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based upon the measurements made during production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analyzing large amounts of data. In this article, first, a Neural Network (NN), trained using a hybrid Levenberg-Marquardt (LM)-Genetic Algorithm (GA) approach, is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan, with the aim of minimizing the required revisions during production of the stator core.
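The "trained NN as a fitness function" idea amounts to surrogate-based optimization: a cheap predictive model is searched instead of the expensive real process. A minimal sketch with an invented quadratic surrogate standing in for the trained network, searched by simple hill climbing rather than the paper's GA:

```python
import random

random.seed(1)

# Invented surrogate standing in for the trained NN: it predicts a
# pressure-uniformity error from a compensation-sheet plan (x1, x2).
def surrogate(x):
    return (x[0] - 3.0) ** 2 + 0.5 * (x[1] - 7.0) ** 2 + 1.0

def hill_climb(start, step=0.5, iters=200):
    best = list(start)
    for _ in range(iters):
        cand = [g + random.uniform(-step, step) for g in best]
        if surrogate(cand) < surrogate(best):  # keep only improvements
            best = cand
    return best

plan = hill_climb([0.0, 0.0])
print(round(surrogate(plan), 3))
```

Because every accepted move lowers the surrogate's predicted error, the search monotonically improves on the starting plan; the real system is only consulted afterwards to validate the proposed plan.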

  7. Neonatal Seizure Detection Using Deep Convolutional Neural Networks.

    PubMed

    Ansari, Amir H; Cherian, Perumpillichira J; Caicedo, Alexander; Naulaers, Gunnar; De Vos, Maarten; Van Huffel, Sabine

    2018-04-02

    Identifying a core set of features is one of the most important steps in the development of an automated seizure detector. In most of the published studies describing features and seizure classifiers, the features were hand-engineered, which may not be optimal. The main goal of the present paper is to use deep convolutional neural networks (CNNs) and a random forest to automatically optimize feature selection and classification. The input of the proposed classifier is raw multi-channel EEG and the output is the class label: seizure/non-seizure. By training this network, the required features are optimized while a nonlinear classifier is fitted on the features. After training the network on EEG recordings of 26 neonates, the five end layers performing the classification were replaced with a random forest classifier in order to improve the performance. This resulted in a false alarm rate of 0.9 per hour and a seizure detection rate of 77% on a test set of EEG recordings of 22 neonates that also included dubious seizures. The newly proposed CNN classifier outperformed three data-driven feature-based approaches and performed similarly to a previously developed heuristic method.

  8. Effectiveness of an Individualized Training Based on Force-Velocity Profiling during Jumping

    PubMed Central

    Jiménez-Reyes, Pedro; Samozino, Pierre; Brughelli, Matt; Morin, Jean-Benoît

    2017-01-01

    Ballistic performances are determined by both the maximal lower limb power output (Pmax) and the individual force-velocity (F-v) mechanical profile, especially the F-v imbalance (FVimb): the difference between the athlete's actual and optimal profile. An optimized training program should aim to increase Pmax and/or reduce FVimb. The aim of this study was to test whether an individualized training program based on the individual F-v profile would decrease subjects' individual FVimb and in turn improve vertical jump performance. FVimb was used as the reference to assign participants to different training intervention groups. Eighty-four subjects were assigned to three groups: an “optimized” group divided into velocity-deficit, force-deficit, and well-balanced sub-groups based on subjects' FVimb, a “non-optimized” group for which the training program was not specifically based on FVimb, and a control group. All subjects underwent a 9-week specific resistance training program. The programs were designed to reduce FVimb for the optimized groups (with specific programs for sub-groups based on individual FVimb values), while the non-optimized group followed a classical program identical for all subjects. All subjects in the three optimized training sub-groups (velocity-deficit, force-deficit, and well-balanced) increased their jumping performance (12.7 ± 5.7% ES = 0.93 ± 0.09, 14.2 ± 7.3% ES = 1.00 ± 0.17, and 7.2 ± 4.5% ES = 0.70 ± 0.36, respectively) with jump height improvement for all subjects, whereas the results were much more variable and unclear in the non-optimized group. This greater change in jump height was associated with a markedly reduced FVimb for both force-deficit (57.9 ± 34.7% decrease in FVimb) and velocity-deficit (20.1 ± 4.3%) subjects, and unclear or small changes in Pmax (−0.40 ± 8.4% and +10.5 ± 5.2%, respectively). 
An individualized training program specifically based on FVimb (the gap between the actual and optimal F-v profiles of each individual) was more efficient at improving jumping performance (i.e., unloaded squat jump height) than a traditional resistance training program common to all subjects regardless of their FVimb. Although improving both FVimb and Pmax has to be considered to improve ballistic performance, the present results showed that reducing FVimb without even increasing Pmax led to clearly beneficial jump performance changes. Thus, FVimb could be considered as a potentially useful variable for prescribing optimal resistance training to improve ballistic performance. PMID:28119624
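The two quantities the training targets, Pmax and FVimb, follow from the linear force-velocity relation F(v) = F0 - (F0/v0)*v: maximal power is Pmax = F0*v0/4, and FVimb expresses the athlete's actual F-v slope relative to the optimal one. A small sketch with invented athlete values; the optimal slope below is a placeholder input rather than being computed from Samozino's optimal-profile equations.

```python
# Sketch of the mechanical quantities discussed: with a linear
# force-velocity relation F(v) = F0 - (F0/v0)*v, maximal power is
# Pmax = F0*v0/4, and the F-v imbalance is the actual slope expressed
# as a percentage of the athlete's optimal slope. The optimal slope
# here is an invented placeholder value.

def fv_profile(F0, v0, optimal_slope):
    slope = -F0 / v0                         # actual F-v slope
    p_max = F0 * v0 / 4.0                    # W/kg
    fv_imb = 100.0 * slope / optimal_slope   # % of optimal profile
    return slope, p_max, fv_imb

# Invented athlete: F0 = 30 N/kg, v0 = 3 m/s, optimal slope -12.5
slope, p_max, fv_imb = fv_profile(30.0, 3.0, -12.5)
print(p_max, round(fv_imb, 1))  # 22.5 80.0
```

An FVimb of 80% (below 100%) flags a deficit relative to the optimal profile, which in the study's scheme would route this athlete to one of the deficit-specific training sub-groups.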

  9. Empowering Education: A New Model for In-service Training of Nursing Staff.

    PubMed

    Chaghari, Mahmud; Saffari, Mohsen; Ebadi, Abbas; Ameryoun, Ahmad

    2017-01-01

    In-service training of nurses plays an indispensable role in improving the quality of inpatient care, so enhancing its effectiveness is an inevitable requirement. This study attempted to design a new optimal model for the in-service training of nurses. This qualitative study was conducted in two stages during 2015-2016. In the first stage, Grounded Theory was adopted to explore the training process of 35 participating nurses. The sampling was initially purposeful and then theoretical, based on the emerging concepts. Data were collected through interviews, observation, and field notes, analyzed using the Corbin-Strauss method, and coded in MAXQDA-10. In the second stage, the findings were employed through Walker and Avant's strategy for theory construction in order to design an optimal model for the in-service training of nursing staff. The first stage yielded five major themes: unsuccessful mandatory education, empowering education, organizational challenges of education, poor educational management, and educational-occupational resiliency. Empowering education was the core variable derived from the research, based on which a grounded theory was proposed. The new empowering education model was composed of self-directed learning and practical learning. There are several strategies to achieve empowering education, including the fostering of searching skills, clinical performance monitoring, motivational factors, participation in design and implementation, and a problem-solving approach. Empowering education is a new model for the in-service training of nurses, which matches training programs with the andragogical needs and learning desires of the staff. Owing to its practical nature, empowering education can facilitate occupational tasks and the achievement of greater mastery of professional skills among nurses.

  10. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820

  11. Baseline estimation in flame's spectra by using neural networks and robust statistics

    NASA Astrophysics Data System (ADS)

    Garces, Hugo; Arias, Luis; Rojas, Alejandro

    2014-09-01

    This work presents a baseline estimation method for flame spectra based on an artificial intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically discriminate the measured wavelengths that belong to the continuous feature for model adaptation, thereby removing the restriction of having to measure the target baseline for training. The main contributions of this paper are: to analyze a flame spectra database by computing Jolliffe statistics from Principal Component Analysis, detecting wavelengths that are not correlated with most of the measured data and therefore correspond to the baseline; to systematically determine the optimal number of neurons in the hidden layers based on Akaike's Final Prediction Error; to estimate the baseline over the full wavelength range of the sampled spectra; and to train a neural network that generalizes the relation between measured and baseline spectra. The main application of our research is to compute total radiation with baseline information, allowing the combustion process state to be diagnosed for optimization in early stages.
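The "optimal number of neurons via Akaike's Final Prediction Error" step can be sketched directly: FPE = MSE * (N + p) / (N - p) trades training error against parameter count. The MSE values below are invented, standing in for the training errors of networks fitted with different hidden-layer sizes:

```python
# Akaike's Final Prediction Error: FPE = mse * (N + p) / (N - p),
# where N is the number of training samples and p the number of
# adjustable parameters. Lower training MSE is discounted by a
# complexity penalty; the hidden-layer size minimizing FPE wins.

def fpe(mse, n_samples, n_params):
    return mse * (n_samples + n_params) / (n_samples - n_params)

N = 200
# invented (hidden_neurons, n_params, training_mse) for candidate nets
candidates = [(2, 13, 0.080), (5, 31, 0.030), (10, 61, 0.026),
              (20, 121, 0.024)]
best = min(candidates, key=lambda c: fpe(c[2], N, c[1]))
print(best[0])  # 5: bigger nets barely cut MSE but pay a larger penalty
```

The 10- and 20-neuron candidates reduce the raw MSE only marginally, so after the (N + p)/(N - p) correction the 5-neuron network wins, which is exactly the model-size trade-off the criterion encodes.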

  12. QSAR models for thiophene and imidazopyridine derivatives inhibitors of the Polo-Like Kinase 1.

    PubMed

    Comelli, Nieves C; Duchowicz, Pablo R; Castro, Eduardo A

    2014-10-01

    The inhibitory activity of 103 thiophene and 33 imidazopyridine derivatives against Polo-Like Kinase 1 (PLK1), expressed as pIC50 (-logIC50), was predicted by QSAR modeling. Multivariate linear regression (MLR) was employed to model the relationship between 0D and 3D molecular descriptors and the biological activities of the molecules, using the replacement method (MR) as the variable selection tool. The 136 compounds were separated into several training and test sets. Two splitting approaches, distribution of biological data and structural diversity, and the statistical experimental design procedure D-optimal distance were applied to the dataset. The significance of the training set models was confirmed by statistically higher values of the internal leave-one-out cross-validated coefficient of determination (Q2) and the external predictive coefficient of determination for the test set (Rtest2). The model developed from a training set obtained with the D-optimal distance protocol, using the 3D descriptor space along with activity values, separated chemical features in a way that allowed high and low pIC50 values to be distinguished reasonably well. We then verified that such a model was sufficient to reliably and accurately predict the activity of external diverse structures. The model's robustness was properly characterized by means of standard procedures, and its applicability domain (AD) was analyzed by the leverage method.
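The replacement method (MR) used for variable selection can be sketched in miniature: keep a fixed-size descriptor subset and repeatedly try swapping each member for an unused descriptor, accepting any swap that lowers the training residual. The descriptors and activities below are invented, the subset size is two, and least squares is solved through the 2x2 normal equations (centered data, no intercept):

```python
# Miniature replacement-method (MR) descriptor selection on toy data.

def rss(X, y, i, j):
    # residual sum of squares of y ~ X[i], X[j] via 2x2 normal equations
    a = sum(v * v for v in X[i])
    b = sum(p * q for p, q in zip(X[i], X[j]))
    c = sum(v * v for v in X[j])
    d = sum(p * q for p, q in zip(X[i], y))
    e = sum(p * q for p, q in zip(X[j], y))
    det = a * c - b * b
    w1 = (d * c - b * e) / det
    w2 = (a * e - b * d) / det
    return sum((yi - w1 * p - w2 * q) ** 2
               for yi, p, q in zip(y, X[i], X[j]))

def replacement_method(X, y, subset):
    improved = True
    while improved:
        improved = False
        for pos in range(len(subset)):
            for cand in range(len(X)):
                if cand in subset:
                    continue
                trial = subset[:]
                trial[pos] = cand
                if rss(X, y, *trial) < rss(X, y, *subset):
                    subset, improved = trial, True
    return subset

# 4 invented descriptors (rows = descriptor values over 4 compounds);
# the "activity" y depends only on descriptors 0 and 2.
X = [[1.0, -1.0, 2.0, -2.0],
     [0.5, 0.4, -0.3, -0.6],
     [-1.0, 2.0, 1.0, -2.0],
     [0.2, -0.1, 0.3, -0.4]]
y = [xi0 + 2 * xi2 for xi0, xi2 in zip(X[0], X[2])]
best = replacement_method(X, y, [1, 3])
print(sorted(best))  # [0, 2]
```

Starting from a deliberately bad subset, the swap loop walks to the pair of descriptors that reproduces the toy activities exactly; the real MR works the same way over dozens of descriptors, using the model standard deviation as the acceptance criterion.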

  13. A New Artificial Neural Network Enhanced by the Shuffled Complex Evolution Optimization with Principal Component Analysis (SP-UCI) for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.

    2016-12-01

    Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics associated with natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization for training ANNs have already been found to be problematic in some cases. The BP scheme and gradient-based optimization methods carry the risk of premature convergence and of becoming stuck in local optima, and their search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based searching schemes, we propose an effective and efficient global searching method, termed the Shuffled Complex Evolutionary Global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connectivity weights. A large number of real-world datasets are tested with the SP-UCI-based ANN, as well as with various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to the other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling at Trinity Lake in the western U.S. In this case study, multiple climate indices are used as predictors for the SP-UCI-enhanced ANN. The reservoir inflows and hydropower releases are predicted up to sub-seasonal to seasonal scale. 
Results show that SP-UCI-enhanced ANN is able to achieve better statistics than other EAs-based ANN, which implies the usefulness and powerfulness of proposed SP-UCI-enhanced ANN for reservoir operation, water resources engineering and management. The SP-UCI-enhanced ANN is universally applicable to many other regression and prediction problems, and it has a good potential to be an alternative to the classical BP scheme and gradient-based optimization methods.
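
    The core idea of the abstract, replacing back-propagation with a population-based global search over the network weights, can be sketched in a few lines. This is not SP-UCI itself (which couples shuffled-complex evolution with PCA); it is a minimal elitism-plus-mutation evolutionary search on a toy regression task, standing in for the general approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(X)

N_HIDDEN = 8
N_W = 3 * N_HIDDEN + 1                    # weights and biases of a 1-8-1 net

def forward(w, X):
    """Evaluate a one-hidden-layer tanh network from a flat weight vector."""
    W1 = w[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = w[N_HIDDEN:2 * N_HIDDEN]
    W2 = w[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Population-based global search over the weights (elitism + mutation),
# a crude stand-in for SP-UCI's shuffled-complex search.
pop = rng.normal(0.0, 1.0, size=(40, N_W))
for gen in range(300):
    order = np.argsort([mse(w) for w in pop])
    elite = pop[order[:10]]                         # keep the 10 best members
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, N_W))
    pop = np.vstack([elite, children])

best = min(pop, key=mse)
print(f"final MSE: {mse(best):.4f}")
```

    No gradient of the loss is ever computed, which is exactly what makes such schemes insensitive to the initialization and local-optimum issues the abstract attributes to BP.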

  14. Detection of fatigue cracks by nondestructive testing methods

    NASA Technical Reports Server (NTRS)

    Anderson, R. T.; Delacy, T. J.; Stewart, R. C.

    1973-01-01

    The effectiveness of various NDT methods in detecting small, tight cracks was assessed by randomly introducing fatigue cracks into aluminum sheets. The study included optimizing the NDT methods, calibrating the NDT equipment with fatigue-cracked standards, and evaluating a number of cracked specimens by the optimized NDT methods. The evaluations were conducted by highly trained personnel, provided with detailed procedures, in order to minimize the effects of human variability. These personnel performed the NDT on the test specimens without knowledge of the flaw locations and reported on the flaws detected. The performance of these tests was measured by comparing the flaws detected against the flaws present. The principal NDT methods utilized were radiographic, ultrasonic, penetrant, and eddy current. Holographic interferometry, acoustic emission monitoring, and replication methods were also applied to a reduced number of specimens. Generally, the best performance was shown by the eddy current, ultrasonic, penetrant, and holographic tests. Etching provided no measurable improvement, while proof loading improved flaw detectability. Data are shown that quantify the performance of the NDT methods applied.

  15. Effects of number of training generations on genomic prediction for various traits in a layer chicken population.

    PubMed

    Weng, Ziqing; Wolc, Anna; Shen, Xia; Fernando, Rohan L; Dekkers, Jack C M; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Garrick, Dorian J

    2016-03-19

    Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the number of animals is in the training set, the higher is the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line. Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait. The relationship between optimal number of training generations and heritability was investigated. On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely-related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were used in the training set. The optimal number of training generations was 4 or more for high heritability traits but less than that for low heritability traits. 
For less heritable traits, limiting the training datasets to individuals closely related to the validation population resulted in the best predictions. The effect of adding distant ancestral generations in the training set on prediction accuracy differed between traits and the optimal number of necessary training generations is associated with the heritability of traits.
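
    The accuracy definition used in the abstract (correlation between GEBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability) is straightforward to compute. The sketch below uses simulated values, not the layer data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated validation generation (hypothetical numbers, not from the study):
h2 = 0.4                                   # trait heritability
n = 500
true_bv = rng.normal(0, 1, n)              # true breeding values
gebv = true_bv + rng.normal(0, 0.7, n)     # genomic EBV with prediction noise
# Adjusted phenotype = breeding value + residual, with var(bv)/var(pheno) = h2.
resid_sd = np.sqrt((1 - h2) / h2)
pheno_adj = true_bv + rng.normal(0, resid_sd, n)

# Prediction accuracy as defined in the abstract:
# cor(GEBV, adjusted phenotype) / sqrt(h2)
accuracy = np.corrcoef(gebv, pheno_adj)[0, 1] / np.sqrt(h2)
print(f"prediction accuracy: {accuracy:.3f}")
```

    Dividing by the square root of heritability rescales the correlation with phenotype into an estimate of the correlation with the (unobservable) true breeding value.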

  16. Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids

    NASA Astrophysics Data System (ADS)

    Tan, Maojin; Wang, Peng; Mao, Keyu

    2014-04-01

    Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and the inversion methods, including the Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) methods, are studied in detail. In a simulation test, a multi-echo CPMG sequence activation is designed first, echo trains of the ideal fluid models are synthesized, then an inversion algorithm is run on these synthetic echo trains, and finally the T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are each applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are applied, the inversion results are in good agreement with the assumed fluid model, which indicates that the inversion method of 3D NMR is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, respectively, and the sensitivity to the fluids in different magnetic field gradients is examined in detail. The effect of the magnetic field gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.

  17. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer

    PubMed Central

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure with the differential search (DS) algorithm functionally integrated with particle swarm optimization (PSO). To overcome local trapping of the search procedure, a new population initialization scheme is proposed using a Logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case for wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463

  18. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure with the differential search (DS) algorithm functionally integrated with particle swarm optimization (PSO). To overcome local trapping of the search procedure, a new population initialization scheme is proposed using a Logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case for wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy.
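
    The Logistic chaotic initialization mentioned in the abstract replaces uniform random draws with iterates of the logistic map. A minimal sketch, with an assumed map parameter r = 4 and seed x0 (the paper's exact settings are not given here):

```python
import numpy as np

# Logistic-map chaotic sequence used to initialize the search population,
# instead of uniform random numbers.
def logistic_chaotic_population(n_agents, dim, r=4.0, x0=0.7, lo=-1.0, hi=1.0):
    """Generate an (n_agents, dim) population from the map x <- r * x * (1 - x)."""
    x = x0
    seq = np.empty(n_agents * dim)
    for i in range(seq.size):
        x = r * x * (1.0 - x)
        seq[i] = x
    # Map the chaotic values in (0, 1) onto the search range [lo, hi].
    return lo + (hi - lo) * seq.reshape(n_agents, dim)

pop = logistic_chaotic_population(30, 10)
print(pop.shape, round(pop.min(), 3), round(pop.max(), 3))
```

    At r = 4 the map is fully chaotic, so consecutive iterates are deterministic yet widely scattered over (0, 1), which is the diversity property the initialization exploits.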

  19. Can Subjects be Guided to Optimal Decisions? The Use of a Real-Time Training Intervention Model

    DTIC Science & Technology

    2016-06-01

    execution of the task and may then be analyzed to determine if there is correlation between designated factors (scores, proportion of time in each...state with their decision performance in real time could allow training systems to be designed to tailor training to the individual decision maker...release; distribution is unlimited. CAN SUBJECTS BE GUIDED TO OPTIMAL DECISIONS? THE USE OF A REAL-TIME TRAINING INTERVENTION MODEL by Travis D

  20. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study the automatic diagnosis of large machinery faults based on the support vector machine (SVM), the SVM is used to classify and identify four common faults of large machinery. The extracted feature vectors are input to the SVM, which is trained and used for identification with a multi-class classification method. The optimal parameters of the support vector machine are searched by trial and error and by cross-validation. The support vector machine is then compared with a BP neural network. The results show that the support vector machine is faster to train and achieves higher classification accuracy, and is therefore more suitable for research on fault diagnosis in large machinery.
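
    The parameter-search step (trial and error plus cross-validation) can be sketched independently of the SVM itself. Below, a simple one-vs-rest ridge classifier stands in for the SVM, and its regularization parameter is chosen by k-fold cross-validated grid search; the four-class "fault" data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "fault feature vectors": 4 fault classes in a 6-D feature space
# (a stand-in for features extracted from bearing vibration signals).
centers = rng.normal(0, 2.0, size=(4, 6))
X = np.vstack([c + rng.normal(0, 0.8, size=(50, 6)) for c in centers])
y = np.repeat(np.arange(4), 50)

def fit(X, y, lam):
    """One-vs-rest ridge classifier: W = (X^T X + lam * I)^-1 X^T Y."""
    Y = np.eye(4)[y]                                    # one-hot targets
    return np.linalg.solve(X.T @ X + lam * np.eye(6), X.T @ Y)

def cv_accuracy(lam, k=5):
    """k-fold cross-validated accuracy, used to score a candidate parameter."""
    idx = rng.permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        mask = np.ones(len(y), bool)
        mask[fold] = False
        W = fit(X[mask], y[mask], lam)
        correct += np.sum(np.argmax(X[fold] @ W, axis=1) == y[fold])
    return correct / len(y)

# Grid search over the regularization parameter, analogous to searching the
# SVM penalty/kernel parameters by trial and error plus cross-validation.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = max(grid, key=cv_accuracy)
print(f"best lambda: {best_lam}, CV accuracy: {cv_accuracy(best_lam):.3f}")
```

    The same loop applies unchanged if `fit` is replaced by an actual SVM trainer with its penalty and kernel-width parameters on the grid.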

  1. Evolutionary programming-based univector field navigation method for fast mobile robots.

    PubMed

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot orientation at the target position; they deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation, which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and with obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out on a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.
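
    A univector field assigns a unit direction vector to every position. The field below is a hand-built toy example of the concept only (the paper instead trains a function approximator by evolutionary programming); the heading-bending rule and gain k are invented for illustration:

```python
import numpy as np

def univector(pos, target, theta_d, k=0.5):
    """Unit direction vector at `pos` for reaching `target` with heading theta_d."""
    d = target - pos
    alpha = np.arctan2(d[1], d[0])                      # bearing to the target
    # Wrapped heading error between the desired final heading and the bearing.
    err = np.arctan2(np.sin(theta_d - alpha), np.cos(theta_d - alpha))
    # Bend the commanded heading toward theta_d as the robot nears the target;
    # far away, the field simply points at the target.
    phi = alpha + k * err * np.exp(-np.linalg.norm(d))
    return np.array([np.cos(phi), np.sin(phi)])         # normalized by design

target, theta_d = np.array([2.0, 1.0]), 0.0
for p in [np.array([-3.0, 0.0]), np.array([1.5, 1.0]), np.array([1.9, 1.0])]:
    print(p, "->", univector(p, target, theta_d).round(3))
```

    Every output is a unit vector, which is the "normalized two-dimensional vector field" property: the field specifies direction only, leaving speed to the motion controller.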

  2. Building Energy Modeling and Control Methods for Optimization and Renewables Integration

    NASA Astrophysics Data System (ADS)

    Burger, Eric M.

    This dissertation presents techniques for the numerical modeling and control of building systems, with an emphasis on thermostatically controlled loads. The primary objective of this work is to address technical challenges related to the management of energy use in commercial and residential buildings. This work is motivated by the need to enhance the performance of building systems and by the potential for aggregated loads to perform load following and regulation ancillary services, thereby enabling the further adoption of intermittent renewable energy generation technologies. To increase the generalizability of the techniques, an emphasis is placed on recursive and adaptive methods which minimize the need for customization to specific buildings and applications. The techniques presented in this dissertation can be divided into two general categories: modeling and control. Modeling techniques encompass the processing of data streams from sensors and the training of numerical models. These models enable us to predict the energy use of a building and of sub-systems, such as a heating, ventilation, and air conditioning (HVAC) unit. Specifically, we first present an ensemble learning method for the short-term forecasting of total electricity demand in buildings. As the deployment of intermittent renewable energy resources continues to rise, the generation of accurate building-level electricity demand forecasts will be valuable to both grid operators and building energy management systems. Second, we present a recursive parameter estimation technique for identifying a thermostatically controlled load (TCL) model that is non-linear in the parameters. For TCLs to perform demand response services in real-time markets, online methods for parameter estimation are needed. Third, we develop a piecewise linear thermal model of a residential building and train the model using data collected from a custom-built thermostat. 
This model is capable of approximating unmodeled dynamics within a building by learning from sensor data. Control techniques encompass the application of optimal control theory, model predictive control, and convex distributed optimization to TCLs. First, we present the alternative control trajectory (ACT) representation, a novel method for the approximate optimization of non-convex discrete systems. This approach enables the optimal control of a population of non-convex agents using distributed convex optimization techniques. Second, we present a distributed convex optimization algorithm for the control of a TCL population. Experimental results demonstrate the application of this algorithm to the problem of renewable energy generation following. This dissertation contributes to the development of intelligent energy management systems for buildings by presenting a suite of novel and adaptable modeling and control techniques. Applications focus on optimizing the performance of building operations and on facilitating the integration of renewable energy resources.
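
    The thermostatically controlled loads (TCLs) discussed in the dissertation are commonly modeled as a first-order thermal circuit with a hysteretic on/off controller. A minimal sketch with illustrative parameter values (not the dissertation's identified models):

```python
import numpy as np

# First-order TCL model in cooling mode: C * dT/dt = (T_out - T)/R - on * eta * P,
# with a deadband thermostat.  All parameter values are illustrative.
R, C, P = 2.0, 2.0, 14.0        # thermal resistance (C/kW), capacitance (kWh/C), power (kW)
eta, dt = 2.5, 1.0 / 60.0       # coefficient of performance, time step (hours)
T_set, band = 20.0, 0.5         # setpoint and half-deadband (C)
T_out = 32.0                    # outdoor temperature (C)

a = np.exp(-dt / (R * C))       # exact discrete-time decay factor

T, on, history = 21.0, 1, []
for _ in range(24 * 60):        # simulate one day at 1-minute resolution
    # Exact discretization: relax toward the current equilibrium temperature.
    T = a * T + (1 - a) * (T_out - on * eta * P * R)
    # Hysteresis: switch state only at the deadband edges.
    if T <= T_set - band:
        on = 0
    elif T >= T_set + band:
        on = 1
    history.append(T)

history = np.array(history)
print(f"temperature range: {history.min():.2f} to {history.max():.2f} C")
```

    The resulting on/off duty cycle is what aggregation-based control schemes shift in time to make a TCL population follow a renewable generation signal.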

  3. Program to study optimal protocol for cardiovascular and muscular efficiency. [physical fitness training for manned space flight

    NASA Technical Reports Server (NTRS)

    Olree, H. D.

    1974-01-01

    Training programs necessary for the development of optimal strength during prolonged manned space flight were examined, and exercises performed on the Super Mini Gym of Skylab 2 were compared with similar exercises on the Universal Gym and with calisthenics. Cardiopulmonary gains were found to be negligible, but all training groups exhibited good gains in strength.

  4. Advanced Proficiency EHR Training: Effect on Physicians’ EHR Efficiency, EHR Satisfaction and Job Satisfaction

    PubMed Central

    Dastagir, M. Tariq; Chin, Homer L.; McNamara, Michael; Poteraj, Kathy; Battaglini, Sarah; Alstot, Lauren

    2012-01-01

    The best way to train clinicians to optimize their use of the Electronic Health Record (EHR) remains unclear. Approaches range from web-based training, classroom training, EHR functionality training, case-based training, role-based training, process-based training, mock-clinic training and “on the job” training. Similarly, the optimal timing of training remains unclear--whether to engage in extensive pre go-live training vs. minimal pre go-live training followed by more extensive post go-live training. In addition, the effectiveness of non-clinician trainers, clinician trainers, and peer-trainers remains poorly defined. This paper describes a program in which relatively experienced clinician users of an EHR underwent an intensive 3-day Peer-Led EHR advanced proficiency training, and the results of that training based on participant surveys. It highlights the effectiveness of Peer-Led Proficiency Training of existing experienced clinician EHR users in improving self-reported efficiency and satisfaction with an EHR and improvements in perceived work-life balance and job satisfaction. PMID:23304282

  5. Advanced proficiency EHR training: effect on physicians' EHR efficiency, EHR satisfaction and job satisfaction.

    PubMed

    Dastagir, M Tariq; Chin, Homer L; McNamara, Michael; Poteraj, Kathy; Battaglini, Sarah; Alstot, Lauren

    2012-01-01

    The best way to train clinicians to optimize their use of the Electronic Health Record (EHR) remains unclear. Approaches range from web-based training, classroom training, EHR functionality training, case-based training, role-based training, process-based training, mock-clinic training and "on the job" training. Similarly, the optimal timing of training remains unclear--whether to engage in extensive pre go-live training vs. minimal pre go-live training followed by more extensive post go-live training. In addition, the effectiveness of non-clinician trainers, clinician trainers, and peer-trainers remains poorly defined. This paper describes a program in which relatively experienced clinician users of an EHR underwent an intensive 3-day Peer-Led EHR advanced proficiency training, and the results of that training based on participant surveys. It highlights the effectiveness of Peer-Led Proficiency Training of existing experienced clinician EHR users in improving self-reported efficiency and satisfaction with an EHR and improvements in perceived work-life balance and job satisfaction.

  6. Sci-Thur AM: YIS – 05: Prediction of lung tumor motion using a generalized neural network optimized from the average prediction outcome of a group of patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teo, Troy; Alayoubi, Nadia; Bruce, Neil

    Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) is presented, using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data obtained offline. Methods: The average error surface obtained from seven patients was used to determine the input data size and the number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data points (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients is avoided.
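
    The difference between method 1 (random re-initialization) and method 2 (weights inherited between batches) amounts to warm-starting each sliding-window retraining. A minimal sketch with a linear predictor standing in for the neural network, on a synthetic breathing-like signal (not patient data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic breathing-like tumor trajectory: ~4 s period, small noise.
t = np.arange(0, 120, 0.133)                       # ~7.5 Hz sampling, 2 minutes
signal = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.normal(size=t.size)

P, H = 35, 5      # input window (35 samples, as in the abstract) and horizon

def make_xy(s):
    X = np.array([s[i:i + P] for i in range(len(s) - P - H)])
    return X, s[P + H:]

def sgd(w, X, y, lr=1e-4):
    """One epoch of least-mean-squares updates for a linear predictor."""
    for xi, yi in zip(X, y):
        w = w + lr * (yi - xi @ w) * xi
    return w

# Sliding-window retraining: the weights carried across windows implement the
# warm start of method 2 (method 1 would reset w to a random vector each time).
w = np.zeros(P)
errors = []
for start in range(0, len(signal) - 200, 100):
    X, y = make_xy(signal[start:start + 200])
    w = sgd(w, X, y)                               # warm start from previous w
    errors.append(float(np.mean(np.abs(X @ w - y))))
print(f"MAE, first vs last window: {errors[0]:.3f} vs {errors[-1]:.3f}")
```

    Because each window resumes from an already-useful weight vector, the error keeps shrinking across windows without long per-window training, which is the abstract's motivation for inheriting weights.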

  7. The criteria of optimization of training specialists for the nuclear power industry and its implementation in the educational process

    NASA Astrophysics Data System (ADS)

    Lavrinenko, S. V.; Polikarpov, P. I.

    2017-11-01

    The nuclear industry is one of the most important and high-tech spheres of human activity in Russia. The main cause of accidents in the nuclear industry is the human factor. Hence there is a need to constantly analyze the system of training specialists and to optimize it in order to improve safety at nuclear industry enterprises. To do this, the international training experience of the leading countries in the field of nuclear energy must be analyzed. Based on this analysis, criteria have been formulated to optimize the educational process of training specialists for the nuclear power industry, and their effectiveness has been tested. The most effective and promising approach is the introduction of modern information technologies into student training, such as real-time simulators, electronic educational resources, etc.

  8. Gait training strategies to optimize walking ability in people with stroke: A synthesis of the evidence

    PubMed Central

    Tang, Pei Fang

    2011-01-01

    Stroke is a leading cause of long-term disability. Impairments resulting from stroke lead to persistent difficulties with walking and subsequently, improved walking ability is one of the highest priorities for people living with a stroke. In addition, walking ability has important health implications in providing protective effects against secondary complications common after a stroke such as heart disease or osteoporosis. This paper systematically reviews common gait training strategies (neurodevelopmental techniques, muscle strengthening, treadmill training, intensive mobility exercises) to improve walking ability. The results (descriptive summaries as well as pooled effect sizes) from randomized controlled trials are presented and implications for optimal gait training strategies are discussed. Novel and emerging gait training strategies are highlighted and research directions proposed to enable the optimal recovery and maintenance of walking ability. PMID:17939776

  9. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed Central

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.

    2015-01-01

    Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378
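
    Of the techniques reviewed, inverse dynamics is the easiest to show in miniature: measured kinematics go in, net joint moments come out. A single-joint planar sketch with illustrative segment parameters (real musculoskeletal models involve many segments, external forces, and muscle redundancy):

```python
import numpy as np

# Inverse dynamics for a single joint (a planar knee-extension stand-in):
# given measured joint kinematics, recover the net joint moment from the
# equation of motion  I * theta'' = M - m * g * l_c * sin(theta).
# Segment parameters are illustrative, not from any subject.
m, l_c, I, g = 3.5, 0.25, 0.25, 9.81   # mass (kg), CoM distance (m), inertia (kg m^2)

t = np.linspace(0, 1, 200)
theta = 0.5 * np.sin(2 * np.pi * t)    # "measured" joint angle (rad)
omega = np.gradient(theta, t)          # numerical differentiation
alpha = np.gradient(omega, t)

# Net joint moment required to produce the observed motion:
M = I * alpha + m * g * l_c * np.sin(theta)
print(f"peak net joint moment: {np.abs(M).max():.2f} N m")
```

    The net moment is only the first step: distributing it among the individual muscles crossing the joint is the redundant problem that the optimisation and EMG-driven methods in the review address.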

  10. A Review of Hazard Anticipation Training Programs for Young Drivers

    PubMed Central

    McDonald, Catherine C.; Goodwin, Arthur H.; Pradhan, Anuj K.; Romoser, Matthew R.E.; Williams, Allan F.

    2015-01-01

    Purpose Poor hazard anticipation skills are a risk factor associated with high motor vehicle crash rates of young drivers. A number of programs have been developed to improve these skills. The purpose of this review was to assess the empirical literature on hazard anticipation training for young drivers. Methods Studies were included if they: 1) included an assessment of hazard anticipation training outcomes; 2) were published between January 1, 1980 and December 31, 2013 in an English language peer-reviewed journal or conference proceeding; and 3) included at least one group that uniquely comprised a cohort of participants <21 years. Nineteen studies met inclusion criteria. Results Studies used a variety of training methods including interactive computer programs, videos, simulation, commentary driving, or a combination of approaches. Training effects were predominantly measured through computer-based testing and driving simulation with eye tracking. Four studies included an on-road evaluation. Most studies evaluated short-term outcomes (immediate or few days). In all studies, young drivers showed improvement in selected hazard anticipation outcomes, but none investigated crash effects. Conclusions Although there is promise in existing programs, future research should include long-term follow up, evaluate crash outcomes, and assess the optimal timing of hazard anticipation training taking into account the age and experience level of young drivers. PMID:26112734

  11. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    PubMed Central

    Box, Simon

    2014-01-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570

  12. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    PubMed

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
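
    The HuTMaC idea (log human game play, then fit a classifier mapping traffic state to the human's signal choice) reduces to behaviour cloning. A toy sketch with an invented two-approach state and a logistic-regression classifier standing in for the paper's neural network classifiers:

```python
import numpy as np

rng = np.random.default_rng(8)

def human_policy(state):
    """Toy stand-in for the human player: give green to the longer queue."""
    return int(state[1] > state[0])              # which approach gets green

# Log (state, action) pairs from simulated "game play".
states = rng.uniform(0, 20, size=(500, 2))       # queue lengths, two approaches
actions = np.array([human_policy(s) for s in states])

# Train a logistic-regression classifier by gradient descent on the log loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    z = np.clip(states @ w + b, -30, 30)         # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))
    g = p - actions                              # gradient of the log loss in z
    w -= 0.01 * states.T @ g / len(actions)
    b -= 0.01 * g.mean()

cloned = (states @ w + b > 0).astype(int)
print(f"agreement with the human policy: {(cloned == actions).mean():.1%}")
```

    The cloned classifier can then be queried at control rate inside the simulator, giving machine control that imitates the demonstrated human strategy rather than solving the intractable optimization directly.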

  13. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples and data from different methods of liquid chromatography - high resolution mass spectrometry and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. 
We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
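
    IPO's optimization loop (score parameter settings arranged by a design of experiments, fit a response surface, and move to its optimum) can be sketched with a two-parameter toy score. The parameter names echo XCMS-style ppm and signal-to-noise settings, but the score function itself is invented:

```python
import numpy as np

# Toy stand-in for IPO's loop: score each candidate parameter setting, sample
# the settings on a factorial design, fit a quadratic response surface, and
# jump to the surface's stationary point.
def score(ppm, snr):
    """Hypothetical peak-picking score, peaked at ppm = 15, snr = 6."""
    return -((ppm - 15.0) / 10.0) ** 2 - ((snr - 6.0) / 4.0) ** 2

# 3x3 full factorial design over the two parameters.
ppm_lv, snr_lv = np.linspace(5, 40, 3), np.linspace(2, 12, 3)
pts = np.array([(p, s) for p in ppm_lv for s in snr_lv])
z = np.array([score(p, s) for p, s in pts])

# Fit a full quadratic response surface: z ~ 1, p, s, p^2, s^2, p*s.
p, s = pts[:, 0], pts[:, 1]
A = np.column_stack([np.ones_like(p), p, s, p * p, s * s, p * s])
beta, *_ = np.linalg.lstsq(A, z, rcond=None)

# Stationary point of the fitted surface (set its gradient to zero).
H = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
opt = np.linalg.solve(H, -np.array([beta[1], beta[2]]))
print(f"response-surface optimum: ppm={opt[0]:.1f}, snr={opt[1]:.1f}")
```

    In IPO the score comes from counting reliable isotopologue peak groups rather than from a closed-form function, and the design/fit/jump cycle repeats with a shrinking parameter range.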

  14. Noninvasive extraction of fetal electrocardiogram based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Fu, Yumei; Xiang, Shihan; Chen, Tianyi; Zhou, Ping; Huang, Weiyan

    2015-10-01

    The fetal electrocardiogram (FECG) signal has important clinical value for doctors in diagnosing fetal heart diseases and choosing suitable therapeutic schemes, so the noninvasive extraction of the FECG from electrocardiogram (ECG) signals has become an active research topic. A new method, the Support Vector Machine (SVM), is utilized for the extraction of the FECG with a limited amount of data. Firstly, the theory of the SVM and the principle of SVM-based extraction are studied. Secondly, the transformation of the maternal electrocardiogram (MECG) component in the abdominal composite signal is verified to be nonlinear and is fitted with the SVM. Then the SVM is trained, and the training results are compared with the real data to verify the effect of the training. Meanwhile, the parameters of the SVM are optimized to achieve the best performance, so that the learning machine can be used to fit unknown samples. Finally, the FECG is extracted by removing the optimal estimate of the MECG component from the abdominal composite signal. To evaluate the performance of the SVM-based FECG extraction, the Signal-to-Noise Ratio (SNR) and a visual test are used. The experimental results show that an FECG of good quality can be extracted: its SNR increases to as high as 9.2349 dB and the time cost decreases to as short as 0.802 seconds. Compared with the traditional method, the noninvasive extraction method based on the SVM has a simpler realization, shorter processing time and better extraction quality under the same conditions.
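
    The extraction principle (fit the nonlinear chest-to-abdomen MECG transformation with a regressor, then subtract the estimate so the fetal component remains in the residual) can be sketched with kernel ridge regression standing in for the SVM. All signals below are synthetic caricatures, not real ECG:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic signals (illustrative only): the abdominal lead is a nonlinear
# transformation of the maternal chest ECG plus a weak fetal component.
t = np.linspace(0.0, 2.0, 1000)
mecg_chest = np.sin(2 * np.pi * 1.2 * t) ** 15          # maternal beats
fecg = 0.15 * np.sin(2 * np.pi * 2.3 * t) ** 15         # weaker, faster fetal beats
abdominal = np.tanh(1.5 * mecg_chest) + fecg + 0.01 * rng.normal(size=t.size)

def rbf(a, b, gamma=20.0):
    """RBF kernel matrix between two 1-D sample vectors."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Fit the nonlinear chest -> abdomen MECG transformation on a limited
# training segment (kernel ridge regression as a stand-in for the SVM fit).
train = slice(0, 500)
K = rbf(mecg_chest[train], mecg_chest[train])
alpha = np.linalg.solve(K + 1e-2 * np.eye(K.shape[0]), abdominal[train])

mecg_est = rbf(mecg_chest, mecg_chest[train]) @ alpha   # estimated MECG component
fecg_est = abdominal - mecg_est                         # residual = extracted FECG

err = np.linalg.norm(fecg_est - fecg) / np.linalg.norm(fecg)
print(f"relative extraction error: {err:.2f}")
```

    Because the fetal beats are uncorrelated with the chest-lead feature, the regression averages them out of the MECG estimate, so they survive in the residual, which is the mechanism the abstract relies on.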

  15. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    PubMed

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is essential for the high-speed maglev train to accomplish synchronous traction: it is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. Based on an analysis of the sensor's operating principle, the paper describes the design of the sending and receiving coils and the implementation of the sensor's hardware and software. To enhance the reliability of the sensor, a support vector machine is used to recognize fault signatures, and the signal flow method is used to locate the faulty parts. The diagnosis information can be sent to an upper-level central control computer to evaluate the reliability of the sensors, and it also enables on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor studied here has been used in an actual engineering project.

  16. Application of Convolutional Neural Network in Classification of High Resolution Agricultural Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.

    2017-09-01

With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images has become significant for agricultural management and estimation. Because ground features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the standards of agricultural applications. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNNs). A large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, the CNN was trained and tested with a MATLAB deep-learning toolbox, and after gradual parameter tuning during training the crop classification reached a correct rate of 99.66%. By improving the accuracy of image classification and recognition, this application of CNNs provides a reference for the use of remote sensing in PA.
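The abstract does not publish the network architecture; as a minimal illustration of the operation underlying a CNN's feature extraction, the sketch below applies one convolution filter plus a ReLU to a hypothetical panchromatic patch (plain NumPy, not the authors' MATLAB toolbox):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical 8x8 "panchromatic patch" with a vertical field boundary.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0

# A vertical-edge kernel; in a trained CNN, learned filters play this role.
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

feature_map = relu(conv2d(patch, edge_kernel))   # responds at the boundary
```

A real network stacks many such filters with pooling and fully connected layers, and learns the kernels from the training samples rather than fixing them by hand.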

  17. EPA Optimal Corrosion Control Treatment Regional Training Workshops

    EPA Pesticide Factsheets

EPA is hosting face-to-face regional training workshops throughout 2016-2017 on optimal corrosion control treatment (OCCT). The workshops will be held in each of the Regions and are intended for primacy agency staff and technical assistance providers.

  18. [Evaluating a blended-learning program on developing teamwork competence].

    PubMed

    Aguado, David; Arranz, Virginia; Valera-Rubio, Ana; Marín-Torres, Susana

    2011-08-01

The knowledge, skills and abilities that are required to work optimally in teams are critical for many types of work, and organizations can provide access to these skills by means of training programs. Diverse studies show how traditional on-site training methodologies can improve teamwork knowledge, skills and abilities. Nevertheless, on-site methods can be complemented with on-line strategies, resulting in blended-learning programs. The aim of this work is to analyze, following Kirkpatrick's assessment levels, the effectiveness of a blended-learning program for teamwork training in an organizational context. The program was carried out with 102 professionals; the results show participants' satisfaction with the program, a high level of learning (of both declarative and procedural knowledge), and a moderate level of transfer of learning to the job.

  19. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image. PMID:23049544
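As a rough sketch of the codebook-training step: the paper modifies K-means with an energy function, but plain K-means on flattened coefficient blocks already shows how a trained codebook replaces each block with a small index. The block sizes, data, and parameters below are illustrative only:

```python
import numpy as np

def train_codebook(blocks, k, iters=20, seed=0):
    """Plain K-means codebook training (the paper uses an energy-modified
    K-means; this is a standard stand-in)."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        # Assign each block to its nearest codeword.
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update each codeword as the mean of its assigned blocks.
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = blocks[labels == c].mean(axis=0)
    return codebook, labels

# Hypothetical 4x4 wavelet-coefficient blocks flattened to 16-vectors,
# with distinct low-energy and high-energy populations.
rng = np.random.default_rng(1)
blocks = np.concatenate([rng.normal(0.0, 0.1, (100, 16)),
                         rng.normal(3.0, 0.1, (100, 16))])
codebook, labels = train_codebook(blocks, k=2)
```

Each block is then stored as its codeword index instead of 16 coefficients, which is where the compression comes from; the decoder looks the codeword back up in the shared codebook.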

  20. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image.

  1. PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine

    PubMed Central

    Manavalan, Balachandran; Shin, Tae H.; Lee, Gwang

    2018-01-01

    Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html. PMID:29616000
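The paper's feature selection protocol is not spelled out line by line here; the sketch below illustrates the general shape of a backward-elimination loop driven by leave-one-out accuracy, with a simple nearest-centroid rule standing in for the SVM. The data, scorer, and stopping rule are all hypothetical:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-centroid rule
    (a lightweight stand-in for the paper's SVM)."""
    classes = np.unique(y)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        dists = [np.linalg.norm(X[i] - X[mask & (y == c)].mean(axis=0))
                 for c in classes]
        correct += classes[int(np.argmin(dists))] == y[i]
    return correct / len(y)

def backward_elimination(X, y):
    """Greedily drop any feature whose removal improves LOO accuracy."""
    keep = list(range(X.shape[1]))
    best = loo_accuracy(X, y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for f in list(keep):
            trial = [g for g in keep if g != f]
            acc = loo_accuracy(X[:, trial], y)
            if acc > best:
                best, keep, improved = acc, trial, True
                break
    return keep, best

# Toy two-class data: 3 informative features, 2 pure-noise features.
rng = np.random.default_rng(0)
y = np.array([0] * 15 + [1] * 15)
informative = rng.normal(0.0, 1.0, (30, 3)) + 2.0 * y[:, None]
noise = rng.normal(0.0, 4.0, (30, 2))
X = np.hstack([informative, noise])

baseline = loo_accuracy(X, y)
selected, best = backward_elimination(X, y)   # best >= baseline by design
```

The same loop applies unchanged when the scorer wraps a cross-validated SVM over composition features, at correspondingly higher cost per trial.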

  2. PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine.

    PubMed

    Manavalan, Balachandran; Shin, Tae H; Lee, Gwang

    2018-01-01

    Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html.

  3. Optimizing Eating Performance for Older Adults With Dementia Living in Long-term Care: A Systematic Review.

    PubMed

    Liu, Wen; Galik, Elizabeth; Boltz, Marie; Nahm, Eun-Shim; Resnick, Barbara

    2015-08-01

Research to date has focused on maintaining weight and nutrition, with little attention to optimizing eating performance. The objective was to evaluate the effectiveness of interventions on eating performance for older adults with dementia in long-term care (LTC). A systematic review was performed. Five databases, including PubMed, Medline (OVID), EBM Reviews (OVID), PsycINFO (OVID), and CINAHL (EBSCOhost), were searched between January 1980 and June 2014. Keywords included dementia, Alzheimer, feed(ing), eat(ing), mealtime(s), oral intake, autonomy, and intervention. Intervention studies that optimize eating performance and evaluate change in self-feeding or eating performance among older adults (≥65 years) with dementia in LTC were eligible. Studies were screened by title and abstract, and full texts were reviewed for eligibility. Eligible studies were classified by intervention type. Study quality was assessed using the Quality Assessment Tool for Quantitative Studies, and level of evidence using the 2011 Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Eleven intervention studies (five randomized controlled trials [RCTs]) were identified and classified into four types: training program, mealtime assistance, environmental modification, and multicomponent intervention. The quality of the 11 studies was generally moderate (four studies were rated strong, four moderate, and three weak in quality), with the main threats being weak designs, lack of blinding and control for confounders, and inadequate psychometric evidence for measures. Training programs targeting older adults (Montessori methods and spaced retrieval) demonstrated good evidence of decreasing feeding difficulty. Mealtime assistance offered by nursing staff (e.g., verbal prompts and cues, positive reinforcement, appropriate praise and encouragement) also showed effectiveness in improving eating performance.
This review provided preliminary support for using training and mealtime assistance to optimize eating performance for older adults with dementia in LTC. Future effectiveness studies may focus on training nursing caregivers as interventionists, lengthening intervention duration, and including residents with varying levels of cognitive impairment in diverse cultures. The effectiveness of training combined with mealtime assistance may also be tested to achieve better resident outcomes in eating performance. © 2015 Sigma Theta Tau International.

  4. Support vector machine multiuser receiver for DS-CDMA signals in multipath channels.

    PubMed

    Chen, S; Samingan, A K; Hanzo, L

    2001-01-01

    The problem of constructing an adaptive multiuser detector (MUD) is considered for direct sequence code division multiple access (DS-CDMA) signals transmitted through multipath channels. The emerging learning technique, called support vector machines (SVM), is proposed as a method of obtaining a nonlinear MUD from a relatively small training data block. Computer simulation is used to study this SVM MUD, and the results show that it can closely match the performance of the optimal Bayesian one-shot detector. Comparisons with an adaptive radial basis function (RBF) MUD trained by an unsupervised clustering algorithm are discussed.

  5. Design of neural networks for classification of remotely sensed imagery

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Cromp, Robert F.; Birmingham, Mark

    1992-01-01

    Classification accuracies of a backpropagation neural network are discussed and compared with a maximum likelihood classifier (MLC) with multivariate normal class models. We have found that, because of its nonparametric nature, the neural network outperforms the MLC in this area. In addition, we discuss techniques for constructing optimal neural nets on parallel hardware like the MasPar MP-1 currently at GSFC. Other important discussions are centered around training and classification times of the two methods, and sensitivity to the training data. Finally, we discuss future work in the area of classification and neural nets.

  6. Artificial neural systems for interpretation and inversion of seismic data

    NASA Astrophysics Data System (ADS)

    Calderon-Macias, Carlos

    The goal of this work is to investigate the feasibility of using neural network (NN) models for solving geophysical exploration problems. First, a feedforward neural network (FNN) is used to solve inverse problems. The operational characteristics of a FNN are primarily controlled by a set of weights and a nonlinear function that performs a mapping between two sets of data. In a process known as training, the FNN weights are iteratively adjusted to perform the mapping. After training, the computed weights encode important features of the data that enable one pattern to be distinguished from another. Synthetic data computed from an ensemble of earth models and the corresponding models provide the training data. Two training methods are studied: the backpropagation method which is a gradient scheme, and a global optimization method called very fast simulated annealing (VFSA). A trained network is then used to predict models from new data (e.g., data from a new location) in a one-step procedure. The application of this method to the problems of obtaining formation resistivities and layer thicknesses from resistivity sounding data and 1D velocity models from seismic data shows that trained FNNs produce reasonably accurate earth models when observed data are input to the FNNs. In a second application, a FNN is used for automating the NMO correction process of seismic reflection data. The task of the FNN is to map CMP data at control locations along a seismic line into subsurface velocities. The network is trained while the velocity analyses are performed at the control locations. Once trained, the computed weights are used as an operator that acts on the remaining CMP data as a velocity interpolator, resulting in a fast method for NMO correction. The second part of this dissertation describes the application of a Hopfield neural network (HNN) to the problems of deconvolution and multiple attenuation. 
In these applications, the unknown parameters (reflection coefficients and source wavelet in the first problem and an operator in the second) are mapped as neurons of the HNN. The proposed deconvolution method attempts to reproduce the data with a limited number of events. The multiple attenuation method resembles the predictive deconvolution method. Results of this method are compared with a multiple elimination method based on estimating the source wavelet from the seismic data.

  7. The maximum vector-angular margin classifier and its fast training on large datasets using a core vector machine.

    PubMed

    Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong

    2012-03-01

Although pattern classification has been extensively studied in the past decades, how to effectively carry out the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n²) (or even up to O(n³)) computational complexity, where n is the number of training patterns, which heavily limits the applicability of these methods to large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed, based on the vector-angular margin, to find an optimal vector c in the pattern feature space such that all the testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training data points. It is then proved that the kernelized MAMC can be equivalently formulated as a kernelized minimum enclosing ball (MEB) problem, which leads to a distinctive merit of MAMC: it has the flexibility of controlling the number of support vectors, like ν-SVC, and it can be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC, so that fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, X; Wang, J; Hu, W

Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization system that uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity-modulated radiation therapy (IMRT) plans for cervical cancer. Methods: A total of 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study; all of these patients had previously been treated in our institution. Two prescription levels are commonly used in our institution: 45 Gy in 25 fractions and 50.4 Gy in 28 fractions. Fifty of the plans were selected to train the RapidPlan model for predicting dose-volume constraints. After training, the model was validated with 10 plans from the training pool (internal validation) and an additional 20 new plans (external validation). All plans used for validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVHs lay within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated plans with the original manually optimized plans. Results: For all validation cases, RapidPlan-based plans showed similar or superior results compared with the manually optimized ones. RapidPlan increased D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose to the bladder by 1.25 Gy/1.13 Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses to the rectum and bowel were also decreased, by an average of 2.64 Gy/0.83 Gy and 0.66 Gy/1.05 Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan model for cervical cancer shows the ability to systematically improve IMRT plan quality, suggesting that RapidPlan has great potential to make the treatment planning process more efficient.

  9. Testing the Limits of Optimizing Dual-Task Performance in Younger and Older Adults

    PubMed Central

    Strobach, Tilo; Frensch, Peter; Müller, Herrmann Josef; Schubert, Torsten

    2012-01-01

Impaired dual-task performance in younger and older adults can be improved with practice. Optimal conditions even allow for a (near) elimination of this impairment in younger adults. However, it is unknown whether such (near) elimination is the limit of performance improvements in older adults. The present study tests this limit in older adults under conditions of (a) a high amount of dual-task training and (b) training with simplified component tasks in dual-task situations. The data showed that a high amount of dual-task training in older adults provided no evidence for an improvement of dual-task performance to the optimal level achieved by younger adults. However, training with simplified component tasks in dual-task situations, applied exclusively to older adults, produced a similar level of optimal dual-task performance in both age groups. Therefore, by applying a testing-the-limits approach, we demonstrated that under very specific conditions older adults improved dual-task performance to the same level as younger adults by the end of training. PMID:22408613

  10. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    PubMed

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  11. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

The proximity effect, caused by electron-beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software-based neural network decreased the computation time by a factor of 30, and a hardware-based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared with the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to patterns not contained in its training set.

  12. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

Hashing has proven to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones are more powerful at generating discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small, unique binary codes. Our alternating optimization adaptively discovers the prototype set and a code set of varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and it enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
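Once any binary quantizer (ABQ included) has produced codes, nearest neighbor search reduces to Hamming ranking; the brute-force version below shows the retrieval step that short binary codes make fast. The database, code length, and query are hypothetical:

```python
import numpy as np

def hamming_search(query_code, db_codes, topk=5):
    """Brute-force Hamming ranking over binary codes stored as 0/1 arrays.
    Any binary quantizer plugs into this retrieval step."""
    dists = (db_codes != query_code).sum(axis=1)   # Hamming distances
    order = np.argsort(dists, kind="stable")       # nearest codes first
    return order[:topk], dists

# Hypothetical 16-bit codes for a tiny database of 1000 items.
rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 16))
q = db[42].copy()                                  # query identical to item 42

idx, dists = hamming_search(q, db, topk=3)
```

Production systems pack the bits into machine words and use popcount instead of elementwise comparison, but the ranking semantics are identical.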

  13. Challenging the Sacred Assumption: A Call for a Systemic Review of Army Aviation Maintenance

    DTIC Science & Technology

    2017-05-25

structure, training, equipping and sustainment. Each study intends to optimize the force structure to achieve a balance between the modernization and...operational budgets. Since 1994, Army Aviation force structures, training resources, available equipment and aircraft have changed significantly. Yet...

  14. Neural networks for structural design - An integrated system implementation

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  15. Saturation pulse design for quantitative myocardial T1 mapping.

    PubMed

    Chow, Kelvin; Kellman, Peter; Spottiswoode, Bruce S; Nielles-Vallespin, Sonia; Arai, Andrew E; Salerno, Michael; Thompson, Richard B

    2015-10-01

Quantitative saturation-recovery-based T1 mapping sequences are less sensitive to systematic errors than the Modified Look-Locker Inversion recovery (MOLLI) technique but require high-performance saturation pulses. We propose to optimize adiabatic and pulse-train saturation pulses for quantitative T1 mapping to have <1% absolute residual longitudinal magnetization (|MZ/M0|) over the ranges of B0 and [Formula: see text] (B1 scale factor) inhomogeneity found at 1.5 T and 3 T. Design parameters for an adiabatic BIR4-90 pulse were optimized for improved performance within 1.5 T B0 (±120 Hz) and [Formula: see text] (0.7-1.0) ranges. Flip angles in hard-pulse trains of 3-6 pulses were optimized for 1.5 T and 3 T, with consideration of T1 values, field inhomogeneities (B0 = ±240 Hz and [Formula: see text] = 0.4-1.2 at 3 T), and the maximum achievable B1 field strength. Residual MZ/M0 was simulated and measured experimentally for current standard and optimized saturation pulses in phantoms and in in-vivo human studies. T1 maps were acquired at 3 T in human subjects and a swine using a SAturation recovery single-SHot Acquisition (SASHA) technique with a standard 90°-90°-90° pulse train and an optimized 6-pulse train. Measured residual MZ/M0 in phantoms had excellent agreement with simulations over a wide range of B0 and [Formula: see text]. The optimized BIR4-90 reduced the maximum residual |MZ/M0| to <1%, a 5.8× reduction compared to a reference BIR4-90. An optimized 3-pulse train achieved a maximum residual |MZ/M0| of <1% over the 1.5 T optimization ranges, compared to 11.3% for a standard 90°-90°-90° pulse train, while a 6-pulse train met this target for the wider 3 T ranges of B0 and [Formula: see text]. The 6-pulse train demonstrated more uniform saturation across both the myocardium and the entire field of view than the other saturation pulses in human studies. T1 maps were more spatially homogeneous with 6-pulse-train SASHA than with the reference 90°-90°-90° SASHA in both human and animal studies. Adiabatic and pulse-train saturation pulses optimized for the different constraints found at 1.5 T and 3 T achieved <1% residual |MZ/M0| in phantom experiments, enabling greater accuracy in quantitative saturation-recovery T1 imaging.

  16. Neuroprosthetic Decoder Training as Imitation Learning.

    PubMed

    Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P

    2016-05-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
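The paper's framing, in which the decoder acts, an oracle labels the visited states with the intended movement, and the decoder is retrained on the aggregated dataset, can be sketched as a DAgger-style loop on a toy linear cursor task. The encoding model, oracle, and least-squares retraining below are illustrative assumptions, not the authors' BCI setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: neural activity is a noisy linear encoding of the
# user's intended 2-D cursor velocity; the decoder maps activity -> velocity.
enc = rng.normal(size=(10, 2))               # encoding matrix (10 channels)

def neural_activity(intended_v):
    return enc @ intended_v + 0.05 * rng.normal(size=10)

def oracle(cursor, target):
    """Surrogate for user intent: a unit step toward the target."""
    d = target - cursor
    n = np.linalg.norm(d)
    return d / n if n > 1e-9 else np.zeros(2)

# DAgger-style training: act with the *current* decoder, label the visited
# states with the oracle's action, and retrain on the aggregated dataset.
W = np.zeros((2, 10))                        # decoder: velocity = W @ activity
X, Y = [], []
for episode in range(5):
    target = 3.0 * rng.normal(size=2)
    cursor = np.zeros(2)
    for step in range(40):
        v_int = oracle(cursor, target)
        a = neural_activity(v_int)
        X.append(a)
        Y.append(v_int)                      # aggregate (state, oracle action)
        cursor = cursor + W @ a              # the decoder, not the oracle, acts
    # Retrain the decoder by least squares on all data gathered so far.
    W = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0].T

# A well-trained decoder approximately inverts the encoding.
final_err = np.linalg.norm(W @ enc - np.eye(2))
```

The essential DAgger ingredient is that training states come from rollouts of the learner's own policy while labels come from the oracle, which is exactly the closed-loop decoder-calibration setting the paper analyzes.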

  17. Combining functional and structural tests improves the diagnostic accuracy of relevance vector machine classifiers

    PubMed Central

    Racette, Lyne; Chiou, Christine Y.; Hao, Jiucang; Bowd, Christopher; Goldbaum, Michael H.; Zangwill, Linda M.; Lee, Te-Won; Weinreb, Robert N.; Sample, Pamela A.

    2009-01-01

Purpose To investigate whether combining optic disc topography and short-wavelength automated perimetry (SWAP) data improves the diagnostic accuracy of relevance vector machine (RVM) classifiers for detecting glaucomatous eyes compared to using each test alone. Methods One eye of each of 144 glaucoma patients and 68 healthy controls from the Diagnostic Innovations in Glaucoma Study were included. RVMs were trained and tested with cross-validation on optimized (backward elimination) SWAP features (thresholds plus age; pattern deviation (PD); total deviation (TD)) and on Heidelberg Retina Tomograph II (HRT) optic disc topography features, independently and in combination. RVM performance was also compared to that of two HRT linear discriminant functions (LDFs) and to SWAP mean deviation (MD) and pattern standard deviation (PSD). Classifier performance was measured by the area under the receiver operating characteristic curve (AUROC) generated for each feature set and by the sensitivities at set specificities of 75%, 90% and 96%. Results RVMs trained on combined HRT and SWAP thresholds plus age had a significantly higher AUROC (0.93) than RVMs trained on HRT (0.88) and SWAP (0.76) alone. AUROCs for the SWAP global indices (MD: 0.68; PSD: 0.72) offered no advantage over SWAP thresholds plus age, while the LDF AUROCs were significantly lower than those of RVMs trained on the combined SWAP and HRT feature set and on the HRT feature set alone. Conclusions Training RVMs on combined optimized HRT and SWAP data improved diagnostic accuracy compared to training on SWAP or HRT parameters alone. Future research may identify other combinations of tests and classifiers that further improve diagnostic accuracy. PMID:19528827

  18. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. Particles are ranked by fitness and the optimization problem is considered as a whole: error back-propagation gradient descent is used to train the BP neural network, and each particle updates its velocity and position according to its individual optimum and the global optimum. By making particles learn more from the social (global) optimum and less from their own optima, the algorithm keeps particles from falling into local optima, while the use of gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that in its initial stage the algorithm converges rapidly toward the global optimal solution and subsequently keeps approaching it; for the same running time it has faster convergence speed and better search performance, and in particular it improves the efficiency of the later search stage.
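The hybrid update described in this abstract (PSO global search plus a gradient-descent pull) can be illustrated on a generic differentiable objective. This is a schematic reconstruction with conventional PSO parameters, not the authors' code:

```python
import numpy as np

def hybrid_pso(f, grad, dim=2, n_particles=20, iters=100,
               w=0.7, c1=1.5, c2=1.5, lr=0.01, seed=0):
    """PSO with a gradient-descent refinement step: each particle moves by
    the usual velocity update, then takes a small step down its gradient."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v - lr * np.array([grad(p) for p in x])  # gradient refinement
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```

On a smooth objective the gradient term speeds up the late, local phase of the search, which is exactly the weakness of plain PSO that the abstract targets.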

  19. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    NASA Astrophysics Data System (ADS)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data that has to be exploited in real time with respect to relevant ground targets by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in deep convolutional neural networks (CNNs) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General-Purpose Computation on a Graphics Processing Unit (GPGPU) and for efficient training on large training sets. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.

  20. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    PubMed

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
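One plausible form of a weighted squared Euclidean distance for interval data (each feature stored as [lower, upper] bounds, with one non-negative relevance weight per feature) is sketched below; the paper's exact generalization may differ:

```python
import numpy as np

def weighted_interval_sqdist(a, b, w):
    """Weighted squared Euclidean distance between two interval vectors.

    a, b: arrays of shape (n_features, 2) holding [lower, upper] bounds.
    w:    one relevance weight per feature (learned by the swarm in the
          paper's setting; supplied directly here).
    """
    a, b, w = np.asarray(a, float), np.asarray(b, float), np.asarray(w, float)
    # squared differences of both interval bounds, feature by feature
    diff2 = (a[:, 0] - b[:, 0]) ** 2 + (a[:, 1] - b[:, 1]) ** 2
    return float(np.sum(w * diff2))
```

Feature weights driven to zero by the optimizer effectively discard that feature, which is how automatic feature selection falls out of a distance of this form.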

  1. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

A training image (TI) can be regarded as a database of spatial structures and their low- to higher-order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of different TIs often vary, meaning that different CTIs are compatible with the conditioning data to different degrees. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties is established by calculating the MDevD of the conditioning data events in each CTI. CTIs are then evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. A C++ implementation of the method is attached to the paper.
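The core MDevD computation can be illustrated in one dimension: for each conditioning data event, scan the TI for the most similar same-shaped event and record the minimal mismatch, then summarize the distances by their mean and variance. This is a simplified sketch of the idea for categorical data, not the authors' C++ implementation:

```python
import numpy as np

def min_data_event_distance(ti, event):
    """Minimum mismatch count between a conditioning data event
    (dict: offset -> facies code) and all same-shaped events in a
    1-D categorical training image."""
    ti = np.asarray(ti)
    offsets = np.array(sorted(event))
    values = np.array([event[o] for o in offsets])
    span = int(offsets.max())
    best = len(values)
    for i in range(len(ti) - span):
        d = int(np.sum(ti[i + offsets] != values))
        best = min(best, d)
    return best

def mdevd_stats(ti, events):
    """Mean and variance of the MDevD property over all data events;
    smaller values indicate a TI more compatible with the data."""
    d = [min_data_event_distance(ti, e) for e in events]
    return float(np.mean(d)), float(np.var(d))
```

Ranking CTIs by these two statistics, as the abstract describes, then amounts to sorting candidate images by `mdevd_stats` output.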

  2. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of optimization procedures is the Cuckoo Search (CS) algorithm. Artificial neural network (ANN) training is an optimization task, since the goal of the training process is to find an optimal weight set for the network. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence rates. This study proposes a new technique, CSLM, which combines the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
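Cuckoo Search owes its global, derivative-free exploration to Lévy flights. The standard Mantegna-style step below is the textbook formulation of that move, not this paper's specific CSLM code:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """One Mantegna-style Lévy flight step: a heavy-tailed random
    displacement that mixes many small moves with occasional long jumps."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```

In a full CS loop, new candidate weight vectors are generated as roughly `x_new = x + step_size * levy_step(dim)`, and a fraction of the worst nests is abandoned each generation; the CSLM idea is to follow such global moves with LM refinement.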

  3. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

A new training algorithm called approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  4. How Reliable is the Acetabular Cup Position Assessment from Routine Radiographs?

    PubMed Central

    Carvajal Alba, Jaime A.; Vincent, Heather K.; Sodhi, Jagdeep S.; Latta, Loren L.; Parvataneni, Hari K.

    2017-01-01

Abstract Background: Cup position is crucial for optimal outcomes in total hip arthroplasty. Radiographic assessment of component position is routinely performed in the early postoperative period. Aims: The aims of this study were to determine, in a controlled environment, whether routine radiographic methods accurately and reliably assess the acetabular cup position, and to assess whether there is a statistical difference related to the rater's level of training. Methods: A pelvic model was mounted in a spatial frame. An acetabular cup was fixed in different degrees of version and inclination. Standardized radiographs were obtained. Ten observers, including five fellowship-trained orthopaedic surgeons and five orthopaedic residents, performed a blind assessment of cup position. Inclination was assessed from anteroposterior radiographs of the pelvis and version from cross-table lateral radiographs of the hip. Results: The radiographic methods used proved to be imprecise, especially when the cup was positioned at the extremes of version and inclination. Excellent inter-observer reliability (intraclass correlation coefficient > 0.9) was observed. There were no differences related to the level of training of the raters. Conclusions: These widely used radiographic methods should be interpreted cautiously, and computed tomography should be utilized in cases where further intervention is contemplated. PMID:28852355

  5. Feasibility of Active Machine Learning for Multiclass Compound Classification.

    PubMed

    Lang, Tobias; Flachsenberg, Florian; von Luxburg, Ulrike; Rarey, Matthias

    2016-01-25

    A common task in the hit-to-lead process is classifying sets of compounds into multiple, usually structural classes, which build the groundwork for subsequent SAR studies. Machine learning techniques can be used to automate this process by learning classification models from training compounds of each class. Gathering class information for compounds can be cost-intensive as the required data needs to be provided by human experts or experiments. This paper studies whether active machine learning can be used to reduce the required number of training compounds. Active learning is a machine learning method which processes class label data in an iterative fashion. It has gained much attention in a broad range of application areas. In this paper, an active learning method for multiclass compound classification is proposed. This method selects informative training compounds so as to optimally support the learning progress. The combination with human feedback leads to a semiautomated interactive multiclass classification procedure. This method was investigated empirically on 15 compound classification tasks containing 86-2870 compounds in 3-38 classes. The empirical results show that active learning can solve these classification tasks using 10-80% of the data which would be necessary for standard learning techniques.
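Generic pool-based uncertainty sampling is the simplest instance of the iterative selection loop described above. The sketch below uses a toy nearest-centroid learner for two classes; the paper's actual classifier and selection criterion differ:

```python
import numpy as np

def uncertainty_gap(Xl, yl, Xp):
    """Confidence gap of a toy nearest-centroid classifier on pool points
    (two classes); a small gap means an uncertain prediction."""
    c0 = Xl[yl == 0].mean(axis=0)
    c1 = Xl[yl == 1].mean(axis=0)
    d0 = np.linalg.norm(Xp - c0, axis=1)
    d1 = np.linalg.norm(Xp - c1, axis=1)
    return np.abs(d0 - d1)

def active_select(X, y, labeled, n_queries):
    """Pool-based active learning: repeatedly query the label of the pool
    point the current model is least certain about, then refit."""
    labeled = list(labeled)
    pool = [i for i in range(len(X)) if i not in labeled]
    for _ in range(n_queries):
        gap = uncertainty_gap(X[labeled], y[labeled], X[pool])
        labeled.append(pool.pop(int(np.argmin(gap))))  # oracle supplies y
    return labeled
```

In the compound-classification setting, the "oracle" step is the human expert or experiment providing the class label, which is exactly the costly resource active learning tries to spend sparingly.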

  6. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    PubMed

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on the long short term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity in the order of the first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, where we demonstrate the superiority of our LSTM-based approach in the sequential prediction task via different real life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to the conventional methods over several different benchmark real life data sets.

  7. Tolerance allocation for an electronic system using neural network/Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque

    2001-12-01

The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances as a key factor both in reducing cost and in remaining competitive. Currently, tolerance allocation is widely applied to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is typically used, but it is time-consuming. This paper reviews several methods (worst-case, statistical, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte Carlo method providing the training data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can easily be extended to a complex system of n components.

  8. Multi-objective optimization of an industrial penicillin V bioreactor train using non-dominated sorting genetic algorithm.

    PubMed

    Lee, Fook Choon; Rangaiah, Gade Pandu; Ray, Ajay Kumar

    2007-10-15

The bulk of the penicillin produced is used as raw material for semi-synthetic penicillins (such as amoxicillin and ampicillin) and semi-synthetic cephalosporins (such as cephalexin and cefadroxil). In the present paper, an industrial penicillin V bioreactor train is optimized for multiple objectives simultaneously. An industrial train, comprising a bank of identical bioreactors, is run semi-continuously in a synchronous fashion. The fermentation taking place in a bioreactor is modeled using a morphologically structured mechanism. For multi-objective optimization with two and three objectives, the elitist non-dominated sorting genetic algorithm (NSGA-II) is chosen. Instead of a single optimum as in traditional optimization, a wide range of optimal design and operating conditions depicting trade-offs between key performance indicators, such as batch cycle time, yield, profit and penicillin concentration, is successfully obtained. The effects of design and operating variables on the optimal solutions are discussed in detail. Copyright 2007 Wiley Periodicals, Inc.
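The operation at the heart of NSGA-II's sorting step is extracting the non-dominated set of solutions. A minimal version for minimization problems (the front-extraction idea only, not the full elitist algorithm with crowding distance):

```python
def pareto_front(points):
    """Indices of the non-dominated points (first Pareto front) under
    minimization: a point is dominated if some other point is no worse
    in every objective and strictly better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qk <= pk for qk, pk in zip(q, p)) and
            any(qk < pk for qk, pk in zip(q, p))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front
```

Repeatedly removing the current front and re-extracting yields the ranked fronts NSGA-II uses; for a bioreactor train, each point would be a vector of objectives such as (cycle time, -yield, -profit).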

  9. Decision tree methods: applications for classification and prediction.

    PubMed

    Song, Yan-Yan; Lu, Ying

    2015-04-25

Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that form an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
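CART, mentioned above, grows the tree by repeatedly choosing the split that most reduces impurity. A single-feature sketch of Gini-based threshold selection (the building block, not a full tree implementation):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector: 1 - sum of squared class shares."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def best_split(x, y):
    """Best threshold on one covariate by weighted Gini reduction.
    Returns (threshold, impurity after the split)."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_g = None, gini(y)          # splitting must beat no split
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue                         # no threshold between equal values
        t = (x[i] + x[i - 1]) / 2
        g = (i * gini(y[:i]) + (len(y) - i) * gini(y[i:])) / len(y)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g
```

A full CART builder applies this search over every covariate at every node and recurses; the validation set then prunes the result back to the optimal size, as the abstract describes.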

  10. Evolutionary game theory for physical and biological scientists. I. Training and validating population dynamics equations.

    PubMed

    Liao, David; Tlsty, Thea D

    2014-08-06

    Failure to understand evolutionary dynamics has been hypothesized as limiting our ability to control biological systems. An increasing awareness of similarities between macroscopic ecosystems and cellular tissues has inspired optimism that game theory will provide insights into the progression and control of cancer. To realize this potential, the ability to compare game theoretic models and experimental measurements of population dynamics should be broadly disseminated. In this tutorial, we present an analysis method that can be used to train parameters in game theoretic dynamics equations, used to validate the resulting equations, and used to make predictions to challenge these equations and to design treatment strategies. The data analysis techniques in this tutorial are adapted from the analysis of reaction kinetics using the method of initial rates taught in undergraduate general chemistry courses. Reliance on computer programming is avoided to encourage the adoption of these methods as routine bench activities.

  11. Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.

    PubMed

    Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu

    2016-11-01

    In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
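A standard graph-based propagation scheme in the spirit of the regularization framework discussed above (a generic label-propagation iteration, not the paper's OGL model) can be written as:

```python
import numpy as np

def label_propagation(W, Y, n_iter=100, alpha=0.9):
    """Propagate partial tags over a similarity graph.

    W: (n, n) non-negative similarity matrix with nonzero row sums.
    Y: (n, k) one-hot rows for tagged points, zero rows for untagged.
    alpha balances smoothness over the graph against fitting the tags.
    """
    S = W / W.sum(axis=1, keepdims=True)     # row-normalize the graph
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse, then re-anchor tags
    return F.argmax(axis=1)
```

The quality of `W` determines everything here, which is why the paper's contribution is learning the graph jointly from multiple features and partial tags rather than constructing it empirically.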

  12. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.

  13. Efficient airport detection using region-based fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up training and testing. Due to the lack of labeled data, we transferred the convolutional layers of the ZF net pre-trained on ImageNet to initialize the shared convolutional layers, and then retrained the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.

  14. Visualizing Motion Patterns in Acupuncture Manipulation.

    PubMed

    Lee, Ye-Seul; Jung, Won-Mo; Lee, In-Seon; Lee, Hyangsook; Park, Hi-Joon; Chae, Younbyoung

    2016-07-16

    Acupuncture manipulation varies widely among practitioners in clinical settings, and it is difficult to teach novice students how to perform acupuncture manipulation techniques skillfully. The Acupuncture Manipulation Education System (AMES) is an open source software system designed to enhance acupuncture manipulation skills using visual feedback. Using a phantom acupoint and motion sensor, our method for acupuncture manipulation training provides visual feedback regarding the actual movement of the student's acupuncture manipulation in addition to the optimal or intended movement, regardless of whether the manipulation skill is lifting, thrusting, or rotating. Our results show that students could enhance their manipulation skills by training using this method. This video shows the process of manufacturing phantom acupoints and discusses several issues that may require the attention of individuals interested in creating phantom acupoints or operating this system.

  15. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    PubMed Central

    Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon

    2017-01-01

This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that a variation in a bearing's speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025
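A simplified spectral energy map, framing the signal in time and summing power within frequency bands, might look like the following (frame and band counts are illustrative, not the paper's configuration):

```python
import numpy as np

def spectral_energy_map(signal, n_frames=4, n_bands=8):
    """Split a 1-D signal into time frames, take each frame's power
    spectrum, and sum the power within equal-width frequency bands,
    yielding a (n_frames, n_bands) energy map."""
    frames = np.array_split(np.asarray(signal, float), n_frames)
    sem = np.empty((n_frames, n_bands))
    for i, frame in enumerate(frames):
        power = np.abs(np.fft.rfft(frame)) ** 2      # per-bin spectral power
        sem[i] = [band.sum() for band in np.array_split(power, n_bands)]
    return sem
```

Under the paper's hypothesis, a speed change scales and shifts such a map without changing its overall shape, which is what makes it a convenient CNN input across operating speeds.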

  16. Extinction of cue-evoked drug-seeking relies on degrading hierarchical instrumental expectancies

    PubMed Central

    Hogarth, Lee; Retzler, Chris; Munafò, Marcus R.; Tran, Dominic M.D.; Troisi, Joseph R.; Rose, Abigail K.; Jones, Andrew; Field, Matt

    2014-01-01

    There has long been need for a behavioural intervention that attenuates cue-evoked drug-seeking, but the optimal method remains obscure. To address this, we report three approaches to extinguish cue-evoked drug-seeking measured in a Pavlovian to instrumental transfer design, in non-treatment seeking adult smokers and alcohol drinkers. The results showed that the ability of a drug stimulus to transfer control over a separately trained drug-seeking response was not affected by the stimulus undergoing Pavlovian extinction training in experiment 1, but was abolished by the stimulus undergoing discriminative extinction training in experiment 2, and was abolished by explicit verbal instructions stating that the stimulus did not signal a more effective response-drug contingency in experiment 3. These data suggest that cue-evoked drug-seeking is mediated by a propositional hierarchical instrumental expectancy that the drug-seeking response is more likely to be rewarded in that stimulus. Methods which degraded this hierarchical expectancy were effective in the laboratory, and so may have therapeutic potential. PMID:25011113

  17. Monitoring and Managing Fatigue in Basketball

    PubMed Central

    Edwards, Toby; Spiteri, Tania; Piggott, Benjamin; Bonhotal, Joshua; Joyce, Christopher

    2018-01-01

    The sport of basketball exposes athletes to frequent high intensity movements including sprinting, jumping, accelerations, decelerations and changes of direction during training and competition which can lead to acute and accumulated chronic fatigue. Fatigue may affect the ability of the athlete to perform over the course of a lengthy season. The ability of practitioners to quantify the workload and subsequent fatigue in basketball athletes in order to monitor and manage fatigue levels may be beneficial in maintaining high levels of performance and preventing unfavorable physical and physiological training adaptations. There is currently limited research quantifying training or competition workload outside of time motion analysis in basketball. In addition, systematic research investigating methods to monitor and manage athlete fatigue in basketball throughout a season is scarce. To effectively optimize and maintain peak training and playing performance throughout a basketball season, potential workload and fatigue monitoring strategies need to be discussed. PMID:29910323

  18. Stability Training for Convolutional Neural Nets in LArTPC

    NASA Astrophysics Data System (ADS)

    Lindsay, Matt; Wongjirad, Taritree

    2017-01-01

Convolutional Neural Nets (CNNs) are the state of the art for many problems in computer vision and are a promising method for classifying interactions in Liquid Argon Time Projection Chambers (LArTPCs) used in neutrino oscillation experiments. Despite the good performance of CNNs, they are not without drawbacks; chief among them is vulnerability to noise and small perturbations of the input. One solution to this problem is a modification of the learning process called Stability Training, developed by Zheng et al. We verify existing work, demonstrate the volatility caused by simple Gaussian noise, and show that this volatility can be nearly eliminated with Stability Training. We then go further and show that a traditionally trained CNN is also vulnerable to realistic experimental noise, whereas a stability-trained CNN remains accurate despite the noise. This further adds to the optimism for CNNs for work in LArTPCs and other applications.

  19. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation treatment planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is used to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by a k-nearest-neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, morphological operations are performed to refine the segmentation results. Our dataset consists of 137 brain MR images, 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of tumors and can achieve satisfactory results even when tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
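A minimal NumPy sketch of the classification step: foreground probability from a k-nearest-neighbor vote under a Mahalanobis-style distance. The features, labels, and the diagonal matrix M standing in for the NCA-learned metric are all invented for illustration; the paper's actual features, metric learning, and graph-cut optimization are not reproduced here.

```python
import numpy as np

def knn_foreground_prob(train_feats, train_labels, query, M, k=5):
    """P(foreground) as the fraction of the k nearest training pixels
    (under the metric M) that are labeled foreground."""
    d = train_feats - query
    dist = np.einsum('ij,jk,ik->i', d, M, d)   # squared distances under M
    nearest = np.argsort(dist)[:k]
    return train_labels[nearest].mean()

rng = np.random.default_rng(0)
fg = rng.normal(loc=2.0, size=(50, 3))   # tumor-like feature vectors (toy)
bg = rng.normal(loc=0.0, size=(50, 3))   # background feature vectors (toy)
feats = np.vstack([fg, bg])
labels = np.array([1] * 50 + [0] * 50)
M = np.diag([1.0, 1.0, 1.0])             # stand-in for the learned metric

p = knn_foreground_prob(feats, labels, np.array([2.0, 2.0, 2.0]), M)
```

In the paper these per-pixel probabilities feed a cost function that graph cuts then minimizes; here they are simply returned.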

  20. The Research of Software Engineering Curriculum Reform

    NASA Astrophysics Data System (ADS)

    Kuang, Li-Qun; Han, Xie

    Motivated by the problem that software engineering training fails to meet the needs of the community, this paper analyzes several prominent problems in software engineering curriculum teaching, such as outdated teaching content, weak practical components, and low teacher quality. We propose teaching reforms guided by market demand: update the teaching content, optimize the teaching methods, reform the teaching practice, strengthen teacher-student exchange, and promote the joint development of teachers and students. We carried out the reform, explored it actively, and achieved the desired results.

  1. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    NASA Astrophysics Data System (ADS)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNNs) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, present open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of the training process and of sample creation.
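Affinity propagation, the clustering step named above, can be sketched in plain NumPy using Frey and Dueck's responsibility/availability message passing. The two-cluster toy data and the median-preference choice below are illustrative assumptions, not the project's actual tile statistics.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Plain NumPy affinity propagation: exchange responsibility (R) and
    availability (A) messages until exemplars emerge.
    S: similarity matrix with preferences on the diagonal."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    I = np.arange(n)
    for _ in range(iters):
        # Responsibilities: how well-suited k is as exemplar for i.
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[I, idx]
        AS[I, idx] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[I, idx] = S[I, idx] - second
        R = damping * R + (1 - damping) * R_new
        # Availabilities: accumulated evidence that k is a good exemplar.
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    return (A + R).argmax(axis=1)   # each point's chosen exemplar

# Toy "tile summaries": two tight groups far apart in feature space.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(10.0, 0.1, (5, 2))])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(S, np.median(S[~np.eye(len(pts), dtype=bool)]))
labels = affinity_propagation(S)
```

Unlike k-means, the number of clusters is not fixed in advance; it emerges from the preference values on the diagonal of S, which is one reason the method suits exemplar selection from error tiles.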

  2. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... The surviving abstract fragments state that the study modeled vegetation coverage based on past coverage, and that a literature survey was conducted to identify artificial neural network analysis techniques applicable to the task.

  3. Non-rigid image registration using a statistical spline deformation model.

    PubMed

    Loeckx, Dirk; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul

    2003-07-01

    We propose a statistical spline deformation model (SSDM) as a method for non-rigid image registration. Within this model, the deformation is expressed using a statistically trained B-spline deformation mesh. The model is trained by principal component analysis of a training set. This approach makes it possible to reduce the number of degrees of freedom needed for non-rigid registration by retaining only the most significant modes of variation observed in the training set. User-defined transformation components, such as affine modes, are merged with the principal components into a unified framework. Optimization proceeds along the transformation components rather than along the individual spline coefficients. The concept of SSDMs is applied to the temporal registration of thorax CR images using pattern intensity as the registration measure. Our results show that, using 30 training pairs, a reduction of 33% in the number of degrees of freedom is possible without deterioration of the result. The same accuracy as without SSDMs is still achieved after a reduction of up to 66% of the degrees of freedom.
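The dimensionality reduction at the heart of the SSDM can be sketched with NumPy: PCA (via SVD) on a set of training deformation-coefficient vectors, keeping only the leading modes. The synthetic training set below (30 vectors driven by 2 latent modes) is an invented stand-in for real B-spline registration results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in training set: 30 deformation-coefficient vectors (e.g. flattened
# B-spline control point displacements), generated from 2 latent modes.
modes_true = rng.normal(size=(2, 60))
train = rng.normal(size=(30, 2)) @ modes_true + 0.01 * rng.normal(size=(30, 60))

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
var = s**2 / (s**2).sum()
n_keep = int(np.searchsorted(np.cumsum(var), 0.95) + 1)   # keep 95% variance
modes = Vt[:n_keep]                                       # retained modes

# A deformation is now parameterized by n_keep coefficients instead of 60:
coeffs = rng.normal(size=n_keep)
deformation = mean + coeffs @ modes
```

The optimizer then searches over the few mode coefficients (plus any user-defined affine components) rather than over every spline coefficient, which is exactly the reduction in degrees of freedom the abstract reports.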

  4. A Comparison of Load-Velocity and Load-Power Relationships Between Well-Trained Young and Middle-Aged Males During Three Popular Resistance Exercises.

    PubMed

    Fernandes, John F T; Lamb, Kevin L; Twist, Craig

    2018-05-01

    Fernandes, JFT, Lamb, KL, and Twist, C. A comparison of load-velocity and load-power relationships between well-trained young and middle-aged males during 3 popular resistance exercises. J Strength Cond Res 32(5): 1440-1447, 2018-This study examined the load-velocity and load-power relationships among 20 young (age 21.0 ± 1.6 years) and 20 middle-aged (age 42.6 ± 6.7 years) resistance-trained males. Participants performed 3 repetitions of bench press, squat, and bent-over-row across a range of loads corresponding to 20-80% of 1 repetition maximum (1RM). Analysis revealed effects (p < 0.05) of group and load × group on barbell velocity for all 3 exercises, and interaction effects on power for squat and bent-over-row (p < 0.05). For bench press and bent-over-row, the young group produced higher barbell velocities, with the magnitude of the differences decreasing as load increased (effect size [ES] 0.0-1.7 and 1.0-2.0, respectively). Squat velocity was higher in the young group than in the middle-aged group (ES 1.0-1.7) across all loads, as was power for each exercise (ES 1.0-2.3). For all 3 exercises, both velocity and 1RM were correlated with optimal power in the middle-aged group (r = 0.613-0.825, p < 0.05), but only 1RM was correlated with optimal power (r = 0.708-0.867, p < 0.05) in the young group. These findings indicate that, despite their resistance training, middle-aged males were unable to achieve velocities at low external loads, or power outputs, as high as those of the young males across a range of external resistances. Moreover, the strong correlations between 1RM and velocity with optimal power suggest that middle-aged males would benefit from training methods that maximize these adaptations.

  5. Application of Islanding Detection and Classification of Power Quality Disturbance in Hybrid Energy System

    NASA Astrophysics Data System (ADS)

    Sun, L. B.; Wu, Z. S.; Yang, K. K.

    2018-04-01

    Islanding and power quality (PQ) disturbances in hybrid energy systems become more serious with the growing application of renewable energy sources. In this paper, a novel method based on the wavelet transform (WT) and a modified feed-forward neural network (FNN) is proposed to detect islanding and classify PQ problems. First, the performance indices, i.e., the energy content and standard deviation (SD) of the transformed signal, are extracted from the negative-sequence component of the voltage signal at the PCC using the WT. Afterward, the WT indices are used to train FNNs modified by Particle Swarm Optimization (PSO), a heuristic optimization method. Then, the results of simulations based on WT-PSOFNN are discussed in MATLAB/SIMULINK. Simulations on the hybrid power system show that the proposed method significantly improves accuracy in detecting and classifying different disturbances in a system connected to multiple distributed generations.
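The role PSO plays in fitting the network parameters can be illustrated with a compact NumPy sketch. The 2-parameter linear "network" and the constants (inertia 0.7, acceleration coefficients 1.5) are illustrative assumptions; the paper's actual FNN and wavelet features are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
y = 2.0 * x + 1.0                       # target the 2-parameter model must fit

def loss(w):
    """Mean squared error of a tiny 2-parameter model (slope, intercept)."""
    return np.mean((w[0] * x + w[1] - y) ** 2)

# Particle swarm: each particle position is a candidate weight vector.
n, dim = 15, 2
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity blends inertia, pull toward personal best, pull toward global best.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

Because PSO needs only loss evaluations, not gradients, it can tune network parameters (or, as in the paper, modify FNN training) without backpropagation.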

  6. Breast Cancer Recognition Using a Novel Hybrid Intelligent Method

    PubMed Central

    Addeh, Jalil; Ebrahimzadeh, Ata

    2012-01-01

    Breast cancer is the second largest cause of cancer deaths among women. At the same time, it is also among the most curable cancer types if it can be diagnosed early. This paper presents a novel hybrid intelligent method for the recognition of breast cancer tumors. The proposed method includes three main modules: the feature extraction module, the classifier module, and the optimization module. In the feature extraction module, fuzzy features are proposed as an efficient characterization of the patterns. In the classifier module, because of the promising generalization capability of support vector machines (SVM), an SVM-based classifier is proposed. In SVM training, the hyperparameters play a very important role in recognition accuracy. Therefore, in the optimization module, the bees algorithm (BA) is proposed for selecting appropriate parameters of the classifier. The proposed system is tested on the Wisconsin Breast Cancer database, and simulation results show that it achieves high accuracy. PMID:23626945
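A sketch of the bees algorithm doing hyperparameter selection, in NumPy. Because the Wisconsin dataset and SVM training are out of scope here, `cv_score` is a synthetic stand-in for cross-validated accuracy over (log10 C, log10 gamma), peaked at C = 10, gamma = 0.01; the bee counts and patch sizes are likewise invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_score(logC, logGamma):
    """Synthetic stand-in for cross-validated SVM accuracy, peaked at
    C = 10, gamma = 0.01 (log10 values 1 and -2)."""
    return np.exp(-((logC - 1.0) ** 2 + (logGamma + 2.0) ** 2))

n_scouts, n_best, n_recruits, patch = 20, 5, 8, 1.0
sites = rng.uniform([-3, -5], [3, 1], (n_scouts, 2))   # box in log10 space

for _ in range(40):
    scores = np.array([cv_score(*s) for s in sites])
    best = sites[np.argsort(scores)[::-1][:n_best]]
    new_sites = []
    for b in best:   # recruit bees search the neighborhood of each good site
        cand = b + rng.uniform(-patch, patch, (n_recruits, 2))
        cand_scores = np.array([cv_score(*c) for c in cand])
        winner = cand[np.argmax(cand_scores)]
        new_sites.append(winner if cand_scores.max() > cv_score(*b) else b)
    # Remaining bees keep scouting randomly for unexplored regions.
    scouts = rng.uniform([-3, -5], [3, 1], (n_scouts - n_best, 2))
    sites = np.vstack([new_sites, scouts])
    patch *= 0.9                       # shrink neighborhoods over time

best_site = sites[np.argmax([cv_score(*s) for s in sites])]
C, gamma = 10 ** best_site[0], 10 ** best_site[1]
```

The mix of local neighborhood search around elite sites and continued random scouting is what distinguishes BA from plain hill climbing, and it is the property the paper exploits for SVM hyperparameter selection.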

  7. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    The surrogate-based simulation-optimization technique is an effective approach for optimizing a surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such studies. However, previous studies have generally been based on a stand-alone surrogate model and have rarely combined multiple methods to sufficiently improve the surrogate's approximation of the simulation model. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and we conducted a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
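The idea of weighting heterogeneous surrogates by their accuracy can be sketched in NumPy. The three simple surrogates below (linear fit, cubic fit, 1-nearest-neighbor) stand in for RBFANN, SVR, and Kriging, and the inverse-MSE weights are a simplified stand-in for the paper's set-pair-analysis weights.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 2 * np.pi, 30))
y_train = np.sin(x_train)               # "simulation model" stand-in
x_val = np.linspace(0.1, 2 * np.pi - 0.1, 50)
y_val = np.sin(x_val)

# Three simple surrogates standing in for RBFANN / SVR / Kriging.
lin = np.poly1d(np.polyfit(x_train, y_train, 1))
cub = np.poly1d(np.polyfit(x_train, y_train, 3))
def nn1(xq):
    """1-nearest-neighbor surrogate: copy the closest training output."""
    return y_train[np.abs(x_train[:, None] - xq[None, :]).argmin(axis=0)]

preds = np.stack([lin(x_val), cub(x_val), nn1(x_val)])
mse = ((preds - y_val) ** 2).mean(axis=1)
w = (1 / mse) / (1 / mse).sum()          # accuracy-based ensemble weights
ensemble = w @ preds                     # weighted combination of predictions
ens_mse = ((ensemble - y_val) ** 2).mean()
```

Because squared error is convex in the prediction, a convex combination of surrogates can never do worse than the worst member, and with well-chosen weights it typically tracks or beats the best one.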

  8. [Problem based learning by distance education and analysis of a training system].

    PubMed

    Dury, Cécile

    2004-12-01

    This article presents and analyses a training system aimed at the acquisition of nursing care skills. Its goals are the development of: an active pedagogic method, problem-based learning (PBL); an interdisciplinary and intercultural approach, with the same problems being solved by students from different disciplines and cultures; and the use of new information and communication technologies (ICT) to enable maximal distance cooperation between the various partners of the project. The analysis of the system shows that the pedagogic aims of PBL are achieved. To be optimal, the interdisciplinary and intercultural approach requires close coordination between the partners, balance between the groups of students from different countries and disciplines, and training and support from the tutors in the use of the distance-teaching platform.

  9. Control of wavepacket dynamics in mixed alkali metal clusters by optimally shaped fs pulses

    NASA Astrophysics Data System (ADS)

    Bartelt, A.; Minemoto, S.; Lupulescu, C.; Vajda, Š.; Wöste, L.

    We have performed adaptive feedback optimization of phase-shaped femtosecond laser pulses to control the wavepacket dynamics of small mixed alkali-metal clusters. An optimization algorithm based on Evolutionary Strategies was used to maximize the ion intensities. The optimized pulses for NaK and Na2K converged to pulse trains consisting of numerous peaks. The timing of the elements of the pulse trains corresponds to integer and half integer numbers of the vibrational periods of the molecules, reflecting the wavepacket dynamics in their excited states.

  10. Views of physiatrists and physical therapists on the use of gait-training robots for stroke patients

    PubMed Central

    Kang, Chang Gu; Chun, Min Ho; Chang, Min Cheol; Kim, Won; Hee Do, Kyung

    2016-01-01

    [Purpose] Gait-training robots have been developed for stroke patients with gait disturbance. It is important to survey the views of physiatrists and physical therapists on the characteristics of these devices during their development. [Subjects and Methods] A total of 100 physiatrists and 100 physical therapists from 38 hospitals participated in our questionnaire survey. [Results] The most common answers about the merits of gait-training robots concerned improved treatment effects (28.5%), followed by standardized treatment (19%), patient motivation (17%), and improved patient self-esteem (14%). The subacute period (1–3 months post-stroke onset) was most often chosen as the ideal period (47.3%) for the use of these devices, and a functional ambulation classification of 0–2 was the most selected response for the optimal patient status (27%). The treadmill type (47.5%) was preferred over the overground walking type (40%). The most favored commercial price was $50,000–$100,000 (38.3%). The most selected optimal duration for robot-assisted gait therapy was 30–45 min (47%), followed by 15–30 min (29%), 45–60 min (18%), ≥ 60 min (5%), and < 15 min (1%). [Conclusion] Our study findings could guide future designs of more effective gait-training robots for stroke patients. PMID:26957758

  11. Piecewise convexity of artificial neural networks.

    PubMed

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
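The first result (piecewise convexity in the input) rests on the fact that a ReLU network is affine on any region where the activation pattern is fixed, which a few lines of NumPy can check numerically for a random network. The architecture and step size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 10)), rng.normal(size=10)
W2, b2 = rng.normal(size=(10, 1)), rng.normal(size=1)

def net(x):
    """Two-layer ReLU network; also return the activation pattern."""
    pre = x @ W1 + b1
    h = np.maximum(pre, 0)               # ReLU: piecewise affine activation
    return (h @ W2 + b2).item(), pre > 0

x0 = rng.normal(size=4)
d = 1e-4 * rng.normal(size=4)            # step small enough to stay in one region
f0, p0 = net(x0)
f1, p1 = net(x0 + d)
f2, p2 = net(x0 + 2 * d)
second_diff = (f2 - f1) - (f1 - f0)      # vanishes exactly on an affine piece
```

As long as the activation pattern does not change along the segment, the output is an affine function of the input, so its second difference is zero; the piecewise structure analyzed in the paper is exactly the tiling of input space by these constant-pattern regions.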

  12. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. 
In an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), the regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min; it was reduced to 2 sec with the regression method and to 4 min with the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.

  13. A method for the analysis of perfluorinated compounds in environmental and drinking waters and the determination of their lowest concentration minimal reporting levels.

    PubMed

    Boone, J Scott; Guan, Bing; Vigo, Craig; Boone, Tripp; Byrne, Christian; Ferrario, Joseph

    2014-06-06

    A trace analytical method was developed for the determination of seventeen specific perfluorinated chemicals (PFCs) in environmental and drinking waters. The objectives were to optimize an isotope-dilution method to increase the precision and accuracy of the analysis of the PFCs and to eliminate the need for matrix-matched standards. A 250 mL sample of environmental or drinking water was buffered to a pH of 4, spiked with labeled surrogate standards, extracted through solid phase extraction cartridges, and eluted with ammonium hydroxide in methyl tert-butyl ether: methanol solution. The sample eluents were concentrated to volume and analyzed by liquid chromatography/tandem mass spectrometry (LC-MS/MS). The lowest concentration minimal reporting levels (LCMRLs) for the seventeen PFCs were calculated and ranged from 0.034 to 0.600 ng/L for surface water and from 0.033 to 0.640 ng/L for drinking water. The relative standard deviations (RSDs) for all compounds were <20% for all concentrations above the LCMRL. The method proved effective and cost efficient and addressed the problems with the recovery of perfluorobutanoic acid (PFBA) and other short chain PFCs. Various surface water and drinking water samples were used during method development to optimize this method. The method was used to evaluate samples from the Mississippi River at New Orleans and drinking water samples from a private residence in that same city. The method was also used to determine PFC contamination in well water samples from a fire training area where perfluorinated foams were used in training to extinguish fires. Published by Elsevier B.V.

  14. High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained on a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
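The train-then-optimize pattern above (fit a cheap surrogate to expensive simulations, then run a gradient-based optimizer on the surrogate) can be sketched in NumPy with a quadratic fit in place of the neural networks. The "CFD" lift curve, its peak at 5 degrees of flap deflection, and the step size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfd_lift(flap):
    """Stand-in for an expensive CFD run: noisy lift vs. flap deflection,
    peaked near 5 degrees (invented response surface)."""
    return 25.0 - (flap - 5.0) ** 2 + 0.1 * rng.normal()

samples = np.linspace(0.0, 10.0, 15)
lift = np.array([cfd_lift(f) for f in samples])     # 15 "expensive" evaluations
coef = np.polyfit(samples, lift, 2)                 # cheap quadratic surrogate
surrogate = np.poly1d(coef)
grad = surrogate.deriv()

flap = 0.0                                          # gradient ascent on surrogate
for _ in range(200):
    flap += 0.05 * grad(flap)
```

Every optimizer step costs only a polynomial evaluation, which mirrors why the paper's neural-net surrogates made repeated optimization runs nearly free once trained.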

  15. A Review of Training Methods and Instructional Techniques: Implications for Behavioral Skills Training in U.S. Astronauts (DRAFT)

    NASA Technical Reports Server (NTRS)

    Hysong, Sylvia J.; Galarza, Laura; Holland, Albert W.

    2007-01-01

    Long-duration space missions (LDM) place unique physical, environmental, and psychological demands on crewmembers that directly affect their ability to live and work in space. A growing body of research on crews working for extended periods in isolated, confined environments reveals the existence of psychological and performance problems of varying degrees of magnitude. The research has also demonstrated that although the environment plays a cathartic role, many of these problems are due to interpersonal frictions (Wood, Lugg, Hysong, & Harm, 1999) and affect each individual differently. Consequently, crewmembers often turn to maladaptive behaviors as coping mechanisms, resulting in decreased productivity and psychological discomfort. From this body of research, critical skills have been identified that can help a crewmember better navigate the psychological challenges of long-duration space flight. Although most people lack several of these skills, most of them can be learned; thus, a training program can be designed to teach crewmembers effective leadership, teamwork, and self-care strategies that will help minimize the emergence of maladaptive behaviors. The purpose of this report is therefore twofold: 1) to review the training literature to help determine the optimal instructional methods for delivering psychological skill training to the U.S. Astronaut Expedition Corps, and 2) to detail the structure and content of the proposed Astronaut Expedition Corps Psychological Training Program.

  16. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults

    PubMed Central

    Tait, Jamie L.; Duckham, Rachel L.; Milte, Catherine M.; Main, Luana C.; Daly, Robin M.

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people. PMID:29163146

  17. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, show how to improve upon the methods via optimization, and present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
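The predictive use of an HMM described above can be illustrated with a tiny NumPy forward filter. The two-state model below (alternating transitions, near-diagonal emissions) is an invented toy, not Robonaut's trained models; it shows how filtered state beliefs yield a prediction of the operator's next observation.

```python
import numpy as np

A = np.array([[0.1, 0.9],    # state transitions: strongly alternating
              [0.9, 0.1]])
B = np.array([[0.9, 0.1],    # emissions: state i mostly emits symbol i
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])    # uniform initial state distribution

def predict_next(obs):
    """Filter the state with the forward algorithm, then predict the
    most likely next observation symbol."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate belief, weight by evidence
        alpha /= alpha.sum()
    next_obs_probs = (alpha @ A) @ B    # P(next symbol | history)
    return int(np.argmax(next_obs_probs))
```

After observing the alternating sequence 0, 1, 0 the filtered belief sits in the first state, so the model predicts the pattern will continue with symbol 1; a tele-operation interface can use the same machinery to anticipate the operator's next action.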

  18. Counteracting moment device for reduction of earthquake-induced excursions of multi-level buildings.

    PubMed

    Nagaya, K; Fukushima, T; Kosugi, Y

    1999-05-01

    A vibration-control mechanism for beams and columns was presented in our previous report in which the earthquake force was transformed into a vibration-control force by using a gear train mechanism. In our previous report, however, only the principle of transforming the earthquake force into the control force was presented; the discussion for real structures and the design method were not presented. The present article provides a theoretical analysis of the column which is used in multi-layered buildings. Experimental tests were carried out for a model of multi-layered buildings in the frequency range of a principal earthquake wave. Theoretical results are compared to the experimental data. The optimal design of the control mechanism, which is of importance in the column design, is presented. Numerical calculations are carried out for the optimal design. It is shown that vibrations of the column involving the mechanism are suppressed remarkably. The optimal design method and the analytical results are applicable to the design of the column.

  19. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, they have drawbacks, such as a slow training rate, a propensity to become trapped in local minima, and a poor ability to perform a global search. In order to improve the overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with a double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize the antecedent and consequent parameters of the constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve the stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The results demonstrate that intelligent ensemble T-S FNNs based on RCDPSO_DM achieve superior performance, in terms of stability, efficiency, precision, and generalizability, over a PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs, and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. It therefore makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and k-nearest-neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.

  1. [THE VIBRATION TRAINING AS SARCOPENIA INTERVENTION: IMPACT ON THE NEUROMUSCULAR SYSTEM OF THE ELDERLY].

    PubMed

    Palop Montoro, María Victoria; Párraga Montilla, Juan Antonio; Lozano Aguilera, Emilio; Arteaga Checa, Milagros

    2015-10-01

    Aging is accompanied by a progressive reduction of muscle mass that contributes to the development of functional limitations; vibration training may be an option for optimal intervention in the prevention and treatment of sarcopenia. The objective was to assess the effectiveness of whole-body vibration on the neuromuscular system of the elderly. A systematic review was conducted in the Medline, CINAHL, WOS and PEDro databases by combining Medical Subject Headings descriptors concerning vibration training, muscle strength, muscle mass and older adults. A total of 214 studies were found on vibration training in older people, either as the only intervention or in combination with other exercises, of which 45 met the selection criteria. Of these, 30 articles were eliminated because they scored no more than 5 points on the PEDro scale; 15 clinical trials were included in the final analysis. Whole-body vibration training proves to be a safe, adequate and effective strength training method in the elderly population, but results are similar to conventional resistance exercise in the prevention and treatment of sarcopenia. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  2. Artificial intelligence in sports on the example of weight training.

    PubMed

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of exercises performed on training machines. The data acquisition was carried out using displacement (way) and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics such as time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for investigating the quality of the execution, assisting athletes as well as coaches, optimizing training and preventing injury. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video-recorded executions. The modeling results obtained so far showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in automatically assessing performances on weight training equipment and providing sportsmen with prompt advice.
    Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates.
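    The displacement-derived parameters mentioned above can be sketched in a few lines. The trace, sampling interval and the "controlled" threshold below are invented for illustration; the paper's actual features and trained models are not reproduced here.

```python
import math

def rep_features(displacement, dt=0.01):
    """Per-repetition features from a displacement trace: duration,
    range of motion, and peak movement velocity."""
    duration = (len(displacement) - 1) * dt
    rom = max(displacement) - min(displacement)
    peak_vel = max(abs(b - a) / dt
                   for a, b in zip(displacement, displacement[1:]))
    return duration, rom, peak_vel

# Synthetic smooth repetition: half a sine wave over one second.
trace = [math.sin(math.pi * i / 100) for i in range(101)]
duration, rom, peak_vel = rep_features(trace)

# A toy rule in place of the paper's trained classifier (threshold invented).
label = "controlled" if peak_vel < 4.0 else "jerky"
```

    Features like these would then feed the supervised models that the trainers' video-based labels are used to fit.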

  3. Artificial Intelligence in Sports on the Example of Weight Training

    PubMed Central

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of exercises performed on training machines. The data acquisition was carried out using displacement (way) and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics such as time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for investigating the quality of the execution, assisting athletes as well as coaches, optimizing training and preventing injury. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video-recorded executions. The modeling results obtained so far showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in automatically assessing performances on weight training equipment and providing sportsmen with prompt advice.
    Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates. PMID:24149722

  4. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
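    A minimal sketch of the dictionary-update idea, in the spirit of online dictionary learning (Mairal et al.): sufficient statistics A and B summarize all shapes seen so far, so each dictionary column can be refit by block-coordinate descent without revisiting old training shapes. All sizes and data here are synthetic, and the sparse-coding step is elided by reusing the true codes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_atoms, n_shapes = 6, 4, 200

# Ground-truth dictionary and sparse codes; X plays the role of the
# training shape repository (all of this is illustrative, not the paper's data).
D_true = rng.normal(size=(n_dim, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
codes = rng.normal(size=(n_atoms, n_shapes)) * (rng.random((n_atoms, n_shapes)) < 0.5)
X = D_true @ codes

# Sufficient statistics accumulate as shapes stream in, so old shapes
# need not be stored when new training shapes arrive.
A = codes @ codes.T          # sum of a_i a_i^T
B = X @ codes.T              # sum of x_i a_i^T

def update_dictionary(D, A, B, n_sweeps=3):
    """Block-coordinate-descent dictionary update: each column is refit
    in closed form against the accumulated statistics, then projected
    onto the unit ball."""
    D = D.copy()
    for _ in range(n_sweeps):
        for j in range(D.shape[1]):
            if A[j, j] < 1e-12:
                continue
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D

# Start from a perturbed dictionary and check that the fit improves.
D0 = D_true + 0.3 * rng.normal(size=D_true.shape)
D0 /= np.linalg.norm(D0, axis=0)
err_before = np.linalg.norm(X - D0 @ codes)
D1 = update_dictionary(D0, A, B)
err_after = np.linalg.norm(X - D1 @ codes)
```

    Because A and B only accumulate, new training shapes are absorbed by updating the statistics and re-sweeping the columns, rather than rebuilding the dictionary from scratch — the efficiency argument the abstract makes.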

  5. RBF neural network based PI pitch controller for a class of 5-MW wind turbines using particle swarm optimization algorithm.

    PubMed

    Poultangari, Iman; Shahnazi, Reza; Sheikhan, Mansour

    2012-09-01

    To control the pitch angle of the blades in wind turbines, the proportional-integral (PI) controller is commonly employed because of its simplicity and industrial usability. Neural networks and evolutionary algorithms are tools that provide a suitable ground for determining the optimal PI gains. In this paper, a radial basis function (RBF) neural network based PI controller is proposed for collective pitch control (CPC) of a 5-MW wind turbine. In order to provide an optimal dataset to train the RBF neural network, the particle swarm optimization (PSO) evolutionary algorithm is used. The proposed method does not require a model of the complexities, nonlinearities and uncertainties of the system under control. The simulation results show that the proposed controller has satisfactory performance. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
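    The fitness evaluation behind such gain tuning can be sketched with a toy loop: candidate (Kp, Ki) gains are scored by simulating a plant and integrating the squared tracking error. The first-order plant, step reference and gain ranges are invented, and a plain random search stands in for the PSO described in the paper.

```python
import random

def ise(kp, ki, steps=200, dt=0.05):
    """Integral-squared-error of a PI loop around the toy first-order
    plant dy/dt = -y + u, tracking a unit step reference."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y                 # tracking error
        integ += e * dt
        u = kp * e + ki * integ     # PI control law
        y += (-y + u) * dt          # forward-Euler plant update
        cost += e * e * dt
    return cost

# Stand-in for the paper's PSO: sample candidate gains and keep the best
# (a swarm would instead evolve over this same fitness landscape).
rng = random.Random(1)
best_gains = min(((rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(300)),
                 key=lambda g: ise(*g))
```

    In the paper this fitness landscape is searched by PSO and the resulting optimal gains form the dataset used to train the RBF network.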

  6. Application of da Vinci(®) Robot in simple or radical hysterectomy: Tips and tricks.

    PubMed

    Iavazzo, Christos; Gkegkes, Ioannis D

    2016-01-01

    The first robotic simple hysterectomy was performed more than 10 years ago. These days, robotic-assisted hysterectomy is accepted as an alternative surgical approach and is applied to both benign and malignant surgical entities. Two important points should be taken into account to optimize postoperative outcomes in the early period of a surgeon's training: how to achieve optimal oncological results and how to achieve optimal functional results. As with any innovative surgical method, overcoming the technical challenges improves the operation both in terms of time and of patients' safety. The standardization of the technique and the recognition of critical anatomical landmarks are essential for optimal oncological and clinical outcomes in both simple and radical robotic-assisted hysterectomy. Based on our experience, our intention is to present user-friendly tips and tricks to optimize the application of the da Vinci® robot in simple or radical hysterectomies.

  7. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is shown to be valid and exhibits good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
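    The comprehensive objective described above is a weighted sum of the two objectives. The sketch below (candidate values invented) shows how changing the weight factors steers which design plan is selected.

```python
def comprehensive_objective(hydraulic_loss, cavitation_coeff, w1, w2):
    """Weighted-sum scalarization of the two optimization objectives."""
    return w1 * hydraulic_loss + w2 * cavitation_coeff

# Two hypothetical runner designs: (total hydraulic loss, cavitation coefficient).
candidates = {"A": (0.10, 0.9), "B": (0.20, 0.4)}

def pick(w1, w2):
    """Return the candidate minimizing the comprehensive objective."""
    return min(candidates,
               key=lambda k: comprehensive_objective(*candidates[k], w1, w2))

efficiency_first = pick(0.9, 0.1)   # weighting favors low hydraulic loss
cavitation_first = pick(0.2, 0.8)   # weighting favors low cavitation coefficient
```

    This is the trade-off control the abstract refers to: the same optimizer yields different runners as the weight factors shift between efficiency and cavitation performance.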

  8. Training set optimization under population structure in genomic selection

    USDA-ARS?s Scientific Manuscript database

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  9. Physical activity and training in sarcoidosis: review and experience-based recommendations.

    PubMed

    Strookappe, Bert; Saketkoo, Lesley Ann; Elfferich, Marjon; Holland, Anne; De Vries, Jolanda; Knevel, Ton; Drent, Marjolein

    2016-10-01

    Sarcoidosis is a multisystemic inflammatory disorder with a great variety of symptoms, including fatigue, dyspnea, pain, reduced exercise tolerance and muscle strength. Physical training has the potential to improve exercise capacity and muscle strength, and reduce fatigue. The aim of this review and survey was to present information about the role of physical training in sarcoidosis and offer practical guidelines. A systematic literature review guided an international consensus effort among sarcoidosis experts to establish practice-based recommendations for the implementation of exercise as treatment for patients with various manifestations of sarcoidosis. International sarcoidosis experts suggested considering physical training in symptomatic patients with sarcoidosis. Expert commentary: There is promising evidence of a positive effect of physical training. Recommendations were based on available data and expert consensus. However, the heterogeneity of these patients will require modification and program adjustment of the standard rehabilitation format for, e.g., COPD or interstitial lung diseases. An optimal training program (types of exercise, intensities, frequency, duration) still needs to be defined to optimize training adjustments, especially reduction of fatigue. Further randomized controlled trials are needed to consolidate these findings and optimize the comprehensive care of sarcoidosis patients.

  10. Estimation of optimal educational cost per medical student.

    PubMed

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003-2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish an educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter or one half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost of training a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.
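    The totals reported above are internally consistent and can be checked directly (8 semesters over 4 years):

```python
cost_per_semester = 20_367_000              # won, optimal cost per student
semesters = 4 * 2                           # 4 years, 2 semesters per year
total_cost = cost_per_semester * semesters  # 162,936,000 won to train a doctor
```
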

  11. Determining the optimal pelvic floor muscle training regimen for women with stress urinary incontinence.

    PubMed

    Dumoulin, Chantale; Glazener, Cathryn; Jenkinson, David

    2011-06-01

    Pelvic floor muscle (PFM) training has received Level-A evidence rating in the treatment of stress urinary incontinence (SUI) in women, based on meta-analysis of numerous randomized control trials (RCTs) and is recommended in many published guidelines. However, the actual regimen of PFM training used varies widely in these RCTs. Hence, to date, the optimal PFM training regimen for achieving continence remains unknown and the following questions persist: how often should women attend PFM training sessions and how many contractions should they perform for maximal effect? Is a regimen of strengthening exercises better than a motor control strategy or functional retraining? Is it better to administer a PFM training regimen to an individual or are group sessions equally effective, or better? Which is better, PFM training by itself or in combination with biofeedback, neuromuscular electrical stimulation, and/or vaginal cones? Should we use improvement or cure as the ultimate outcome to determine which regimen is the best? The questions are endless. As a starting point in our endeavour to identify optimal PFM training regimens, the aim of this study is (a) to review the present evidence in terms of the effectiveness of different PFM training regimens in women with SUI and (b) to discuss the current literature on PFM dysfunction in SUI women, including the up-to-date evidence on skeletal muscle training theory and other factors known to impact on women's participation in and adherence to PFM training. Copyright © 2011 Wiley-Liss, Inc.

  12. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
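    The abstract's key idea, feature layers fitted analytically rather than by iterative optimization, can be illustrated with a single random-projection layer and a closed-form least-squares readout (an extreme-learning-machine-style stand-in; the data, sizes and target below are synthetic, and the paper's actual deep architecture is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))              # toy stand-ins for input spectra
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]        # synthetic target property

# One "layer": a fixed random projection plus tanh nonlinearity gives the
# extracted features; the readout is then solved in closed form, with no
# iterative optimization anywhere.
W = rng.normal(size=(4, 100))
H = np.tanh(X @ W)                         # extracted feature matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ beta
train_mse = float(np.mean((pred - y) ** 2))
```

    Stacking several such analytically trained layers yields features at increasing levels of abstraction, which is the layer-wise scheme the abstract describes, at a fraction of the cost of gradient-based training.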

  13. Real-time optimized biofeedback utilizing sport techniques (ROBUST): a study protocol for a randomized controlled trial.

    PubMed

    Taylor, Jeffrey B; Nguyen, Anh-Dung; Paterno, Mark V; Huang, Bin; Ford, Kevin R

    2017-02-07

    Anterior cruciate ligament (ACL) injuries in female athletes lead to a variety of short- and long-term physical, financial, and psychosocial ramifications. While dedicated injury prevention training programs have shown promise, ACL injury rates remain high as implementation has not become widespread. Conventional prevention programs use a combination of resistance, plyometric, balance and agility training to improve high-risk biomechanics and reduce the risk of injury. While many of these programs focus on reducing knee abduction load and posture during dynamic activity, targeting hip extensor strength and utilization may be more efficacious, as it is theorized to be an underlying mechanism of injury in adolescent female athletes. Biofeedback training may complement traditional preventive training, but has not been widely studied in connection with ACL injuries. We hypothesize that biofeedback may be needed to maximize the effectiveness of neuromuscular prophylactic interventions, and that hip-focused biofeedback will improve lower extremity biomechanics to a larger extent than knee-focused biofeedback during dynamic sport-specific tasks and long-term movement strategies. This is an assessor-blind, randomized controlled trial of 150 adolescent competitive female (9-19 years) soccer players. Each participant receives 3x/week neuromuscular preventive training and 1x/week biofeedback, the mode depending on their randomization to one of 3 biofeedback groups (hip-focused, knee-focused, sham). The primary aim is to assess the impact of biofeedback training on knee abduction moments (the primary biomechanical predictor of future ACL injury) during double-leg landings, single-leg landings, and unplanned cutting. Testing will occur immediately before the training intervention, immediately after the training intervention, and 6 months after the training intervention to assess the long-term retention of modified biomechanics.
    Secondary aims will assess performance changes, including hip and core strength, power, and agility, and the extent to which maturation affects biofeedback efficacy. The results of the Real-time Optimized Biofeedback Utilizing Sport Techniques (ROBUST) trial will help complement current preventive training and may lead to clinician-friendly methods of biofeedback to incorporate into widespread training practices. Date of publication in ClinicalTrials.gov: 20/04/2016. ClinicalTrials.gov Identifier: NCT02754700.

  14. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of the data and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time and memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical aspects of estimating the number of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912

  15. Active relearning for robust supervised classification of pulmonary emphysema

    NASA Astrophysics Data System (ADS)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However, the inconsistent differential diagnosis, due to subjective aggregation, mandates supervised classification. Towards optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach in order to minimize uncertainty in the selected training samples. Using multi-view inductive learning with the training samples, an ensemble of Support Vector Machine (SVM) models, each based on a specific pair-wise dissimilarity metric, was constructed in less than six seconds. In the active relearning phase, the ensemble-expert label conflicts were resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the optimized parameter space of six different classifiers across seven dissimilarity metrics, where the average accuracy improvement rose to 21%. The cooperative feedback method proposed here could enhance both diagnostic and staging throughput efficiency in chest radiology practice.

  16. A Shift From Resilience to Human Performance Optimization in Special Operations Training: Advancements in Theory and Practice.

    PubMed

    Park, Gloria H; Messina, Lauren A; Deuster, Patricia A

    Over the past decade, the Department of Defense has increasingly focused on enhancing Warfighter resilience and readiness. For Special Operations Forces (SOF), who bear unique training and deployment burdens, programs like the Preservation of the Force and Family have been created to help support SOF and their family members in sustaining capabilities and enhancing resilience in the face of prolonged warfare. In this review, we describe the shift in focus from resilience to human performance optimization (HPO) and the benefits of human performance initiatives that include holistic fitness. We then describe strategies for advancing the application of HPO in future initiatives through tailoring and cultural adaptation, as well as advancing methods of measurement. By striving toward specificity and precision performance, SOF human performance programs can impact individual and team capabilities to a greater extent than in the past, while maintaining the well-being of SOF and their families across their careers and beyond.

  17. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    PubMed

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
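    The dropout mechanism itself is compact enough to sketch in full. Below is a plain-Python inverted-dropout layer (rate and sizes arbitrary): during training each unit is zeroed with probability `rate` and the survivors are rescaled so the expected activation is unchanged, which is the regularization effect benchmarked in the paper.

```python
import random

def dropout(activations, rate, rng, train=True):
    """Inverted dropout: zero each unit with probability `rate` during
    training and rescale survivors by 1/(1-rate); identity at test time."""
    if not train or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
out = dropout([1.0] * 10000, rate=0.25, rng=rng)
mean_activation = sum(out) / len(out)        # close to 1.0 in expectation
zero_fraction = out.count(0.0) / len(out)    # close to the dropout rate
```

    In a full ANN this is applied per hidden layer during training only; at prediction time the layer passes activations through unchanged.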

  18. Improving Quantitative Structure-Activity Relationship Models using Artificial Neural Networks Trained with Dropout

    PubMed Central

    Mendenhall, Jeffrey; Meiler, Jens

    2016-01-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery (LB-CADD) pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate (FPR) and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22–46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods. PMID:26830599

  19. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process, and IP-HMM helps in doing the recognition. Novelty is introduced here through the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.
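    The vector-quantization codebook step can be sketched with a plain k-means trainer standing in for the paper's genetic-algorithm codebook generation (the data, codebook size and iteration count below are invented):

```python
import random

def train_codebook(vectors, k, iters=20, seed=0):
    """k-means codebook: codewords start as random training vectors
    (as in the paper's initial populations) and move to cluster means."""
    rng = random.Random(seed)
    code = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, code[i])))
            clusters[j].append(v)
        for i, c in enumerate(clusters):
            if c:  # keep the old codeword if its cluster emptied
                code[i] = [sum(col) / len(c) for col in zip(*c)]
    return code

def quantize(v, code):
    """Index of the nearest codeword for feature vector v."""
    return min(range(len(code)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, code[i])))

# Two well-separated synthetic feature clusters.
rng = random.Random(1)
data = ([(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)]
        + [(rng.gauss(10, 0.3), rng.gauss(10, 0.3)) for _ in range(30)])
code = train_codebook(data, k=2)
```

    In the paper, the GA (with its crossover operator) searches for the codebook instead of the mean-update rule above; the quantized indices then feed the HMM recognizer.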

  20. Communication skills in context: trends and perspectives.

    PubMed

    van Dalen, Jan

    2013-09-01

    Doctor-patient communication has been well researched. Less is known about the educational background of communication skills training. Do we aim for optimal performance of skills, or rather attempt to help students become skilled communicators? An overview is given of the current view on optimal doctor-patient communication. Next, we focus on recent literature on how people acquire skills. These two topics are integrated in the next chapter, in which we discuss the optimal training conditions. A longitudinal training design has more lasting results than incidental training. Assessment must be in line with the intended learning outcomes. For transfer, doctor-patient communication must be addressed in all stages of health professions training. Elementary insights from medical education are far from realised in many medical schools. Doctor-patient communication would benefit strongly from more continuity in training and embedding in the daily working contexts of doctors. When an educational continuum is realised and attention to doctor-patient communication is embedded in the working context of doctors in training, the benefits will be strong. Training is only a part of the solution. In view of the current dissatisfaction with doctor-patient communication, a change in attitude of course directors is strongly called for. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Optimized operation of dielectric laser accelerators: Multibunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-06-01

    We present a self-consistent analysis to determine the optimal charge, gradient, and efficiency for laser-driven accelerators operating with a train of microbunches. Specifically, we account for the beam loading reduction on the material occurring at the dielectric-vacuum interface. In the case of a train of microbunches, such a beam loading effect could be detrimental due to energy spread; however, this may be compensated by a tapered laser pulse. We ultimately propose an optimization procedure with an analytical solution for a group velocity equal to half the speed of light. This optimization results in a maximum efficiency 20% lower than in the single-bunch case, and a total accelerated charge of 10^6 electrons in the train. The approach holds promise for improving the operation of dielectric laser accelerators and may have an impact on emerging laser accelerators driven by high-power optical lasers.

  2. Bioinspired Technologies to Connect Musculoskeletal Mechanobiology to the Person for Training and Rehabilitation

    PubMed Central

    Pizzolato, Claudio; Lloyd, David G.; Barrett, Rod S.; Cook, Jill L.; Zheng, Ming H.; Besier, Thor F.; Saxby, David J.

    2017-01-01

    Musculoskeletal tissues respond to optimal mechanical signals (e.g., strains) through anabolic adaptations, while mechanical signals above and below optimal levels cause tissue catabolism. If an individual's physical behavior could be altered to generate optimal mechanical signaling to musculoskeletal tissues, then targeted strengthening and/or repair would be possible. We propose new bioinspired technologies to provide real-time biofeedback of relevant mechanical signals to guide training and rehabilitation. In this review we provide a description of how wearable devices may be used in conjunction with computational rigid-body and continuum models of musculoskeletal tissues to produce real-time estimates of localized tissue stresses and strains. It is proposed that these bioinspired technologies will facilitate a new approach to physical training that promotes tissue strengthening and/or repair through optimal tissue loading. PMID:29093676

  3. Optimization of Artificial Neural Network using Evolutionary Programming for Prediction of Cascading Collapse Occurrence due to the Hidden Failure Effect

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.

    2018-03-01

    This paper presents an Evolutionary Programming (EP) approach to optimize the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failure. The data were collected from simulations of a hidden-failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters selected by EP-ANN consist of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers. The IEEE 14 bus system was tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the Correlation Coefficient (R).
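As a sketch of the optimization loop described above, the following is a minimal evolutionary programming (EP) search over the four training parameters named in the abstract (learning rate, momentum rate, and two hidden-layer sizes). The fitness function here is a cheap synthetic surrogate standing in for the MSE of an actually trained backpropagation network, and all constants are illustrative, not taken from the paper.

```python
import random

# Hypothetical stand-in for the ANN-training objective: in the paper the
# fitness is the MSE of a trained two-hidden-layer backpropagation network;
# here a cheap synthetic bowl keeps the sketch runnable.
def surrogate_mse(lr, momentum, h1, h2):
    return ((lr - 0.05) ** 2 + (momentum - 0.9) ** 2
            + ((h1 - 12) / 50) ** 2 + ((h2 - 8) / 50) ** 2)

def mutate(ind):
    # EP relies on mutation only: perturb each parameter within its bounds
    lr, mom, h1, h2 = ind
    return (
        min(max(lr + random.gauss(0, 0.02), 1e-4), 1.0),
        min(max(mom + random.gauss(0, 0.05), 0.0), 0.999),
        max(1, h1 + random.choice([-2, -1, 0, 1, 2])),
        max(1, h2 + random.choice([-2, -1, 0, 1, 2])),
    )

def evolutionary_programming(pop_size=20, generations=60, seed=0):
    random.seed(seed)
    pop = [(random.uniform(1e-4, 1.0), random.uniform(0.0, 0.999),
            random.randint(1, 40), random.randint(1, 40))
           for _ in range(pop_size)]
    for _ in range(generations):
        # each parent spawns one offspring; the best pop_size of
        # parents + offspring survive to the next generation
        offspring = [mutate(p) for p in pop]
        pop = sorted(pop + offspring, key=lambda i: surrogate_mse(*i))[:pop_size]
    return pop[0]

best = evolutionary_programming()
```

In the paper's setting, `surrogate_mse` would be replaced by actually training the network with the candidate parameters and returning its validation MSE.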

  4. Ensemble Learning Method for Hidden Markov Models

    DTIC Science & Technology

    2014-12-01

    Ensemble HMM landmine detector. Mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil … approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum … propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity.

  5. Performance Enhancing Diets and the PRISE Protocol to Optimize Athletic Performance

    PubMed Central

    Arciero, Paul J.; Ward, Emery

    2015-01-01

    The training regimens of modern-day athletes have evolved from the sole emphasis on a single fitness component (e.g., endurance athlete or resistance/strength athlete) to an integrative, multimode approach encompassing all four of the major fitness components: resistance (R), interval sprints (I), stretching (S), and endurance (E) training. Athletes rarely, if ever, focus their training on only one mode of exercise but instead routinely engage in a multimode training program. In addition, timed-daily protein (P) intake has become a hallmark for all athletes. Recent studies, including from our laboratory, have validated the effectiveness of this multimode paradigm (RISE) and protein-feeding regimen, which we have collectively termed PRISE. Unfortunately, sports nutrition recommendations and guidelines have lagged behind the PRISE integrative nutrition and training model and therefore limit an athlete's ability to succeed. Thus, it is the purpose of this review to provide a clearly defined roadmap linking specific performance enhancing diets (PEDs) with each PRISE component to facilitate optimal nourishment and ultimately optimal athletic performance. PMID:25949823

  6. Neuroprosthetic Decoder Training as Imitation Learning

    PubMed Central

    Merel, Josh; Paninski, Liam; Cunningham, John P.

    2016-01-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
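The dataset-aggregation loop that the abstract adapts to decoder training can be sketched in a toy one-dimensional cursor task. Everything below (the linear encoding model, the clipped oracle velocity, the iteration counts) is an invented illustration of the DAgger pattern, not the paper's setup: the current decoder drives the cursor while the oracle labels the states actually visited, and the decoder is refit on the aggregated data.

```python
import numpy as np

# Toy DAgger loop for decoder training: a linear decoder maps simulated
# "neural activity" to cursor velocity. The oracle (a stand-in for the
# user's intended movement) always points the cursor at the target.
rng = np.random.default_rng(0)
W_true = rng.uniform(1.0, 2.0, size=4)            # hypothetical encoding weights

def neural_activity(intent):
    return W_true * intent + rng.normal(0, 0.1, size=4)

def oracle_velocity(pos, target=1.0):
    # expert action: move toward the target at bounded speed
    return float(np.clip(target - pos, -0.2, 0.2))

def rollout(decoder, steps=60):
    pos, X, y = 0.0, [], []
    for _ in range(steps):
        intent = oracle_velocity(pos)             # oracle label on this state
        feats = neural_activity(intent)
        X.append(feats)
        y.append(intent)
        pos += float(feats @ decoder)             # the *current* decoder drives
    return np.array(X), np.array(y), pos

decoder = np.zeros(4)                             # poor initial decoder
data_X, data_y = np.empty((0, 4)), np.empty(0)
for _ in range(5):                                # DAgger iterations
    X, y, _ = rollout(decoder)
    data_X = np.vstack([data_X, X])               # aggregate the dataset
    data_y = np.concatenate([data_y, y])
    decoder, *_ = np.linalg.lstsq(data_X, data_y, rcond=None)
```

The key DAgger property is visible in `rollout`: training states are generated by the learner's own (initially poor) policy, while labels come from the oracle, so the aggregated dataset covers the states the decoder will actually encounter.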

  7. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    PubMed Central

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is essential for the high-speed maglev train to accomplish synchronous traction. It is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. On the basis of an analysis of the sensor's operating principle, the paper describes the design of the sending and receiving coils and the implementation of the sensor's hardware and software. To enhance the reliability of the sensor, a support vector machine is used to recognize fault signatures, and the signal flow method is used to locate the faulty parts. The diagnostic information can not only be sent to an upper-level central control computer to evaluate the reliability of the sensors, but also supports on-line diagnosis for debugging and rapid detection when the maglev train is off-line. The absolute positioning sensor we study has been used in an actual project. PMID:23112619

  8. User-customized brain computer interfaces using Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali

    2016-04-01

    Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.

  9. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    PubMed

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task because there is no direct, explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.
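A minimal sketch of the RBF-network surrogate idea: Gaussian basis functions centred on uniformly sampled training points, with output weights obtained by a regularized linear solve. The target function below is a hypothetical stand-in for the finite-element deformation response; the sampling ranges and kernel width are illustrative.

```python
import numpy as np

# RBF network with centers at the training samples ("nodes at data points")
# and a linear solve for the output weights.
def target(x):
    # hypothetical FE deformation response of a two-parameter layout
    return np.sin(x[:, 0]) + 0.5 * np.cos(2 * x[:, 1])

def rbf_design(X, centers, gamma=2.0):
    # Gaussian basis matrix: phi_ij = exp(-gamma * ||x_i - c_j||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 2, size=(80, 2))        # uniform sampling of layouts
y_train = target(X_train)

Phi = rbf_design(X_train, X_train)
# small ridge term keeps the kernel system well conditioned
w = np.linalg.solve(Phi + 1e-8 * np.eye(len(Phi)), y_train)

X_test = rng.uniform(0, 2, size=(200, 2))
y_pred = rbf_design(X_test, X_train) @ w
rmse = np.sqrt(np.mean((y_pred - target(X_test)) ** 2))
```

Once trained, the surrogate replaces the expensive finite element solve inside an optimization loop over candidate locating layouts.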

  10. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization

    PubMed Central

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task because there is no direct, explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499

  11. Training nurses in task-shifting strategies for the management and control of hypertension in Ghana: a mixed-methods study.

    PubMed

    Gyamfi, Joyce; Plange-Rhule, Jacob; Iwelunmor, Juliet; Lee, Debbie; Blackstone, Sarah R; Mitchell, Alicia; Ntim, Michael; Apusiga, Kingsley; Tayo, Bamidele; Yeboah-Awudzi, Kwasi; Cooper, Richard; Ogedegbe, Gbenga

    2017-02-02

    Nurses in Ghana play a vital role in the delivery of primary health care at both the household and community level. However, there is a lack of information on task shifting the management and control of hypertension to community health nurses in low- and middle-income countries including Ghana. The purpose of this study was to assess nurses' knowledge and practice of hypertension management and control pre- and post-training utilizing task-shifting strategies for hypertension control in Ghana (TASSH). A pre- and post-test survey was administered to 64 community health nurses (CHNs) and enrolled nurses (ENs) employed in community health centers and district hospitals before and after the TASSH training, followed by semi-structured qualitative interviews that assessed nurses' satisfaction with the training, resultant changes in practice, and barriers and facilitators to optimal hypertension management. A total of 64 CHNs and ENs participated in the TASSH training. The findings of the pre- and post-training assessments showed a marked improvement in nurses' knowledge and practice related to hypertension detection and treatment. At pre-assessment 26.9% of the nurses scored 80% or more on the hypertension knowledge test, whereas this improved significantly to 95.7% post-training. Improvement of interpersonal skills and patient education were also mentioned by the nurses as positive outcomes of participation in the intervention. Findings suggest that if all nurses receive even brief training in the management and control of hypertension, major public health benefits are likely to be achieved in low-income countries like Ghana. However, more research is needed to ascertain implementation fidelity and sustainability of interventions such as TASSH that highlight the potential role of nurses in mitigating barriers to optimal hypertension control in Ghana. Trial registration for parent TASSH study: NCT01802372. Registered February 27, 2013.

  12. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-03-21

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In the current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate the indentation for poroelastic materials with wide combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as the input and provides the properties of cartilage as the output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage could therefore be estimated very quickly, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage based on noisy force-time data was assessed by introducing noise to the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Fully automatic time-window selection using machine learning for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.

    2017-12-01

    Selecting time windows from seismograms such that the synthetic measurements (from simulations) and measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected every day around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed to be fully automatic. The goal of intelligent window selection is to automatically select windows based on a learnt engine that is built upon a huge number of existing windows generated through the adjoint tomography project. We have formulated the automatic window selection problem as a classification problem. All possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from the windows are the cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value. More features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a non-linear optimization problem. We use backpropagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors, and use the mini-batch stochastic gradient method to iteratively optimize the MPNN. 
Numerical tests show that with a careful selection of the training data and a sufficient amount of training data, we are able to train a robust neural network that is capable of detecting the waveforms in arbitrary earthquake data with negligible detection error compared to existing selection methods (e.g., FLEXWIN). We will introduce in detail the mathematical formulation of the window-selection-oriented MPNN and show very encouraging results when applying the new algorithm to real earthquake data.
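A toy version of the window classifier: a one-hidden-layer perceptron trained with mini-batch stochastic gradient descent on the five features named in the abstract. The labelling rule that generates the synthetic "human-picked" windows is invented for illustration, as are the network size and learning rate.

```python
import numpy as np

# Minimal MLP window classifier trained with mini-batch SGD.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),       # cross-correlation value
    rng.uniform(-5, 5, n),      # cross-correlation time lag (s)
    rng.uniform(0, 3, n),       # amplitude ratio observed/synthetic
    rng.uniform(5, 120, n),     # window length (s)
    rng.uniform(0, 6, n),       # minimum STA/LTA value
])
# invented labelling rule standing in for human-selected windows
y = ((X[:, 0] > 0.6) & (np.abs(X[:, 1]) < 2.5) & (X[:, 4] > 1.0)).astype(float)

Xs = (X - X.mean(0)) / X.std(0)             # standardize the features
W1 = rng.normal(0, 0.5, (5, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16);      b2 = 0.0

def forward(Xb):
    H = np.tanh(Xb @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-np.clip(H @ W2 + b2, -30, 30)))
    return H, p

lr = 0.3
for epoch in range(300):                    # mini-batch SGD over the data
    idx = rng.permutation(n)
    for s in range(0, n, 64):
        b = idx[s:s + 64]
        H, p = forward(Xs[b])
        g = (p - y[b]) / len(b)             # d(cross-entropy)/d(logit)
        gW2 = H.T @ g; gb2 = g.sum()
        gH = np.outer(g, W2) * (1 - H ** 2) # backprop through tanh layer
        gW1 = Xs[b].T @ gH; gb1 = gH.sum(0)
        W1 -= lr * gW1; W2 -= lr * gW2
        b1 -= lr * gb1; b2 -= lr * gb2

_, p_all = forward(Xs)
accuracy = np.mean((p_all > 0.5) == (y == 1))
```

In the real workflow, the features would come from measured/synthetic seismogram pairs and the labels from windows previously accepted in the adjoint tomography project.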

  14. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
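The PSO stage can be sketched as a standard global-best swarm minimizing the misfit between observed sensor concentrations and a forward model. Here a simple Gaussian kernel stands in for the trained ANN surrogate, and the sensor layout, true source, and swarm constants are all invented for illustration.

```python
import numpy as np

# Global-best PSO estimating source parameters (x, y, emission rate q).
rng = np.random.default_rng(3)
sensors = rng.uniform(0, 10, size=(25, 2))        # hypothetical sensor grid
true_src = np.array([6.0, 3.5, 2.0])              # x, y, q

def model(params):
    # toy dispersion model standing in for the trained ANN surrogate
    x, y, q = params
    d2 = ((sensors - [x, y]) ** 2).sum(1)
    return q * np.exp(-0.1 * d2)

observed = model(true_src)

def misfit(params):
    return np.mean((model(params) - observed) ** 2)

n_particles, iters = 30, 200
lo, hi = np.array([0.0, 0.0, 0.1]), np.array([10.0, 10.0, 5.0])
pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, 3))
    # inertia + cognitive + social terms of the velocity update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```

With the ANN surrogate in place of `model`, each misfit evaluation is cheap, which is what makes swarm search feasible in an emergency-response time frame.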

  15. A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging

    PubMed Central

    Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-01-01

    Goal: The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods: An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection is proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion: The MSF-based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance: Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052
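The marker-driven MSF growth can be sketched with Kruskal's algorithm on a toy grayscale image: edges are added in order of weight, but two trees carrying different marker labels are never merged, so the trees of the final forest become the segments. The SVM-derived markers of the paper are replaced here by two hand-placed seeds, and the image is an invented 8×8 example rather than hyperspectral data.

```python
import numpy as np

# Marker-driven minimum spanning forest segmentation on a toy image.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                                  # two flat regions
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

h, w = img.shape
def node(r, c): return r * w + c

edges = []                                        # (weight, u, v), 4-connected
for r in range(h):
    for c in range(w):
        if c + 1 < w:
            edges.append((abs(img[r, c] - img[r, c + 1]), node(r, c), node(r, c + 1)))
        if r + 1 < h:
            edges.append((abs(img[r, c] - img[r + 1, c]), node(r, c), node(r + 1, c)))
edges.sort()

parent = list(range(h * w))                       # union-find forest
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

label = {node(2, 1): 1, node(5, 6): 2}            # markers: healthy=1, tumor=2
root_label = {find(n): l for n, l in label.items()}

for wgt, u, v in edges:                           # Kruskal, but never merge
    ru, rv = find(u), find(v)                     # differently labelled trees
    if ru == rv:
        continue
    lu, lv = root_label.get(ru), root_label.get(rv)
    if lu is not None and lv is not None and lu != lv:
        continue
    parent[ru] = rv
    if lv is None and lu is not None:
        root_label[rv] = lu                       # propagate the marker label

seg = np.array([root_label.get(find(n), 0) for n in range(h * w)]).reshape(h, w)
```

Because all low-weight (within-region) edges are processed before the high-weight boundary edges, each region collapses onto its marker's tree before any cross-boundary merge is attempted.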

  16. The right time to learn: mechanisms and optimization of spaced learning

    PubMed Central

    Smolen, Paul; Zhang, Yili; Byrne, John H.

    2016-01-01

    For many types of learning, spaced training, which involves repeated long inter-trial intervals, leads to more robust memory formation than does massed training, which involves short or no intervals. Several cognitive theories have been proposed to explain this superiority, but only recently have data begun to delineate the underlying cellular and molecular mechanisms of spaced training, and we review these theories and data here. Computational models of the implicated signalling cascades have predicted that spaced training with irregular inter-trial intervals can enhance learning. This strategy of using models to predict optimal spaced training protocols, combined with pharmacotherapy, suggests novel ways to rescue impaired synaptic plasticity and learning. PMID:26806627

  17. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.

  18. Biofeedback for robotic gait rehabilitation.

    PubMed

    Lünenburger, Lars; Colombo, Gery; Riener, Robert

    2007-01-23

    Development and increasing acceptance of rehabilitation robots as well as advances in technology allow new forms of therapy for patients with neurological disorders. Robot-assisted gait therapy can increase the training duration and the intensity for the patients while reducing the physical strain for the therapist. Optimal training effects during gait therapy generally depend on appropriate feedback about performance. Compared to manual treadmill therapy, there is a loss of physical interaction between therapist and patient with robotic gait retraining. Thus, it is difficult for the therapist to assess the necessary feedback and instructions. The aim of this study was to define a biofeedback system for a gait training robot and test its usability in subjects without neurological disorders. To provide an overview of biofeedback and motivation methods applied in gait rehabilitation, previous publications and results from our own research are reviewed. A biofeedback method is presented showing how a rehabilitation robot can assess the patients' performance and deliver augmented feedback. For validation, three subjects without neurological disorders walked in a rehabilitation robot for treadmill training. Several training parameters, such as body weight support and treadmill speed, were varied to assess the robustness of the biofeedback calculation to confounding factors. The biofeedback values correlated well with the different activity levels of the subjects. Changes in body weight support and treadmill velocity had a minor effect on the biofeedback values. The synchronization of the robot and the treadmill affected the biofeedback values describing the stance phase. Robot-aided assessment and feedback can extend and improve robot-aided training devices. The presented method estimates the patients' gait performance with the use of the robot's existing sensors, and displays the resulting biofeedback values to the patients and therapists. 
The therapists can adapt the therapy and give further instructions to the patients. The feedback might help the patients to adapt their movement patterns and to improve their motivation. While it is assumed that these novel methods also improve training efficacy, the proof will only be possible with future in-depth clinical studies.

  19. Radial basis function network learns ceramic processing and predicts related strength and density

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.

    1993-01-01

    Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.

  20. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
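The progressive-sampling idea can be sketched as successive halving: candidate configurations are scored on growing training-set sizes, and the weaker half is dropped at each stage, so full-size evaluation is spent only on promising candidates. This simplification replaces the paper's Bayesian-optimization proposal step with plain random sampling, and the error model below is synthetic.

```python
import random

# Successive-halving sketch of progressive sampling for model selection.
random.seed(0)

def validation_error(cfg, n_samples):
    # hypothetical learner: the true best configuration is near 0.3, and
    # evaluating on a small sample adds noise that shrinks with sample size
    return (cfg - 0.3) ** 2 + random.gauss(0, 0.5 / n_samples ** 0.5)

configs = [random.uniform(0, 1) for _ in range(16)]   # random proposals
n = 100                                               # initial sample size
while len(configs) > 1:
    scored = sorted(configs, key=lambda c: validation_error(c, n))
    configs = scored[: len(configs) // 2]             # keep the better half
    n *= 4                                            # progressively larger sample
best_cfg = configs[0]
```

In the paper's full method, the survivors of each stage also inform a Bayesian-optimization model that proposes the next batch of configurations, rather than drawing them uniformly at random.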

  1. TargetM6A: Identifying N6-Methyladenosine Sites From RNA Sequences via Position-Specific Nucleotide Propensities and a Support Vector Machine.

    PubMed

    Li, Guang-Qing; Liu, Zi; Shen, Hong-Bin; Yu, Dong-Jun

    2016-10-01

    As one of the most ubiquitous post-transcriptional modifications of RNA, N6-methyladenosine (m6A) plays an essential role in many vital biological processes. The identification of m6A sites in RNAs is significantly important for both basic biomedical research and practical drug development. In this study, we designed a computational method, called TargetM6A, to rapidly and accurately target m6A sites solely from the primary RNA sequences. Two new features, i.e., position-specific nucleotide/dinucleotide propensities (PSNP/PSDP), are introduced and combined with the traditional nucleotide composition (NC) feature to formulate RNA sequences. The extracted features are further optimized to obtain a much more compact and discriminative feature subset by applying an incremental feature selection (IFS) procedure. Based on the optimized feature subset, we trained TargetM6A on the training dataset with a support vector machine (SVM) as the prediction engine. We compared the proposed TargetM6A method with existing methods for predicting m6A sites by performing stringent jackknife tests and independent validation tests on benchmark datasets. The experimental results show that the proposed TargetM6A method outperformed the existing methods for predicting m6A sites and remarkably improved the prediction performances, with MCC = 0.526 and AUC = 0.818. We also provide a user-friendly web server for TargetM6A, which is publicly accessible for academic use at http://csbio.njust.edu.cn/bioinf/TargetM6A.
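The PSNP feature can be sketched directly from its definition: for each window position, the propensity of a nucleotide is the difference between its frequency in the positive (methylated) and negative training sequences. The tiny sequence sets below are invented; in the paper these features are filtered by IFS and fed to an SVM.

```python
import numpy as np

# Position-specific nucleotide propensity (PSNP) sketch.
BASES = "ACGU"
pos_seqs = ["GGACUA", "GGACUG", "AGACUU", "GGACCA"]   # toy positives (GAC core)
neg_seqs = ["AUCGAU", "CCGUAA", "UUAGCC", "GCAUGC"]   # toy negatives

def freq_matrix(seqs):
    # per-position nucleotide frequencies, shape (L, 4)
    L = len(seqs[0])
    m = np.zeros((L, 4))
    for s in seqs:
        for i, ch in enumerate(s):
            m[i, BASES.index(ch)] += 1
    return m / len(seqs)

# propensity = positive-set frequency minus negative-set frequency
psnp = freq_matrix(pos_seqs) - freq_matrix(neg_seqs)

def features(seq):
    # encode a sequence by the propensity of its nucleotide at each position
    return np.array([psnp[i, BASES.index(ch)] for i, ch in enumerate(seq)])

score_pos = features("GGACUA").sum()
score_neg = features("AUCGAU").sum()
```

A candidate sequence resembling the positive set accumulates positive propensities, which is what makes the feature vector discriminative for the downstream SVM.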

  2. Quality of dispatch-assisted cardiopulmonary resuscitation by lay rescuers following a standard protocol in Japan: an observational simulation study.

    PubMed

    Asai, Hideki; Fukushima, Hidetada; Bolstad, Francesco; Okuchi, Kazuo

    2018-04-01

    Bystander cardiopulmonary resuscitation (CPR) is essential for improving the outcomes of sudden cardiac arrest patients. It has been reported that dispatch-assisted CPR (DACPR) accounts for more than half of the incidence of CPR undertaken by bystanders. Its quality, however, can be suboptimal. We aimed to measure the quality of DACPR using a simulation study. We recruited laypersons at a shopping mall and measured the quality of CPR carried out in our simulation. Dispatchers provided instruction in accordance with the standard DACPR protocol in Japan. Twenty-three laypersons (13 with CPR training experience within the past 2 years and 10 with no training experience) participated in this study. The median chest compression rate and depth were 106/min and 33 mm, respectively. The median time interval from placing the 119 call to the start of chest compressions was 119 s. No significant difference was found between the groups with and without training experience. However, subjects with training experience more frequently placed their hands correctly on the manikin (84.6% versus 40.0%; P = 0.026). Twelve participants (52.2%; seven in the trained group and five in the untrained group) interrupted chest compressions for 3-18 s because dispatchers asked if the patient had started breathing or moving. This simulation study showed that the quality of DACPR carried out by lay rescuers can be less than optimal in terms of depth, hand placement, and minimization of pauses. Further studies are required to explore better DACPR instruction methods to help lay rescuers perform CPR with optimal quality.

  3. Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design, where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding global optima of multi-modal functions and searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources, since multiple function evaluations (flow simulations in aerodynamic design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on differential evolution (DE), a relatively new member of the general class of evolutionary methods. This method is easy to use and program and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will of course yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization that is based on neural networks will also be included as a part of this lecture. The method offers advantages over traditional optimization methods. 
It is more flexible than other methods in dealing with design in the context of both steady and unsteady flows, partial and complete data sets, combined experimental and numerical data, inclusion of various constraints and rules of thumb, and other issues that characterize the aerodynamic design process. Neural networks provide a natural framework within which a succession of numerical solutions of increasing fidelity, incorporating more realistic flow physics, can be represented and utilized for optimization. Neural networks also offer an excellent framework for multiple-objective and multi-disciplinary design optimization. Simulation tools from various disciplines can be integrated within this framework and rapid trade-off studies involving one or many disciplines can be performed. The prospect of combining neural network based optimization methods and evolutionary algorithms to obtain a hybrid method with the best properties of both methods will be included in this presentation. Achieving solution diversity and accurate convergence to the exact Pareto front in multiple objective optimization usually requires a significant computational effort with evolutionary algorithms. In this lecture we will also explore the possibility of using neural networks to obtain estimates of the Pareto optimal front using non-dominated solutions generated by DE as training data. Neural network estimators have the potential advantage of reducing the number of function evaluations required to obtain solution accuracy and diversity, thus reducing the cost of design.
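    The abstract names DE's two user-specified constants (the differential weight and the crossover rate) without showing the scheme; a minimal DE/rand/1/bin sketch on a toy objective may help. All names and settings below are our own, and the sphere function merely stands in for an expensive evaluation such as a flow simulation:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimise f over the box `bounds` with the classic DE/rand/1/bin scheme.

    F (differential weight) and CR (crossover rate) are the two
    user-specified constants the lecture refers to."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct members other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover, guaranteeing at least one mutant gene.
            jrand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: the trial replaces the parent only if no worse.
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy objective standing in for an expensive drag evaluation.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```

    Because each trial vector is evaluated independently, the inner loop parallelises naturally across processors, which is the property the lecture highlights.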

  4. Rescheduling/timetable optimization of trains along the U.S. shared-use corridors: development of the hybrid optimization of train schedules (HOTS) model.

    DOT National Transportation Integrated Search

    2016-11-23

    A growing demand for passenger and freight transportation, combined with limited capital to expand the United States (U.S.) rail infrastructure, is creating pressure for a more efficient use of the current line capacity. This is further exacerba...

  5. Authentic Leadership for Teacher's Academic Optimism: Moderating Effect of Training Comprehensiveness

    ERIC Educational Resources Information Center

    Srivastava, Anugamini Priya; Dhar, Rajib Lochan

    2016-01-01

    Purpose: This study aims to analyse the impact of authentic leadership (AL) on academic optimism (AO) through the mediating role of affective commitment (AC). This study also examines the moderating role of training comprehensiveness (TC) in strengthening the relation between AC and AO. Design/methodology/approach: Data were collected from…

  6. Optimal retraining time for regaining functional fitness using multicomponent training after long-term detraining in older adults.

    PubMed

    Lee, Minyoung; Lim, Taehyun; Lee, Jaehyuk; Kim, Kimyeong; Yoon, BumChul

    2017-11-01

    Little is known about the optimal retraining time for regaining functional fitness through multicomponent training following long-term detraining in older adults. This study first investigated the time course of functional fitness changes during 12-month multicomponent training, 12-month detraining, and 9-month retraining in 18 older adults (aged 68.33±3.46 years) and then determined the optimal retraining time for regaining the post-training functional fitness level after a 12-month detraining period. Functional fitness, including lower and upper limb strength, lower and upper limb flexibility, aerobic endurance, and dynamic balance, was assessed at baseline, 12 months post-training, 12 months post-detraining, and 3, 6, and 9 months post-retraining. There were significant increases in all of the functional fitness components except upper limb flexibility at post-training and no significant decreases at post-detraining. For lower and upper limb strength and lower limb flexibility, a 3-month period was required to regain the post-training condition. For aerobic endurance and dynamic balance, a retraining period ≥9 months was necessary to regain the post-training functional fitness condition. To regain the post-training condition of all functional fitness components, a retraining period ≥9 months was required. This information might be useful for health professionals to encourage older adults not to interrupt retraining until they regain their post-training functional fitness condition. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Tiwari, R. K.; Singh, S. B.

    2010-02-01

    The backpropagation (BP) artificial neural network (ANN) technique of optimization, based on the steepest descent algorithm, is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are in good agreement with the results of existing inversion approaches. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.

  8. Bearing fault diagnosis using a whale optimization algorithm-optimized orthogonal matching pursuit with a combined time-frequency atom dictionary

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-07-01

    Condition monitoring and fault diagnosis of rolling element bearings are significant to guarantee the reliability and functionality of a mechanical system, production efficiency, and plant safety. However, this is almost invariably a formidable challenge because the fault features are often buried by strong background noises and other unstable interference components. To satisfactorily extract the bearing fault features, a whale optimization algorithm (WOA)-optimized orthogonal matching pursuit (OMP) with a combined time-frequency atom dictionary is proposed in this paper. Firstly, a combined time-frequency atom dictionary whose atom is a combination of a Fourier dictionary atom and an impact time-frequency dictionary atom is designed according to the properties of the bearing fault vibration signal. Furthermore, to improve the efficiency and accuracy of signal sparse representation, the WOA is introduced into the OMP algorithm to optimize the atom parameters for best approximating the original signal with the dictionary atoms. The proposed method is validated through analyzing the bearing fault simulation signal and the real vibration signals collected from an experimental bearing and a wheelset bearing of high-speed trains. Comparisons with respect to the state of the art in the field are illustrated in detail, highlighting the advantages of the proposed method.

  9. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
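    As a rough illustration of the two-step idea (a sketch under our own assumptions, not the authors' code), the snippet below runs a coarse grid traverse and then a local particle swarm refinement on `cv_error`, a hypothetical smooth stand-in for the SVR cross-validation error over (C, gamma):

```python
import random

def coarse_grid(loss, c_range, g_range, steps=5):
    """Step 1 (grid traverse): scan a coarse grid to find a promising cell."""
    best = None
    cs = [c_range[0] + i * (c_range[1] - c_range[0]) / (steps - 1) for i in range(steps)]
    gs = [g_range[0] + i * (g_range[1] - g_range[0]) / (steps - 1) for i in range(steps)]
    for c in cs:
        for g in gs:
            if best is None or loss(c, g) < loss(*best):
                best = (c, g)
    return best

def pso_refine(loss, centre, radius=1.0, particles=15, iters=60, seed=0):
    """Step 2: particle swarm search in the local box around the grid winner."""
    rng = random.Random(seed)
    lo = [centre[0] - radius, centre[1] - radius]
    hi = [centre[0] + radius, centre[1] + radius]
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(particles)]
    vel = [[0.0, 0.0] for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: loss(*p))
    for _ in range(iters):
        for i in range(particles):
            for d in range(2):
                # Standard inertia + cognitive + social velocity update.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            if loss(*pos[i]) < loss(*pbest[i]):
                pbest[i] = pos[i][:]
                if loss(*pbest[i]) < loss(*gbest):
                    gbest = pbest[i][:]
    return gbest

# Hypothetical smooth stand-in for SVR cross-validation error over (C, gamma).
cv_error = lambda c, g: (c - 3.2) ** 2 + (g - 0.7) ** 2
c0 = coarse_grid(cv_error, (0.0, 10.0), (0.0, 2.0))
c_best, g_best = pso_refine(cv_error, c0)
```

    The grid step cheaply rules out most of the global space; the swarm then spends its evaluations only inside the surviving cell.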

  10. Optimal Loading for Maximizing Power During Sled-Resisted Sprinting.

    PubMed

    Cross, Matt R; Brughelli, Matt; Samozino, Pierre; Brown, Scott R; Morin, Jean-Benoit

    2017-09-01

    To ascertain whether force-velocity-power relationships could be compiled from a battery of sled-resisted overground sprints and to clarify and compare the optimal loading conditions for maximizing power production for different athlete cohorts. Recreational mixed-sport athletes (n = 12) and sprinters (n = 15) performed multiple trials of maximal sprints unloaded and towing a selection of sled masses (20-120% body mass [BM]). Velocity data were collected by sports radar, and kinetics at peak velocity were quantified using friction coefficients and aerodynamic drag. Individual force-velocity and power-velocity relationships were generated using linear and quadratic relationships, respectively. Mechanical and optimal loading variables were subsequently calculated and test-retest reliability assessed. Individual force-velocity and power-velocity relationships were accurately fitted with regression models (R² > .977, P < .001) and were reliable (ES = 0.05-0.50, ICC = .73-.97, CV = 1.0-5.4%). The normal loading that maximized peak power was 78% ± 6% and 82% ± 8% of BM, representing a resistance of 3.37 and 3.62 N/kg at 4.19 ± 0.19 and 4.90 ± 0.18 m/s (recreational athletes and sprinters, respectively). Optimal force and normal load did not clearly differentiate between cohorts, although sprinters developed greater maximal power (17.2-26.5%, ES = 0.97-2.13, P < .02) at much greater velocities (16.9%, ES = 3.73, P < .001). Mechanical relationships can be accurately profiled using common sled-training equipment. Notably, the optimal loading conditions determined in this study (69-96% of BM, dependent on friction conditions) represent much greater resistance than current guidelines (~7-20% of BM). This method has potential value in quantifying individualized training parameters for optimized development of horizontal power.
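    The profiling step can be sketched numerically with synthetic data (illustrative numbers only, not the study's measurements): fit a linear force-velocity relationship, form the quadratic power-velocity curve P(v) = F(v)·v, and locate its apex, which for a linear F-v profile sits at half the velocity intercept:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic sled-sprint observations: force at peak velocity decreases
# roughly linearly with that velocity (values are ours, for illustration).
velocities = [2.0, 3.0, 4.0, 5.0, 6.0]   # m/s
forces     = [6.0, 4.5, 3.0, 1.5, 0.0]   # N/kg

F0, slope = linear_fit(velocities, forces)   # F(v) = F0 + slope * v
v0 = -F0 / slope                             # velocity-axis intercept
v_opt = v0 / 2.0                             # power F(v)*v peaks here
P_max = F0 * v_opt + slope * v_opt ** 2      # maximal power per kg
```

    With these made-up numbers the fit gives F0 = 9 N/kg and v0 = 6 m/s, so power peaks at 3 m/s; the sled load whose steady towing velocity equals v_opt would be the "optimal load" in the study's sense.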

  11. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal - in some cases causing spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra according to the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.

  12. Comparison of Non-Invasive Individual Monitoring of the Training and Health of Athletes with Commercially Available Wearable Technologies

    PubMed Central

    Düking, Peter; Hotho, Andreas; Holmberg, Hans-Christer; Fuss, Franz Konstantin; Sperlich, Billy

    2016-01-01

    Athletes adapt their training daily to optimize performance, as well as avoid fatigue, overtraining and other undesirable effects on their health. To optimize training load, each athlete must take his/her own personal objective and subjective characteristics into consideration and an increasing number of wearable technologies (wearables) provide convenient monitoring of various parameters. Accordingly, it is important to help athletes decide which parameters are of primary interest and which wearables can monitor these parameters most effectively. Here, we discuss the wearable technologies available for non-invasive monitoring of various parameters concerning an athlete's training and health. On the basis of these considerations, we suggest directions for future development. Furthermore, we propose that a combination of several wearables is most effective for accessing all relevant parameters, disturbing the athlete as little as possible, and optimizing performance and promoting health. PMID:27014077

  13. [Application of optimized parameters SVM based on photoacoustic spectroscopy method in fault diagnosis of power transformer].

    PubMed

    Zhang, Yu-xin; Cheng, Zhi-feng; Xu, Zheng-ping; Bai, Jing

    2015-01-01

    In order to solve problems of the traditional power transformer fault diagnosis approach based on dissolved gas analysis (DGA), such as complex operation, consumption of carrier gas and long test periods, this paper proposes a new method which detects the content of five characteristic gases in transformer oil (CH4, C2H2, C2H4, C2H6 and H2) based on photoacoustic spectroscopy, from which the three ratios C2H2/C2H4, CH4/H2 and C2H4/C2H6 are calculated. The support vector machine model was constructed using a cross-validation method under five support vector machine formulations and four kernel functions, and heuristic algorithms were used to optimize the penalty factor c and kernel parameter g, in order to establish the SVM model with the highest fault diagnosis accuracy and the fastest computing speed. Particle swarm optimization and genetic algorithms, two types of heuristic algorithms, were comparatively studied for accuracy and speed of optimization. The simulation results show that the SVM model composed of C-SVC, the RBF kernel function and the genetic algorithm obtains 97.5% accuracy on the test sample set and 98.3333% accuracy on the training sample set, and the genetic algorithm was about two times faster than particle swarm optimization in computing speed. The method described in this paper has many advantages, such as simple operation, non-contact measurement, no consumption of carrier gas, short test periods, and high stability and sensitivity. The results show that this method can replace traditional transformer fault diagnosis by gas chromatography and meets actual project needs in transformer fault diagnosis.
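    A minimal real-coded genetic algorithm of the kind used for the (c, g) search might look as follows; the `accuracy` surrogate and every constant are our own illustrative assumptions, standing in for cross-validated diagnosis accuracy over log-scaled (c, g):

```python
import random

def ga_optimize(fitness, bounds, pop_size=40, gens=150, pmut=0.2, seed=3):
    """Real-coded GA: tournament selection, blend crossover, gaussian mutation.

    `fitness` is to be maximised (e.g. cross-validated diagnosis accuracy)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = [max(pop, key=fitness)]            # elitism: keep the best
        while len(nxt) < pop_size:
            p, q = tournament(), tournament()
            alpha = rng.random()                 # blend (interpolating) crossover
            child = [alpha * p[d] + (1 - alpha) * q[d] for d in range(dim)]
            if rng.random() < pmut:              # gaussian mutation of one gene
                d = rng.randrange(dim)
                lo, hi = bounds[d]
                child[d] = min(max(child[d] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical surrogate for SVM cross-validation accuracy over (log2 c, log2 g).
accuracy = lambda x: 1.0 - 0.01 * ((x[0] - 5.0) ** 2 + (x[1] + 2.0) ** 2)
c_log, g_log = ga_optimize(accuracy, [(-5.0, 15.0), (-15.0, 3.0)])
```

    In the paper's setting each fitness call is a full cross-validation run, so the GA's ability to reach a good region in few generations is what drives the reported speed advantage.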

  14. Optimization of DRASTIC method by supervised committee machine artificial intelligence to assess groundwater vulnerability for Maragheh-Bonab plain aquifer, Iran

    NASA Astrophysics Data System (ADS)

    Fijani, Elham; Nadiri, Ata Allah; Asghari Moghaddam, Asghar; Tsai, Frank T.-C.; Dixon, Barnali

    2013-10-01

    Contamination of wells with nitrate-N (NO3-N) poses various threats to human health. Contamination of groundwater is a complex process and full of uncertainty at the regional scale. Development of an integrative vulnerability assessment methodology can be useful to effectively manage (including prioritization of limited resource allocation to monitor high risk areas) and protect this valuable freshwater source. This study introduces a supervised committee machine with artificial intelligence (SCMAI) model to improve the DRASTIC method for groundwater vulnerability assessment for the Maragheh-Bonab plain aquifer in Iran. Four different AI models are considered in the SCMAI model, whose input is the DRASTIC parameters. The SCMAI model improves the committee machine artificial intelligence (CMAI) model by replacing the linear combination in the CMAI with a nonlinear supervised ANN framework. To calibrate the AI models, NO3-N concentration data are divided into two datasets for training and validation purposes. The target value of the AI models in the training step is the corrected vulnerability indices that relate to the first NO3-N concentration dataset. After model training, the AI models are verified by the second NO3-N concentration dataset. The results show that the four AI models are able to improve the DRASTIC method. Since no single AI model's performance is dominant, the SCMAI model is considered to combine the advantages of the individual AI models to achieve the optimal performance. The SCMAI method re-predicts the groundwater vulnerability based on the different AI model prediction values. The results show that the SCMAI outperforms the individual AI models and the CMAI model. The SCMAI model ensures that no water well with high NO3-N levels would be classified as low risk and vice versa. 
The study concludes that the SCMAI model is an effective model to improve the DRASTIC model and provides a confident estimate of the pollution risk.
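    The difference between CMAI and SCMAI can be made concrete with a toy example: CMAI combines the individual model outputs linearly (below, two-model weights fitted by least squares via the normal equations), whereas SCMAI replaces this linear combination with a supervised ANN. All numbers here are illustrative, not the study's data:

```python
# Toy predictions of vulnerability from two "AI models" plus calibration targets.
m1 = [0.2, 0.5, 0.9, 0.4]
m2 = [0.3, 0.4, 1.0, 0.5]
target = [0.25, 0.45, 0.95, 0.45]

# Linear committee (CMAI): least-squares weights w1, w2 over the model outputs.
# Normal equations for the two unknowns, solved in closed form.
a11 = sum(x * x for x in m1)
a12 = sum(x * y for x, y in zip(m1, m2))
a22 = sum(y * y for y in m2)
b1 = sum(x * t for x, t in zip(m1, target))
b2 = sum(y * t for y, t in zip(m2, target))
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (a11 * b2 - a12 * b1) / det

committee = [w1 * x + w2 * y for x, y in zip(m1, m2)]
```

    Here the targets happen to be the exact average of the two models, so the fitted weights come out to 0.5 each; SCMAI would instead train a small ANN on (m1, m2) → target, allowing a nonlinear combination.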

  15. Simulation as a surgical teaching model.

    PubMed

    Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos

    2018-01-01

    Teaching of surgery has been affected by many factors over recent years, such as the reduction of working hours, the optimization of operating room use, and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact, and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected, and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  16. An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Heydari, Ali; Balakrishnan, S. N.

    2014-12-01

    The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.

  17. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

    PubMed Central

    Wen, Hui; Xie, Weixin; Pei, Jihong

    2016-01-01

    This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network cascaded with a BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: firstly, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network is then used for adaptive nonlinear mapping of the sample space. Next, the number of subsequent BP input nodes is determined from the number of adaptively generated RBF hidden nodes, and the overall SAHRBF-BP classifier is built up; finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional and large-sample data sets, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms. PMID:27792737

  18. A novel constructive-optimizer neural network for the traveling salesman problem.

    PubMed

    Saadatmand-Tarzjan, Mahdi; Khademi, Morteza; Akbarzadeh-T, Mohammad-R; Moghaddam, Hamid Abrishami

    2007-08-01

    In this paper, a novel constructive-optimizer neural network (CONN) is proposed for the traveling salesman problem (TSP). CONN uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to the Kohonen-type self-organizing maps (K-SOMs). Consequently, CONN is composed of a constructive part, which grows the tour, and an optimizer part to optimize it. In the training algorithm, an initial tour is created first and introduced to CONN. Then, it is trained in the constructive phase for adding a number of cities to the tour. Next, the training algorithm switches to the optimizer phase for optimizing the current tour by displacing the tour cities. After convergence in this phase, the training algorithm switches to the constructive phase anew and is continued until all cities are added to the tour. Furthermore, we investigate a relationship between the number of TSP cities and the number of cities to be added in each constructive phase. CONN was tested on nine sets of benchmark TSPs from TSPLIB to demonstrate its performance and efficiency. It performed better than several typical neural networks (NNs), including KNIES_TSP_Local, KNIES_TSP_Global, Budinich's SOM, Co-Adaptive Net, and the multivalued Hopfield network, as well as computationally comparable variants of the simulated annealing algorithm, in terms of both CPU time and accuracy. Furthermore, CONN converged considerably faster than expanding SOM and evolved integrated SOM and generated shorter tours compared to KNIES_DECOMPOSE. Although CONN is not yet comparable in terms of accuracy with some sophisticated computationally intensive algorithms, it converges significantly faster than they do. Generally speaking, CONN provides the best compromise between CPU time and accuracy among currently reported NNs for TSP.
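    CONN itself is beyond a short snippet, but its two-phase structure, a constructive phase that grows the tour followed by an optimizer phase that displaces cities, can be mimicked with a plain nearest-neighbour construction followed by 2-opt refinement. This is our substitution for illustration, not the CONN algorithm:

```python
import math, random

def tour_length(tour, pts):
    """Closed-tour length, including the edge back to the start."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def construct(pts):
    """Constructive phase: grow the tour by nearest-neighbour insertion."""
    left = set(range(1, len(pts)))
    tour = [0]
    while left:
        nxt = min(left, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        left.remove(nxt)
    return tour

def optimize(tour, pts):
    """Optimizer phase: 2-opt, reversing segments while it shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(7)
points = [(random.random(), random.random()) for _ in range(30)]
greedy = construct(points)
best = optimize(greedy, points)
```

    The interleaving in CONN (construct some cities, optimize, construct more) aims at the same trade-off this sketch shows in its crudest form: cheap construction gives a feasible tour fast, and local displacement then recovers most of the lost quality.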

  19. Design and comparative analysis of 10 MW class superconducting wind power generators according to different types of superconducting wires

    NASA Astrophysics Data System (ADS)

    Sung, Hae-Jin; Kim, Gyeong-Hun; Kim, Kwangmin; Park, Minwon; Yu, In-Keun; Kim, Jong-Yul

    2013-11-01

    Wind turbine concepts can be classified into the geared type and the gearless type. The gearless type wind turbine is more attractive due to the advantages of a simplified drive train, increased energy yield, and higher reliability because the gearbox is omitted. Omitting the gearbox also eliminates its contribution to the weight of the wind turbine. However, because of the low-speed operation, this type has disadvantages such as the large diameter and heavy weight of the generator. A superconducting (SC) wind power generator can reduce the weight and volume of a wind power system. The properties of superconducting wires differ greatly between manufacturers. This paper considers the design and comparative analysis of 10 MW class SC wind power generators according to different types of SC wires. Superconducting synchronous generators (SCSGs) using YBCO and Bi-2223 wires are optimized using an optimization procedure. The magnetic characteristics of the SCSGs are investigated using a finite element method program. The optimized specifications of the SCSGs are discussed in detail, and the optimization processes can be used effectively to develop large-scale wind power generation systems.

  20. Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain

    NASA Astrophysics Data System (ADS)

    Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida

    2013-04-01

    Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and plan the flow of goods and services optimally. Simulation optimization approaches are now widely used in research to find the best solutions for decision-making in SCM, which generally faces complexity, large sources of uncertainty and various decision factors. The metaheuristic method is the most popular simulation optimization approach. However, very few studies have applied this approach to optimizing simulation models for supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a new member of the metaheuristics family; it models the natural food-foraging behavior of honey bees, which use several mechanisms, such as the waggle dance, to locate food sources optimally and to search for new ones. This makes them a good candidate for developing new algorithms for solving optimization problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demand is assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
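    A basic Bees Algorithm can be sketched compactly; `op_cost` below is our hypothetical stand-in for the supply chain's total operating cost over two replenishment parameters, and all constants are illustrative:

```python
import random

def bees_algorithm(cost, bounds, n_scouts=20, n_best=5, n_recruited=8,
                   radius=0.5, iters=60, seed=2):
    """Basic Bees Algorithm: scouts sample at random; the best sites are
    exploited by recruited bees searching a shrinking neighbourhood patch."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    # Initial scouting flight, keep the n_best sites.
    sites = sorted((rand_point() for _ in range(n_scouts)), key=cost)[:n_best]
    for it in range(iters):
        r = radius * (0.95 ** it)                # shrink the patch over time
        new_sites = []
        for s in sites:
            # Recruited bees search the neighbourhood of each good site.
            neigh = [[min(max(s[d] + rng.uniform(-r, r), bounds[d][0]), bounds[d][1])
                      for d in range(dim)] for _ in range(n_recruited)]
            new_sites.append(min(neigh + [s], key=cost))
        # Remaining scouts keep exploring globally.
        new_sites += [rand_point() for _ in range(n_scouts - n_best)]
        sites = sorted(new_sites, key=cost)[:n_best]
    return sites[0]

# Hypothetical stand-in for total operating cost over two replenishment
# parameters (e.g. reorder point and order-up-to level) -- ours, not the paper's.
op_cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best = bees_algorithm(op_cost, [(-5.0, 5.0), (-5.0, 5.0)])
```

    In the paper's setting each `cost` call would be a stochastic simulation run of the one-supplier, three-retailer system rather than a closed-form expression.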

  1. Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.

    PubMed

    Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang

    2018-01-01

    This work aims to generate cine CT images (i.e., 4D images with high-temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, the matrix factorization is utilized as an explicit low-rank regularization of 4D images that are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though principal components from training projections are certainly not the same for these 4D images to be reconstructed. For this purpose, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients, the data fidelity for PCR changes from nonlinear to linear, and consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on alternating direction method of multipliers. The implementation is fully parallelized on GPU with NVIDIA CUDA toolbox and each reconstruction takes about a few minutes. 
The proposed PCR method is validated and compared with a state-of-the-art method, PICCS, using both simulation and experimental data with the on-board cone-beam CT setting. The results demonstrated the feasibility of PCR for cine CBCT and significantly improved reconstruction quality of PCR over PICCS. With a priori estimated temporal motion coefficients using fluoroscopic training projections, the PCR method can accurately reconstruct spatial principal components, and then generate cine CT images as a product of temporal motion coefficients and spatial principal components. © 2017 American Association of Physicists in Medicine.
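    The core PCR idea, that fixing the temporal coefficients a priori turns image recovery into a linear problem, can be shown with a rank-1 toy example (illustrative arrays only, not CBCT data):

```python
# Rank-1 toy: each frame x_t = a * c_t, with a the spatial component (an
# "image" of 4 pixels) and c_t the temporal coefficient of frame t.
spatial = [1.0, 4.0, 2.0, 7.0]        # true spatial component
coeffs  = [0.5, 1.0, 1.5, 1.0, 0.5]   # temporal coefficients ("learned" a priori)
frames  = [[c * s for s in spatial] for c in coeffs]   # observed sequence

# PCR step: with the temporal coefficients fixed (in the paper, learned from
# fluoroscopic training projections), recovering the spatial component is a
# *linear* least-squares problem, solvable pixel-by-pixel in closed form.
den = sum(c * c for c in coeffs)
recovered = [sum(coeffs[t] * frames[t][i] for t in range(len(coeffs))) / den
             for i in range(len(spatial))]

# The cine sequence is then re-synthesised as the outer product.
cine = [[c * s for s in recovered] for c in coeffs]
```

    The real method uses several principal components, projection (not image-domain) data, and total-variation regularization, but the change from nonlinear to linear data fidelity that the abstract emphasizes is exactly this step.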

  2. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: The area under the receiver operating characteristic curve (AUC) = 0.805±0.012 was obtained for the classification task. The results also showed that the most frequently-selected features by the SFFS-based algorithm in 10-fold iterations were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and failed to perform well for the mass classification task due to tissue overlap within the benign mass regions. 
Conclusions: In conclusion, this comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes that may have potential to be useful as a “second reader” in future clinical practice. PMID:24664267
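    The wrapper loop this record describes (greedily grow the feature set, then conditionally drop features that no longer help) can be sketched in a few lines. This is a minimal illustration in which an invented relevance/redundancy score stands in for the paper's cross-validated SVM performance; the feature names and weights are hypothetical:

```python
def sffs(features, score, k):
    """Sequential forward floating selection: greedy growth with a
    conditional-exclusion (floating) step after each addition."""
    selected = []
    while len(selected) < k:
        # Forward step: add the feature that most improves the score.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating step: drop any feature whose removal improves the score.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
    return selected

# Invented objective: relevance minus a penalty for redundant pairs.
relevance = {"shape": 3.0, "isodensity": 2.5, "fat": 2.0,
             "texture": 1.0, "contrast": 0.5}
redundant = {("shape", "texture")}

def score(subset):
    total = sum(relevance[f] for f in subset)
    total -= sum(1.5 for a in subset for b in subset if (a, b) in redundant)
    return total

print(sffs(list(relevance), score, 3))
```

    In the study itself the score would be the cross-validated AUC of an SVM trained on the candidate subset, evaluated on training folds only.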

  3. Strategies for Optimizing Strength, Power, and Muscle Hypertrophy in Women.

    DTIC Science & Technology

    1997-09-01

    the injury risks and inefficiencies of other methods for the more sophisticated assessment of human muscular strength and power. To provide...an environment of total safety. Limiting catches prevent injury through falling or loss of control of the loaded bar and a specially designed...J., Rodman, K.W., and Sebolt, D.R. The effect of endurance running on training adaptations in women participating in a weightlifting program. J

  4. Intelligent Tutoring Methods for Optimizing Learning Outcomes with Embedded Training

    DTIC Science & Technology

    2009-10-01

    after action review. Particularly with free-play virtual environments, it is important to constrain the development task for constructing an...evaluation approach. Attempts to model all possible variations of correct performance can be prohibitive in free-play scenarios, and so for such conditions...member R for proper execution during free-play execution. In the first tier, the evaluation must know when it applies, or more specifically, when

  5. Predicting Transmembrane Helix Packing Arrangements using Residue Contacts and a Force-Directed Algorithm

    PubMed Central

    Nugent, Timothy; Jones, David T.

    2010-01-01

    Alpha-helical transmembrane proteins constitute roughly 30% of a typical genome and are involved in a wide variety of important biological processes including cell signalling, transport of membrane-impermeable molecules and cell recognition. Despite significant efforts to predict transmembrane protein topology, comparatively little attention has been directed toward developing a method to pack the helices together. Here, we present a novel approach to predict lipid exposure, residue contacts, helix-helix interactions and finally the optimal helical packing arrangement of transmembrane proteins. Using molecular dynamics data, we have trained and cross-validated a support vector machine (SVM) classifier to predict per residue lipid exposure with 69% accuracy. This information is combined with additional features to train a second SVM to predict residue contacts which are then used to determine helix-helix interaction with up to 65% accuracy under stringent cross-validation on a non-redundant test set. Our method is also able to discriminate native from decoy helical packing arrangements with up to 70% accuracy. Finally, we employ a force-directed algorithm to construct the optimal helical packing arrangement which demonstrates success for proteins containing up to 13 transmembrane helices. This software is freely available as source code from http://bioinf.cs.ucl.ac.uk/memsat/mempack/. PMID:20333233

  6. A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction

    PubMed Central

    Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin

    2014-01-01

    Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80% and many wonder whether prediction can be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphics processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test data set of 198 proteins, achieving a Q3 accuracy of 80.7% and a Sov accuracy of 74.2%. PMID:25750595
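    Predictors of this kind typically encode each residue by the PSSM rows of a window centered on it. The following is a generic sketch of that encoding; the window size, zero-padding scheme, and dimensions are assumptions for illustration, not the DNSS implementation:

```python
def window_features(pssm, w=7):
    """pssm: one 20-score vector per residue; returns, for each residue,
    the flattened scores of a w-wide window centered on it (zero-padded)."""
    n, d = len(pssm), len(pssm[0])
    pad = [0.0] * d
    feats = []
    for i in range(n):
        window = []
        for j in range(i - w // 2, i + w // 2 + 1):
            window.extend(pssm[j] if 0 <= j < n else pad)
        feats.append(window)
    return feats

pssm = [[float(i + k) for k in range(20)] for i in range(5)]  # 5 residues
X = window_features(pssm)
print(len(X), len(X[0]))  # one vector per residue; 7 positions x 20 scores
```

    Each such vector would then be fed to the first network in the workflow, with later networks refining the initial per-residue predictions.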

  7. A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction.

    PubMed

    Spencer, Matt; Eickholt, Jesse; Jianlin Cheng

    2015-01-01

    Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder whether prediction can be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphics processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.

  8. An online air pollution forecasting system using neural networks.

    PubMed

    Kurt, Atakan; Gulbagci, Betul; Karaca, Ferhat; Alagha, Omar

    2008-07-01

    In this work, an online air pollution forecasting system for the Greater Istanbul Area is developed. The system predicts three air pollution indicator (SO2, PM10, and CO) levels for the next three days (+1, +2, and +3 days) using neural networks. AirPolTool, a user-friendly website (http://airpol.fatih.edu.tr), publishes +1, +2, and +3 day predictions of air pollutants updated twice a day. Experiments presented in this paper show that quite accurate predictions of air pollutant indicator levels are possible with a simple neural network. It is shown that further optimizations of the model can be achieved using different input parameters and different experimental setups. Firstly, the +1, +2, and +3 day pollution levels are predicted independently using the same training data; then the +2 and +3 day levels are predicted cumulatively using the previous days' predicted values. Better prediction results are obtained with the cumulative method. Secondly, the size of the training database used in the model is optimized. The best modeling performance with the minimum error rate is achieved using the past 3-15 days in the training data set. Finally, the effect of the day of the week as an input parameter is investigated. Better forecasts with higher accuracy are observed when the day of the week is used as an input parameter.
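    The independent-versus-cumulative distinction can be sketched with a toy one-step model in place of the paper's neural network; the weighted-average rule, its coefficients, and the sample values below are invented for illustration:

```python
def forecast_next(history):
    """Invented one-step model: weighted mean of the last three observations."""
    a, b, c = history[-3:]
    return 0.5 * c + 0.3 * b + 0.2 * a

def forecast_cumulative(history, horizon=3):
    """Predict +1..+horizon days, feeding each prediction back as an input."""
    h = list(history)
    out = []
    for _ in range(horizon):
        y = forecast_next(h)
        out.append(y)
        h.append(y)  # the cumulative step: reuse the predicted value
    return out

so2 = [12.0, 14.0, 13.0, 15.0, 16.0]  # made-up daily SO2 levels
print([round(v, 2) for v in forecast_cumulative(so2)])
```

    An independent scheme would instead train a separate model per horizon, each seeing only observed values; the paper reports that feeding predictions back, as above, worked better.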

  9. Balancing Training Techniques for Flight Controller Certification

    NASA Technical Reports Server (NTRS)

    Gosling, Christina

    2011-01-01

    Training of ground control teams has been a difficult task in space operations. There are several intangible skills that must be learned to become the steely eyed men and women of mission control who respond to spacecraft failures that can lead to loss of vehicle or crew if handled improperly. And as difficult as training is, it can also be costly. Every day, month, or year an operator spends in training is time during which they provide no direct benefit to the organization, while an instructor or mentor is potentially also being paid for hours spent assisting them. Therefore, optimization of the training flow is highly desired. Recently the Expedition Division (DI) at Johnson Space Center has recreated its training flows, both to move to an operator/specialist/instructor hierarchy and to address past inefficiencies in the training flow. This paper will discuss the types of training DI is utilizing in its new flows, and the balance that has been struck between ideal learning environments and realistic constraints. Specifically, the past training flow for the ISS Attitude Determination and Control Officer will be presented, including drawbacks that were encountered. Then the new training flow will be discussed, along with how the new approach utilizes more training methods and teaching techniques. We will look at how DI has integrated classes, workshops, checkouts, module reviews, scenarios, OJT, paper sims, Mini Sims, and finally Integrated Sims to balance the cost and timing of training a new flight controller.

  10. STAR-GALAXY CLASSIFICATION IN MULTI-BAND OPTICAL IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadely, Ross; Willman, Beth; Hogg, David W.

    2012-11-20

    Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r ≳ 24. Star-galaxy separation poses a major challenge to such surveys because galaxies, even very compact galaxies, outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM <0.2 arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven support vector machines (SVMs). For template fitting, we use a maximum likelihood (ML) method and a new hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider (1) a best-case scenario (SVM_best) where the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics and (2) a more realistic scenario where training is done on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering ~80% completeness, with purity of ~60%-90% for both stars and galaxies. We find that no algorithm delivers perfect performance and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the Receiver Operating Characteristic curve, we find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well-trained SVM will outperform template-fitting methods. However, a normally trained SVM performs worse. Thus, HB template fitting may prove to be the optimal classification method in future surveys.

  11. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.
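    Training-by-evolution in the DANNA sense can be caricatured as mutation plus selection over an encoded structure. The bitstring encoding and the fitness function below are invented for the example; real DANNA individuals encode network structure and parameters:

```python
import random

def evolve(fitness, n_bits=12, pop=20, gens=60, seed=0):
    """Truncation selection plus single-bit mutation over bitstring genomes."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n_bits)] ^= 1     # flip one random bit
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Invented fitness: number of active elements in the candidate structure.
best = evolve(sum)
print(sum(best))
```

    In the DANNA setting the fitness would instead be task performance of the instantiated network, which is far more expensive to evaluate per candidate.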

  12. Optimization education after project implementation: sharing "lessons learned" with staff.

    PubMed

    Vaughn, Susan

    2011-01-01

    Implementations involving healthcare technology solutions focus on providing end-user education prior to the application going "live" in the organization. The benefits of postimplementation education for staff should be considered when planning these projects. This author describes the traditional training provided during the implementation of a bar-coding medication project and then the optimization training provided 8 weeks later.

  13. On the use of harmony search algorithm in the training of wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between the given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
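    A bare-bones harmony search loop looks like the following; the sphere objective (minimized here) stands in for the paper's objective of maximizing WNN classification accuracy, and all parameter values are illustrative:

```python
import random

def harmony_search(obj, dim=4, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimize obj by improvising new harmonies from a sorted memory."""
    rng = random.Random(seed)
    new = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    memory = sorted((new() for _ in range(hms)), key=obj)
    for _ in range(iters):
        x = []
        for d in range(dim):
            if rng.random() < hmcr:              # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    v += rng.uniform(-0.1, 0.1)
            else:                                # random consideration
                v = rng.uniform(-5.0, 5.0)
            x.append(v)
        if obj(x) < obj(memory[-1]):             # replace the worst harmony
            memory[-1] = x
            memory.sort(key=obj)
    return memory[0]

sphere = lambda x: sum(v * v for v in x)  # stand-in objective (minimized)
best = harmony_search(sphere)
print(round(sphere(best), 4))
```

    The paper's partitioned-initialization variant would seed each harmony in a disjoint region of the solution space rather than uniformly, which the authors report speeds up training.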

  14. The Effect of the Duration of Basic Life Support Training on the Learners' Cardiopulmonary and Automated External Defibrillator Skills

    PubMed Central

    Kang, Ku Hyun; Song, Keun Jeong; Lee, Chang Hee

    2016-01-01

    Background. Basic life support (BLS) training with hands-on practice can improve performance during simulated cardiac arrest, although the optimal duration for BLS training is unknown. This study aimed to assess the effectiveness of various BLS training durations for acquiring cardiopulmonary resuscitation (CPR) and automated external defibrillator (AED) skills. Methods. We randomised 485 South Korean nonmedical college students into four levels of BLS training: level 1 (40 min), level 2 (80 min), level 3 (120 min), and level 4 (180 min). Before and after each level, the participants completed questionnaires regarding their willingness to perform CPR and use AEDs, and their psychomotor skills for CPR and AED use were assessed using a manikin with Skill-Reporter™ software. Results. There were no significant differences between levels 1 and 2, although levels 3 and 4 exhibited significant differences in the proportion of overall adequate chest compressions (p < 0.001) and average chest compression depth (p = 0.003). All levels exhibited a greater posttest willingness to perform CPR and use AEDs (all, p < 0.001). Conclusions. Brief BLS training provided a moderate level of skill for performing CPR and using AEDs. However, high-quality skills for CPR required longer and hands-on training, particularly hands-on training with AEDs. PMID:27529066

  15. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    PubMed

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature, many methods have been proposed to estimate missing values via information in the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures x time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS, and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity, while neighbour-based methods (KNN, OLS, LSA, LLS) performed better on data with higher complexity. We also found that the EBS and STS schemes serve as complementary and effective tools for selecting the optimal imputation algorithm.
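    The simulation-based self-training selection (STS) idea (hide entries whose values are known, impute them with each candidate method, keep the method with the lowest error on them) can be sketched with two deliberately simple stand-in imputers; the real comparison would use methods such as KNN, LLS, or BPCA:

```python
import random

def row_mean_impute(X, missing):
    """Fill each hidden cell with the mean of its row's visible entries."""
    Y = [row[:] for row in X]
    for i, j in missing:
        vals = [v for k, v in enumerate(X[i]) if (i, k) not in missing]
        Y[i][j] = sum(vals) / len(vals)
    return Y

def col_mean_impute(X, missing):
    """Fill each hidden cell with the mean of its column's visible entries."""
    Y = [row[:] for row in X]
    for i, j in missing:
        vals = [X[k][j] for k in range(len(X)) if (k, j) not in missing]
        Y[i][j] = sum(vals) / len(vals)
    return Y

def sts_select(X, methods, frac=0.1, seed=0):
    """Hide a fraction of known entries; pick the imputer with lowest error."""
    rng = random.Random(seed)
    cells = [(i, j) for i in range(len(X)) for j in range(len(X[0]))]
    hidden = set(rng.sample(cells, max(1, int(frac * len(cells)))))
    def err(method):
        Y = method(X, hidden)
        return sum((Y[i][j] - X[i][j]) ** 2 for i, j in hidden)
    return min(methods, key=err)

# Toy matrix with strong row-wise structure: row means should win.
X = [[r + 0.01 * c for c in range(8)] for r in range(6)]
best_method = sts_select(X, [row_mean_impute, col_mean_impute])
print(best_method.__name__)
```

    The extra cost the abstract mentions comes from running every candidate imputer on the simulated missingness before imputing the real gaps.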

  16. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noises and harmonic interferences generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimise the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains bearing fault information. However, the high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments, as described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.
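    The singular-vector smoothing step can be illustrated directly: below, the standard 5-point quadratic Savitzky-Golay kernel, with closed-form coefficients (-3, 12, 17, 12, -3)/35, is applied to a synthetic noisy vector. The signal and noise level are invented, and the ISVD method additionally tunes the filter parameters via Hilbert spectrum entropy rather than fixing them as done here:

```python
import math
import random

def savgol5(x):
    """5-point quadratic Savitzky-Golay smoothing; endpoints left unfiltered."""
    c = (-3.0, 12.0, 17.0, 12.0, -3.0)
    y = list(x)
    for i in range(2, len(x) - 2):
        y[i] = sum(cj * x[i + j - 2] for j, cj in enumerate(c)) / 35.0
    return y

rng = random.Random(3)
clean = [math.sin(0.2 * i) for i in range(50)]   # synthetic "singular vector"
noisy = [v + rng.gauss(0.0, 0.2) for v in clean]

def sse(a):
    """Sum of squared errors against the clean reference."""
    return sum((u - v) ** 2 for u, v in zip(a, clean))

print(sse(savgol5(noisy)) < sse(noisy))  # smoothing should reduce the error
```

    Because the quadratic fit tracks slow structure while averaging out noise, the filtered vector lies closer to the clean one; in ISVD the de-noised singular vectors are then recombined before RSSD decomposition.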

  17. Learning toward practical head pose estimation

    NASA Astrophysics Data System (ADS)

    Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin

    2017-08-01

    Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
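    The labeling-distribution idea can be shown in one dimension: a pose label is spread over a dense grid of angles rather than assigned to a single bin, and the estimate is read off at the distribution's peak. The Gaussian form, the 5-degree sampling interval, and the sigma below are assumptions for illustration, not the paper's exact MLD construction:

```python
import math

def label_distribution(angle, lo=-90, hi=90, step=5, sigma=10.0):
    """Spread a pose label over a dense angle grid as a normalized Gaussian."""
    grid = list(range(lo, hi + 1, step))
    w = [math.exp(-((g - angle) ** 2) / (2.0 * sigma ** 2)) for g in grid]
    total = sum(w)
    return grid, [v / total for v in w]

def peak(grid, dist):
    """Read the pose estimate off at the distribution's peak."""
    return grid[max(range(len(dist)), key=lambda i: dist[i])]

grid, dist = label_distribution(17.0)  # true pose lies between two grid points
print(peak(grid, dist))
```

    Spreading mass over neighbouring bins is what lets the representation absorb label uncertainty and undersampled poses, which a one-hot bin label cannot.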

  18. A smartphone application of alcohol resilience treatment for behavioral self-control training.

    PubMed

    Yu, Fei; Albers, Jörg; Gao, Tian; Wang, Minghao; Bilberg, Arne; Stenager, Elsebeth

    2012-01-01

    A high relapse rate is one of the most prominent problems in addiction treatment. Alcohol Resilience Treatment (ART), an alcohol addiction therapy, is based on Cue Exposure Treatment, which has shown promising results in preliminary studies. ART aims at optimizing the core area of relapse prevention and intends to improve patients' capability to withstand craving for alcohol. This method emphasizes the interplay of resilience and resourcefulness. It contains 6 sessions with different topics according to the stage of the treatment circuit, and each session consists of 6 steps. Due to the purity and structure of the treatment rationale, it is realistic, reasonable, and manageable to transform the method into a smartphone application. An ART app for the Android system and an accessory for bilateral tactile stimulation were developed and will be used in a study with behavioral self-control training. This paper presents the design and realization of the smartphone-based ART application. The design of a pilot study examining the benefits of a smartphone application providing behavioral self-control training is also reported in this paper.

  19. How can clinician-educator training programs be optimized to match clinician motivations and concerns?

    PubMed Central

    McCullough, Brendan; Marton, Gregory E; Ramnanan, Christopher J

    2015-01-01

    Background Several medical schools have implemented programs aimed at supporting clinician-educators with formal mentoring, training, and experience in undergraduate medical teaching. However, consensus program design has yet to be established, and the effectiveness of these programs in terms of producing quality clinician-educator teaching remains unclear. The goal of this study was to review the literature to identify motivations and perceived barriers to clinician-educators, which in turn will improve clinician-educator training programs to better align with clinician-educator needs and concerns. Methods Review of medical education literature using the terms “attitudes”, “motivations”, “physicians”, “teaching”, and “undergraduate medical education” resulted in identification of key themes revealing the primary motivations and barriers involved in physicians teaching undergraduate medical students. Results A synthesis of articles revealed that physicians are primarily motivated to teach undergraduate students for intrinsic reasons. To a lesser extent, physicians are motivated to teach for extrinsic reasons, such as rewards or recognition. The key barriers deterring physicians from teaching medical students included: decreased productivity, lack of compensation, increased length of the working day, patient concerns/ethical issues, and lack of confidence in their own ability. Conclusion Our findings suggest that optimization of clinician-educator training programs should address, amongst other factors, time management concerns, appropriate academic recognition for teaching service, and confidence in teaching ability. Addressing these issues may increase the retention of clinicians who are active and proficient in medical education. PMID:25653570

  20. Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.

    PubMed

    Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T

    2013-01-01

    Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.
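    A one-dimensional caricature of the optimal-surface search: choose one node per column so as to minimize node cost plus a soft penalty on jumps between neighbouring columns. Dynamic programming on a chain stands in here for the paper's graph cuts on a multi-column mesh graph, and the costs are invented:

```python
def optimal_surface(cost, smooth=2.0):
    """cost[c][r]: cost of cutting column c at row r. Returns one row per
    column, minimizing total node cost plus smooth * |jump| between columns."""
    n_cols, n_rows = len(cost), len(cost[0])
    best = list(cost[0])   # cumulative cost of the best cut ending at each row
    back = []              # backpointers, one list per column after the first
    for c in range(1, n_cols):
        prev, best, ptr = best, [], []
        for r in range(n_rows):
            p = min(range(n_rows), key=lambda q: prev[q] + smooth * abs(q - r))
            best.append(cost[c][r] + prev[p] + smooth * abs(p - r))
            ptr.append(p)
        back.append(ptr)
    r = min(range(n_rows), key=lambda q: best[q])
    surface = [r]
    for ptr in reversed(back):
        r = ptr[r]
        surface.append(r)
    return surface[::-1]

# A low-cost band at row 2; column 2 alone prefers row 0, but the
# smoothness penalty keeps the cut coherent across columns.
cost = [[abs(r - 2) for r in range(5)] for _ in range(4)]
cost[2] = [0, 3, 3, 3, 3]
print(optimal_surface(cost))
```

    The soft |jump| penalty plays the role of the Bayesian smoothness prior in the paper, as opposed to the hard displacement constraints of earlier graph-cut formulations.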

  1. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces whose reflectance is similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm as well as the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations for accurate and stable extraction of water extents in Landsat imagery.
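    A minimal particle swarm optimization loop, standing in for the paper's use of PSO to find band-combination coefficients; the sphere objective and all hyperparameters below are illustrative, not values from the study:

```python
import random

def pso(obj, dim=3, n=15, iters=200, seed=2):
    """Global-best PSO with inertia 0.7 and cognitive/social weights 1.5."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=obj)[:]              # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i][:]
                if obj(pos[i]) < obj(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)  # stand-in for a band-weighting loss
best = pso(sphere)
print(sphere(best) < 1e-3)
```

    In the study, each particle would encode one candidate set of band coefficients, and the objective would score the water mask the combination produces against the training pixels.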

  2. The Influence of Gender on Professionalism in Female Trainees.

    PubMed

    Ahn, Jae Hee

    2012-06-01

    This study aimed to analyze the experience of female trainees who were trained in hospitals after graduating from medical school, focusing on how their gender is represented in training courses. We interviewed 8 trainees who had been trained in a hospital in Seoul and 4 faculty members from June 2010 to October 2010. We analyzed their similarities and differences and developed a vocational identity formation process to represent gender. Gender was represented contradictorily in the training course, affecting trainees' choice of specialties and interactions with patients. However, female trainees, believing that being a good doctor is governed by meritocracy, did not want to be distinguished from their male counterparts. It was difficult for them to bear children and balance work and family life due to aspects of the training system, including long work hours and the lack of replacement workers. Consequently, they asked their parents to help with child care, because hospitals pay little attention to maternity support. Female trainees did not consider being a doctor to be a male profession. Rather, they believed that their femininity influenced their professionalism positively. The methods of representing gender are influenced by the training system, which is based on a male-dominated apprenticeship. Thus, we will research the mechanisms that influence gender-discriminated choices of specialties, hospitals, and medical schools and prepare a maternity care system for female trainees. Strategies that maximize recruitment and retention of women in medicine should include a consideration of alternative work schedules and optimization of maternity leave and child care opportunities.

  3. Biofeedback for robotic gait rehabilitation

    PubMed Central

    Lünenburger, Lars; Colombo, Gery; Riener, Robert

    2007-01-01

    Background Development and increasing acceptance of rehabilitation robots as well as advances in technology allow new forms of therapy for patients with neurological disorders. Robot-assisted gait therapy can increase the training duration and the intensity for the patients while reducing the physical strain for the therapist. Optimal training effects during gait therapy generally depend on appropriate feedback about performance. Compared to manual treadmill therapy, there is a loss of physical interaction between therapist and patient with robotic gait retraining. Thus, it is difficult for the therapist to assess the necessary feedback and instructions. The aim of this study was to define a biofeedback system for a gait training robot and test its usability in subjects without neurological disorders. Methods To provide an overview of biofeedback and motivation methods applied in gait rehabilitation, previous publications and results from our own research are reviewed. A biofeedback method is presented showing how a rehabilitation robot can assess the patients' performance and deliver augmented feedback. For validation, three subjects without neurological disorders walked in a rehabilitation robot for treadmill training. Several training parameters, such as body weight support and treadmill speed, were varied to assess the robustness of the biofeedback calculation to confounding factors. Results The biofeedback values correlated well with the different activity levels of the subjects. Changes in body weight support and treadmill velocity had a minor effect on the biofeedback values. The synchronization of the robot and the treadmill affected the biofeedback values describing the stance phase. Conclusion Robot-aided assessment and feedback can extend and improve robot-aided training devices. 
The presented method estimates the patients' gait performance with the use of the robot's existing sensors, and displays the resulting biofeedback values to the patients and therapists. The therapists can adapt the therapy and give further instructions to the patients. The feedback might help the patients to adapt their movement patterns and to improve their motivation. While it is assumed that these novel methods also improve training efficacy, the proof will only be possible with future in-depth clinical studies. PMID:17244363
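The biofeedback computation described above can be sketched as a weighted average of deviations between measured and reference joint torques over a gait phase. The weighting scheme and sign convention below are illustrative assumptions, not the authors' exact formula:

```python
def biofeedback_score(measured, reference, weights):
    """Weighted mean deviation between measured joint torques and a
    reference trajectory over one gait phase. Positive values suggest
    active patient participation; negative values suggest the robot is
    driving the movement (sign convention assumed for illustration)."""
    num = sum(w * (m - r) for m, r, w in zip(measured, reference, weights))
    return num / sum(weights)

# Hypothetical hip-torque samples (N*m) across a swing phase:
score = biofeedback_score([12.0, 15.0, 9.0], [10.0, 10.0, 10.0], [1.0, 2.0, 1.0])
# score > 0 here: the subject contributed more torque than the reference
```

In the paper's setting, such per-phase values would be computed from the robot's existing force and position sensors and displayed to patient and therapist in real time.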

  4. Psychological first aid training for the faith community: a model curriculum.

    PubMed

    McCabe, O Lee; Lating, Jeffrey M; Everly, George S; Mosley, Adrian M; Teague, Paula J; Links, Jonathan M; Kaminsky, Michael J

    2007-01-01

Traditionally, faith communities have served important roles in helping survivors cope in the aftermath of public health disasters. However, the provision of optimally effective crisis intervention services for persons experiencing acute or prolonged emotional trauma following such incidents requires specialized knowledge, skills, and abilities. Supported by a federally funded grant, several academic health centers and faith-based organizations collaborated to develop a training program in Psychological First Aid (PFA) and disaster ministry for members of the clergy serving urban minorities and Latino immigrants in Baltimore, Maryland. This article describes the one-day training curriculum composed of four content modules: Stress Reactions of Mind-Body-Spirit, Psychological First Aid and Crisis Intervention, Pastoral Care and Disaster Ministry, and Practical Resources and Self Care for the Spiritual Caregiver. Detailed descriptions of each module are provided, including its purpose; rationale and background literature; learning objectives; topics and sub-topics; and educational methods, materials, and resources. The strengths, weaknesses, and future applications of the training template are discussed in light of participants' subjective reactions to the training.

  5. A decentralized training algorithm for Echo State Networks in distributed big data applications.

    PubMed

    Scardapane, Simone; Wang, Dianhui; Panella, Massimo

    2016-06-01

    The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.
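Since an Echo State Network readout is a linear least-squares problem, the decentralized training idea can be illustrated with consensus ADMM over local reservoir states. The all-to-all averaging and parameter values below are simplifying assumptions for the sketch, not the paper's exact protocol (which uses only neighbor-to-neighbor exchanges):

```python
import numpy as np

def admm_consensus_readout(H_list, y_list, lam=0.01, rho=1.0, iters=200):
    """Consensus ADMM for a shared linear readout: agent k holds local
    reservoir states H_k and targets y_k; only weight estimates (never
    training patterns) are exchanged between agents."""
    n = H_list[0].shape[1]
    K = len(H_list)
    w = [np.zeros(n) for _ in range(K)]   # local weight estimates
    u = [np.zeros(n) for _ in range(K)]   # scaled dual variables
    z = np.zeros(n)                       # consensus (global) weights
    for _ in range(iters):
        for k, (H, y) in enumerate(zip(H_list, y_list)):
            # local solve: min 0.5||Hw - y||^2 + (rho/2)||w - z + u_k||^2
            A = H.T @ H + rho * np.eye(n)
            w[k] = np.linalg.solve(A, H.T @ y + rho * (z - u[k]))
        # consensus update: min (lam/2)||z||^2 + (rho/2) sum ||w_k - z + u_k||^2
        z = rho * sum(wk + uk for wk, uk in zip(w, u)) / (lam + K * rho)
        for k in range(K):
            u[k] += w[k] - z
    return z

# Toy example: two agents share a 2-D readout
H1, y1 = np.array([[1., 0.], [0., 1.]]), np.array([1., 2.])
H2, y2 = np.array([[1., 1.]]), np.array([3.])
w_shared = admm_consensus_readout([H1, H2], [y1, y2])
```

The consensus weights converge to the centralized ridge solution computed from the stacked data, which is what makes the decentralized scheme competitive with a fully centralized implementation.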

  6. New Age Teaching: Beyond Didactics

    PubMed Central

    Vlaovic, Peter D.; McDougall, Elspeth M.

    2006-01-01

Widespread acceptance of laparoscopic urology techniques has posed many challenges to training urology residents and enabling postgraduate urologists to acquire often difficult new surgical skills. Several factors in surgical training programs limit the ability to train residents in the operating room, including limited-hours work weeks, increasing demand for operating room productivity, and general public awareness of medical errors. As such, surgical simulation may provide an opportunity to enhance residency experience and training, and to optimize postgraduate acquisition of new skills and maintenance of competency. This review article explains and defines the various levels of validity as they pertain to surgical simulators. The most recently and most comprehensively validated simulators are outlined and summarized. The potential role of surgical simulation in the formative and summative assessment of surgical trainees, as well as in the certification and recertification of postgraduate surgeons, is delineated. Surgical simulation will be an important adjunct to traditional methods of surgical skills training and will allow surgeons to maintain their proficiency in the technically challenging aspects of minimally invasive urologic surgery. PMID:17619704

  7. Evaluation of Semi-supervised Learning for Classification of Protein Crystallization Imagery.

    PubMed

    Sigdel, Madhav; Dinç, İmren; Dinç, Semih; Sigdel, Madhu S; Pusey, Marc L; Aygün, Ramazan S

    2014-03-01

In this paper, we investigate the performance of two wrapper methods for semi-supervised learning algorithms for classification of protein crystallization images with limited labeled images. First, we evaluate the performance of a semi-supervised approach using self-training with naïve Bayesian (NB) and sequential minimal optimization (SMO) as the base classifiers. The confidence values returned by these classifiers are used to select high-confidence predictions for self-training. Second, we analyze the performance of Yet Another Two Stage Idea (YATSI) semi-supervised learning using NB, SMO, multilayer perceptron (MLP), J48, and random forest (RF) classifiers. These results are compared with basic supervised learning using the same training sets. We perform our experiments on a dataset consisting of 2250 protein crystallization images for different proportions of training and test data. Our results indicate that NB and SMO using both self-training and YATSI semi-supervised approaches improve accuracies with respect to supervised learning. On the other hand, MLP, J48, and RF perform better using basic supervised learning. Overall, the random forest classifier yields the best accuracy with supervised learning for our dataset.
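The self-training wrapper can be sketched generically: fit the base classifier, promote high-confidence predictions on unlabeled images into the training set as pseudo-labels, and refit. Here scikit-learn's `GaussianNB` stands in for the paper's Weka-based naïve Bayesian learner, and the 0.9 threshold is an assumed value:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def self_train(clf, X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Wrapper self-training: each round, pool predictions whose
    confidence exceeds `threshold` are added as pseudo-labels and the
    classifier is refit on the enlarged training set."""
    X, y = np.asarray(X_lab, float), np.asarray(y_lab)
    pool = np.asarray(X_unlab, float)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
    return clf

# Toy demo: 10 labeled and 100 unlabeled points from two separated clusters
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 0.5, (5, 2)), rng.normal(4, 0.5, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
model = self_train(GaussianNB(), X_lab, y_lab, X_unlab)
```

YATSI differs from this loop in that it relabels the unlabeled data in a second stage using a nearest-neighbor vote over the first-stage predictions.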

  8. Transfer of training in the development of intracorporeal suturing skill in medical student novices: a prospective randomized trial.

    PubMed

    Muresan, Claude; Lee, Tommy H; Seagull, Jacob; Park, Adrian E

    2010-10-01

    To help optimize the use of limited resources in trainee education, we developed a prospective randomized trial to determine the most effective means of teaching laparoscopic suturing to novices. Forty-one medical students received rudimentary instruction in intracorporeal suturing, then were pretested on a pig enterotomy model. They then were posttested after completion of 1 of 4 training arms: laparoscopic suturing, laparoscopic drills, open suturing, and virtual reality (VR) drills. Tests were scored for speed, accuracy, knot quality, and mental workload (National Aeronautics and Space Administration [NASA] Task Load Index). Paired t tests were used. Task time was improved in all groups except the VR group. Knot quality improved only in the open or laparoscopic suturing groups. Mental workload improved only for those practicing on a physical laparoscopic trainer. For novice trainees, the efficacy of VR training is questionable. In contrast, the other training methods had benefits in terms of time, quality, and perceived workload. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Training feed-forward neural networks with gain constraints

    PubMed

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
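The penalty-term idea can be illustrated on the simplest possible case, a linear model whose gain dy/dx is its slope; the full method applies the same squared-hinge penalty to the input-output derivatives of a feedforward network. All numbers below are illustrative assumptions:

```python
import numpy as np

def fit_with_gain_penalty(x, y, lo, hi, lr=0.02, mu=10.0, steps=2000):
    """Gradient descent on mean squared error plus a squared-hinge
    penalty that activates when the gain w = dy/dx leaves [lo, hi]."""
    w = b = 0.0
    for _ in range(steps):
        e = w * x + b - y
        # derivative of mu * (max(0, w-hi)^2 + max(0, lo-w)^2) w.r.t. w
        pen = 2 * mu * (max(0.0, w - hi) - max(0.0, lo - w))
        w -= lr * (2 * np.mean(e * x) + pen)
        b -= lr * 2 * np.mean(e)
    return w, b

# Data with true slope 3, but the gain is constrained to at most 2:
x = np.linspace(0.0, 1.0, 20)
w, b = fit_with_gain_penalty(x, 3.0 * x, lo=0.0, hi=2.0)
# w settles just above 2: the penalty holds the gain near its bound
```

The balancing problem the abstract mentions is visible even here: the final slope is a compromise between the data term and the penalty strength mu, which is why the authors devise adaptive procedures for weighting the terms.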

  10. 19F Magnetic resonance imaging of perfluorooctanoic acid encapsulated in liposome for biodistribution measurement.

    PubMed

    Kimura, Atsuomi; Narazaki, Michiko; Kanazawa, Yoko; Fujiwara, Hideaki

    2004-07-01

The tissue distribution of perfluorooctanoic acid (PFOA), which is known to show unique biological responses, has been visualized in female mice by (19)F magnetic resonance imaging (MRI), aided by recent advances in microimaging techniques. The chemical-shift-selected fast spin-echo method was applied to acquire in vivo (19)F MR images of PFOA. The in vivo T(1) and T(2) relaxation times of PFOA proved to be extremely short: 140 (+/- 20) ms and 6.3 (+/- 2.2) ms, respectively. To acquire the in vivo (19)F MR images of PFOA, it was necessary to optimize the parameters of signal selection and echo train length. The chemical shift selection was performed effectively by using the (19)F NMR signal of the CF(3) group of PFOA without signal overlap, because the chemical shift difference between the CF(3) signal and its nearest neighbor reaches 14 kHz. The optimal echo train length for efficient (19)F imaging was determined so that the maximum echo time (TE) in the fast spin-echo sequence was comparable to the in vivo T(2) value. By optimizing these parameters, the in vivo (19)F MR image of PFOA could be obtained efficiently in 12 minutes. As a result, the time course of the accumulation of PFOA in the mouse liver was clearly followed in the (19)F MR images. Thus, it was concluded that (19)F MRI can become an effective method for future pharmacological and toxicological studies of perfluorocarboxylic acids.

  11. Grayscale Optical Correlator Workbench

    NASA Technical Reports Server (NTRS)

    Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin

    2006-01-01

    Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
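The OT-MACH filter mentioned above has a standard frequency-domain form; the sketch below uses the common diagonal approximation with a white-noise covariance, and the alpha/beta/gamma weights are illustrative, not GOCWB's actual values:

```python
import numpy as np

def ot_mach_filter(images, alpha=0.1, beta=0.1, gamma=0.8):
    """OT-MACH: H = conj(m) / (alpha*C + beta*D + gamma*S), where m is
    the mean training spectrum, D the mean power spectrum, S the
    spectral variance, and C a white-noise covariance (all diagonal)."""
    X = np.fft.fft2(np.asarray(images, float), axes=(-2, -1))
    m = X.mean(axis=0)
    D = (np.abs(X) ** 2).mean(axis=0)
    S = (np.abs(X - m) ** 2).mean(axis=0)
    C = np.ones_like(D)
    return np.conj(m) / (alpha * C + beta * D + gamma * S)

def correlate(image, H):
    # Correlation plane; for a training-like target the peak sits at zero shift.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Tiny demo: a two-pixel "target" used both for training and testing
img = np.zeros((8, 8))
img[3, 4] = 1.0
img[3, 5] = 0.5
plane = correlate(img, ot_mach_filter([img, img]))
```

Filter optimization in GOCWB then amounts to searching the alpha/beta/gamma trade-off for the best average correct-identification rate over multiple test images.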

  12. A training program for nurse scientists to promote intervention translation.

    PubMed

    Santacroce, Sheila Judge; Leeman, Jennifer; Song, Mi-Kyung

    To reduce the burden of chronic illness, prevention and management interventions must be efficacious, adopted and implemented with fidelity, and reach those at greatest risk. Yet, many research-tested interventions are slow to translate into practice. This paper describes how The University of North Carolina at Chapel Hill School of Nursing's NINR-funded institutional pre- and postdoctoral research-training program is addressing the imperative to speed knowledge translation across the research cycle. The training emphasizes six research methods ("catalysts") to speed translation: stakeholder engagement, patient-centered outcomes, intervention optimization and sequential multiple randomized trials (SMART), pragmatic trials, mixed methods approaches, and dissemination and implementation science strategies. Catalysts are integrated into required coursework, biweekly scientific and integrative seminars, and experiential research training. Trainee and program success is evaluated based on benchmarks applicable to all PhD program students, supplemented by indicators specific to the catalysts. Trainees must also demonstrate proficiency in at least two of the six catalysts in their scholarly products. Proficiency is assessed through their works in progress presentations and peer reviews at T32 integrative seminars. While maintaining the emphasis on theory-based interventions, we have integrated six catalysts into our ongoing research training to expedite the dynamic process of intervention development, testing, dissemination and implementation. Through a variety of training activities, our research training focused on theory-based interventions and the six catalysts will generate future nurse scientists who speed translation of theory-based interventions into practice to maximize health outcomes for patients, families, communities and populations affected by chronic illness. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    PubMed Central

    Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. In the experiments, the proposed method obtained the highest detection rate, with Az = 0.9502 over a training set of 40 images and Az = 0.9583 on a test set of 40 images. In addition, the vessel segmentation results provided an accuracy of 0.944 on the test set of angiograms. PMID:27738422
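A single-scale Gabor kernel of the kind being tuned can be sketched as follows; the parameter names and values are illustrative, and BUMDA's role would be to search this parameter space using the area under the ROC curve as fitness:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: a cosine carrier at orientation
    `theta` under a Gaussian envelope. Wavelength and sigma are the kind
    of free parameters an evolutionary search would optimize."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)   # rotated carrier axis
    env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength)

# One oriented kernel; vessel enhancement would take the maximum filter
# response over a bank of orientations before thresholding.
k0 = gabor_kernel(15, 8.0, 0.0, 3.0)
```

The thresholding step then separates vessel from nonvessel pixels in the combined response map.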

  14. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

To overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and Ant-miner algorithms are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.
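The ELM stage can be sketched in a few lines: random hidden-layer weights are fixed, and only the output weights are solved by least squares. The sizes, sigmoid choice, and toy labels are illustrative; the rule-extraction (IAM) stage is not shown:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random input weights W and biases b stay
    fixed; output weights beta come from a single pseudo-inverse solve."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer activations
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

# Toy stable/unstable-style labels (+1 / -1) for two Gaussian clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
W, b, beta = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
```

In the paper's pipeline, a trained model of this kind labels a large example sample set, from which the ant-colony rule miner then induces human-readable classification rules.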

  15. Spacecraft attitude control using neuro-fuzzy approximation of the optimal controllers

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Woo; Park, Sang-Young; Park, Chandeok

    2016-01-01

    In this study, a neuro-fuzzy controller (NFC) was developed for spacecraft attitude control to mitigate large computational load of the state-dependent Riccati equation (SDRE) controller. The NFC was developed by training a neuro-fuzzy network to approximate the SDRE controller. The stability of the NFC was numerically verified using a Lyapunov-based method, and the performance of the controller was analyzed in terms of approximation ability, steady-state error, cost, and execution time. The simulations and test results indicate that the developed NFC efficiently approximates the SDRE controller, with asymptotic stability in a bounded region of angular velocity encompassing the operational range of rapid-attitude maneuvers. In addition, it was shown that an approximated optimal feedback controller can be designed successfully through neuro-fuzzy approximation of the optimal open-loop controller.

  16. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    PubMed Central

    Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou

    2013-01-01

Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity, and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of the method, indicating that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491

  17. Multi-objective Optimization of Pulsed Gas Metal Arc Welding Process Using Neuro NSGA-II

    NASA Astrophysics Data System (ADS)

    Pal, Kamal; Pal, Surjya K.

    2018-05-01

Weld quality is a critical issue in fabrication industries where products are custom-designed. Multi-objective optimization yields a set of solutions along the Pareto-optimal front. Mathematical regression-model-based optimization methods are often found to be inadequate for highly non-linear arc welding processes. Thus, global evolutionary approaches such as artificial neural networks and genetic algorithms (GA) have been developed. The present work applies the elitist non-dominated sorting GA (NSGA-II) to optimization of the pulsed gas metal arc welding process, using back-propagation neural network (BPNN) models of the weld quality features. The primary objective, maintaining butt-joint weld quality, is the maximization of tensile strength with minimum plate distortion. The BPNN, after adequate training, computes the fitness of each solution, while the NSGA-II algorithm generates the optimum solutions for the two conflicting objectives. Welding experiments were conducted on low-carbon steel using response surface methodology. The Pareto-optimal front, with three ranked solutions, showed no further improvement after the 20th generation and was taken as the best. According to the validated Pareto-optimal solutions, both joint strength and transverse shrinkage improved drastically over the design-of-experiments results.
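The core of NSGA-II's ranking is Pareto dominance over the conflicting objectives (here, tensile strength and negated distortion, both maximized). A minimal sketch of extracting the first non-dominated front, with hypothetical objective values, might look like:

```python
def dominates(a, b):
    """a Pareto-dominates b when a is no worse in every objective and
    strictly better in at least one (all objectives maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def first_front(points):
    # Rank-1 set of the non-dominated sort: no other point dominates these.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (tensile strength, -distortion) candidates:
front = first_front([(5, 1), (3, 3), (1, 5), (2, 2), (4, 0)])
```

NSGA-II repeats this sorting to assign ranks to the whole population, then uses crowding distance to spread solutions along each front.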

  18. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks

    PubMed Central

    Robinson, Y. Harold; Rajaram, M.

    2015-01-01

A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths to solve the link-disjoint path problem in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek nodes with better link quality in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
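The PSO iteration itself is standard: velocities blend inertia with attraction to each particle's best position and to the swarm's best. In EMPSO the fitness would score candidate routes on transmission cost, energy, and traffic ratio; a generic benchmark function and textbook coefficients stand in below:

```python
import numpy as np

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimization of a continuous
    objective f over `dim` dimensions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))        # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pb, pb_f = x.copy(), np.array([f(p) for p in x])   # personal bests
    gb = pb[pb_f.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pb_f
        pb[better], pb_f[better] = x[better], fx[better]
        gb = pb[pb_f.argmin()].copy()
    return gb, float(pb_f.min())

# Sphere benchmark: the swarm should collapse onto the origin
best, val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
```

In the routing application, each particle would encode a candidate path (or the CTRNN parameters that generate one) rather than a point in Euclidean space.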

  19. Association of Program Directors in Vascular Surgery (APDVS) survey of program selection, knowledge acquisition, and education provided as viewed by vascular trainees from two different training paradigms.

    PubMed

    Dalsing, Michael C; Makaroun, Michel S; Harris, Linda M; Mills, Joseph L; Eidt, John; Eckert, George J

    2012-02-01

Methods of learning may differ between generations, levels of training, and training paradigms. To optimize education, it is important to optimize training designs, and the perspective of those being trained can aid in this quest. The Association of Program Directors in Vascular Surgery leadership sent a survey to all vascular surgical trainees (integrated [0/5]; independent current trainees and new graduates [5 + 2]) addressing various aspects of the educational experience. Of 412 surveys sent, 163 (∼40%) responded: 46 integrated, 96 fellows, and 21 graduates. The survey was completed by 52% of the integrated residents, 59% of the independent residents, and 20% of the graduates. When choosing a program for training, the integrated residents are most concerned with program atmosphere and the independent residents with total clinical volume. Concerns after training centered on thoracic and thoracoabdominal aneurysm procedures and on business aspects (40%-50% of integrated trainees; 60% of fellows/graduates). Integrated trainees found periprocedural discussion the best feedback (79%), with 9% favoring written test review. Surgical training and vascular laboratory and venous training were judged "just right" by 87% and ∼71%, respectively, whereas business aspects needed more emphasis (65%-70%). Regarding the 80-hour workweek, 82% felt it prevented fatigue, and 24% thought it was detrimental to patient care. Independent program trainees also found periprocedural discussion the best feedback (71%), with 12% favoring written test review. Surgical training and vascular laboratory/venous training were judged "just right" by 87% and 60%-70%, respectively, whereas business aspects needed more emphasis (∼65%-70%). Regarding the 80-hour workweek, 62% felt it was detrimental to patient care, and 42% felt it prevented fatigue. A supportive environment and adequate clinical volume will attract trainees to a program.
For "an urgent need to know," the integrated trainees are especially turning to online texts rather than traditional textbooks, which suggests an opportunity for a shift in educational focus. Point-of-care is the best time for education and feedback, suggesting a continued need for dedicated faculty. The business side of training is underserved and should be addressed. Copyright © 2012. Published by Mosby, Inc.

  20. Semi-supervised learning for genomic prediction of novel traits with small reference populations: an application to residual feed intake in dairy cattle.

    PubMed

    Yao, Chen; Zhu, Xiaojin; Weigel, Kent A

    2016-11-07

    Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model, and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, a SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions that were derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. 
Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies that are comparable to those obtained with models using larger training sets that include only animals with measured phenotypes. Semi-supervised learning can be helpful for genomic prediction of novel traits, such as RFI, for which the size of reference population is limited, in particular, when the animals to be predicted and the animals in the reference population originate from the same herd-environment.

  1. Improving Pattern Recognition and Neural Network Algorithms with Applications to Solar Panel Energy Optimization

    NASA Astrophysics Data System (ADS)

    Zamora Ramos, Ernesto

Artificial intelligence is a big part of automation, and with today's technological advances, artificial intelligence has taken great strides towards positioning itself as the technology of the future to control, enhance, and perfect automation. Computer vision, which includes pattern recognition, classification, and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we present novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions include an improved non-linear pre-processing technique to enhance poorly illuminated images, based on modifications to standard histogram equalization. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many other areas. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for capturing solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons, and convolutional neural networks. Our research with neural networks encountered considerable difficulty with hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, and the initialization of trainable parameters (or weights), are chosen via a trial-and-error process guided by educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate which of a group of candidate strategies would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure for comparing initialization strategies so that the best among them can be selected beforehand, without having to complete multiple training sessions for each candidate strategy and compare final results.
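The idea of quantitatively ranking initialization strategies can be sketched by measuring short-run training loss over several seeds per candidate. The tiny network, toy task, and candidate scales below are illustrative stand-ins for the thesis' actual metric:

```python
import numpy as np

def short_run_loss(scale, seed, steps=200, lr=0.5):
    """Train a tiny 1-hidden-layer net briefly on a fixed toy task and
    return the final cross-entropy: a cheap proxy for how quickly a given
    weight-initialization scale lets training make progress."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 4))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)     # XOR-like labels
    W1 = rng.normal(0, scale, (4, 8)); b1 = np.zeros(8)
    w2 = rng.normal(0, scale, 8); b2 = 0.0
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-np.clip(h @ w2 + b2, -500, 500)))
        g = (p - y) / len(y)                      # dLoss/dlogit
        gh = np.outer(g, w2) * (1 - h ** 2)       # backprop through tanh
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
        w2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-np.clip(h @ w2 + b2, -500, 500)))
    eps = 1e-9
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def rank_inits(scales, n_seeds=10):
    # Mean short-run loss per candidate scale; lower suggests the strategy
    # reaches good accuracy faster with high probability.
    return {s: float(np.mean([short_run_loss(s, k) for k in range(n_seeds)]))
            for s in scales}

# A moderate scale should beat a saturating one:
scores = rank_inits([0.5, 100.0])
```

Averaging over seeds is what turns this into a probabilistic comparison rather than a single anecdotal run.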

  2. Application of a Web-Enabled Leg Training System for the Objective Monitoring and Quantitative Analysis of Exercise-Induced Fatigue

    PubMed Central

    Dedova, Irina V

    2016-01-01

    Background: Sustained cardiac rehabilitation is a key intervention in the prevention and treatment of many human diseases. However, implementation of exercise programs can be challenging because of early fatigability in patients with chronic diseases, overweight individuals, and older people. Current methods of fatigability assessment are based on subjective self-reporting, such as ratings of perceived exertion, or require specialized laboratory conditions and sophisticated equipment. A practical approach allowing objective measurement of exercise-induced fatigue would be useful for optimizing the sustained delivery of cardiac rehabilitation and improving patient outcomes. Objectives: The objective of this study was to develop and validate an innovative approach allowing objective assessment of exercise-induced fatigue using a Web-enabled leg rehabilitation system. Methods: MedExercise training devices were equipped with wireless temperature sensors to monitor their usage by the temperature rise in the resistance unit (Δt°). Since Δt° correlated with the intensity and duration of exercise, this parameter was used to characterize participants’ leg work output (LWO). Personal smart devices, such as laptop computers with wireless gateways, and the relevant software were used for monitoring of self-controlled training. Connecting the smart devices to the Internet and cloud-based software allowed remote monitoring of LWO in participants training at home. Heart rate (HR) was measured by fingertip pulse oximeters simultaneously with Δt° in 7 healthy volunteers. Results: Exercise-induced fatigue manifested as a decline in LWO and/or a rising HR, which could be observed in real time. Conversely, training at steady-state LWO and HR for the entire duration of an exercise bout was considered fatigue-free. 
The recommended daily amount of physical activity was expressed as the individual Δt° value reached during 30 minutes of fatigue-free exercise of moderate intensity, resulting in a mean of 8.1°C (SD 1.5°C, N=7). These Δt° values were applied as thresholds for sending automatic notifications upon completion of the personalized LWO dose during self-controlled training at home. While the mean time to complete an LWO dose was 30.3 (SD 4.1) minutes (n=25), analysis of the times required to reach the same Δt° by the same participant revealed that longer durations were due to fatigability, manifesting as reduced LWO at the later stages of training bouts. Typically, exercising in the afternoon was associated with no fatigue, while the longer durations of evening sessions suggested a diurnal fatigability pattern. Conclusions: This pilot study demonstrated the feasibility of objective monitoring of fatigue development in real time and online, as well as retrospective quantification of fatigability by the duration of training bouts needed to reach the same exercise dose. This simple method of leg training at home, accompanied by routine fatigue monitoring, might be useful for optimizing exercise interventions in primary care and special populations. PMID:27549345

  3. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    NASA Astrophysics Data System (ADS)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and the optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments), which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large, and each of the geostatistical realizations contains approximately 45 million voxels of size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different simulation strategies based on data quality, and develop a novel method to effectively reproduce observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications of the training image produce significant changes in the simulations. This study thus shows how to account for both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. 
We present a practical workflow to build the training image and effectively handle different types of input information to perform large-scale geostatistical modelling.

  4. Thermal and TEC anomalies detection using an intelligent hybrid system around the time of the Saravan, Iran, (Mw = 7.7) earthquake of 16 April 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2014-02-01

    A powerful earthquake of Mw = 7.7 struck the Saravan region (28.107° N, 62.053° E) in Iran on 16 April 2013. The selection of an automated anomaly detection method for the nonlinear time series of earthquake precursors remains an attractive and challenging task. Artificial Neural Networks (ANN) and Particle Swarm Optimization (PSO) have shown strong potential for accurate time series prediction. This paper presents the first study integrating the ANN and PSO methods in earthquake precursor research to detect unusual variations of the thermal and total electron content (TEC) seismo-ionospheric anomalies induced by the strong Saravan earthquake. In this study, to overcome stagnation in local minima during ANN training, PSO is used as the optimization method instead of traditional training algorithms. The proposed hybrid method detected a considerable number of anomalies 4 and 8 days preceding the earthquake. Since, in this case study, ionospheric TEC anomalies induced by seismic activity can be confused with background fluctuations due to solar activity, a multi-resolution time series processing technique based on the wavelet transform was applied to the TEC signal variations. Because agreement among the results of several robust methods is a convincing indication of a method's efficiency, the thermal and TEC anomalies detected using the ANN + PSO method were compared with the anomalies observed by the mean, median, wavelet, Kalman filter, Auto-Regressive Integrated Moving Average (ARIMA), Support Vector Machine (SVM) and Genetic Algorithm (GA) methods. The results indicate that the ANN + PSO method is quite promising and deserves serious attention as a new tool for detecting thermal and TEC seismo-ionospheric anomalies.
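    The hybrid scheme described above, using PSO in place of gradient-based training to avoid stagnation in local minima, can be sketched in a few lines. The tiny 2-2-1 network, the XOR task, and the swarm coefficients below are illustrative assumptions for the sketch, not the configuration used in the paper:

```python
import math
import random

random.seed(0)

# Toy task: XOR, where plain gradient descent from a bad start can stall
# in a local minimum; PSO instead searches the weight space globally.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A 2-2-1 network; w packs 9 numbers: two tanh hidden units
    # (weights + bias each) and one sigmoid output unit (weights + bias).
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    z = max(-60.0, min(60.0, w[6] * h0 + w[7] * h1 + w[8]))  # overflow guard
    return 1.0 / (1.0 + math.exp(-z))

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def pso_train(n_particles=30, iters=200, dim=9):
    pos = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + attraction to personal and global bests.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest

weights = pso_train()
print(round(loss(weights), 3))
```

    No derivative of the loss is ever taken: the swarm only evaluates the loss, which is what lets it step over the flat or deceptive regions where backpropagation stagnates.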

  5. Measuring the influence of a mental health training module on the therapeutic optimism of advanced nurse practitioner students in the United Kingdom.

    PubMed

    Hemingway, Steve; Rogers, Melanie; Elsom, Stephen

    2014-03-01

    To evaluate the influence of a mental health training module on the therapeutic optimism of advanced nurse practitioner (ANP) students in primary care (family practice). Three cohorts of ANPs who undertook a Mental Health Problems in Primary Care Module as part of their MSc ANP (primary care) run by the University of Huddersfield completed the Elsom Therapeutic Optimism Scale (ETOS) in a pre- and post-format. The ETOS is a 10-item, self-administered scale that has previously been used to evaluate therapeutic optimism in mental health professionals. All three cohorts who completed the scale showed an improvement in their therapeutic optimism scores. Given the detrimental effect of stigma on people diagnosed with a mental health problem, ANPs who become more mental health literate through education and training gain the skills and confidence to engage with, and inspire hope in, people diagnosed with mental health problems. ©2013 The Author(s) ©2013 American Association of Nurse Practitioners.

  6. Lactate Threshold as a Measure of Aerobic Metabolism in Resistance Exercise.

    PubMed

    Domínguez, Raúl; Maté-Muñoz, José Luis; Serra-Paya, Noemí; Garnacho-Castaño, Manuel Vicente

    2018-02-01

    In resistance training, load intensity is usually prescribed as a percentage of the one-repetition maximum (% of 1RM) or as the maximum number of possible repetitions. Some studies have proposed the lactate threshold (LT) intensity as an optimal approach for concurrent training of cardiorespiratory endurance and muscle strength, as well as an alternative for prescribing resistance training. The objective of the present study was to analyze the results obtained in research evaluating the use of the LT in resistance training. A keyword and search tree strategy identified 14 relevant articles in the Dialnet, Elsevier, Medline, Pubmed, Scopus and Web of Science databases. Based on the studies analyzed, the conclusion was that the LT in resistance exercises can be determined either by mathematical methods or by visual inspection of graphical plots. Another possibility is to measure the intensity at which the LT might coincide with the first ventilatory threshold (VT1). Since performing an exercise session at one's LT intensity has been shown to accelerate the cardiorespiratory response and induce neuromuscular fatigue, this intensity could be used to set the training load in a resistance training program. © Georg Thieme Verlag KG Stuttgart · New York.
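    One common "mathematical method" for locating a lactate threshold is a two-segment piecewise-linear fit, taking the breakpoint that minimizes the total squared error of the two fitted lines. The workload and lactate values below are hypothetical, and the exhaustive breakpoint search is a minimal sketch rather than any specific published protocol:

```python
# Hypothetical lactate readings (mmol/L) at increasing workloads (watts).
loads = [50, 100, 150, 200, 250, 300, 350]
lactate = [1.0, 1.1, 1.2, 1.3, 2.0, 3.5, 5.8]

def linfit_sse(xs, ys):
    # Least-squares line through (xs, ys); returns the sum of squared residuals.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def lactate_threshold(xs, ys):
    # Try every interior breakpoint; the best two-segment fit marks the LT.
    best = min(range(2, len(xs) - 1),
               key=lambda k: linfit_sse(xs[:k + 1], ys[:k + 1])
                             + linfit_sse(xs[k:], ys[k:]))
    return xs[best]

print(lactate_threshold(loads, lactate))
```

    Visual inspection of a lactate-vs-workload plot approximates the same decision by eye; the fit simply makes the breakpoint choice reproducible.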

  7. SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.

    PubMed

    Lee, Hyunyeol; Park, Jaeseok

    2013-07-01

    Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal fluid attenuated brain images without long inversion recovery preparation. Despite the advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulations over the two acquisitions. The purpose of this work is to develop a signal-to-noise ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution while those in the second acquisition are calculated using a two-step pseudo-steady state signal transition with a high flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signals in both the acquisitions. Low spatial frequency signals are sampled during the high flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and optimize imaging parameters. In vivo studies demonstrate that compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields 74% increase in apparent signal-to-noise ratio for gray matter and 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.

  8. Prediction of near-surface soil moisture at large scale by digital terrain modeling and neural networks.

    PubMed

    Lavado Contador, J F; Maneta, M; Schnabel, S

    2006-10-01

    The capability of Artificial Neural Network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easy-to-obtain digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into the factors governing soil moisture distribution, and also to optimize the data sampling scheme by finding the optimum size of the training data set. The results suggest the efficiency of the method in forecasting soil moisture, its value as a tool to assess the optimum number of field samples, and the importance of the selected variables in explaining the final map obtained.

  9. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods that leverage the tissue-specificity of DNA methylation to deconvolute the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of the cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. When compared with existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038) and improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. 
In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified here delivered outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
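    Downstream of the library selection that IDOL optimizes, cell mixture deconvolution itself amounts to constrained regression of the mixture's methylation profile onto cell-type reference profiles. The sketch below uses made-up beta values, only two cell types, and a dense grid scan over the simplex; real deconvolution handles many cell types, typically with quadratic programming:

```python
# Reference methylation profiles (beta values at 4 hypothetical CpGs) for
# two leukocyte types, and a mixture measured from whole blood.
ref_a = [0.9, 0.8, 0.1, 0.2]
ref_b = [0.1, 0.2, 0.9, 0.7]

def deconvolve(mix, a, b, steps=1000):
    # Constrained least squares on the 2-type simplex: find f in [0, 1]
    # minimizing || mix - (f*a + (1-f)*b) ||^2 via a dense grid scan.
    best_f, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        f = i / steps
        err = sum((m - (f * x + (1 - f) * y)) ** 2
                  for m, x, y in zip(mix, a, b))
        if err < best_err:
            best_f, best_err = f, err
    return best_f

# A 30/70 in-silico mixture should be recovered almost exactly.
mix = [0.3 * x + 0.7 * y for x, y in zip(ref_a, ref_b)]
print(deconvolve(mix, ref_a, ref_b))
```

    The quality of the recovered fractions hinges on how well the chosen CpGs separate the cell types, which is exactly the property the IDOL library search optimizes.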

  10. Optimization of visual training for full recovery from severe amblyopia in adults

    PubMed Central

    Eaton, Nicolette C.; Sheehan, Hanna Marie

    2016-01-01

    The severe amblyopia induced by chronic monocular deprivation is highly resistant to reversal in adulthood. Here we use a rodent model to show that recovery from deprivation amblyopia can be achieved in adults by a two-step sequence, involving enhancement of synaptic plasticity in the visual cortex by dark exposure followed immediately by visual training. The perceptual learning induced by visual training contributes to the recovery of vision and can be optimized to drive full recovery of visual acuity in severely amblyopic adults. PMID:26787781

  11. Optimization of Training Sets For Neural-Net Processing of Characteristic Patterns From Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J. (Inventor)

    2006-01-01

    An artificial neural network is disclosed that processes holography-generated characteristic patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets so as to optimally train feed-forward neural networks to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine manipulates input pixels so that they are scaled according to their location in an intensity range rather than their position in the characteristic pattern.

  12. Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.

  13. The Effects of an Emotion Strengthening Training Program on the Optimism Level of Nurses

    ERIC Educational Resources Information Center

    Balci Celik, Seher

    2008-01-01

    The aim of this study is to investigate the effects of emotion strengthening as a training programme for optimism in nurses. The experimental and control groups of this research together comprised 20 nurses. A pre-test/post-test research model with a control group was used. Nurses' optimism levels were measured by…

  14. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

    A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of the training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated to improve Nearest Neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrated that the classification error can be reduced by 8% if the method proposed in this paper is chosen instead of the conventional method. The results show that by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
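    The modified Euclidean metric above, i.e. a Euclidean distance computed after a Box-Cox power transformation of the (positive) range-profile amplitudes, can be sketched for nearest-neighbor classification as follows. The profiles, labels, and the choice lam = 0.5 are illustrative assumptions, not the paper's data or tuned parameter:

```python
import math

def box_cox(x, lam):
    # Box-Cox power transform of a positive value; lam is the shape
    # parameter (lam = 1 is essentially the identity, lam = 0 is log).
    return (x ** lam - 1) / lam if lam != 0 else math.log(x)

def modified_dist(u, v, lam):
    # "Modified Euclidean" metric: Euclidean distance in Box-Cox space.
    return math.sqrt(sum((box_cox(a, lam) - box_cox(b, lam)) ** 2
                         for a, b in zip(u, v)))

def nearest_neighbor(query, profiles, labels, lam=0.5):
    i = min(range(len(profiles)),
            key=lambda k: modified_dist(query, profiles[k], lam))
    return labels[i]

# Toy range profiles (positive amplitudes) for two aircraft classes.
profiles = [[1.0, 4.0, 9.0], [1.1, 4.2, 8.8], [9.0, 4.0, 1.0], [8.7, 3.9, 1.2]]
labels = ["A", "A", "B", "B"]
print(nearest_neighbor([1.05, 4.1, 8.9], profiles, labels))
```

    In practice lam would be chosen (e.g. by cross-validation) to minimize classification error, which is the sense in which the metric is "optimized".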

  15. Cell transmission model of dynamic assignment for urban rail transit networks.

    PubMed

    Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian

    2017-01-01

    For an urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing the allocation of space-time resources. To obtain the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, a cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. An efficient method is then designed to solve the shortest path for an urban rail network, which decreases the computing cost of solving the cell transmission model. The instantaneous dynamic user-optimal state can be reached with the method of successive averages. Many evaluation indexes of passenger flow can be generated, providing effective support for the optimization of train schedules and the capacity evaluation of urban rail transit networks. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
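    The method of successive averages mentioned above can be illustrated on the smallest possible network: two parallel routes with flow-dependent travel times. The linear cost functions and the demand value below are hypothetical, not taken from the study:

```python
def msa_assignment(demand=1000.0, iters=200):
    # Two parallel routes with linear flow-dependent travel times.
    def t1(f):  # route 1: fast free-flow, congestion-sensitive
        return 10.0 + 0.01 * f

    def t2(f):  # route 2: slower free-flow, higher capacity
        return 15.0 + 0.005 * f

    f1 = demand  # start with all demand on route 1
    for k in range(1, iters + 1):
        # All-or-nothing auxiliary assignment onto the currently cheaper route.
        aux = demand if t1(f1) <= t2(demand - f1) else 0.0
        f1 += (aux - f1) / k  # method of successive averages, step size 1/k
    return f1, demand - f1

f1, f2 = msa_assignment()
# At user equilibrium both used routes have (near-)equal travel times.
print(round(f1, 1), round(f2, 1))
```

    The 1/k step size guarantees that the oscillation between all-or-nothing assignments dies out, driving the split toward the user-optimal state (here, flows equalizing the two travel times).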

  16. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
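    The two-level structure above pairs a fixed orthonormal basis with atoms learned from training data, and greedy sparse coding then selects atoms from the combined dictionary. The sketch below runs matching pursuit in R^4 with hand-picked stand-in atoms for the "learned" level; the paper's dictionaries and coding scheme are far larger and more refined:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two-level dictionary over R^4: level 1 is a fixed orthonormal (standard)
# basis, level 2 holds "learned" atoms (hypothetical values standing in
# for atoms learned from training data).
level1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
level2 = [normalize([1, 1, 1, 1]), normalize([1, -1, 1, -1])]
dictionary = [normalize(a) for a in level1] + level2

def matching_pursuit(signal, atoms, n_iter=3):
    # Greedy sparse coding: repeatedly subtract the best-correlated atom.
    residual = signal[:]
    code = []
    for _ in range(n_iter):
        best = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[best])
        code.append((best, c))
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return code, residual

signal = [2.0, 2.0, 2.0, 2.0]  # exactly the first level-2 atom, scaled
code, residual = matching_pursuit(signal, dictionary)
print(code[0][0], round(max(abs(r) for r in residual), 6))
```

    A signal matching a learned atom is captured in one coefficient instead of being smeared across the fixed basis, which is the speed and sparsity advantage the two-level design targets.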

  17. Detection of Delamination in Concrete Bridge Decks Using Mfcc of Acoustic Impact Signals

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Harichandran, R. S.; Ramuhalli, P.

    2010-02-01

    Delamination of the concrete cover is a commonly observed form of damage in concrete bridge decks. Delamination is typically initiated by corrosion of the upper reinforcing bars and promoted by freeze-thaw cycling and traffic loading. Its detection is important for bridge maintenance, and acoustic non-destructive evaluation (NDE) is widely used due to its low cost, speed, and easy implementation. In traditional acoustic approaches, the inspector sounds the surface of the deck by impacting it with a hammer or bar, or by dragging a chain, and assesses delamination by the "hollowness" of the sound. This kind of detection is subjective and requires extensive training. To improve performance, this paper proposes an objective method for delamination detection. In this method, mel-frequency cepstral coefficients (MFCC) of the signal are extracted, and some MFCC are selected as features using a mutual information criterion. Finally, the selected features are used to train a classifier, which is subsequently used for detection; in this work, a simple quadratic Bayesian classifier is used. Different numbers of features were used to compare the performance of the detection method. The results show that performance first increases with the number of features but then decreases beyond an optimal value. The optimal number of features based on the recorded signals is four, and the mean error rate is only 3.3% when four features are used. The proposed algorithm therefore has sufficient accuracy to be used in field detection.
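    The final stage of the pipeline above, a quadratic Bayesian classifier over selected MFCC features, reduces to fitting a Gaussian per class and comparing log-likelihoods. This sketch assumes diagonal covariances, equal priors, and invented 2-D feature values; the paper's features, covariance structure, and data differ:

```python
import math

def fit_gaussian(samples):
    # Per-feature mean and variance for one class (diagonal covariance).
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    var = [sum((s[j] - mean[j]) ** 2 for s in samples) / n for j in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mean, var))

# Hypothetical 2-D feature vectors (e.g. two selected MFCCs) per class.
solid = [[1.0, 0.2], [1.2, 0.1], [0.9, 0.3], [1.1, 0.2]]
delam = [[3.0, 1.0], [3.2, 1.2], [2.8, 0.9], [3.1, 1.1]]
models = {"solid": fit_gaussian(solid), "delaminated": fit_gaussian(delam)}

def classify(x):
    # Equal priors: pick the class with the larger Gaussian log-likelihood.
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

print(classify([2.9, 1.0]))
```

    Because each class has its own variances, the resulting decision boundary is quadratic in the features, hence "quadratic" Bayesian classifier.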

  18. Protein-protein interaction site predictions with minimum covariance determinant and Mahalanobis distance.

    PubMed

    Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng

    2017-11-21

    Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites, which limits prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces. Such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method, exploiting its ability to remove outliers, to refine the training data and build a predictor with better performance. To predict test data in practice, a method based on the Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-out validation and an independent test, our method achieved higher performance after the Mahalanobis distance screening, as measured by the Matthews correlation coefficient (MCC), although only part of the test data could be predicted. These results indicate that data refinement is an efficient approach to improving protein-protein interaction site prediction. By further optimizing our method, we hope to develop predictors with better performance and a wider range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.
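    The Mahalanobis-distance screening step described above can be sketched as follows: fit a mean and covariance to the training features and accept a test point only if its distance to that cloud is below a threshold. This sketch uses the classical (non-robust) mean and covariance with invented 2-D data and an arbitrary threshold; MCD instead estimates them robustly on an outlier-free subset (e.g. scikit-learn's MinCovDet):

```python
import math

def mahalanobis2d(x, mean, cov):
    # Mahalanobis distance in 2-D using an explicit 2x2 matrix inverse.
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

def fit(points):
    # Classical mean and covariance of a 2-D point cloud.
    n = len(points)
    mean = [sum(p[0] for p in points) / n, sum(p[1] for p in points) / n]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in points) / n
            for j in range(2)] for i in range(2)]
    return mean, cov

# Hypothetical training features; screening keeps only test points whose
# Mahalanobis distance to the training cloud is below a threshold.
train = [[1.0, 1.1], [1.2, 0.9], [0.8, 1.0], [1.1, 1.2], [0.9, 0.8]]
mean, cov = fit(train)

def accept(x, thr=3.0):
    return mahalanobis2d(x, mean, cov) < thr

print(accept([1.0, 1.0]), accept([5.0, 5.0]))
```

    Points rejected by the screen are the ones the predictor is not expected to handle reliably, which is why only part of the test data gets predicted.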

  19. A Standardized System of Training Intensity Guidelines for the Sports of Track and Field and Cross Country

    ERIC Educational Resources Information Center

    Belcher, Christopher P.; Pemberton, Cynthia Lee A.

    2012-01-01

    Accurate quantification of training intensity is an essential component of a training program (Rowbottom, 2000). A training program designed to optimize athlete performance abilities cannot be practically planned or implemented without a valid and reliable indication of training intensity and its effect on the physiological mechanisms of the human…

  20. Model-based strategy for cell culture seed train layout verified at lab scale.

    PubMed

    Kern, Simon; Platas-Barradas, Oscar; Pörtner, Ralf; Frahm, Björn

    2016-08-01

    Cell culture seed trains, the generation of a sufficient viable cell number for the inoculation of the production-scale bioreactor starting from incubator scale, are time- and cost-intensive. Accordingly, a seed train offers potential for optimization regarding its layout and the corresponding proceedings. A tool has been developed to determine the optimal points in time for cell passaging from one scale into the next, and it has been applied to two different cell lines at lab scale, AGE1.HN AAT and CHO-K1. For evaluation, the experimental realization of each seed train was compared with its layout. In the case of the AGE1.HN AAT cell line, the results were also compared to the formerly manually designed seed train. The tool provides the same seed train layout based on the data of only two batches.

  1. Surgical simulation: Current practices and future perspectives for technical skills training.

    PubMed

    Bjerrum, Flemming; Thomsen, Ann Sofia Skou; Nayahangan, Leizl Joy; Konge, Lars

    2018-06-17

    Simulation-based training (SBT) has become a standard component of modern surgical education, yet successful implementation of evidence-based training programs remains challenging. In this narrative review, we use Kern's framework for curriculum development to describe where we are now and what lies ahead for SBT within surgery, with a focus on technical skills in operative procedures. Despite principles for optimal SBT (proficiency-based, distributed, and deliberate practice) having been identified, massed training with fixed time intervals or a fixed number of repetitions is still used extensively, and simulators are generally underutilized. SBT should be part of surgical training curricula, including theoretical, technical, and non-technical skills, and be based on relevant needs assessments. Furthermore, training should follow evidence-based theoretical principles for optimal training, and the effect of training needs to be evaluated using relevant outcomes. There is a larger, still unrealized potential of surgical SBT, which may be realized in the near future as simulator technologies evolve, more evidence-based training programs are implemented, and cost-effectiveness and impact on patient safety are clearly demonstrated.

  2. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks (DNNs) using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-number m-dimensional vectors that serve as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm searches for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm trains the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run with more epochs on the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. Experiments on hand-written character and biological activity prediction datasets show that the DNN classifiers trained with the network configurations given by the final PSO solutions, used to construct an ensemble model and individual classifiers, outperform a random search approach in terms of generalization performance. The proposed approach can therefore be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
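    The search loop described above, particles encoding a network configuration as a real vector with a cheap training run as the fitness function, can be sketched with a synthetic stand-in for the validation loss. The two "hyperparameters" (hidden units and log learning rate), the loss surface, and the swarm coefficients are all assumptions made for illustration:

```python
import random

random.seed(1)

def val_loss(hidden_units, log10_lr):
    # Stand-in for "train briefly, then measure validation loss": a synthetic
    # surface whose optimum sits at 64 units and a learning rate of 1e-3.
    return (hidden_units - 64) ** 2 / 1000 + (log10_lr + 3) ** 2

def decode(p):
    # Particles live in a real-valued space; round to a valid configuration.
    return max(1, round(p[0])), p[1]

def pso_search(n=20, iters=100):
    pos = [[random.uniform(1, 256), random.uniform(-6, 0)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [val_loss(*decode(p)) for p in pos]
    gi = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = val_loss(*decode(pos[i]))
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest)

units, log_lr = pso_search()
print(units, round(log_lr, 2))
```

    In the paper's setting each fitness evaluation is a short gradient-descent training run, and the best configurations found are then trained for many more epochs.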

  3. Preventing overtraining in athletes in high-intensity sports and stress/recovery monitoring.

    PubMed

    Kellmann, M

    2010-10-01

    In sports, the importance of optimizing the recovery-stress state is critical. Effective recovery from the intense training loads often faced by elite athletes can determine sporting success or failure. In recent decades, athletes, coaches, and sport scientists have been keen to find creative new methods for improving the quality and quantity of training for athletes. These efforts have consistently faced barriers, including overtraining, fatigue, injury, illness, and burnout. Physiological and psychological limits dictate a need for research that addresses the avoidance of overtraining, maximizes recovery, and successfully negotiates the fine line between high and excessive training loads. Monitoring instruments like the Recovery-Stress Questionnaire for Athletes can assist with this research by providing a tool to assess athletes' perceived state of recovery. This article will highlight the importance of recovery for elite athletes and provide an overview of monitoring instruments. © 2010 John Wiley & Sons A/S.

  4. Motor imagery: lessons learned in movement science might be applicable for spaceflight

    PubMed Central

    Bock, Otmar; Schott, Nadja; Papaxanthis, Charalambos

    2015-01-01

    Before participating in a space mission, astronauts undergo parabolic-flight and underwater training to facilitate their subsequent adaptation to weightlessness. Unfortunately, similar training methods can’t be used to prepare re-adaptation to planetary gravity. Here, we propose a quick, simple and inexpensive approach that could be used to prepare astronauts both for the absence and for the renewed presence of gravity. This approach is based on motor imagery (MI), a process in which actions are produced in working memory without any overt output. Training protocols based on MI have repeatedly been shown to modify brain circuitry and to improve motor performance in healthy young adults, healthy seniors and stroke victims, and are routinely used to optimize performance of elite athletes. We propose to use similar protocols preflight, to prepare for weightlessness, and late inflight, to prepare for landing. PMID:26042004

  5. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  6. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    PubMed

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between an image's visual features and its semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, and multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in the different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Finally, we conduct alternating optimization to train the ranking model, which is then used to rank new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.

  7. Use of Multivariate Techniques to Validate and Improve the Current USAF Pilot Candidate Selection Model

    DTIC Science & Technology

    2003-03-01

    organizations. Reducing attrition rates through optimal selection decisions can “reduce training cost, improve job performance, and enhance...capturing the weights for use in the SNR method is not straightforward. A special VBA application had to be written to capture and organize the network...before the VBA application can be used. Appendix D provides the VBA code used to import and organize the network weights and input standardization

  8. Intelligent Tutoring Methods for Optimizing Learning Outcomes with Embedded Training

    DTIC Science & Technology

    2009-01-01

    used to stimulate learning activities, from practice events with real-time coaching, to exercises with after action review. Particularly with free-play virtual...variations of correct performance can be prohibitive in free-play scenarios, and so for such conditions this has led to a state-based approach for...tiered logic that evaluates team member R for proper execution during free-play execution. In the first tier, the evaluation must know when it

  9. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
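    The per-pixel model underlying this abstract can be stated compactly: for a Lambertian pixel lit by one known directional light per spectral channel, the three channel intensities form a 3x3 linear system whose solution is the albedo-scaled normal. The following is a minimal pure-Python sketch of that inverse step only (the identity light matrix in the demo is a made-up example; real setups entangle camera response and albedo, which is why the paper bootstraps the optimization with a CNN-predicted initial normal):

```python
def solve3(L, b):
    # Solve the 3x3 system L @ n = b by Cramer's rule.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(L)
    result = []
    for j in range(3):
        M = [row[:] for row in L]
        for i in range(3):
            M[i][j] = b[i]          # replace column j with b
        result.append(det(M) / d)
    return result

def pixel_normal(intensity_rgb, light_dirs):
    """Recover a unit surface normal for one pixel from its three channel
    intensities, assuming a Lambertian surface and one known light
    direction per spectral channel (the rows of light_dirs)."""
    n = solve3(light_dirs, intensity_rgb)   # albedo-scaled normal
    norm = sum(c * c for c in n) ** 0.5
    return [c / norm for c in n]
```

Normalizing at the end discards the albedo scale, which is exactly the ambiguity an initial depth/normal estimate helps resolve in the paper's iterative refinement.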

  10. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The processing of information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations, particularly for analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling such data. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid having to arbitrarily define the structure of a network. This kind of extension of the training process is exemplified by the Group Method of Data Handling (GMDH) algorithm, which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  11. Method for identifying known materials within a mixture of unknowns

    DOEpatents

    Wagner, John S.

    2000-01-01

    One or both of two methods and systems are used to determine the concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves with the mixture. One technique uses a multivariate-analysis patch technique to develop, by an evolutionary algorithm, a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials. The identity and concentration of the known materials within the unknown mixture are then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels of the unknown mixture. Another technique is to train a neural network by a genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross-checks for accuracy.

  12. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.

  13. Automated Deep Learning-Based System to Identify Endothelial Cells Derived from Induced Pluripotent Stem Cells.

    PubMed

    Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi

    2018-06-05

    Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  15. In situ wavefront correction and its application to micromanipulation

    NASA Astrophysics Data System (ADS)

    Čižmár, Tomáš; Mazilu, Michael; Dholakia, Kishan

    2010-06-01

    In any optical system, distortions to a propagating wavefront reduce the spatial coherence of a light field, making it increasingly difficult to obtain the theoretical diffraction-limited spot size. Such aberrations are severely detrimental to optimal performance in imaging, nanosurgery, nanofabrication and micromanipulation, as well as other techniques within modern microscopy. We present a generic method based on complex modulation for true in situ wavefront correction that allows compensation of all aberrations along the entire optical train. The power of the method is demonstrated for the field of micromanipulation, which is very sensitive to wavefront distortions. We present direct trapping with optimally focused laser light carrying power of a fraction of a milliwatt as well as the first trapping through highly turbid and diffusive media. This opens up new perspectives for optical micromanipulation in colloidal and biological physics and may be useful for various forms of advanced imaging.

  16. Optimizing Equivalence-Based Instruction: Effects of Training Protocols on Equivalence Class Formation

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Wright, Nicole A.; Fields, Lanny

    2015-01-01

    Two experiments evaluated the effects of the simple-to-complex and simultaneous training protocols on the formation of academically relevant equivalence classes. The simple-to-complex protocol intersperses derived relations probes with training baseline relations. The simultaneous protocol conducts all training trials and test trials in separate…

  17. Analysis of Postdoctoral Training Outcomes That Broaden Participation in Science Careers

    ERIC Educational Resources Information Center

    Rybarczyk, Brian J.; Lerea, Leslie; Whittington, Dawayne; Dykstra, Linda

    2016-01-01

    Postdoctoral training is an optimal time to expand research skills, develop independence, and shape career trajectories, making this training period important to study in the context of career development. Seeding Postdoctoral Innovators in Research and Education (SPIRE) is a training program that balances research, teaching, and professional…

  18. Time-dependent fermentation control strategies for enhancing synthesis of marine bacteriocin 1701 using artificial neural network and genetic algorithm.

    PubMed

    Peng, Jiansheng; Meng, Fanmei; Ai, Yuncan

    2013-06-01

    The artificial neural network (ANN) and genetic algorithm (GA) were combined to optimize the fermentation process for enhancing production of marine bacteriocin 1701 in a 5-L stirred tank. Fermentation time, pH value, dissolved oxygen level, temperature and turbidity were used to construct a "5-10-1" ANN topology to identify the nonlinear relationship between fermentation parameters and the antibiotic effects (expressed as inhibition diameters) of bacteriocin 1701. The values predicted by the trained ANN model coincided with the observed ones (the coefficient of determination R(2) was greater than 0.95). Because fermentation time was brought in as one of the ANN input nodes, fermentation parameters could be optimized by stages through the GA, and an optimal fermentation process control trajectory was created. The production of marine bacteriocin 1701 was significantly improved, by 26%, under the guidance of the fermentation control trajectory optimized using the combined ANN-GA method. Copyright © 2013 Elsevier Ltd. All rights reserved.
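    The ANN-GA pattern above can be pictured in miniature: a trained ANN acts as a cheap surrogate of the fermentation outcome, and a GA searches the parameter space for the settings the surrogate scores best. This is a minimal real-coded GA sketch in pure Python; the quadratic `surrogate` with a hypothetical optimum at pH 7 and 30 °C merely stands in for the trained ANN, and all GA settings (population, elitism, mutation scale) are illustrative assumptions:

```python
import random

def genetic_maximize(f, bounds, pop=30, gens=60, seed=0):
    """Minimal real-coded GA: elitism, blend crossover between two
    elites, Gaussian mutation, box constraints from `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=f, reverse=True)[: pop // 5]
        children = list(elite)                 # carry the best forward
        while len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = []
            for j in range(dim):
                lo, hi = bounds[j]
                g = 0.5 * (a[j] + b[j]) + rng.gauss(0, 0.1 * (hi - lo))
                child.append(min(hi, max(lo, g)))
            children.append(child)
        P = children
    return max(P, key=f)

# Hypothetical surrogate in place of the trained ANN: best predicted
# inhibition diameter at pH 7.0 and 30 degC.
surrogate = lambda x: -((x[0] - 7.0) ** 2 + 0.1 * (x[1] - 30.0) ** 2)
best = genetic_maximize(surrogate, [(5.0, 9.0), (20.0, 40.0)])
```

In the paper's staged scheme, a search like this would be re-run per fermentation stage (time being an ANN input) to assemble the control trajectory.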

  19. Personality traits associated with genetic counselor compassion fatigue: the roles of dispositional optimism and locus of control.

    PubMed

    Injeyan, Marie C; Shuman, Cheryl; Shugar, Andrea; Chitayat, David; Atenafu, Eshetu G; Kaiser, Amy

    2011-10-01

    Compassion fatigue (CMF) arises as a consequence of secondary exposure to distress and can be elevated in some health practitioners. Locus of control and dispositional optimism are aspects of personality known to influence coping style. To investigate whether these personality traits influence CMF risk, we surveyed 355 genetic counselors about their CMF, locus of control orientation, and degree of dispositional optimism. Approximately half of respondents reported they experience CMF; 26.6% had considered leaving their job due to CMF symptoms. Mixed-method analyses revealed that genetic counselors having an external locus of control and low optimism were at highest risk for CMF. Those at highest risk experienced moderate-to-high burnout, low-to-moderate compassion satisfaction, and tended to rely on religion/spirituality when coping with stress. CMF risk was not influenced by years in practice, number of genetic counselor colleagues in the workplace, or completion of graduate training in this area. Recommendations for practice and education are outlined.

  20. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
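    The two-step scheme can be sketched concretely: a coarse grid traverse locates the most promising cell of the parameter plane, then a standard PSO refines inside that cell. The following is a pure-Python sketch under stated assumptions: the stand-in `err` function plays the role of the SVR cross-validation error (the real system would retrain an SVR per candidate parameter pair), and the grid size, swarm size, and PSO coefficients are illustrative:

```python
import random

def grid_then_pso(err, bounds, grid_n=5, swarm=15, iters=40, seed=1):
    """Step 1: coarse grid traverse over (C, gamma); step 2: PSO
    confined to a box around the best grid point."""
    rng = random.Random(seed)
    (c_lo, c_hi), (g_lo, g_hi) = bounds
    cs = [c_lo + i * (c_hi - c_lo) / (grid_n - 1) for i in range(grid_n)]
    gs = [g_lo + i * (g_hi - g_lo) / (grid_n - 1) for i in range(grid_n)]
    c0, g0 = min(((c, g) for c in cs for g in gs), key=lambda p: err(p))
    # Local box: one grid step each way around the best grid point.
    dc, dg = (c_hi - c_lo) / (grid_n - 1), (g_hi - g_lo) / (grid_n - 1)
    box = [(max(c_lo, c0 - dc), min(c_hi, c0 + dc)),
           (max(g_lo, g0 - dg), min(g_hi, g0 + dg))]
    # Standard PSO inside the box.
    X = [[rng.uniform(lo, hi) for lo, hi in box] for _ in range(swarm)]
    V = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=lambda p: err(p))[:]
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(2):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - x[d])
                           + 1.5 * rng.random() * (gbest[d] - x[d]))
                x[d] += V[i][d]
                lo, hi = box[d]
                x[d] = min(hi, max(lo, x[d]))    # clamp to the box
            if err(x) < err(pbest[i]):
                pbest[i] = x[:]
                if err(x) < err(gbest):
                    gbest = x[:]
    return gbest
```

The point of the first step is purely economic: the grid cheaply discards most of the global space so the swarm spends its function evaluations where they matter.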

  1. Statistical efficiency of adaptive algorithms.

    PubMed

    Widrow, Bernard; Kamenetsky, Max

    2003-01-01

    The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. 
For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
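    The LMS update discussed above is a one-liner per tap: filter with the current weights, compute the instantaneous error against the desired signal, and nudge each weight along e·x. This is a minimal system-identification sketch in pure Python; the 2-tap "unknown" plant [0.5, -0.3] is a made-up example, and Widrow's factor of 2 is folded into the step size mu:

```python
import random

def lms(x, d, n_taps, mu):
    """LMS adaptive filter: at each sample, form the output from the
    current weights, then move every weight along the instantaneous
    gradient of the squared error, w[j] += mu * e * x[j]."""
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[k] - y
        for j in range(n_taps):
            w[j] += mu * e * window[j]
    return w

# System-identification demo: the "unknown" plant is a hypothetical
# 2-tap FIR filter [0.5, -0.3]; LMS should adapt to those weights.
rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(5000)]
d = [0.5 * x[k] - (0.3 * x[k - 1] if k > 0 else 0.0) for k in range(len(x))]
w = lms(x, d, 2, 0.05)
```

This simplicity (no matrix inversion, no estimate of the input correlation matrix) is exactly the "ease of implementation" the abstract credits for LMS's ubiquity, in contrast to LMS/Newton.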

  2. Artificial Intelligence Based Control Power Optimization on Tailless Aircraft. [ARMD Seedling Fund Phase I

    NASA Technical Reports Server (NTRS)

    Gern, Frank; Vicroy, Dan D.; Mulani, Sameer B.; Chhabra, Rupanshi; Kapania, Rakesh K.; Schetz, Joseph A.; Brown, Derrell; Princen, Norman H.

    2014-01-01

    Traditional methods of control allocation optimization have shown difficulties in exploiting the full potential of controlling large arrays of control devices on innovative air vehicles. Artificial neural networks are inspired by biological nervous systems, and neurocomputing has successfully been applied to a variety of complex optimization problems. This project investigates the potential of applying neurocomputing to the control allocation optimization problem of Hybrid Wing Body (HWB) aircraft concepts to minimize control power, hinge moments, and actuator forces, while keeping system weights within acceptable limits. The main objective of this project is to develop a proof-of-concept process suitable to demonstrate the potential of using neurocomputing for optimizing actuation power for aircraft featuring multiple independently actuated control surfaces. A Nastran aeroservoelastic finite element model is used to generate a learning database of hinge moment and actuation power characteristics for an array of flight conditions and control surface deflections. An artificial neural network incorporating a genetic algorithm then uses this training data to perform control allocation optimization for the investigated aircraft configuration. The phase I project showed that optimization results for the sum of required hinge moments are improved by more than 12% over the best Nastran solution by using the neural network optimization process.

  3. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability output of the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717
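    The threshold-generation step can be pictured concretely: given per-mode probability outputs on a validation set containing both single and simultaneous failures, grid-search the cut-off that maximizes micro-F1 once probabilities are binarized into failure-mode decisions. This is a pure-Python sketch of that step only; the classifiers themselves (the paired sparse Bayesian ELMs) are abstracted away into the `probs` matrix, and the grid resolution is an assumption:

```python
def f1(pred, truth):
    # Micro-averaged F1 over flattened per-mode decisions.
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0

def best_threshold(probs, labels, steps=99):
    """Grid-search the probability cut-off that converts per-mode
    classifier outputs into simultaneous failure-mode decisions,
    maximizing micro-F1 on a validation set."""
    best_t, best_score = 0.5, -1.0
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        score = f1([p >= t for row in probs for p in row],
                   [l for row in labels for l in row])
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

A sample whose probability vector clears the chosen threshold in two modes at once is then reported as a simultaneous failure, which is how probability outputs become failure-mode sets.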

  4. A novel approach to enhance ACL injury prevention programs.

    PubMed

    Gokeler, Alli; Seil, Romain; Kerkhoffs, Gino; Verhagen, Evert

    2018-06-18

    Efficacy studies have demonstrated decreased anterior cruciate ligament (ACL) injury rates for athletes participating in injury prevention programs. Typically, ACL injury prevention programs entail a combination of plyometrics, strength training, agility and balance exercises. Unfortunately, improvements in movement patterns are not sustained over time. The reason may be related to the type of instructions given during training. Encouraging athletes to consciously control knee movements during exercises may not be optimal for the acquisition of complex motor skills as needed in complex sports environments. In the motor learning domain, these types of instructions are defined as an internal attentional focus. An internal focus on one's own movements results in a more conscious type of control that may hamper motor learning. It has been established in numerous studies that an external focus of attention facilitates motor learning more effectively due to the utilization of automatic motor control. Subsequently, the athlete has more resources available to anticipate situations on the field and take appropriate feedforward-directed actions. The purpose of this manuscript was to present methods to optimize motor skill acquisition in athletes and elaborate on athletes' behavior.

  5. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that the employment of SLP-based pairwise classifiers is comparable to, and as often as not outperforms, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.

  6. Evidence for Intensive Aphasia Therapy: Consideration of Theories From Neuroscience and Cognitive Psychology.

    PubMed

    Dignam, Jade K; Rodriguez, Amy D; Copland, David A

    2016-03-01

    Treatment intensity is a critical component of the delivery of speech-language pathology and rehabilitation services. Within aphasia rehabilitation, however, insufficient evidence currently exists to guide clinical decision making with respect to the optimal treatment intensity. This review considers perspectives from 2 key bodies of research, the neuroscience and cognitive psychology literature, with respect to the scheduling of aphasia rehabilitation services. Neuroscience research suggests that intensive training is a key element of rehabilitation and is necessary to achieve functional and neurologic changes after a stroke occurs. In contrast, the cognitive psychology literature suggests that optimal long-term learning is achieved when training is provided in a distributed or nonintensive schedule. These perspectives are evaluated and discussed with respect to the current evidence for treatment intensity in aphasia rehabilitation. In addition, directions for future research are identified, including study design, methods of defining and measuring treatment intensity, and selection of outcome measures in aphasia rehabilitation. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  7. Prediction of Contact Fatigue Life of Alloy Cast Steel Rolls Using Back-Propagation Neural Network

    NASA Astrophysics Data System (ADS)

    Jin, Huijin; Wu, Sujun; Peng, Yuncheng

    2013-12-01

    In this study, an artificial neural network (ANN) was employed to predict the contact fatigue life of alloy cast steel rolls (ACSRs) as a function of alloy composition, heat treatment parameters, and contact stress by utilizing the back-propagation algorithm. The ANN was trained and tested using experimental data and a very good performance of the neural network was achieved. The well-trained neural network was then adopted to predict the contact fatigue life of chromium alloyed cast steel rolls with different alloy compositions and heat treatment processes. The prediction results showed that the maximum value of contact fatigue life was obtained with quenching at 960 °C, tempering at 520 °C, and under the contact stress of 2355 MPa. The optimal alloy composition was C-0.54, Si-0.66, Mn-0.67, Cr-4.74, Mo-0.46, V-0.13, Ni-0.34, and Fe-balance (wt.%). Some explanations of the predicted results are given from a metallurgical viewpoint. A convenient and powerful method of optimizing alloy composition and heat treatment parameters of ACSRs has been developed.

  8. Hybrid feature selection for supporting lightweight intrusion detection systems

    NASA Astrophysics Data System (ADS)

    Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin

    2017-08-01

    Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The set of features selected in the previous phase is further refined in the second phase in a wrapper manner, in which a Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on the NSL-KDD datasets show that our approach results in higher detection accuracy as well as faster training and testing processes.
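    The filter-then-wrapper idea can be sketched end to end on binary features: a chi-square filter ranks and prunes the candidates, then a greedy wrapper adds whichever surviving feature most improves a classifier's accuracy. In this pure-Python sketch, a majority vote over the selected binary features stands in for the paper's Random Forest guide, and the toy data sizes are assumptions:

```python
def chi2_binary(feature, labels):
    """Chi-square statistic for one binary feature vs. a binary class,
    from the 2x2 contingency table (the filter phase)."""
    a = sum(1 for f, y in zip(feature, labels) if f and y)
    b = sum(1 for f, y in zip(feature, labels) if f and not y)
    c = sum(1 for f, y in zip(feature, labels) if not f and y)
    d = sum(1 for f, y in zip(feature, labels) if not f and not y)
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def hybrid_select(X, y, keep_filter=3, keep_final=2):
    """Phase 1: chi-square filter keeps `keep_filter` candidates.
    Phase 2: greedy forward wrapper keeps `keep_final`, scored by the
    accuracy of a majority vote over the chosen binary features."""
    n_feat = len(X[0])
    cols = [[row[j] for row in X] for j in range(n_feat)]
    ranked = sorted(range(n_feat), key=lambda j: -chi2_binary(cols[j], y))
    candidates = ranked[:keep_filter]
    def acc(subset):
        preds = [sum(row[j] for j in subset) * 2 > len(subset) for row in X]
        return sum(p == bool(t) for p, t in zip(preds, y)) / len(y)
    selected = []
    while len(selected) < keep_final:
        j = max((c for c in candidates if c not in selected),
                key=lambda c: acc(selected + [c]))
        selected.append(j)
    return selected
```

The division of labor matches the abstract's motivation: the cheap filter shrinks the search space, and the expensive wrapper evaluations are spent only on the survivors.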

  9. Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, 8 different artificial neural network (ANN) models were developed for the prediction of AOp. The ANN models were trained using different variants of the Particle Swarm Optimization (PSO) algorithm. AOp predictions were also made using an empirical equation suggested by the United States Bureau of Mines (USBM) to serve as a benchmark. In order to develop the models, 76 blasting operations in Hulu Langat were investigated. All the ANN models were found to outperform the USBM equation on three performance metrics: root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R2). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp, with RMSE = 2.18, MAPE = 1.73% and R2 = 0.97. The result shows that ANN models trained using PSO are capable of predicting AOp with great accuracy.

  10. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples vascular structures along the vessel centerline to generate longitudinal cross-sectional views. The CPR technique is commonly used in coronary CTA workstations to facilitate radiologists' visual assessment of coronary disease, but it has not yet been used for pulmonary vessel analysis in CTPA because of the complicated tree structures and vast network of the pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and to improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features characterizing intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the 9 previously developed features to form a new feature space for FP classification. With IRB approval, CTPA scans of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as the reference standard for the UM and PIOPED sets, respectively. At a test sensitivity of 80%, the average FP rate improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free-response receiver operating characteristic (FROC) curves was statistically significant (p < 0.05) by JAFROC analysis, indicating that the new features extracted with the CROP method are useful for FP reduction.
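
    The path-tracing step rests on Dijkstra's algorithm over a graph built from the vessel centerlines. A minimal sketch on a toy tree (node names and edge costs here are invented for illustration; the paper's graph would come from segmented centerline voxels):

```python
import heapq

def dijkstra_path(graph, start, goal):
    """Shortest path by Dijkstra's algorithm; graph maps node -> {neighbor: cost}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy centerline graph: trunk bifurcation "T" down to a PE candidate "C".
tree = {
    "T": {"A": 2.0, "B": 5.0},
    "A": {"C": 4.0},
    "B": {"C": 2.0},
}
path, cost = dijkstra_path(tree, "T", "C")
print(path, cost)  # ['T', 'A', 'C'] 6.0
```

The traced node sequence is what a CPR step would then straighten into a reformatted volume.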

  11. Impact of In-Service Training and Staff Development on Workers' Job Performance and Optimal Productivity in Public Secondary Schools in Osun State, Nigeria

    ERIC Educational Resources Information Center

    Fejoh, Johnson; Faniran, Victoria Loveth

    2016-01-01

    This study investigated the impact of in-service training and staff development on workers' job performance and optimal productivity in public secondary schools in Osun State, Nigeria. The study used the ex-post-facto research design. Three research questions and three hypotheses were generated and tested using questionnaire items adapted from…

  12. AI in Training (1980-2000): Foundation for the Future or Misplaced Optimism?

    ERIC Educational Resources Information Center

    Welham, David

    2008-01-01

    Since the beginning of the use of technology to support training and learning there has always been the belief that such new technologies would be able to add value either by reducing costs or increasing effectiveness. The 1980s and early 1990s were a period of enormous optimism as to the promise that such technology could bring. The governments…

  13. Bamboo Classification Using WorldView-2 Imagery of Giant Panda Habitat in a Large Shaded Area in Wolong, Sichuan Province, China.

    PubMed

    Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia

    2016-11-22

    This study explores the ability of WorldView-2 (WV-2) imagery to map bamboo in a mountainous region in Sichuan Province, China. A large part of the study area is covered by shadow in the image, and only a few of the sampled points were usable. To identify bamboo from such sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Class separability based on the training data was then calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied to both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved producer's and user's accuracies of 82.65% and 93.10%, respectively, for the bamboo class. Canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and the resulting bamboo distribution facilitates assessments of giant panda habitat.
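
    The classification step can be sketched with scikit-learn. In this sketch, plain distance-weighted k-NN stands in for the paper's geostatistically weighted classifier, and the synthetic eight-band pixels and sample sizes are illustrative assumptions rather than the WV-2 data.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for 8-band WV-2 pixels, two classes (bamboo vs. other cover).
X, y = make_classification(n_samples=400, n_features=8, n_informative=4, random_state=1)

# PCA compresses the bands, mirroring the band-selection role PCA plays in the paper.
Xp = PCA(n_components=4).fit_transform(X)

# Sparse training data: only a small labelled sample survives the shaded area.
X_tr, X_te, y_tr, y_te = train_test_split(Xp, y, train_size=40, random_state=1, stratify=y)

# Distance-weighted k-NN as a simple proxy for the geostatistically weighted k-NN.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)
print(round(acc, 3))
```

A geostatistical weighting would additionally modulate each neighbor's vote by the spatial covariance between classes, which plain `weights="distance"` does not capture.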

  14. Online colour training system for dental students: a comprehensive assessment of different training protocols.

    PubMed

    Liu, M; Chen, L; Liu, X; Yang, Y; Zheng, M; Tan, J

    2015-04-01

    The purpose of this study was to evaluate the training effect of a recently developed online colour training system and to determine its optimal training protocol. Seventy students participated in the evaluation. They first completed a baseline test with shade guides (SGT) and with the training system (TST), and then trained for 4 days with one of three system training methods (Basic colour training for group E1, Vitapan Classical for E2, Vitapan 3D-Master for E3) or with shade guides (group C1). The control group (C2) received no training. The same test was performed after training, and the participants finally completed a questionnaire. The number of correct matches after training increased in the three experimental groups and in group C1. Among the experimental groups, the greatest improvement in correct matches was achieved by group E3 (4·00 ± 1·88 in SGT, 4·29 ± 2·73 in TST), followed by E2 (2·29 ± 2·73 in SGT, 3·50 ± 3·03 in TST) and E1 (2·00 ± 2·60 in SGT, 1·93 ± 2·96 in TST). The difference between E3 and E1 was statistically significant (P = 0·036 in SGT, 0·026 in TST). The total average training time was shorter in groups E2 (15·39 ± 4·22 min) and E3 (17·63 ± 5·22 min), with no significant difference between them. Subjective evaluations revealed that self-confidence in colour matching improved more in groups C1 and E3. In conclusion, all tested sections of the system effectively improved students' colour-matching ability. Among the system training methods, Vitapan 3D-Master showed the best performance: it enabled greater shade-matching improvement, saved time, and was superior in the subjective evaluations. © 2014 John Wiley & Sons Ltd.

  15. Rapid differentiation of Ghana cocoa beans by FT-NIR spectroscopy coupled with multivariate classification

    NASA Astrophysics Data System (ADS)

    Teye, Ernest; Huang, Xingyi; Dai, Huang; Chen, Quansheng

    2013-10-01

    A quick, accurate and reliable technique for discriminating cocoa beans according to geographical origin is essential for quality control and traceability management. This study presents the application of near-infrared (NIR) spectroscopy and multivariate classification to the differentiation of Ghana cocoa beans. A total of 194 cocoa bean samples from seven cocoa-growing regions were used. Principal component analysis (PCA) was used to extract relevant information from the spectral data and revealed visible cluster trends. The performance of four multivariate classification methods was compared: linear discriminant analysis (LDA), K-nearest neighbors (KNN), back-propagation artificial neural network (BPANN), and support vector machine (SVM). The models were optimized by cross-validation. The results revealed that the SVM model was superior to the other methods, with a discrimination rate of 100% in both the training and prediction sets after preprocessing with mean centering (MC). BPANN had a discrimination rate of 99.23% for the training set and 96.88% for the prediction set, LDA had 96.15% and 90.63%, and KNN had 75.01% and 72.31%, respectively. The non-linear classification methods were superior to the linear ones. Overall, the results show that NIR spectroscopy coupled with an SVM model can successfully discriminate cocoa beans according to geographical origin for effective quality assurance.
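
    A model comparison of this kind can be sketched with scikit-learn. The wine dataset and pipelines below are illustrative stand-ins, not the paper's FT-NIR spectra or tuned models; `StandardScaler(with_std=False)` plays the role of the mean-centering (MC) preprocessing step.

```python
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The wine dataset stands in for spectra of beans from several origins.
X, y = load_wine(return_X_y=True)

# Each pipeline mean-centers the data, then fits one of the compared classifiers.
models = {
    "SVM": make_pipeline(StandardScaler(with_std=False), SVC()),
    "LDA": make_pipeline(StandardScaler(with_std=False), LinearDiscriminantAnalysis()),
    "KNN": make_pipeline(StandardScaler(with_std=False), KNeighborsClassifier(5)),
}
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, score in results.items():
    print(f"{name}: {score:.3f}")
```

Cross-validated accuracy gives a single comparable number per model, mirroring the paper's training/prediction-set comparison in miniature.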

  16. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias

    2015-01-01

    We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time and end survival rate are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble that predicts one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits; the Cox proportional hazards model (Cox) is one example of this approach. Another method of risk grouping is recursive partitioning (Rpart), which constructs a decision tree in which each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, is used to validate the models. Results on the test sets show comparable performance, except for the smallest data set, where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set, but the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identifying risk groups. Training ANNs to optimize this area combines two key strengths of prognostic indices and Rpart: first, a desired minimum group size can be specified, as for a prognostic index; second, non-linear effects among the covariates can be utilized, as Rpart also does.
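
    A toy version of the GA-driven risk-grouping idea can be sketched in NumPy. This sketch is heavily simplified relative to the paper: it evolves a linear index on an uncensored synthetic cohort and uses the spread in mean survival between tertile groups as fitness, rather than an ANN ensemble optimizing the area under the survival curve. All data and GA settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: survival time depends non-linearly on two covariates.
X = rng.normal(size=(200, 2))
t = np.exp(-(X[:, 0] + 0.5 * X[:, 1] ** 2)) * rng.exponential(1.0, 200)

def fitness(w):
    """Spread in mean survival between tertile risk groups induced by index w.x."""
    idx = X @ w
    lo, hi = np.quantile(idx, [1 / 3, 2 / 3])
    groups = [t[idx <= lo], t[(idx > lo) & (idx <= hi)], t[idx > hi]]
    means = [g.mean() for g in groups]
    return max(means) - min(means)

# Minimal genetic algorithm: truncation selection plus Gaussian mutation.
pop = rng.normal(size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]  # keep the 10 fittest index vectors
    pop = np.repeat(parents, 4, axis=0) + rng.normal(scale=0.2, size=(40, 2))
best = max(pop, key=fitness)
print(round(fitness(best), 3))
```

Replacing the linear index with a small network and the fitness with the area under a Kaplan-Meier curve would move this sketch toward the paper's actual setup.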

  17. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    PubMed

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are unlabeled, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve it. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using this indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from the unlabeled data. Our algorithm degenerates into a special case of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoids overfitting. Experiments show that our algorithm, equipped with the two designed strategies, is more effective than other algorithms in at least one of the following three aspects: (1) computational complexity; (2) generalization ability; and (3) flexibility. In the remaining aspects, our algorithm performs at a level similar to the other algorithms.

  18. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for a modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures for human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. Designing a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their numbers of neurons. A great variety of approaches and techniques has emerged within evolutionary computing to help find optimal solutions to problems or models, and bio-inspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm to determine which technique provides better results when applied to human recognition. PMID:28894461
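
    The canonical grey wolf optimizer itself is compact. The sketch below minimizes a sphere benchmark in place of an MGNN architecture-fitness evaluation, which would require the full recognition pipeline; the population size, iteration count, and bounds are illustrative assumptions.

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimize f with the canonical grey wolf optimizer (alpha/beta/delta leaders)."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for it in range(iters):
        fit = np.array([f(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fit)[:3]]  # three best wolves lead
        a = 2 - 2 * it / iters  # exploration factor decays linearly to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lb, ub)  # average the three leader pulls
    fit = np.array([f(w) for w in wolves])
    return wolves[fit.argmin()], fit.min()

# Sphere function stands in for evaluating one candidate MGNN architecture.
best, val = gwo(lambda w: float(np.sum(w ** 2)), dim=5)
print(round(val, 4))
```

For architecture search, `f` would instead decode the position vector into subgranule counts, layer sizes, and so on, train the resulting network, and return its error.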

  19. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for a modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures for human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. Designing a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their numbers of neurons. A great variety of approaches and techniques has emerged within evolutionary computing to help find optimal solutions to problems or models, and bio-inspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm to determine which technique provides better results when applied to human recognition.

  20. Optimizing the Long-Term Operating Plan of Railway Marshalling Station for Capacity Utilization Analysis

    PubMed Central

    Zhou, Wenliang; Yang, Xia; Deng, Lianbo

    2014-01-01

    Not only is the operating plan the basis of organizing a marshalling station's operation, but it is also used to analyze in detail the capacity utilization of each facility in the station. In this paper, a long-term operating plan is optimized mainly for capacity utilization analysis. First, a model is developed to minimize railcars' average staying time subject to constraints such as minimum time intervals and marshalling track capacity. Second, an algorithm based on a genetic algorithm (GA) and simulation is designed to solve this model; it divides the plan for the whole planning horizon into many subplans and optimizes them with the GA one by one to obtain a satisfactory plan with less computing time. Finally, numeric examples are constructed to analyze (1) the convergence of the algorithm, (2) the effect of some algorithm parameters, and (3) the influence of arrival train flow on the algorithm. PMID:25525614
